Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.0.0-beta1
    • Component/s: None
    • Target Version/s:
    • Hadoop Flags:
      Incompatible change, Reviewed
    • Release Note:
      Persist all built-in and user-defined erasure coding policies into the NameNode fsImage and edit log reliably, so that all erasure coding policies remain consistent after a NameNode restart.

Description

      In a meetup discussion with Zhe Zhang and Jing Zhao, it was suggested that we persist EC schemas in the NameNode centrally and reliably, so that EC zones can reference them by name efficiently.
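
      The core idea can be sketched as follows. This is an illustrative example only, not the actual HDFS implementation: the class and method names (EcPolicyRegistry, addPolicy, save, load) are hypothetical stand-ins showing how a central registry could persist named policies so that zones store only a policy name.

      ```java
      import java.io.*;
      import java.util.*;

      // Illustrative sketch: a central registry that persists named EC policies,
      // so that zones can reference a policy by name alone. Not the HDFS API.
      public class EcPolicyRegistry {
          // name -> {dataUnits, parityUnits}, standing in for a full policy object
          private final Map<String, int[]> policies = new LinkedHashMap<>();

          public void addPolicy(String name, int dataUnits, int parityUnits) {
              policies.put(name, new int[]{dataUnits, parityUnits});
          }

          public int[] getPolicy(String name) {
              return policies.get(name);
          }

          // Persist all policies, analogous to writing a section of the fsimage.
          public void save(DataOutputStream out) throws IOException {
              out.writeInt(policies.size());
              for (Map.Entry<String, int[]> e : policies.entrySet()) {
                  out.writeUTF(e.getKey());
                  out.writeInt(e.getValue()[0]);
                  out.writeInt(e.getValue()[1]);
              }
          }

          // Reload policies, analogous to loading that section on NameNode restart.
          public static EcPolicyRegistry load(DataInputStream in) throws IOException {
              EcPolicyRegistry r = new EcPolicyRegistry();
              int n = in.readInt();
              for (int i = 0; i < n; i++) {
                  String name = in.readUTF();
                  r.addPolicy(name, in.readInt(), in.readInt());
              }
              return r;
          }

          public static void main(String[] args) throws IOException {
              EcPolicyRegistry reg = new EcPolicyRegistry();
              reg.addPolicy("RS-6-3-1024k", 6, 3);
              ByteArrayOutputStream buf = new ByteArrayOutputStream();
              reg.save(new DataOutputStream(buf));
              EcPolicyRegistry restored = EcPolicyRegistry.load(
                  new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
              System.out.println(restored.getPolicy("RS-6-3-1024k")[0]); // 6
          }
      }
      ```

      Because the registry round-trips through a serialized form, both built-in and user-defined policies survive a restart, which is the consistency property the release note above describes.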

      1. HDFS-7859.001.patch
        32 kB
        Xinwei Qin
      2. HDFS-7859.002.patch
        35 kB
        Xinwei Qin
      3. HDFS-7859.004.patch
        29 kB
        Xinwei Qin
      4. HDFS-7859.005.patch
        36 kB
        Xinwei Qin
      5. HDFS-7859.006.patch
        36 kB
        Xinwei Qin
      6. HDFS-7859.007.patch
        34 kB
        Xinwei Qin
      7. HDFS-7859.008.patch
        36 kB
        Surendra Singh Lilhore
      8. HDFS-7859.009.patch
        88 kB
        Surendra Singh Lilhore
      9. HDFS-7859.010.patch
        86 kB
        SammiChen
      10. HDFS-7859.011.patch
        53 kB
        SammiChen
      11. HDFS-7859.012.patch
        66 kB
        SammiChen
      12. HDFS-7859.013.patch
        34 kB
        SammiChen
      13. HDFS-7859.014.patch
        24 kB
        SammiChen
      14. HDFS-7859.015.patch
        20 kB
        SammiChen
      15. HDFS-7859.016.patch
        19 kB
        SammiChen
      16. HDFS-7859.017.patch
        19 kB
        SammiChen
      17. HDFS-7859.018.patch
        19 kB
        SammiChen
      18. HDFS-7859.019.patch
        19 kB
        SammiChen
      19. HDFS-7859-HDFS-7285.002.patch
        35 kB
        Zhe Zhang
      20. HDFS-7859-HDFS-7285.002.patch
        35 kB
        Allen Wittenauer
      21. HDFS-7859-HDFS-7285.003.patch
        27 kB
        Xinwei Qin

        Issue Links

          Activity

          Sammi SammiChen added a comment -

          Sure. The release note is ready. Thanks Kai Zheng, Lei (Eddy) Xu, Andrew Wang, Xinwei Qin, Tsz Wo Nicholas Sze, Zhe Zhang and Jing Zhao for all your contributions and effort!

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12878 (See https://builds.apache.org/job/Hadoop-trunk-Commit/12878/)
          HDFS-7859. Erasure Coding: Persist erasure coding policies in NameNode. (kai.zheng: rev ae8f55b93243560bd891962d6c64320ddc62a7d7)

          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ErasureCodingPolicyManager.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestErasureCodingPolicies.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/startupprogress/StepType.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatProtobuf.java
          drankye Kai Zheng added a comment -

          SammiChen, would you help with the release notes? Thanks again!

          drankye Kai Zheng added a comment -

          Committed to trunk and branch 3.0. Thanks SammiChen for making the final patch, Lei (Eddy) Xu for the review, and Andrew Wang for driving this!

          This was a long-running effort. Many thanks to Xinwei Qin for taking this on initially and for the early patches, Tsz Wo Nicholas Sze for the guidance, and Zhe Zhang and Jing Zhao for the ideas.

          With HDFS-12395 also landing soon, I'm happy we can make it into the beta 1 release, as many people have asked for this in the past (see, for example, the requests in HDFS-7337).

          drankye Kai Zheng added a comment -

          The build looks good. Will commit it shortly.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 47s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
                trunk Compile Tests
          +1 mvninstall 16m 59s trunk passed
          +1 compile 1m 6s trunk passed
          +1 checkstyle 0m 46s trunk passed
          +1 mvnsite 1m 11s trunk passed
          +1 findbugs 2m 3s trunk passed
          +1 javadoc 0m 50s trunk passed
                Patch Compile Tests
          +1 mvninstall 1m 7s the patch passed
          +1 compile 0m 57s the patch passed
          +1 cc 0m 57s the patch passed
          +1 javac 0m 57s the patch passed
          +1 checkstyle 0m 41s the patch passed
          +1 mvnsite 1m 6s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 1s the patch passed
          +1 javadoc 0m 47s the patch passed
                Other Tests
          -1 unit 107m 8s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 17s The patch does not generate ASF License warnings.
          139m 32s



          Reason Tests
          Failed junit tests hadoop.hdfs.TestLeaseRecoveryStriped
            hadoop.hdfs.TestClientProtocolForPipelineRecovery
            hadoop.hdfs.TestReconstructStripedFile
            hadoop.hdfs.TestFileChecksum
            hadoop.hdfs.TestFileAppendRestart
            hadoop.hdfs.TestEncryptedTransfer
            hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes
            hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
          Timed out junit tests org.apache.hadoop.hdfs.TestWriteReadStripedFile



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:71bbb86
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12887052/HDFS-7859.019.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 71aa94a63c5d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 390c2b5
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/21147/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/21147/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/21147/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          drankye Kai Zheng added a comment -

          Manually triggered the build; let's see if it works.

          drankye Kai Zheng added a comment -

          Some issues, like this one and HDFS-12395, aren't triggering builds; not sure what's wrong.

          drankye Kai Zheng added a comment -

          Thanks Sammi for the update. The latest patch LGTM. +1 pending the build.

          Sammi SammiChen added a comment -

          Uploaded 019.patch after an offline discussion with Kai.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
                trunk Compile Tests
          +1 mvninstall 16m 13s trunk passed
          +1 compile 1m 1s trunk passed
          +1 checkstyle 0m 42s trunk passed
          +1 mvnsite 1m 5s trunk passed
          +1 findbugs 2m 4s trunk passed
          +1 javadoc 0m 47s trunk passed
                Patch Compile Tests
          +1 mvninstall 0m 59s the patch passed
          +1 compile 0m 54s the patch passed
          +1 cc 0m 54s the patch passed
          +1 javac 0m 54s the patch passed
          +1 checkstyle 0m 39s the patch passed
          +1 mvnsite 1m 0s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 57s the patch passed
          +1 javadoc 0m 42s the patch passed
                Other Tests
          -1 unit 87m 41s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 17s The patch does not generate ASF License warnings.
          117m 49s



          Reason Tests
          Failed junit tests hadoop.hdfs.TestErasureCodingPolicies
            hadoop.hdfs.TestClientProtocolForPipelineRecovery
            hadoop.hdfs.server.namenode.TestReencryption
            hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
            hadoop.hdfs.server.namenode.TestReencryptionWithKMS
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
            hadoop.hdfs.TestLeaseRecoveryStriped
            hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy
          Timed out junit tests org.apache.hadoop.hdfs.TestWriteReadStripedFile



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:71bbb86
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12887024/HDFS-7859.018.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 3ee6bb90b993 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / e0b3c64
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/21132/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/21132/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/21132/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Sammi SammiChen added a comment -

          Rebase the patch.

          drankye Kai Zheng added a comment -

          Sammi, please rebase this. Applying to the latest trunk failed.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 15m 35s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
                trunk Compile Tests
          +1 mvninstall 15m 49s trunk passed
          +1 compile 0m 51s trunk passed
          +1 checkstyle 0m 37s trunk passed
          +1 mvnsite 0m 55s trunk passed
          +1 findbugs 1m 44s trunk passed
          +1 javadoc 0m 41s trunk passed
                Patch Compile Tests
          +1 mvninstall 0m 56s the patch passed
          +1 compile 0m 58s the patch passed
          +1 cc 0m 58s the patch passed
          +1 javac 0m 58s the patch passed
          +1 checkstyle 0m 36s the patch passed
          +1 mvnsite 0m 56s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 3s the patch passed
          +1 javadoc 0m 39s the patch passed
                Other Tests
          -1 unit 107m 58s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 19s The patch does not generate ASF License warnings.
          152m 5s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
            hadoop.hdfs.TestReadStripedFileWithMissingBlocks
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.server.namenode.TestAuditLogs
            hadoop.hdfs.TestReconstructStripedFile
            hadoop.hdfs.TestClientProtocolForPipelineRecovery
            hadoop.hdfs.TestLeaseRecoveryStriped
          Timed out junit tests org.apache.hadoop.hdfs.TestWriteReadStripedFile
            org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean



          Subsystem Report/Notes
          Docker Image: yetus/hadoop:71bbb86
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12886857/HDFS-7859.017.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 33c219834ca8 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / fa6cc43
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/21117/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/21117/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/21117/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Sammi SammiChen added a comment -

          1. Fixed style issues
          2. Changed the section name to ErasureCodingSection
          3. Removed the conflicting check in loadPolicy

          Sammi SammiChen added a comment -

          Thanks Lei (Eddy) Xu for the comments! If a policy is disabled because its codec is not supported or its cell size is not supported, it will be checked again at enable time to make sure every enabled policy actually works. The check will be part of HDFS-12399, with a dedicated unit test.

          eddyxu Lei (Eddy) Xu added a comment -

          Thanks for the updates, SammiChen. LGTM overall. +1 pending.

          Some small issues:

          // loadPolicy()
          if (!CodecUtil.hasCodec(policy.getCodecName()) ||
              policy.getCellSize() > maxCellSize) {
            // If the policy is not supported on the current system, set its
            // state to DISABLED.
            policy.setState(ErasureCodingPolicyState.DISABLED);
          }

          Does this mean that a user can still enable this policy later via enablePolicy()? It doesn't need to be addressed in this patch, but do we have a way to guard which policies can be enabled?
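
          One possible shape for such a guard, sketched here purely for illustration: re-run the same support checks from loadPolicy before flipping the state. All names below (PolicyGuard, canEnable, SUPPORTED_CODECS, MAX_CELL_SIZE) are hypothetical stand-ins, not the committed HDFS code.

          ```java
          import java.util.*;

          // Illustrative guard sketch: re-validate support before enabling a
          // policy, mirroring the codec and cell-size checks in loadPolicy.
          // All names here are stand-ins, not the actual HDFS API.
          public class PolicyGuard {
              static final Set<String> SUPPORTED_CODECS =
                  new HashSet<>(Arrays.asList("rs", "rs-legacy", "xor"));
              static final long MAX_CELL_SIZE = 4L * 1024 * 1024; // assumed 4 MB cap

              // Returns true only if the policy passes the same checks that
              // disabled it at load time; enablePolicy would call this first.
              static boolean canEnable(String codecName, long cellSize) {
                  return SUPPORTED_CODECS.contains(codecName)
                      && cellSize > 0
                      && cellSize <= MAX_CELL_SIZE;
              }
          }
          ```

          With a check like this in place, a policy that was auto-disabled at load time stays disabled until the running system actually supports it.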

          drankye Kai Zheng added a comment -

          Thanks Sammi for the update and the great work. In addition to the offline discussion points regarding how to move forward on this:

          The existing "CacheManagerSection" saves cache directives for CacheManager, and "SecretManagerSection" saves secrets for SecretManager. So it's better to follow that style and use "ErasureCodingPolicyManagerSection" to save the EC policies for ErasureCodingPolicyManager.

          Good point. However, if you look at all the existing sections, they are concise rather than verbose. For the fsimage definition, brevity makes sense unless it introduces ambiguity. I'd prefer to change ErasureCodingPolicyManagerSection => ErasureCodingSection, along with the related (even lengthier) method names.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 23s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
                trunk Compile Tests
          +1 mvninstall 16m 50s trunk passed
          +1 compile 1m 14s trunk passed
          +1 checkstyle 0m 47s trunk passed
          +1 mvnsite 1m 16s trunk passed
          -1 findbugs 2m 10s hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings.
          +1 javadoc 0m 52s trunk passed
                Patch Compile Tests
          +1 mvninstall 1m 8s the patch passed
          +1 compile 1m 7s the patch passed
          +1 cc 1m 7s the patch passed
          +1 javac 1m 7s the patch passed
          -0 checkstyle 0m 41s hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 60 unchanged - 0 fixed = 62 total (was 60)
          +1 mvnsite 1m 5s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 3s the patch passed
          +1 javadoc 0m 39s the patch passed
                Other Tests
          -1 unit 108m 1s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 23s The patch does not generate ASF License warnings.
          140m 6s



          Reason Tests
          Failed junit tests hadoop.hdfs.TestReconstructStripedFile
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
            hadoop.hdfs.TestLeaseRecoveryStriped
            hadoop.hdfs.TestFileAppend4
            hadoop.hdfs.TestClientProtocolForPipelineRecovery
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
            hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
            hadoop.hdfs.TestDecommissionWithStriped
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
            hadoop.hdfs.TestDatanodeLayoutUpgrade
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010
            hadoop.hdfs.server.blockmanagement.TestBlockManager
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
            hadoop.hdfs.server.namenode.TestReencryptionWithKMS
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
            hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
          Timed out junit tests org.apache.hadoop.hdfs.TestWriteReadStripedFile



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:71bbb86
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12886040/HDFS-7859.016.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 6dac7e1d01ff 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / ab8368d
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/21050/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/21050/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/21050/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/21050/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/21050/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Sammi SammiChen added a comment -

          Thanks Lei (Eddy) Xu and Kai Zheng for reviewing the patch and providing very detailed suggestions!

          Could you consider using:
          message ErasureCodingPolicyManagerSection {
          repeated ErasureCodingPolicyProto policies = 1;
          }

          Sure.

          // dd new erasure coding policy
          ECSchema newSchema = new ECSchema("rs", 5, 3);

          The comment should actually read "add new erasure coding policy". It's a typo.

          Checking which files/directories use this particular policy is potentially an O(n) operation, where n = # of inodes. I feel that it is OK to leave it in the fsimage as garbage for now. In the future, we can let the fsimage loading process handle this garbage, as it is O(n).

          HDFS-12405 was created to track permanently deleting the policy from the system at NameNode restart time. I will start working on it after beta1.

          Regarding the policy ID design, are there general rules for customized EC policy design? My question is: what ID value range can be chosen for a customized policy? Currently the system EC policies use values up to 5. If a customer/vendor provides a new EC policy with ID=6, and the next version of Hadoop adds a new EC policy, how do we handle the conflict (i.e., ID=6 has already been used) in the fsimage and INode? Or a customer uses policies from two vendors that accidentally use the same IDs. SammiChen could you add some test cases like this as future work?

          Here are the general rules for customized EC policies:
          1. When a user adds a customized EC policy, the user specifies the codec name, number of data units, number of parity units, and cell size. The policy ID and policy name are automatically generated by the system. Customized EC policy IDs start from 64 and are atomically incremented, so two policies in the same system will generally never have the same policy ID.
          2. Built-in system policy IDs range from 1 to 63, so system policies and customized policies have disjoint ID ranges.
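          The allocation scheme above can be sketched in Java as follows; the class, field, and method names are illustrative assumptions, not the actual ErasureCodingPolicyManager code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the ID allocation rules described above; the names
// and the 1..63 / 64+ ranges follow the comment, not the actual HDFS source.
class PolicyIdAllocatorSketch {
    // Built-in (system) policy IDs occupy 1..63.
    static final int MAX_SYSTEM_POLICY_ID = 63;
    // User-defined policy IDs start at 64 and are atomically incremented,
    // so concurrent additions never hand out the same ID twice.
    private final AtomicInteger nextUserPolicyId = new AtomicInteger(64);

    int allocateUserPolicyId() {
        return nextUserPolicyId.getAndIncrement();
    }

    static boolean isSystemPolicyId(int id) {
        return id >= 1 && id <= MAX_SYSTEM_POLICY_ID;
    }
}
```

          Because the two ID ranges are disjoint, a policy ID alone tells you whether a policy is built-in or user-defined.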

          Question to Kai Zheng:
          I thought "dfs.namenode.ec.policies.enabled" should have been removed when adding the API to enable/disable policy.
          Could this happen before BETA 1? It seems to be a breaking change. If not, do we have a plan to preserve both this key and the capability of adding/removing policies?

          I'd like to have input from Andrew Wang. I'm fine with the thought.

          Again, could we make the change: ErasureCodingPolicyManagerSection => ErasureCodingSection. Also check related names like loadErasureCodingPolicyManagerSection, saveErasureCodingPolicyManagerSection.

          There is an existing "CacheManagerSection" that saves cache directives for CacheManager, and a "SecretManagerSection" that saves secrets for SecretManager. So it's better to follow that style and use "ErasureCodingPolicyManagerSection" to save the EC policies for ErasureCodingPolicyManager.

          All other comments will be taken care of in the next patch.

          drankye Kai Zheng added a comment -

          Checking which files/directories use this particular policy is potentially an O(n) operation, where n = # of inodes. I feel that it is OK to leave it in the fsimage as garbage for now. In the future, we can let the fsimage loading process handle this garbage, as it is O(n).

          Discussed this with Sammi offline before; we can do it very lightly, like below:

          # all policies used by files/directories
          usedPoliciesSet = ();
          
          # while loading inodes from the fsimage, add the following two lines
          foreach (in : inodes) {
            policyId = getPolicyIdFromInode(in)  # a bitwise op, very minor
            usedPoliciesSet.add(policyId)
          }
          
          # when all inodes are loaded, add the following post step
          ErasureCodingPolicyManager.getInstance().updateWithUsedPolicies(usedPoliciesSet)
          
          # in ErasureCodingPolicyManager.updateWithUsedPolicies, it's a simple step to
          # clean up removed policies using the used-policies set.
          
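          A minimal Java sketch of that cleanup step, assuming policies are tracked in a simple map keyed by policy ID; the class name, method names, and state strings are illustrative, not the real ErasureCodingPolicyManager API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical post-load cleanup: drop REMOVED policies that no inode
// references any more, keeping everything else untouched.
class PolicyCleanupSketch {
    // policy ID -> state; "REMOVED" marks policies awaiting garbage collection.
    private final Map<Byte, String> policiesById = new HashMap<>();

    void putPolicy(byte id, String state) {
        policiesById.put(id, state);
    }

    boolean contains(byte id) {
        return policiesById.containsKey(id);
    }

    // Called once, after all inodes have been loaded from the fsimage.
    void updateWithUsedPolicies(Set<Byte> usedPolicyIds) {
        policiesById.entrySet().removeIf(e ->
            "REMOVED".equals(e.getValue()) && !usedPolicyIds.contains(e.getKey()));
    }
}
```

          A removed-but-still-used policy survives the cleanup, which matches the intent of leaving it in the fsimage until nothing references it.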

          Here or elsewhere, please ensure no policy can be DISABLED/REMOVED if it's used by files, with necessary tests.

          Let me correct myself. We should allow disabling/removing policies regardless of whether they're used or not. It would be too much overhead to track policy usage while the NN is running with lots of files being operated on. We can just do a post cleanup as illustrated above.

          I'm fine with leaving the policy cleanup as future work, but if it sounds good, maybe we can get it done before 3.0 GA. That should be OK since it doesn't involve an API change.

          Could this happen before BETA 1? It seems to be a breaking change. If not, do we have a plan to preserve both this key and the capability of adding/removing policies?

          I agree, we should get it done this time. Actually, IIRC, this was already done, but Sammi may need to double check and clean things up if needed.

          eddyxu Lei (Eddy) Xu added a comment - edited

          Hi, SammiChen

          Thanks for the newly updated patch. It is much more concise.

          Some comments:

          message ErasureCodingPolicyManagerSection {
          // number of erasure coding policies, include built-in and user defined
          required uint32 numPolicies = 1;
          // repeated ErasureCodingPolicyProto policies;
          }
          

          Could you consider using:

          message ErasureCodingPolicyManagerSection {
             repeated ErasureCodingPolicyProto policies = 1;
          }
          

          The total number of policies should be a very limited number (i.e., 64?). Putting them into one single protobuf message should be sufficient.

          // dd new erasure coding policy
          ECSchema newSchema = new ECSchema("rs", 5, 3);
          

          The comment above is not clear to me. Could you clarify a little bit?

          MiniDFSCluster cluster = null;
          try {
          ...
          } finally {
          if (cluster != null) {
            cluster.shutdown();
          }
          }
          

          MiniDFSCluster can be used in try-resource now.
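          To illustrate the shape of the suggested change, here is a minimal sketch using a stand-in AutoCloseable class; MiniDFSCluster itself implements AutoCloseable, so the real test would follow the same pattern:

```java
// Stand-in resource to demonstrate the try-with-resources shape; the real
// MiniDFSCluster implements AutoCloseable and would be used the same way.
class FakeCluster implements AutoCloseable {
    static boolean shutDown = false;

    void waitActive() {
        // the real cluster blocks here until NameNode/DataNodes are up
    }

    @Override
    public void close() {
        shutDown = true; // replaces the explicit finally { cluster.shutdown(); }
    }
}

class TryWithResourcesDemo {
    static void run() {
        try (FakeCluster cluster = new FakeCluster()) {
            cluster.waitActive();
            // ... test body ...
        } // close() runs here even if the body throws
    }
}
```

          The null check and the finally block both disappear, and shutdown is guaranteed on every exit path.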

          In test testChangeErasureCodingPolicyState()

           // 3. remove an erasure coding policy
          +    try {
          +      fs.removeErasureCodingPolicy(ecPolicy.getName());
          +      targetPolicy.setState(ErasureCodingPolicyState.REMOVED);
          +      // Save namespace and restart NameNode
          +      fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
          +      fs.saveNamespace();
          +      fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
          +
          +      cluster.restartNameNodes();
          +      cluster.waitActive();
          +      ecPolicy = ErasureCodingPolicyManager.getInstance().getByID(
          +          targetPolicy.getId());
          +      assertEquals("The erasure coding policy is not found",
          +          targetPolicy, ecPolicy);
          +      assertEquals("The erasure coding policy should be of removed state",
          +          ErasureCodingPolicyState.REMOVED, ecPolicy.getState());
          +    } catch (RemoteException e) {
          +      assertTrue("Built-in policy cannot be removed",
          +          ecPolicy.isSystemPolicy());
          +      assertExceptionContains("System erasure coding policy", e);
          +    }
          

          Won't the assertEquals() calls at the end of this section be skipped if the RemoteException is thrown, and vice versa? Do these tests work as expected?
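          One way to make such a test unambiguous is to separate the two outcomes explicitly; below is a self-contained sketch with a stand-in method and exception type (the real test would call fs.removeErasureCodingPolicy, catch RemoteException, and use JUnit's fail() after the call so a missing exception fails the test):

```java
// Stand-in for the operation under test: removing a built-in policy is
// rejected with an exception, removing a user-defined one succeeds.
class RemoveDemo {
    static void removePolicy(boolean isSystemPolicy) {
        if (isSystemPolicy) {
            throw new IllegalArgumentException(
                "System erasure coding policy cannot be removed");
        }
    }

    // Each branch reports exactly one outcome, so the caller can assert
    // rejection for system policies and success for user-defined ones.
    static boolean removalRejected(boolean isSystemPolicy) {
        try {
            removePolicy(isSystemPolicy);
            return false; // no exception: removal succeeded
        } catch (IllegalArgumentException e) {
            return true;  // exception: removal was rejected
        }
    }
}
```

          With this structure, neither branch of the test can silently pass when the other outcome occurs.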

          +    /*
          +     * TODO check if there is any file or directory using this policy.
          +     * If no, remove this erasure coding policy from system permanently.
          +     */
          

          Checking which files/directories use this particular policy is potentially an O(n) operation, where n = # of inodes. I feel that it is OK to leave it in the fsimage as garbage for now. In the future, we can let the fsimage loading process handle this garbage, as it is O(n).

          Regarding the policy ID design, are there general rules for customized EC policy design? My question is: what ID value range can be chosen for a customized policy? Currently the system EC policies use values up to 5. If a customer/vendor provides a new EC policy with ID=6, and the next version of Hadoop adds a new EC policy, how do we handle the conflict (i.e., ID=6 has already been used) in the fsimage and INode? Or a customer uses policies from two vendors that accidentally use the same IDs. SammiChen could you add some test cases like this as future work?

          Question to Kai Zheng:

          I thought "dfs.namenode.ec.policies.enabled" should have been removed when adding the API to enable/disable policy.

          Could this happen before BETA 1? It seems to be a breaking change. If not, do we have a plan to preserve both this key and the capability of adding/removing policies?

          drankye Kai Zheng added a comment -

          Thanks Sammi for the update. The patch is in much better shape now.

          1. Do you have an issue to track the TODO work? Do you plan to do it shortly, for BETA1?

          +    /*
          +     * TODO check if there is any file or directory using this policy.
          +     * If no, remove this erasure coding policy from system permanently.
          +     * */
          

          2. Hmm, I thought "dfs.namenode.ec.policies.enabled" should have been removed when adding the API to enable/disable a policy. Please check out the relevant discussions and issues to confirm this, if it hasn't happened already.

          HDFS configuration file means "hdfs-site.xml". For example, user can define enabled policies through property "dfs.namenode.ec.policies.enabled"

          Copying the relevant code again:

          +    for (ErasureCodingPolicy p : getPolicies()) {
          +      if (p.getName().equals(policyName) ||
          +          (p.getSchema().equals(policy.getSchema()) &&
          +          p.getCellSize() == policy.getCellSize())) {
          +        // If the same policy loaded from fsImage override policy loaded based
          +        // on HDFS configuration file
          +        LOG.info("The erasure coding policy name " + policy + " loaded from " +
          +            "fsImage overrides the one loaded according to HDFS hdfs-site.xml");
          +      }
          +    }
          

          3. Again, could we make the change: ErasureCodingPolicyManagerSection => ErasureCodingSection. Also check related names like loadErasureCodingPolicyManagerSection, saveErasureCodingPolicyManagerSection.

          4. The assert message should be "The erasure coding policy saved into and loaded from fsimage was bad" to make more sense.

          +      assertEquals("The erasure coding policy is not found",
          +          targetPolicy, ecPolicy);
          

          5. Here or elsewhere, please ensure no policy can be DISABLED/REMOVED if it's used by files, with necessary tests.

          Thanks!

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 16s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
                trunk Compile Tests
          +1 mvninstall 13m 48s trunk passed
          +1 compile 0m 49s trunk passed
          +1 checkstyle 0m 36s trunk passed
          +1 mvnsite 0m 54s trunk passed
          +1 findbugs 1m 48s trunk passed
          +1 javadoc 0m 43s trunk passed
                Patch Compile Tests
          +1 mvninstall 1m 2s the patch passed
          +1 compile 0m 58s the patch passed
          +1 cc 0m 58s the patch passed
          +1 javac 0m 58s the patch passed
          +1 checkstyle 0m 36s the patch passed
          +1 mvnsite 1m 7s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 5s the patch passed
          +1 javadoc 0m 43s the patch passed
                Other Tests
          -1 unit 111m 53s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 23s The patch does not generate ASF License warnings.
          139m 21s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.blockmanagement.TestPendingReconstruction
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
            hadoop.hdfs.TestClientProtocolForPipelineRecovery
            hadoop.hdfs.server.namenode.TestReencryption
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
            hadoop.hdfs.TestSafeMode
            hadoop.hdfs.server.namenode.TestReencryptionWithKMS
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
            hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
            hadoop.hdfs.TestLeaseRecoveryStriped
            hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.TestReconstructStripedFile
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
          Timed out junit tests org.apache.hadoop.hdfs.TestWriteReadStripedFile
            org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:71bbb86
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12885744/HDFS-7859.015.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux c1017ca5634e 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / b6e7d13
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/21032/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/21032/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/21032/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Show
          Sammi SammiChen added a comment -

          Refined the patch after offline discussion with Kai.

          drankye Kai Zheng added a comment -

          HDFS-12395 handles edit log related changes as part of this work.

          drankye Kai Zheng added a comment -

          Unfortunately I still found lots of not-so-relevant changes here. The changes look good to have, but they should go into separate task(s) under HDFS-7337 instead of being mixed in here; this JIRA should focus on persisting EC policies in the NN and nothing more, so that folks can get a quick glance at the changes we introduced to the NN/fsimage. Please proceed one more time, thanks.

          drankye Kai Zheng added a comment -

          Thanks SammiChen for the update! I'm happy to see the patch is very concise (24 kB) now and will give it another round of review later today.

          Sammi SammiChen added a comment - edited

          Thanks Kai Zheng for the review! Uploaded a new patch:
          1. Fixed style issues
          2. Fixed the related failed unit tests
          3. Removed testChangeErasureCodingCodec

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 24s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
                trunk Compile Tests
          0 mvndep 0m 17s Maven dependency ordering for branch
          +1 mvninstall 14m 3s trunk passed
          +1 compile 15m 38s trunk passed
          +1 checkstyle 1m 59s trunk passed
          +1 mvnsite 2m 5s trunk passed
          +1 findbugs 3m 18s trunk passed
          +1 javadoc 1m 36s trunk passed
                Patch Compile Tests
          0 mvndep 0m 16s Maven dependency ordering for patch
          +1 mvninstall 1m 29s the patch passed
          +1 compile 11m 59s the patch passed
          +1 cc 11m 59s the patch passed
          +1 javac 11m 59s the patch passed
          -0 checkstyle 2m 2s root: The patch generated 2 new + 68 unchanged - 0 fixed = 70 total (was 68)
          +1 mvnsite 2m 0s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 3m 37s the patch passed
          +1 javadoc 1m 40s the patch passed
                Other Tests
          +1 unit 8m 55s hadoop-common in the patch passed.
          -1 unit 117m 18s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 32s The patch does not generate ASF License warnings.
          190m 25s



          Reason Tests
          Failed junit tests hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
            hadoop.hdfs.server.namenode.TestFSImage
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
            hadoop.hdfs.TestDistributedFileSystem
            hadoop.hdfs.TestLeaseRecoveryStriped
            hadoop.hdfs.TestReconstructStripedFile
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100
            hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
            hadoop.hdfs.TestFileAppendRestart
            hadoop.hdfs.TestReadStripedFileWithMissingBlocks
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
            hadoop.hdfs.TestEncryptedTransfer
            hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
            hadoop.hdfs.TestFileCreationDelete
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
          Timed out junit tests org.apache.hadoop.hdfs.TestWriteReadStripedFile
            org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:71bbb86
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12885364/HDFS-7859.013.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 41286d822d1f 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 5dba545
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/21002/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/21002/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/21002/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/21002/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          drankye Kai Zheng added a comment -

          I would prefer to keep the unit test. More tests, better quality.

          I want most of the changes attached here to be relevant to this issue; if you feel strongly about the other changes, please do them separately. Thanks.

          Sammi SammiChen added a comment -

          Thanks Kai Zheng for quick review!

          1. Did you notice some failure or error for the current behavior of ElasticByteBufferPool->getBuffer(boolean direct, int length)? It looks reasonable to me that it can return a ByteBuffer of larger capacity than required; it can be the caller's responsibility to use it well. Anyway, it's not relevant to this issue, so would you please handle it separately? Thanks.

          HDFS-12392 is opened for this. I double-checked all the failed unit tests; they are not related to this code change.

          3. What did you mean by policies loaded from the HDFS configuration file? There isn't any such file to configure and load EC policies. A user may provide one on the client, but it's forgotten after use.

          The HDFS configuration file means "hdfs-site.xml". For example, a user can define enabled policies through the property "dfs.namenode.ec.policies.enabled", then later disable one of those policies dynamically through the API and save the namespace to fsImage. The next time the NameNode starts, it will get an "enabled" state for that policy from "dfs.namenode.ec.policies.enabled" and a "disabled" state for the same policy from the fsImage. In this case, the policy state from the fsImage overrides the policy state from "dfs.namenode.ec.policies.enabled".
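          The precedence rule described above can be sketched as follows. This is a hypothetical simplification, not the actual HDFS code: the class, method, and policy names here are illustrative only, and the real NameNode reconciles full ErasureCodingPolicy objects rather than a plain state map.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the startup precedence rule: when the same erasure coding policy
// appears both in hdfs-site.xml ("dfs.namenode.ec.policies.enabled") and in
// the persisted fsImage, the state recorded in the fsImage wins after restart.
public class EcPolicyStatePrecedence {

  public enum State { ENABLED, DISABLED }

  /** Effective states after startup: start from config, then let fsImage override. */
  public static Map<String, State> resolve(Map<String, State> fromConfig,
                                           Map<String, State> fromFsImage) {
    Map<String, State> effective = new HashMap<>(fromConfig);
    // Any policy persisted in the fsImage overrides the configuration value.
    effective.putAll(fromFsImage);
    return effective;
  }

  public static void main(String[] args) {
    Map<String, State> config = new HashMap<>();
    config.put("RS-6-3-1024k", State.ENABLED);   // listed as enabled in hdfs-site.xml

    Map<String, State> fsImage = new HashMap<>();
    fsImage.put("RS-6-3-1024k", State.DISABLED); // disabled via API, namespace saved

    // The fsImage state wins: the policy comes back DISABLED after restart.
    System.out.println(resolve(config, fsImage).get("RS-6-3-1024k"));
  }
}
```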

          6. Could we have a separate issue to refactor the existing codes, renaming addErasureCodePolicy to addErasureCodingPolicy and so on ...

          HDFS-12395 is opened to address the refactor and add edit log for all erasure coding policy operations.

          2. Similarly, please also do the large portion of changes in CodecRegistry/CodecUtil separately. It's really not very relevant to this.

          8. You have tests like testChangeErasureCodingCodec, AddNewErasureCodingCodec, etc., but I don't think we need them, as codec/coder/algorithms are part of the runtime binary packages and are meant to be loaded during startup. Let's avoid the complexity here and focus on the EC policy persisting stuff.

          The changes in CodecRegistry/CodecUtil are for the unit test testChangeErasureCodingCodec. That test demonstrates the robustness of HDFS in handling HDFS-7337 "Configurable and pluggable Erasure Codec and schema". I would prefer to keep the unit test. More tests, better quality.

          Other suggestions will be well addressed in the next patch.

          drankye Kai Zheng added a comment -

          7. "HDFS-8140" should be HDFS-11467 instead, please update the comments.

          +    @Override
          +    protected void toXml(ContentHandler contentHandler) throws SAXException {
          +      // TODO: HDFS-8140 Support for offline EditsVisitor over an OEV XML file
          +    }
          +
          +    @Override
          +    void fromXml(Stanza st) throws InvalidXmlException {
          +      // TODO: HDFS-8140 Support for offline EditsVisitor over an OEV XML file
          +    }
          

          8. You have tests like testChangeErasureCodingCodec, AddNewErasureCodingCodec, etc., but I don't think we need them, as codec/coder/algorithms are part of the runtime binary packages and are meant to be loaded during startup. Let's avoid the complexity here and focus on the EC policy persisting stuff.

          drankye Kai Zheng added a comment -

          6. Could we have a separate issue to refactor the existing codes, renaming addErasureCodePolicy to addErasureCodingPolicy and so on ...

          --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java
          +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirErasureCodingOp.java
          @@ -214,7 +214,10 @@ static FileStatus unsetErasureCodingPolicy(final FSNamesystem fsn,
             static ErasureCodingPolicy addErasureCodePolicy(final FSNamesystem fsn,
                 ErasureCodingPolicy policy) throws IllegalECPolicyException {
          ...
          ...
          
          drankye Kai Zheng added a comment -

          Continued.

          1. Better to fix the existing text by the way. "does not exists" => "doesn't exist"

          -      throw new IllegalArgumentException("The policy name " +
          +      throw new HadoopIllegalArgumentException("The policy name " +
                     name + " does not exists");
          

          2. PROHIBITED => DISABLED

          +    if (!CodecUtil.hasCodec(policy.getCodecName()) ||
          +        policy.getCellSize() > maxCellSize) {
          +      // If policy is not supported in current system, set the policy state to
          +      // PROHIBITED;
          +      policy.setState(ErasureCodingPolicyState.DISABLED);
          +    }
          

          3. What did you mean by policies loaded from the HDFS configuration file? There isn't any such file to configure and load EC policies. A user may provide one on the client, but it's forgotten after use.

          +    String policyName = policy.getName();
          +    for (ErasureCodingPolicy p : getPolicies()) {
          +      if (p.getName().equals(policyName) ||
          +          (p.getSchema().equals(policy.getSchema()) &&
          +          p.getCellSize() == policy.getCellSize())) {
          +        // If the same policy loaded from fsImage override policy loaded based
          +        // on HDFS configuration file
          +        LOG.info("The erasure coding policy name " + policy + " loaded from " +
          +            "fsImage override the one loaded according to HDFS " +
          +            "configuration file");
          +      }
          

          4. How about ErasureCodingPolicyManagerSection => ErasureCodingSection?

          5. Could we get rid of allPolicies or avoid repeatedly creating the array in the for loop from the map?

          +  public synchronized void reloadPolicy(ErasureCodingPolicy policy) {
          ...
          +    allPolicies = policiesByName.values().toArray(new ErasureCodingPolicy[0]);
          ...
          +  }
          
          +  public synchronized void loadState(PersistState state) {
          ...
          +    for (ErasureCodingPolicy p : state.getPolicies()) {
          +      reloadPolicy(p);
          +    }
          +  }
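          Point 5 can be illustrated with a small sketch: update only the by-name map per policy, and materialize the cached array a single time after the whole persisted state is loaded. This is a hypothetical simplification of ErasureCodingPolicyManager, not the real HDFS class; the names and the String-based policy representation are illustrative only.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the suggested refactor: avoid rebuilding the allPolicies array
// inside reloadPolicy() for every entry; rebuild it once after loadState().
public class PolicyManagerSketch {
  private final Map<String, String> policiesByName = new LinkedHashMap<>();
  private String[] allPolicies = new String[0];

  /** Update the map only; callers decide when to refresh the cached array. */
  public synchronized void reloadPolicy(String policyName) {
    policiesByName.put(policyName, policyName);
  }

  /** Load all persisted policies, rebuilding the cached array a single time. */
  public synchronized void loadState(List<String> persisted) {
    for (String p : persisted) {
      reloadPolicy(p);
    }
    // One array allocation for the whole load, instead of one per policy.
    allPolicies = policiesByName.keySet().toArray(new String[0]);
  }

  public synchronized int policyCount() {
    return allPolicies.length;
  }

  public static void main(String[] args) {
    PolicyManagerSketch mgr = new PolicyManagerSketch();
    mgr.loadState(Arrays.asList("RS-6-3-1024k", "XOR-2-1-1024k"));
    System.out.println("Policies loaded: " + mgr.policyCount());
  }
}
```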
          
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 17s Docker mode activated.
                Prechecks
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
                trunk Compile Tests
          0 mvndep 1m 0s Maven dependency ordering for branch
          +1 mvninstall 16m 4s trunk passed
          +1 compile 17m 13s trunk passed
          +1 checkstyle 2m 27s trunk passed
          +1 mvnsite 2m 44s trunk passed
          +1 findbugs 4m 42s trunk passed
          +1 javadoc 2m 4s trunk passed
                Patch Compile Tests
          0 mvndep 0m 15s Maven dependency ordering for patch
          +1 mvninstall 2m 2s the patch passed
          +1 compile 11m 2s the patch passed
          +1 cc 11m 2s the patch passed
          +1 javac 11m 2s the patch passed
          -0 checkstyle 2m 11s root: The patch generated 30 new + 766 unchanged - 1 fixed = 796 total (was 767)
          +1 mvnsite 3m 2s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 5m 54s the patch passed
          +1 javadoc 2m 13s the patch passed
                Other Tests
          +1 unit 8m 39s hadoop-common in the patch passed.
          +1 unit 1m 26s hadoop-hdfs-client in the patch passed.
          -1 unit 99m 6s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 47s The patch does not generate ASF License warnings.
          184m 42s



          Reason Tests
          Failed junit tests hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
            hadoop.hdfs.server.namenode.TestReencryption
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
            hadoop.hdfs.tools.TestDebugAdmin
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
            hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
            hadoop.hdfs.TestLeaseRecoveryStriped
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.TestReadStripedFileWithDecoding
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010
            hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
            hadoop.hdfs.TestDFSInotifyEventInputStream
          Timed out junit tests org.apache.hadoop.hdfs.TestWriteReadStripedFile



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:71bbb86
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12885245/HDFS-7859.012.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux a04b3e167349 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 633c1ea
          Default Java 1.8.0_144
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/20991/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/20991/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/20991/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/20991/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          drankye Kai Zheng added a comment -

          Thanks SammiChen for the update!

          Some comments so far:
          1. Did you notice some failure or error for the current behavior of ElasticByteBufferPool->getBuffer(boolean direct, int length)? It looks reasonable to me that it can return a ByteBuffer of larger capacity than required; it can be the caller's responsibility to use it well. Anyway, it's not relevant to this issue, so would you please handle it separately? Thanks.

          --- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
          +++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
          @@ -96,7 +96,8 @@ public synchronized ByteBuffer getBuffer(boolean direct, int length) {
                                 ByteBuffer.allocate(length);
               }
               tree.remove(entry.getKey());
          -    return entry.getValue();
          +    // The reused ByteBuffer may have more capacity than required(length)
          +    return (ByteBuffer) entry.getValue().limit(length);
             }
          

          2. Similarly, please also do the large portion of changes in CodecRegistry/CodecUtil separately. It's really not very relevant to this.

          3. Again, in DFSStripedOutputStream please do it elsewhere (better with some tests); I wish this patch could focus on the NN-side changes when possible.

               private void clear() {
                 for (int i = 0; i< numAllBlocks; i++) {
                   buffers[i].clear();
          +        buffers[i] = (ByteBuffer) buffers[i].limit(cellSize);
                 }
               }
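          The ByteBuffer pitfall behind points 1 and 3 can be shown in isolation with plain java.nio, independent of the HDFS classes under discussion: a pooled buffer reused for a smaller request keeps its original larger capacity, so unless the pool (or the caller) resets the limit, code that fills "to the limit" handles more bytes than the requested length.

```java
import java.nio.ByteBuffer;

// Demonstration of why the quoted patches call limit(length) on reused buffers:
// capacity survives reuse, and only the limit bounds the usable region.
public class BufferLimitDemo {
  public static void main(String[] args) {
    ByteBuffer pooled = ByteBuffer.allocate(128); // originally sized for a big request

    // Reuse it for a 64-byte request without adjusting the limit:
    pooled.clear(); // position = 0, limit = capacity = 128
    System.out.println("usable without limit fix: " + pooled.remaining()); // 128

    // The fix from the quoted diff: bound the usable region to the request.
    pooled.limit(64);
    System.out.println("usable after limit(64):   " + pooled.remaining()); // 64
  }
}
```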
          
          Sammi SammiChen added a comment -

          Rebase patch against trunk code

          drankye Kai Zheng added a comment -

          We want to stabilize the API before beta1, and that includes API names. I'm not comfortable with method names like getEcPoliciesOnDir, because in most cases the existing code prefers EC or ErasureCoding over Ec.

          drankye Kai Zheng added a comment -

          Are only the number of policies persisted? It looks off to me. It will depend on the order of the system pre-defined policies. So when the pluggable EC policy is merged, would that impact the correctness of loading/saving the fsimage? It might also make upgrade/downgrade difficult.

          I agree with Eddy on this and have the same concern. We need to persist all the system policies and user defined policies, including their info (name, id, cell size and EC schema) along with their status (enabled/disabled, removed). We need to ensure all the persisted info can be used to recover/export/import/convert data and to do the upgrade/downgrade work.
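
          A hedged sketch of what a fuller fsimage section might look like under that requirement (the message and field names here are illustrative, not the committed protobuf definition):

          message ErasureCodingSection {
            // Every built-in and user defined policy is persisted with its
            // full info (name, id, cell size, schema) and state, so loading
            // does not depend on the in-memory ordering of built-in policies.
            repeated ErasureCodingPolicyProto policies = 1;
          }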

          Lei (Eddy) Xu mentioned upgrade and downgrade; it's a good question. Not only user defined EC policies but also built-in EC policies will face this issue. The major problem is, if a codec is no longer supported after an upgrade or downgrade, how to handle these types of EC policies in the new cluster, and also how to handle the files/directories which used these no-longer-supported policies?

          It should be a rare case that an EC codec/coder/algorithm is no longer supported and removed from the code base. If users add a pluggable codec but then remove it from the binary, it's their call. So let's not worry about this at this time.

          Let's focus on the basic use cases and requirements, and move on without getting too overloaded.

          Sammi SammiChen added a comment -

          Lei (Eddy) Xu mentioned upgrade and downgrade; it's a good question. Not only user defined EC policies but also built-in EC policies will face this issue. The major problem is, if a codec is no longer supported after an upgrade or downgrade, how to handle these types of EC policies in the new cluster, and also how to handle the files/directories which used these no-longer-supported policies? I tend to keep these EC policies in ErasureCodingPolicyManager while marking them as obsolete, and also load these files/directories into the namespace so that users can go through the tree structure (reading such files cannot be supported). I will do some experiments to see if it works.
          Any other suggestions are welcome. Andrew Wang, Kai Zheng, Lei (Eddy) Xu

          Sammi SammiChen added a comment -

          Hi Lei (Eddy) Xu, thanks for reviewing the patch! Persisting erasure coding policies in the NameNode is a critical part of the "provide support for user to customize EC policy" feature. Without this JIRA, we cannot say the customized EC policy feature is complete, so it's better to get this into beta1. Regarding the detailed comments, adding state to the EC policy is covered by HDFS-12258. I will upload a new patch shortly after HDFS-12258 is committed.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          -1 patch 0m 8s HDFS-7859 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



          Subsystem Report/Notes
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12868929/HDFS-7859.011.patch
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/20914/console
          Powered by Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          eddyxu Lei (Eddy) Xu added a comment -

          Hi, SammiChen

          Thanks for the latest patch. I have a few questions about it.

          private byte state;   // 0x01 - enabled, 0x02 - deleted
          

          Can we use an enum here to be more explicit?
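
          A hedged sketch of what such an enum might look like, based on the byte flags quoted above (the names and numeric values here are illustrative; the actual shape was settled in HDFS-12258):

          ```java
          // Hypothetical replacement for the raw byte flags; not the committed API.
          enum PolicyState {
              ENABLED((byte) 0x01),
              DELETED((byte) 0x02);

              private final byte value;

              PolicyState(byte value) {
                  this.value = value;
              }

              // Byte value persisted in the fsimage/editlog.
              byte getValue() {
                  return value;
              }

              // Map a persisted byte back to a state on load.
              static PolicyState fromValue(byte v) {
                  for (PolicyState s : values()) {
                      if (s.value == v) {
                          return s;
                      }
                  }
                  throw new IllegalArgumentException("Unknown state value: " + v);
              }
          }
          ```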

          message ErasureCodingPolicyManagerSection {
              // number of enabled EC policies
              required uint32 numEnabledPolicies = 1;
              // number of not enabled policies, including system and user defined policy
              required uint32 numOtherPolicies = 2;
              // repeated ErasureCodingPolicyProto enabledPolicies;
              // repeated ErasureCodingPolicyProto otherPolicies;
          }
          

          Are only the number of policies persisted? It looks off to me. It will depend on the order of the system pre-defined policies. So when the pluggable EC policy is merged, would that impact the correctness of loading/saving the fsimage? It might also make upgrade/downgrade difficult.

          public HashSet<String> getEcPoliciesOnDir() {
            return ecPoliciesOnDir;
          }

          public HashSet<Byte> getEcPoliciesOnFile() {
            return ecPoliciesOnFile;
          }

          public void setCountEcPolicies(Boolean enabled) {
            countEcPolicies = enabled;
          }
          

          What are the consumers of these functions? Also:

          public static final class PersistState {
            public final ErasureCodingPolicyManagerSection section;
            public final List<ErasureCodingPolicy> enabledPolicies;
            public final List<ErasureCodingPolicy> otherPolicies;
          

          Can these fields be private final?

          // FSImage.java
          import org.apache.hadoop.hdfs.protocol.IllegalECPolicyException;
          

          Seems unnecessary? Also in FSImageFormat.java.

          private INode loadINode(INodeSection.INode n) throws IOException 
          

          Is this IOE necessary?

          // TestFSEditLogLoader.java
                cluster.shutdown();
                cluster = null;
              } finally {
                if (cluster != null) {
                  cluster.shutdown();
                }
              }
          

          In general, you can use try-with-resources for MiniDFSCluster.
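
          The try-with-resources pattern suggested here can be sketched with a stand-in resource (MiniDFSCluster itself needs a full test cluster, so a minimal hypothetical AutoCloseable is used for illustration):

          ```java
          class TryWithResourcesDemo {
              // Stand-in for MiniDFSCluster: any AutoCloseable is shut down
              // automatically when the try block exits, even on exception,
              // so no explicit finally { cluster.shutdown(); } is needed.
              static class FakeCluster implements AutoCloseable {
                  boolean running = true;

                  @Override
                  public void close() {
                      running = false; // analogous to cluster.shutdown()
                  }
              }

              // Returns true if the resource was closed after the try block.
              static boolean demo() {
                  FakeCluster observed;
                  try (FakeCluster cluster = new FakeCluster()) {
                      observed = cluster;
                      // ... test body that uses the cluster ...
                  } // close() runs here automatically
                  return !observed.running;
              }

              public static void main(String[] args) {
                  System.out.println(demo()); // prints true
              }
          }
          ```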

          A more general question to SammiChen: do you think it is possible to get this into beta1?

          Thanks!

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 1s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          0 mvndep 0m 29s Maven dependency ordering for branch
          +1 mvninstall 15m 7s trunk passed
          +1 compile 1m 38s trunk passed
          +1 checkstyle 0m 54s trunk passed
          +1 mvnsite 1m 37s trunk passed
          +1 mvneclipse 0m 29s trunk passed
          +1 findbugs 3m 16s trunk passed
          +1 javadoc 1m 12s trunk passed
          0 mvndep 0m 8s Maven dependency ordering for patch
          +1 mvninstall 1m 43s the patch passed
          +1 compile 1m 41s the patch passed
          +1 cc 1m 41s the patch passed
          +1 javac 1m 41s the patch passed
          -0 checkstyle 0m 58s hadoop-hdfs-project: The patch generated 28 new + 1030 unchanged - 3 fixed = 1058 total (was 1033)
          +1 mvnsite 1m 39s the patch passed
          +1 mvneclipse 0m 26s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 3m 52s the patch passed
          +1 javadoc 1m 7s the patch passed
          +1 unit 1m 20s hadoop-hdfs-client in the patch passed.
          -1 unit 75m 34s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 21s The patch does not generate ASF License warnings.
          115m 33s



          Reason Tests
          Failed junit tests hadoop.hdfs.TestErasureCodingPolicies
            hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation
            hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
            hadoop.hdfs.TestBlockStoragePolicy
            hadoop.hdfs.TestDFSInotifyEventInputStream
            hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12868929/HDFS-7859.011.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 38c941908285 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 009b9f3
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19509/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19509/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19509/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19509/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Sammi SammiChen added a comment -

          Refine the patch, add more unit test

          drankye Kai Zheng added a comment -

          There is particular interest in supporting more codecs and coders, as in HDFS-11503, and we have received user queries about trying their own EC policies, such as using a different cell size. Accordingly, we want to solve this and take a further step in HDFS-7337 to support pluggable and configurable codecs. The design doc was updated with the latest discussions and related work, which I hope addresses the design-level questions already raised or yet to be raised here.

          Tsz Wo Nicholas Sze, per your request this should proceed with a design since it changes the fsimage, which is why I want to ping you. Would you let me know if you have any concerns, or confirm? I know you're very busy with Ozone, so thanks for your time!

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 19s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 1s The patch appears to include 4 new or modified test files.
          0 mvndep 0m 15s Maven dependency ordering for branch
          +1 mvninstall 13m 45s trunk passed
          +1 compile 18m 32s trunk passed
          +1 checkstyle 2m 10s trunk passed
          +1 mvnsite 3m 3s trunk passed
          +1 mvneclipse 1m 2s trunk passed
          -1 findbugs 1m 25s hadoop-common-project/hadoop-common in trunk has 19 extant Findbugs warnings.
          -1 findbugs 1m 32s hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 extant Findbugs warnings.
          -1 findbugs 1m 49s hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant Findbugs warnings.
          +1 javadoc 2m 5s trunk passed
          0 mvndep 0m 13s Maven dependency ordering for patch
          +1 mvninstall 2m 3s the patch passed
          +1 compile 14m 26s the patch passed
          +1 cc 14m 26s the patch passed
          +1 javac 14m 26s the patch passed
          -0 checkstyle 2m 12s root: The patch generated 26 new + 1534 unchanged - 4 fixed = 1560 total (was 1538)
          +1 mvnsite 2m 42s the patch passed
          +1 mvneclipse 1m 2s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 5m 2s the patch passed
          +1 javadoc 2m 2s the patch passed
          -1 unit 6m 59s hadoop-common in the patch failed.
          +1 unit 1m 18s hadoop-hdfs-client in the patch passed.
          -1 unit 99m 43s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 42s The patch does not generate ASF License warnings.
          185m 55s



          Reason Tests
          Failed junit tests hadoop.io.erasurecode.TestCodecRegistry
            hadoop.hdfs.TestErasureCodingPolicies
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
            hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
            hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
            hadoop.hdfs.server.datanode.TestDirectoryScanner
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
            hadoop.hdfs.TestDFSInotifyEventInputStream
          Timed out junit tests org.apache.hadoop.hdfs.TestLeaseRecovery2
            org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:14b5c93
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12867098/HDFS-7859.010.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 776d7adfdd98 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / c60164f
          Default Java 1.8.0_121
          findbugs v3.1.0-RC1
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19360/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19360/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/19360/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19360/artifact/patchprocess/diff-checkstyle-root.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19360/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19360/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19360/testReport/
          modules C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: .
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19360/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          drankye Kai Zheng added a comment -

          Thanks SammiChen for taking this and updating the patch. I submitted it; let's see the results. This should be a major one for the alpha3 release related to EC. I will review and try to help with it in the following days.

          Sammi SammiChen added a comment -

          The patch persists system EC policies and user-provided EC policies to the fsimage, and loads them from the fsimage.

          Sammi SammiChen added a comment -

          This JIRA will focus on persisting erasure coding policies into the fsImage, and loading erasure coding policies from the fsImage and editLog when the NameNode starts up. Adding and removing erasure coding policies will be covered in HDFS-11605 and HDFS-11606.

          Hide
          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 0s Docker mode activated.
          -1 patch 0m 9s HDFS-7859 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



          Subsystem Report/Notes
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12843464/HDFS-7859.009.patch
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19039/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          jojochuang Wei-Chiu Chuang added a comment -

          Filed HDFS-11467 for OIV/OEV improvement.

          jojochuang Wei-Chiu Chuang added a comment -

          Sure. I'll file OIV/OEV improvement jiras.

          andrew.wang Andrew Wang added a comment -

          I think we've tabled this JIRA for now, so not very high urgency. We can file an OIV follow-on though for completeness.

          jojochuang Wei-Chiu Chuang added a comment - - edited

          Hello Andrew Wang, Kai Zheng, Xinwei Qin, Zhe Zhang,
          I appreciate your work on this big patch, which I have quickly skimmed through.
          Now that a new fsimage section is added, we should also add OIV XML/ReverseXML support for it, and OEV XML output for edits. How about filing follow-up JIRAs for them? They are not as high a priority as this patch itself.

          andrew.wang Andrew Wang added a comment -

          Appreciate the comments Kai, inline:

          Making the policy parameter optional could be friendly, considering users may have no idea before the list of available policies is promoted to them.

          Agree, though I think most of the time these policies are going to be set up once by the admin.

          To support friendly usage, we could have a client-side configuration similar to "dfs.replication" that specifies the default EC policy. Or we could have a NN default when no EC policy is specified. I think this can be added compatibly later though.

          If EC policy info is persisted and stays along with the data, users might feel more confident and comfortable doing data validation and transformation during system upgrades.

          Right, that's essentially how it works now with the hardcoded policies. I think if/when we introduce pluggable policies, we should revive this JIRA so we aren't attaching an entire EC policy to every EC file that uses a pluggable policy, but until then it's not needed.

          I went ahead and filed HDFS-11416 for refactoring out the system default policy, thanks again for the discussion!

          drankye Kai Zheng added a comment -

          Thanks Andrew for working on this and bringing up the discussion.

          1. For the system default EC policy, I agree its importance has become weak now. A question is: when users set an EC policy on a folder, do they want to, or have to, specify a policy most of the time? Making the policy parameter optional could be friendly, considering users may have no idea before the list of available policies is promoted to them.

          2. EC policy and schema could be thought of as a natural part of file metadata. EC is another form of replication, and the replication factor is recorded per file. If EC policy info is persisted and stays along with the data, users might feel more confident and comfortable doing data validation and transformation during system upgrades. I may be echoing what others said in previous comments: configuration and code may change and evolve.

          andrew.wang Andrew Wang added a comment - - edited

          I thought about this JIRA some more, and had two questions I wanted to bring up for discussion:

          Do we need a system default EC policy?

          AFAICT, the system default policy dates from when we only supported a single policy for HDFS. Now, we've pretty clearly defined the API for EC policies, and for most uses, the EC policy is automatically inherited from a dir-level policy. The setErasureCodingPolicy API already requires an EC policy to be specified, so I think the default EC policy is basically vestigial and can be removed.

          Can we use configuration instead of persistence for the set of enabled policies?

          I'm wondering if there is actually any benefit to persisting the set of allowed policies. In the past, we've enabled and disabled features via configuration keys, and this is basically the same idea. There's no danger of data corruption from two NNs having different sets of enabled policies, so it's safe in that sense. IMO we should have a key like dfs.namenode.erasure.coding.policies.enabled and specify a subset of the hardcoded policies there.

          If the above sounds good, I can file a new JIRA for refactoring out the system default policies, and do the configuration key over on HDFS-11314.
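          A sketch of what such a key might look like in hdfs-site.xml (the key name is just the one proposed here, not a shipped setting, and the policy names are examples of the built-in ones):

```xml
<!-- hdfs-site.xml: hypothetical key as proposed above, not a finalized interface -->
<property>
  <name>dfs.namenode.erasure.coding.policies.enabled</name>
  <!-- A subset of the hardcoded policies that this cluster allows -->
  <value>RS-6-3-1024k,RS-3-2-1024k</value>
</property>
```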

          surendrasingh Surendra Singh Lilhore added a comment -

          Thanks Andrew Wang.

          are you interested in working on this? If not, I can pick it up.

          Sure, you can assign it to yourself.

          andrew.wang Andrew Wang added a comment -

          Thanks for the comments Zhe, inline,

          Other than "ID/name that we already use, or an ID/name we might want to hardcode later", what other validations do you have in mind?

          I think it's problematic if user-defined policies are in the same id-space/namespace as system-defined policies. Separating these spaces would eliminate the possibility of overlap and the need for validation.

          If we do decide to add pluggable EC policies in 3.0 GA, can we add an on-off config option for the entire pluggable logic and default to off?

          Yea, I think that works. IMO it's unlikely we get to pluggable policies at all in 3.0. The goal as I understand it was to support something like LRC as a pluggable policy, but that requires hooking into blockplacement and recovery. We haven't 100% finished those yet even for our built-in policies, and making these interfaces pluggable will be tough.

          So, I'd like to co-opt this JIRA to add APIs for defining allowed policies as well as the default policy for a cluster. We'll need new shell commands for doing this too.

          Xinwei Qin / Surendra Singh Lilhore are you interested in working on this? If not, I can pick it up.

          zhz Zhe Zhang added a comment -

          Thanks for the thoughts Andrew. I think HDFS-11314 is indeed necessary to filter out unsuitable policies. Other than "ID/name that we already use, or an ID/name we might want to hardcode later", what other validations do you have in mind?

          From our current production perspective, the built-in EC policies are sufficient. We prefer a lower complexity implementation at least in the initial 3.0 GA. If we do decide to add pluggable EC policies in 3.0 GA, can we add an on-off config option for the entire pluggable logic and default to off?

          andrew.wang Andrew Wang added a comment -

          I re-read through the history of this JIRA, and it seems like we've debated a couple times whether it's useful to persist this information, with the current status of the patch being to only persist user-added policies.

          I think this is not very useful as is, and potentially dangerous. We let the user specify any ECPolicy they want, without much field validation. This means the user could specify an ID/name that we already use, or an ID/name we might want to hardcode later. Even with validation, this makes upgrade difficult.

          Given that we haven't finished the pluggable EC policy work, we also don't know what fields might be required to fully specify an EC policy. This patch does let the user configure different parameters for Reed Solomon, but we already provide what we think are a good set of hardcoded policies to choose from.

          IMO where some persistence would be useful is for HDFS-11314. We'd like to restrict the set of EC policies that can be used on a cluster, since fault tolerance depends on the number of nodes and racks. This would mean limiting selection to the set of hardcoded policies, though, rather than adding new policies.
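          To make the node/rack constraint concrete, here is a small illustrative sketch (not code from any patch; the function names are made up): an RS(d, p) policy needs at least d + p DataNodes to place one block group with one unit per node, which is why the set of enabled policies should match the cluster size.

```python
def min_nodes(data_units, parity_units):
    """Minimum DataNodes needed to place one block group, one unit per node."""
    return data_units + parity_units

def storage_overhead(data_units, parity_units):
    """Raw bytes stored per logical byte under this schema."""
    return (data_units + parity_units) / data_units

# Schemas of some built-in Hadoop 3 policies (cell sizes omitted):
for name, (d, p) in {"RS-3-2": (3, 2), "RS-6-3": (6, 3), "RS-10-4": (10, 4)}.items():
    print(f"{name}: needs >= {min_nodes(d, p)} DataNodes, "
          f"{storage_overhead(d, p):.2f}x storage overhead")
```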

          Kai Zheng, Zhe Zhang, thoughts on this?

          andrew.wang Andrew Wang added a comment -

          Marking EC must-dos as blockers for alpha3.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 17m 53s Docker mode activated.
          0 shelldocs 0m 0s Shelldocs was not available.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 5 new or modified test files.
          0 mvndep 0m 7s Maven dependency ordering for branch
          +1 mvninstall 13m 26s trunk passed
          +1 compile 1m 41s trunk passed
          +1 checkstyle 0m 56s trunk passed
          +1 mvnsite 1m 34s trunk passed
          +1 mvneclipse 0m 31s trunk passed
          +1 findbugs 3m 34s trunk passed
          +1 javadoc 1m 11s trunk passed
          0 mvndep 0m 7s Maven dependency ordering for patch
          +1 mvninstall 1m 20s the patch passed
          +1 compile 1m 19s the patch passed
          +1 cc 1m 19s the patch passed
          +1 javac 1m 19s the patch passed
          -0 checkstyle 0m 47s hadoop-hdfs-project: The patch generated 2 new + 1197 unchanged - 2 fixed = 1199 total (was 1199)
          +1 mvnsite 1m 37s the patch passed
          +1 mvneclipse 0m 25s the patch passed
          +1 shellcheck 0m 12s The patch generated 0 new + 116 unchanged - 1 fixed = 116 total (was 117)
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 xml 0m 1s The patch has no ill-formed XML file.
          -1 findbugs 1m 55s hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 javadoc 1m 9s the patch passed
          +1 unit 1m 0s hadoop-hdfs-client in the patch passed.
          -1 unit 68m 28s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 21s The patch does not generate ASF License warnings.
          123m 30s



          Reason Tests
          FindBugs module:hadoop-hdfs-project/hadoop-hdfs-client
            Class org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure$Policy defines non-transient non-serializable instance field condition In ReplaceDatanodeOnFailure.java:instance field condition In ReplaceDatanodeOnFailure.java
          Failed junit tests hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12843464/HDFS-7859.009.patch
          Optional Tests asflicense compile cc mvnsite javac unit javadoc mvninstall findbugs checkstyle shellcheck shelldocs xml
          uname Linux 8f4801ff850c 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / f92913c
          Default Java 1.8.0_111
          shellcheck v0.4.5
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/17869/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/17869/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17869/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/17869/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/17869/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          surendrasingh Surendra Singh Lilhore added a comment -

          v9:
          1. Fixed TestOfflineEditsViewer (the other failed UTs are not related to this patch)
          2. Removed whitespace

          The findbugs and checkstyle issues are not related to this patch.

          Please review.

          surendrasingh Surendra Singh Lilhore added a comment -

          Sure, I will update the patch.

          Sammi SammiChen added a comment -

          Thanks Surendra Singh Lilhore for contributing to this task. Would you please check the findbugs, whitespace and failed test cases?

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          0 mvndep 0m 24s Maven dependency ordering for branch
          +1 mvninstall 6m 58s trunk passed
          +1 compile 1m 19s trunk passed
          +1 checkstyle 0m 46s trunk passed
          +1 mvnsite 1m 24s trunk passed
          +1 mvneclipse 0m 25s trunk passed
          +1 findbugs 3m 5s trunk passed
          +1 javadoc 0m 59s trunk passed
          0 mvndep 0m 7s Maven dependency ordering for patch
          +1 mvninstall 1m 13s the patch passed
          +1 compile 1m 16s the patch passed
          +1 cc 1m 16s the patch passed
          +1 javac 1m 16s the patch passed
          -0 checkstyle 0m 42s hadoop-hdfs-project: The patch generated 2 new + 1197 unchanged - 2 fixed = 1199 total (was 1199)
          +1 mvnsite 1m 20s the patch passed
          +1 mvneclipse 0m 20s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          -1 findbugs 1m 31s hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 javadoc 0m 54s the patch passed
          +1 unit 0m 52s hadoop-hdfs-client in the patch passed.
          -1 unit 64m 0s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 19s The patch does not generate ASF License warnings.
          91m 14s



          Reason Tests
          FindBugs module:hadoop-hdfs-project/hadoop-hdfs-client
            Class org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure$Policy defines non-transient non-serializable instance field condition In ReplaceDatanodeOnFailure.java:instance field condition In ReplaceDatanodeOnFailure.java
          Failed junit tests hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
            hadoop.hdfs.TestSecureEncryptionZoneWithKMS
            hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
            hadoop.hdfs.TestCrcCorruption
            hadoop.hdfs.TestTrashWithSecureEncryptionZones



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-7859
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12842725/HDFS-7859.008.patch
          Optional Tests asflicense compile cc mvnsite javac unit javadoc mvninstall findbugs checkstyle
          uname Linux 15a63a4dc782 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 754f15b
          Default Java 1.8.0_111
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/17846/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/17846/artifact/patchprocess/whitespace-eol.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/17846/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/17846/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/17846/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/17846/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          Show
          hadoopqa Hadoop QA added a comment - -1 overall Vote Subsystem Runtime Comment 0 reexec 0m 15s Docker mode activated. +1 @author 0m 0s The patch does not contain any @author tags. +1 test4tests 0m 0s The patch appears to include 3 new or modified test files. 0 mvndep 0m 24s Maven dependency ordering for branch +1 mvninstall 6m 58s trunk passed +1 compile 1m 19s trunk passed +1 checkstyle 0m 46s trunk passed +1 mvnsite 1m 24s trunk passed +1 mvneclipse 0m 25s trunk passed +1 findbugs 3m 5s trunk passed +1 javadoc 0m 59s trunk passed 0 mvndep 0m 7s Maven dependency ordering for patch +1 mvninstall 1m 13s the patch passed +1 compile 1m 16s the patch passed +1 cc 1m 16s the patch passed +1 javac 1m 16s the patch passed -0 checkstyle 0m 42s hadoop-hdfs-project: The patch generated 2 new + 1197 unchanged - 2 fixed = 1199 total (was 1199) +1 mvnsite 1m 20s the patch passed +1 mvneclipse 0m 20s the patch passed -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply -1 findbugs 1m 31s hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) +1 javadoc 0m 54s the patch passed +1 unit 0m 52s hadoop-hdfs-client in the patch passed. -1 unit 64m 0s hadoop-hdfs in the patch failed. +1 asflicense 0m 19s The patch does not generate ASF License warnings. 
          Sammi SammiChen added a comment -

          Hi Surendra, thanks for working on it! One suggestion: you should upload your patch with a different name, such as "HDFS-7859.009.patch". Otherwise the automatic integration test will not be triggered to run against your new patch.

          surendrasingh Surendra Singh Lilhore added a comment - - edited

          Thanks Xinwei Qin for the nice work.
          As we discussed offline, I have rebased the patch.

          Attached the updated patch.
          Please review.

          drankye Kai Zheng added a comment -

          Any update or do you need any help, Xinwei? Thanks.

          xinwei Xinwei Qin added a comment -

          Sorry for attaching the wrong patch (not the latest one); I will correct it tomorrow morning.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 17s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          0 mvndep 0m 29s Maven dependency ordering for branch
          +1 mvninstall 6m 58s trunk passed
          +1 compile 1m 23s trunk passed
          +1 checkstyle 0m 45s trunk passed
          +1 mvnsite 1m 24s trunk passed
          +1 mvneclipse 0m 24s trunk passed
          +1 findbugs 3m 5s trunk passed
          +1 javadoc 1m 14s trunk passed
          0 mvndep 0m 7s Maven dependency ordering for patch
          -1 mvninstall 0m 40s hadoop-hdfs in the patch failed.
          -1 compile 0m 57s hadoop-hdfs-project in the patch failed.
          -1 cc 0m 57s hadoop-hdfs-project in the patch failed.
          -1 javac 0m 57s hadoop-hdfs-project in the patch failed.
          -0 checkstyle 0m 42s hadoop-hdfs-project: The patch generated 3 new + 1220 unchanged - 2 fixed = 1223 total (was 1222)
          -1 mvnsite 0m 40s hadoop-hdfs in the patch failed.
          +1 mvneclipse 0m 19s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          -1 findbugs 1m 30s hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          -1 findbugs 0m 21s hadoop-hdfs in the patch failed.
          +1 javadoc 1m 9s the patch passed
          +1 unit 0m 53s hadoop-hdfs-client in the patch passed.
          -1 unit 0m 39s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 17s The patch does not generate ASF License warnings.
          26m 28s



          Reason Tests
          FindBugs module:hadoop-hdfs-project/hadoop-hdfs-client
            Class org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure$Policy defines non-transient non-serializable instance field condition In ReplaceDatanodeOnFailure.java:instance field condition In ReplaceDatanodeOnFailure.java



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12825274/HDFS-7859.008.patch
          JIRA Issue HDFS-7859
          Optional Tests asflicense compile cc mvnsite javac unit javadoc mvninstall findbugs checkstyle
          uname Linux 4340414684b8 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 6fa9bf4
          Default Java 1.8.0_101
          findbugs v3.0.0
          mvninstall https://builds.apache.org/job/PreCommit-HDFS-Build/16526/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
          compile https://builds.apache.org/job/PreCommit-HDFS-Build/16526/artifact/patchprocess/patch-compile-hadoop-hdfs-project.txt
          cc https://builds.apache.org/job/PreCommit-HDFS-Build/16526/artifact/patchprocess/patch-compile-hadoop-hdfs-project.txt
          javac https://builds.apache.org/job/PreCommit-HDFS-Build/16526/artifact/patchprocess/patch-compile-hadoop-hdfs-project.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/16526/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
          mvnsite https://builds.apache.org/job/PreCommit-HDFS-Build/16526/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/16526/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/16526/artifact/patchprocess/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/16526/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/16526/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/16526/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xinwei Xinwei Qin added a comment -

          Attached a new patch to fix the remaining TestOfflineEditsViewer failure. The checkstyle and findbugs results are not related to this issue.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 18s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          0 mvndep 0m 7s Maven dependency ordering for branch
          +1 mvninstall 7m 39s trunk passed
          +1 compile 1m 24s trunk passed
          +1 checkstyle 0m 43s trunk passed
          +1 mvnsite 1m 24s trunk passed
          +1 mvneclipse 0m 25s trunk passed
          +1 findbugs 3m 2s trunk passed
          +1 javadoc 1m 14s trunk passed
          0 mvndep 0m 7s Maven dependency ordering for patch
          +1 mvninstall 1m 15s the patch passed
          +1 compile 1m 20s the patch passed
          +1 cc 1m 20s the patch passed
          +1 javac 1m 20s the patch passed
          -0 checkstyle 0m 41s hadoop-hdfs-project: The patch generated 1 new + 1118 unchanged - 1 fixed = 1119 total (was 1119)
          +1 mvnsite 1m 20s the patch passed
          +1 mvneclipse 0m 19s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          -1 findbugs 1m 33s hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 javadoc 1m 10s the patch passed
          +1 unit 0m 52s hadoop-hdfs-client in the patch passed.
          -1 unit 56m 58s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 18s The patch does not generate ASF License warnings.
          85m 18s



          Reason Tests
          FindBugs module:hadoop-hdfs-project/hadoop-hdfs-client
            Class org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure$Policy defines non-transient non-serializable instance field condition In ReplaceDatanodeOnFailure.java:instance field condition In ReplaceDatanodeOnFailure.java
          Failed junit tests hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12825071/HDFS-7859.007.patch
          JIRA Issue HDFS-7859
          Optional Tests asflicense compile cc mvnsite javac unit javadoc mvninstall findbugs checkstyle
          uname Linux 83a4efe3a9e4 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 126d165
          Default Java 1.8.0_101
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/16515/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/16515/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/16515/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/16515/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/16515/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xinwei Xinwei Qin added a comment -

          Hi, Zhe Zhang,
          I have updated the patch per your comments and fixed some unit test failures; please review.

          zhz Zhe Zhang added a comment -

          Thanks Xinwei Qin for the update! Patch LGTM overall. A few comments:

          1. ErasureCodingPolicyManager#saveState should create a smaller array at the beginning:
                List<ErasureCodingPolicyProto> ecPolicies = Lists
                    .newArrayListWithCapacity(activePoliciesByName.size());
            

            It should calculate userAddedPoliciesCount at this point and use it to initialize ecPolicies.

          2. About 3 and 4, I think the current method name and usage may be more suitable,

            Makes sense.

          3. Could you address the checkstyle and findbug issues?

          I think we are very close on this one now.
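
          A minimal sketch of the sizing suggestion in point 1, with hypothetical stand-in types (the real ErasureCodingPolicyManager keeps ErasureCodingPolicy objects keyed by name; plain strings are used here only for illustration): count the user-added (non-built-in) policies first, then allocate the list with exactly that capacity rather than activePoliciesByName.size().

          ```java
          import java.util.ArrayList;
          import java.util.HashMap;
          import java.util.List;
          import java.util.Map;

          public class SaveStateSketch {
            // Stand-in for SYS_POLICIES; names are illustrative only.
            static final List<String> SYS_POLICIES =
                List.of("RS-6-3-64k", "RS-10-4-64k");

            static List<String> policiesToPersist(
                Map<String, String> activePoliciesByName) {
              // First pass: compute userAddedPoliciesCount.
              int userAddedPoliciesCount = 0;
              for (String name : activePoliciesByName.keySet()) {
                if (!SYS_POLICIES.contains(name)) {
                  userAddedPoliciesCount++;
                }
              }
              // Allocate only what will actually be written to the fsimage.
              List<String> ecPolicies = new ArrayList<>(userAddedPoliciesCount);
              for (String name : activePoliciesByName.keySet()) {
                if (!SYS_POLICIES.contains(name)) {
                  ecPolicies.add(name);
                }
              }
              return ecPolicies;
            }
          }
          ```

          The two-pass loop trades a second iteration over a small map for an exactly-sized list, which matches the review suggestion of not sizing by the total active-policy count.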

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 21s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          0 mvndep 0m 10s Maven dependency ordering for branch
          +1 mvninstall 8m 40s trunk passed
          +1 compile 1m 44s trunk passed
          +1 checkstyle 0m 48s trunk passed
          +1 mvnsite 1m 54s trunk passed
          +1 mvneclipse 0m 27s trunk passed
          +1 findbugs 3m 52s trunk passed
          +1 javadoc 1m 24s trunk passed
          0 mvndep 0m 7s Maven dependency ordering for patch
          +1 mvninstall 1m 20s the patch passed
          +1 compile 1m 31s the patch passed
          +1 cc 1m 31s the patch passed
          +1 javac 1m 31s the patch passed
          -0 checkstyle 0m 44s hadoop-hdfs-project: The patch generated 8 new + 1222 unchanged - 1 fixed = 1230 total (was 1223)
          +1 mvnsite 1m 27s the patch passed
          +1 mvneclipse 0m 24s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          -1 findbugs 1m 36s hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 javadoc 1m 18s the patch passed
          +1 unit 0m 56s hadoop-hdfs-client in the patch passed.
          -1 unit 74m 39s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 19s The patch does not generate ASF License warnings.
          107m 6s



          Reason Tests
          FindBugs module:hadoop-hdfs-project/hadoop-hdfs-client
            Class org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure$Policy defines non-transient non-serializable instance field condition In ReplaceDatanodeOnFailure.java:instance field condition In ReplaceDatanodeOnFailure.java
          Failed junit tests hadoop.hdfs.server.namenode.TestCheckpoint
            hadoop.hdfs.server.namenode.TestStartup
            hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
            hadoop.hdfs.server.datanode.TestLargeBlockReport
            hadoop.hdfs.server.namenode.TestStorageRestore
            hadoop.hdfs.server.namenode.TestNameEditsConfigs
            hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade
            hadoop.hdfs.server.namenode.TestCacheDirectives



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:9560f25
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12824172/HDFS-7859.006.patch
          JIRA Issue HDFS-7859
          Optional Tests asflicense compile cc mvnsite javac unit javadoc mvninstall findbugs checkstyle
          uname Linux 61eae3cf66ca 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 7f05ff7
          Default Java 1.8.0_101
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/16451/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/16451/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/16451/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/16451/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/16451/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xinwei Xinwei Qin added a comment -

          Hi, Zhe Zhang
          I have updated the patch. Regarding your comments, I have fixed 1, 2, 5, 6, and 7. For 3 and 4, I think the current method name and usage are more suitable, because addErasureCodingPolicy never needs to lock the FSDirectory, similar to addCacheDirective.

          xinwei Xinwei Qin added a comment -

          Thanks for your comments, Zhe Zhang; I will update the patch shortly. removeErasureCodingPolicy is similar to this, and I'm glad to work on it as well; I will file another JIRA for it.

          zhz Zhe Zhang added a comment -

          A few more comments:

          1. I think we should be more clear in code and comment that we only persist the user-added (non-builtin) policies
            +  public void saveErasureCodingPolices(DataOutputStream out, String sdPath)
            +      throws IOException {
            ...
            +    out.writeInt(activePoliciesByName.size());
            
            +  public PersistState saveState() throws IOException {
            +    List<ErasureCodingPolicyProto> ecPolicies = Lists
            +        .newArrayListWithCapacity(activePoliciesByName.size());
            +    List<ErasureCodingPolicy> hardCodingECPolicies =
            +        Arrays.asList(SYS_POLICIES);
            

            In above code, we should probably only use the number of user-added policies.

          2. In loadErasureCodingPolicies and loadState, we should probably also check and WARN if a built-in (SYS_POLICIES) policy is loaded.
          3. FSDirErasureCodingOp#addErasureCodingPolicy can be renamed as unprotectedAddErasureCodingPolicy to be clearer about locking
          4. In below code, maybe a clearer approach is to use FSDirErasureCodingOp#unprotectedAddErasureCodingPolicy
            +      fsNamesys.getErasureCodingPolicyManager()
            +          .addErasureCodingPolicy(addOp.getEcPolicy());
            
          5. The asserts can be replaced by a Precondition check. Maybe we should add the check in ErasureCodingManager#addErasureCodingPolicy too
            +    public AddErasureCodingPolicyOp setErasureCodingPolicy(
            +        ErasureCodingPolicy ecPolicy) {
            +      assert(ecPolicy.getName() != null);
            +      assert(ecPolicy.getSchema() != null);
            +      assert(ecPolicy.getCellSize() != 0);
            
          6. AddErasureCodingPolicyOp#toString should rely on ErasureCodingPolicy#toString to construct a String for the policy
          7. Below change is not necessary:
            -      fsn.getCacheManager().loadState(
            -          new CacheManager.PersistState(s, pools, directives));
            +      fsn.getCacheManager()
            +          .loadState(new CacheManager.PersistState(s, pools, directives));
            
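
          For point 5, a minimal sketch of replacing the asserts with checks that fail regardless of whether `-ea` is enabled. Hadoop typically uses Guava's Preconditions for this; the sketch below uses only the JDK (java.util.Objects) and a hypothetical stand-in for the real ErasureCodingPolicy type, so the names here are illustrative, not the actual API.

          ```java
          import java.util.Objects;

          public class PolicyChecks {
            // Hypothetical stand-in for ErasureCodingPolicy.
            static class EcPolicy {
              final String name;
              final String schema;
              final int cellSize;

              EcPolicy(String name, String schema, int cellSize) {
                this.name = name;
                this.schema = schema;
                this.cellSize = cellSize;
              }
            }

            // Unlike assert, these checks always run, so an invalid policy
            // is rejected before it can be written to the editlog.
            static EcPolicy validate(EcPolicy p) {
              Objects.requireNonNull(p.name, "policy name must not be null");
              Objects.requireNonNull(p.schema, "policy schema must not be null");
              if (p.cellSize <= 0) {
                throw new IllegalArgumentException(
                    "cell size must be positive: " + p.cellSize);
              }
              return p;
            }
          }
          ```

          With Guava on the classpath, the same checks would be Preconditions.checkNotNull and Preconditions.checkArgument; the key point of the review comment is that production-path validation should not depend on assertions being enabled.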
          zhz Zhe Zhang added a comment -

          Thanks for the comment, Kai. Even without the patch in this JIRA, we already support multiple built-in policies (SYS_POLICIES). This JIRA adds support for non-built-in policies, so we need to persist EC policies in the fsimage (otherwise all policies live only in the code). And we are adding the addErasureCodingPolicy API in this change. It sounds like we both agree removeErasureCodingPolicy can be done separately, but I think that follow-on should be done before the beta release for API stability. Xinwei Qin, do you have time to work on that as well? Thanks.

          drankye Kai Zheng added a comment -

          As I recall, what we confirmed for phase-1 is that we have some built-in policies, like the default 6+3-64k one. I suggest we add another one using 10+4-64k to exercise the code supporting multiple policies. It also sounds good to me to do the policy add/remove work in addition to this.
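          The shorthand in this discussion (6+3-64k, 10+4-64k) names a policy by its codec, data units, parity units, and cell size. A minimal sketch of decoding that naming convention, assuming the "RS-6-3-64k" form used elsewhere in this thread (the parser and class name here are illustrative, not an HDFS API):

          ```java
          // Illustrative parser for policy shorthand like "RS-6-3-64k":
          // codec, data units, parity units, cell size. Format assumed from
          // the discussion; not an actual HDFS helper.
          class PolicyNameSketch {
              static int[] parse(String name) {
                  String[] parts = name.split("-");   // e.g. ["RS", "6", "3", "64k"]
                  int data = Integer.parseInt(parts[1]);
                  int parity = Integer.parseInt(parts[2]);
                  int cell = Integer.parseInt(parts[3].replace("k", "")) * 1024;
                  return new int[] {data, parity, cell};
              }

              public static void main(String[] args) {
                  int[] p = parse("RS-6-3-64k");
                  System.out.println(p[0] + "+" + p[1] + ", cell=" + p[2]); // 6+3, cell=65536
                  int[] q = parse("RS-10-4-64k");
                  System.out.println(q[0] + "+" + q[1] + ", cell=" + q[2]); // 10+4, cell=65536
              }
          }
          ```

          A 6+3 policy tolerates 3 lost units per stripe at 1.5x storage overhead; 10+4 tolerates 4 at 1.4x, which is why exercising both shapes is a useful multi-policy test.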

          zhz Zhe Zhang added a comment -

          Thanks very much Xinwei Qin for the work. I think we should prioritize this since it's a pretty important blocker. I'm reviewing the patch and it looks good overall; I will post a full review shortly. It's quite similar to how CacheManager persists state. There are several minor issues worth fixing:

          1. required uint32 numPolicies =1; needs a space
          2. if(!activePoliciesByName.containsKey(ecPolicyName)) needs a space after if
          3. ecPolicyName + " is already exists.") should remove "is"
          4. private void addInternal sounds like "addErasureCodingPolicyInternal" is better? Or just merge it into addErasureCodingPolicy? It's a small method anyway.

          More importantly, I'd like to discuss an operational issue. What if an admin adds a policy by mistake? Right now the policy cannot be removed or overwritten by a correct policy with the same name. I think we should add a removeErasureCodingPolicy operation, and I'm OK if that's done in a follow-on JIRA. Pinging Kai Zheng, Andrew Wang, Jing Zhao for more opinions.
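          The operational concern above comes down to a name-keyed registration check: once a name is taken, re-adding is rejected, so without a remove operation a mistaken policy is stuck. A minimal sketch of that behavior, using a plain HashMap rather than the real ErasureCodingPolicyManager (class and method names here are illustrative assumptions):

          ```java
          import java.util.HashMap;
          import java.util.Map;

          // Minimal sketch of name-keyed policy registration; illustrative only,
          // not the real ErasureCodingPolicyManager.
          class PolicyRegistrySketch {
              private final Map<String, String> activePoliciesByName = new HashMap<>();

              void addPolicy(String name, String definition) {
                  if (activePoliciesByName.containsKey(name)) {
                      // Without a remove/overwrite path, a mistaken entry stays stuck here.
                      throw new IllegalArgumentException(name + " already exists.");
                  }
                  activePoliciesByName.put(name, definition);
              }

              // The proposed follow-on removeErasureCodingPolicy would unblock
              // re-adding a corrected policy under the same name.
              void removePolicy(String name) {
                  activePoliciesByName.remove(name);
              }

              public static void main(String[] args) {
                  PolicyRegistrySketch r = new PolicyRegistrySketch();
                  r.addPolicy("XOR-2-1-128k", "xor,2,1,131072");
                  try {
                      r.addPolicy("XOR-2-1-128k", "xor,2,1,65536"); // duplicate name
                  } catch (IllegalArgumentException e) {
                      System.out.println("rejected: " + e.getMessage());
                  }
                  r.removePolicy("XOR-2-1-128k");
                  r.addPolicy("XOR-2-1-128k", "xor,2,1,65536");     // re-add succeeds
                  System.out.println("re-added OK");
              }
          }
          ```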

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 25s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 3 new or modified test files.
          0 mvndep 0m 8s Maven dependency ordering for branch
          +1 mvninstall 6m 58s trunk passed
          +1 compile 1m 32s trunk passed
          +1 checkstyle 0m 49s trunk passed
          +1 mvnsite 1m 34s trunk passed
          +1 mvneclipse 0m 29s trunk passed
          +1 findbugs 3m 13s trunk passed
          +1 javadoc 1m 18s trunk passed
          0 mvndep 0m 8s Maven dependency ordering for patch
          +1 mvninstall 1m 28s the patch passed
          +1 compile 1m 27s the patch passed
          +1 cc 1m 27s the patch passed
          +1 javac 1m 27s the patch passed
          -0 checkstyle 0m 42s hadoop-hdfs-project: The patch generated 6 new + 1222 unchanged - 1 fixed = 1228 total (was 1223)
          +1 mvnsite 1m 22s the patch passed
          +1 mvneclipse 0m 19s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          -1 findbugs 1m 38s hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 javadoc 1m 17s the patch passed
          +1 unit 0m 56s hadoop-hdfs-client in the patch passed.
          -1 unit 73m 50s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 19s The patch does not generate ASF License warnings.
          103m 14s



          Reason Tests
          FindBugs module:hadoop-hdfs-project/hadoop-hdfs-client
            Class org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure$Policy defines non-transient non-serializable instance field condition In ReplaceDatanodeOnFailure.java:instance field condition In ReplaceDatanodeOnFailure.java
          Failed junit tests hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:85209cc
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12815997/HDFS-7859.005.patch
          JIRA Issue HDFS-7859
          Optional Tests asflicense compile cc mvnsite javac unit javadoc mvninstall findbugs checkstyle
          uname Linux 8076befcebb3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 2e835d9
          Default Java 1.8.0_91
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/15971/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/15971/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15971/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/15971/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15971/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xinwei Xinwei Qin added a comment -

          Rebased the patch (005.patch) and added a unit test case.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 25s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          0 mvndep 0m 7s Maven dependency ordering for branch
          +1 mvninstall 6m 47s trunk passed
          +1 compile 1m 38s trunk passed
          +1 checkstyle 0m 45s trunk passed
          +1 mvnsite 1m 29s trunk passed
          +1 mvneclipse 0m 25s trunk passed
          +1 findbugs 3m 25s trunk passed
          +1 javadoc 1m 24s trunk passed
          0 mvndep 0m 8s Maven dependency ordering for patch
          +1 mvninstall 1m 41s the patch passed
          +1 compile 1m 46s the patch passed
          +1 cc 1m 46s the patch passed
          +1 javac 1m 46s the patch passed
          -0 checkstyle 0m 41s hadoop-hdfs-project: The patch generated 17 new + 1081 unchanged - 1 fixed = 1098 total (was 1082)
          +1 mvnsite 1m 35s the patch passed
          +1 mvneclipse 0m 19s the patch passed
          +1 whitespace 0m 1s The patch has no whitespace issues.
          -1 findbugs 1m 50s hadoop-hdfs-project/hadoop-hdfs-client generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
          +1 javadoc 1m 24s the patch passed
          +1 unit 1m 7s hadoop-hdfs-client in the patch passed.
          -1 unit 73m 58s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 21s The patch does not generate ASF License warnings.
          105m 11s



          Reason Tests
          FindBugs module:hadoop-hdfs-project/hadoop-hdfs-client
            Class org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure$Policy defines non-transient non-serializable instance field condition In ReplaceDatanodeOnFailure.java:instance field condition In ReplaceDatanodeOnFailure.java
          Failed junit tests hadoop.hdfs.TestBlockStoragePolicy
            hadoop.hdfs.TestDFSInotifyEventInputStream
            hadoop.hdfs.server.namenode.TestCheckpoint
            hadoop.hdfs.server.namenode.TestStartup
            hadoop.hdfs.TestDFSStorageStateRecovery
            hadoop.hdfs.server.namenode.TestNameNodeRecovery
            hadoop.hdfs.server.namenode.TestFSEditLogLoader
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyConsiderLoad
            hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
            hadoop.hdfs.TestDFSRollback
            hadoop.hdfs.TestDFSUpgrade
            hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
            hadoop.hdfs.server.namenode.TestSaveNamespace
            hadoop.hdfs.server.namenode.TestNameEditsConfigs
            hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy
            hadoop.hdfs.TestDFSStartupVersions
            hadoop.hdfs.TestCrcCorruption
            hadoop.hdfs.server.namenode.TestNameNodeAcl
            hadoop.hdfs.server.namenode.TestMetadataVersionOutput
            hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
            hadoop.hdfs.server.namenode.TestReconstructStripedBlocks
            hadoop.hdfs.TestHDFSServerPorts
            hadoop.hdfs.TestDFSFinalize
            hadoop.hdfs.server.namenode.TestFSNamesystem
            hadoop.hdfs.TestRollingUpgrade
            hadoop.hdfs.server.namenode.TestBackupNode
            hadoop.hdfs.server.namenode.TestFileContextAcl
            hadoop.hdfs.web.TestWebHDFSAcl
            hadoop.hdfs.server.namenode.TestEditLogRace
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
            hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
            hadoop.hdfs.server.namenode.TestCreateEditsLog
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
            hadoop.hdfs.protocol.TestLayoutVersion
            hadoop.hdfs.qjournal.TestNNWithQJM
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:85209cc
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799661/HDFS-7859.004.patch
          JIRA Issue HDFS-7859
          Optional Tests asflicense compile cc mvnsite javac unit javadoc mvninstall findbugs checkstyle
          uname Linux 26b936d055cc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 73615a7
          Default Java 1.8.0_91
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/15919/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
          findbugs https://builds.apache.org/job/PreCommit-HDFS-Build/15919/artifact/patchprocess/new-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.html
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15919/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/15919/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15919/console
          Powered by Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          xinwei Xinwei Qin added a comment -

          The rebased patch still has some test failures; I'm doing my best to fix them now.

          zhz Zhe Zhang added a comment -

          Thanks!

          xinwei Xinwei Qin added a comment -

          Zhe Zhang, it's better to have this in 3.0; I will rebase and polish this ASAP this week.

          zhz Zhe Zhang added a comment -

          I think this is a blocker for the EC release in 3.0-alpha1. Xinwei Qin, Kai Zheng, any plan to revive the work? Thanks a lot.

          drankye Kai Zheng added a comment -

          Yeah. We might also need to add some tests.

          xinwei Xinwei Qin added a comment -

          Need to update the patch to fix the checkstyle issues and relevant unit test failures.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 10s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          0 mvndep 0m 36s Maven dependency ordering for branch
          +1 mvninstall 6m 36s trunk passed
          +1 compile 1m 17s trunk passed with JDK v1.8.0_77
          +1 compile 1m 21s trunk passed with JDK v1.7.0_95
          +1 checkstyle 0m 38s trunk passed
          +1 mvnsite 1m 24s trunk passed
          +1 mvneclipse 0m 25s trunk passed
          +1 findbugs 3m 34s trunk passed
          +1 javadoc 1m 26s trunk passed with JDK v1.8.0_77
          +1 javadoc 2m 16s trunk passed with JDK v1.7.0_95
          0 mvndep 0m 8s Maven dependency ordering for patch
          +1 mvninstall 1m 16s the patch passed
          +1 compile 1m 13s the patch passed with JDK v1.8.0_77
          +1 cc 1m 13s the patch passed
          +1 javac 1m 13s the patch passed
          +1 compile 1m 20s the patch passed with JDK v1.7.0_95
          +1 cc 1m 20s the patch passed
          -1 javac 3m 58s hadoop-hdfs-project-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 50 unchanged - 1 fixed = 51 total (was 51)
          +1 javac 1m 20s the patch passed
          -1 checkstyle 0m 37s hadoop-hdfs-project: patch generated 17 new + 1085 unchanged - 1 fixed = 1102 total (was 1086)
          +1 mvnsite 1m 22s the patch passed
          +1 mvneclipse 0m 22s the patch passed
          +1 whitespace 0m 0s Patch has no whitespace issues.
          +1 findbugs 4m 1s the patch passed
          +1 javadoc 1m 20s the patch passed with JDK v1.8.0_77
          +1 javadoc 2m 8s the patch passed with JDK v1.7.0_95
          +1 unit 0m 49s hadoop-hdfs-client in the patch passed with JDK v1.8.0_77.
          -1 unit 56m 54s hadoop-hdfs in the patch failed with JDK v1.8.0_77.
          +1 unit 0m 55s hadoop-hdfs-client in the patch passed with JDK v1.7.0_95.
          -1 unit 53m 56s hadoop-hdfs in the patch failed with JDK v1.7.0_95.
          +1 asflicense 0m 20s Patch does not generate ASF License warnings.
          148m 40s



          Reason Tests
          JDK v1.8.0_77 Failed junit tests hadoop.hdfs.server.namenode.TestFSEditLogLoader
            hadoop.hdfs.server.namenode.TestFileContextAcl
            hadoop.hdfs.web.TestWebHDFSAcl
            hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
            hadoop.hdfs.qjournal.TestNNWithQJM
            hadoop.hdfs.TestDFSRollback
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
            hadoop.hdfs.server.namenode.TestCreateEditsLog
            hadoop.hdfs.protocol.TestLayoutVersion
            hadoop.hdfs.server.namenode.TestEditLogRace
            hadoop.hdfs.TestDFSUpgrade
            hadoop.hdfs.TestDFSStorageStateRecovery
            hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
            hadoop.hdfs.TestCrcCorruption
            hadoop.hdfs.server.namenode.TestNamenodeRetryCache
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
            hadoop.hdfs.TestDFSFinalize
            hadoop.hdfs.server.namenode.TestFSNamesystem
            hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
            hadoop.hdfs.TestRollingUpgrade
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyConsiderLoad
            hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy
            hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
            hadoop.hdfs.TestHFlush
            hadoop.hdfs.server.namenode.TestNameEditsConfigs
            hadoop.hdfs.TestBlockStoragePolicy
            hadoop.hdfs.shortcircuit.TestShortCircuitCache
            hadoop.hdfs.TestDFSStartupVersions
            hadoop.hdfs.TestHDFSServerPorts
            hadoop.hdfs.server.namenode.TestStartup
            hadoop.hdfs.server.namenode.TestNameNodeRecovery
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
            hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
            hadoop.hdfs.server.namenode.TestMetadataVersionOutput
            hadoop.hdfs.server.namenode.TestSaveNamespace
            hadoop.hdfs.server.namenode.TestBackupNode
            hadoop.hdfs.server.namenode.TestNameNodeAcl
            hadoop.hdfs.TestDFSInotifyEventInputStream
            hadoop.hdfs.server.namenode.TestCheckpoint
          JDK v1.7.0_95 Failed junit tests hadoop.hdfs.server.namenode.TestFSEditLogLoader
            hadoop.hdfs.server.namenode.TestFileContextAcl
            hadoop.hdfs.web.TestWebHDFSAcl
            hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
            hadoop.hdfs.qjournal.TestNNWithQJM
            hadoop.hdfs.TestDFSRollback
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
            hadoop.hdfs.server.namenode.TestCreateEditsLog
            hadoop.hdfs.protocol.TestLayoutVersion
            hadoop.hdfs.server.namenode.TestEditLogRace
            hadoop.hdfs.TestDFSUpgrade
            hadoop.hdfs.TestDFSStorageStateRecovery
            hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
            hadoop.hdfs.server.namenode.TestNamenodeRetryCache
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
            hadoop.hdfs.TestDFSFinalize
            hadoop.hdfs.server.namenode.TestFSNamesystem
            hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
            hadoop.hdfs.TestRollingUpgrade
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyConsiderLoad
            hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy
            hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
            hadoop.hdfs.TestHFlush
            hadoop.hdfs.server.namenode.TestNameEditsConfigs
            hadoop.hdfs.TestBlockStoragePolicy
            hadoop.hdfs.TestDFSStartupVersions
            hadoop.hdfs.TestHDFSServerPorts
            hadoop.hdfs.server.namenode.TestStartup
            hadoop.hdfs.server.namenode.TestNameNodeRecovery
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
            hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
            hadoop.hdfs.server.namenode.TestMetadataVersionOutput
            hadoop.hdfs.server.namenode.TestSaveNamespace
            hadoop.hdfs.server.datanode.TestDataNodeLifeline
            hadoop.hdfs.server.namenode.TestBackupNode
            hadoop.hdfs.server.namenode.TestNameNodeAcl
            hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
            hadoop.hdfs.TestDFSInotifyEventInputStream
            hadoop.hdfs.server.namenode.TestCheckpoint



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:fbe3e86
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799661/HDFS-7859.004.patch
          JIRA Issue HDFS-7859
          Optional Tests asflicense compile cc mvnsite javac unit javadoc mvninstall findbugs checkstyle
          uname Linux b631723ef841 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / af9bdbe
          Default Java 1.7.0_95
          Multi-JDK versions /usr/lib/jvm/java-8-oracle:1.8.0_77 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95
          findbugs v3.0.0
          javac hadoop-hdfs-project-jdk1.7.0_95: https://builds.apache.org/job/PreCommit-HDFS-Build/15209/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project-jdk1.7.0_95.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/15209/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15209/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/15209/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          unit test logs https://builds.apache.org/job/PreCommit-HDFS-Build/15209/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt https://builds.apache.org/job/PreCommit-HDFS-Build/15209/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_95.txt
          JDK v1.7.0_95 Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/15209/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/15209/console
          Powered by Apache Yetus 0.2.0 http://yetus.apache.org

          This message was automatically generated.

          xinwei Xinwei Qin added a comment -

          Rebase the patch with the latest trunk.

          xinwei Xinwei Qin added a comment -

          Rakesh R, Kai Zheng, and Zhe Zhang, thanks for your comments and clarifications.
          Now is a good time to update this patch, though we should be clearer about the details of custom policies. I am glad to rebase the patch with the latest code and will probably attach it tomorrow.

          zhz Zhe Zhang added a comment -

          Thanks Kai and Rakesh for reactivating the discussion.

          I agree that at this stage there isn't a clear need for custom policies.

          The main motivation for persisting EC policies in NN is probably downgrade. Assuming that we never remove any existing built-in policies from ErasureCodingPolicyManager, we won't have issues with upgrade. But the chance of adding an EC policy in a 3.x release is nontrivial.

          So I don't think this is a 3.0 blocker. But it would be nice to have it for 3.0 release.

          rakeshr Rakesh R added a comment -

          I thought many considerations originally targeted for the issue have already been implemented elsewhere, therefore the only thing left is custom codec and schema support. I don't think there is a strong requirement for this feature but we can implement it perhaps in phase II I guess.

          Thanks for making it clear, Kai Zheng.

          rakeshr Rakesh R added a comment -

          For the builtin schema and policies, IIRC, there was a consideration that we still need to persist the schema and policy to indicate the software upgrades (so the builtin ones may be changed).

          Yes, changing a built-in schema is an interesting case. If we end up in a case where we need to change the default one, then persisting would be required. I think we can proceed to persist the EC policy details in the fsimage and editlog. I'm just adding a thought to understand more - probably we could explore whether the layout version can be utilized to handle this kind of situation.

          The patch needs to be rebased on the latest code. Would you mind rebasing it, Xinwei Qin?
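          The layout-version idea above could be sketched roughly as follows. This is a hypothetical illustration, not actual HDFS code: the method name and the version constant -64 are invented for this example. HDFS layout versions are negative integers that decrease as features are added, so an fsimage written at or after the feature's version can be assumed to contain persisted EC policies.

          ```java
          // Hypothetical sketch (not actual HDFS code): gate loading of
          // persisted EC policies on the fsimage's layout version, as
          // suggested in the comment above. The constant -64 is made up.
          public class LayoutVersionCheck {
              // Layout version that (hypothetically) introduced persisted
              // EC policies; real HDFS versions are negative and decrease.
              static final int EC_POLICY_PERSISTENCE_VERSION = -64;

              static boolean supportsPersistedEcPolicies(int imageLayoutVersion) {
                  // A smaller (more negative) value means a newer layout.
                  return imageLayoutVersion <= EC_POLICY_PERSISTENCE_VERSION;
              }

              public static void main(String[] args) {
                  System.out.println(supportsPersistedEcPolicies(-65)); // newer image: true
                  System.out.println(supportsPersistedEcPolicies(-60)); // older image: false
              }
          }
          ```

          On downgrade, an older NameNode would reject an image whose layout version is newer than what it understands, which is exactly the situation the discussion above is concerned with.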

          drankye Kai Zheng added a comment -

          IIUC, persistence is more necessary when we support custom schemas, isn't it?

          I thought you're right. For the builtin schema and policies, IIRC, there was a consideration that we still need to persist the schema and policy to indicate the software upgrades (so the builtin ones may be changed).

          Do we have any plan to implement HDFS-7337?

          I thought many considerations originally targeted for the issue have already been implemented elsewhere, therefore the only thing left is custom codec and schema support. I don't think there is a strong requirement for this feature but we can implement it perhaps in phase II I guess.

          rakeshr Rakesh R added a comment -

          The proposed patch in this JIRA handles saving and loading the schema in the fsimage/editlog. IIUC, persistence is more necessary when we support custom schemas, isn't it? I could see we are still discussing the ways to support HDFS-7337 and have not yet reached a common agreement. Do we have any plan to implement HDFS-7337?

          drankye Kai Zheng added a comment -

          Should this be a blocker for 3.0 release? As multiple codecs, schemas and policies are going to be all supported, would this be needed? It's great if anybody would cast some thoughts. Thanks.

          xinwei Xinwei Qin added a comment -

          Hi, Kai Zheng, I will proceed with this JIRA soon.

          drankye Kai Zheng added a comment -

          Hello Xinwei Qin ,

          According to discussions in HDFS-7337 and related issues, and Andrew's comment, we can proceed here to persist the erasure coding policies in the NameNode image. Please let me know if any more input is needed here. Thanks.
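          The persistence being proposed amounts to serializing each policy's fields into the fsimage and reading them back on restart. A minimal round-trip sketch is below; it is purely illustrative and not the actual HDFS implementation (the real code uses protobuf records in the fsimage): the class name, field layout, and policy name "RS-6-3-64k" are assumptions for this example.

          ```java
          import java.io.*;

          // Hypothetical sketch (not actual HDFS code): round-tripping an
          // erasure coding policy through a binary stream, the way the
          // NameNode would persist policies into the fsimage and reload
          // them after a restart.
          class EcPolicy {
              final String name;      // e.g. "RS-6-3-64k"
              final int dataUnits;    // data blocks per stripe
              final int parityUnits;  // parity blocks per stripe
              final int cellSize;     // striping cell size in bytes

              EcPolicy(String name, int dataUnits, int parityUnits, int cellSize) {
                  this.name = name;
                  this.dataUnits = dataUnits;
                  this.parityUnits = parityUnits;
                  this.cellSize = cellSize;
              }

              void write(DataOutput out) throws IOException {
                  out.writeUTF(name);
                  out.writeInt(dataUnits);
                  out.writeInt(parityUnits);
                  out.writeInt(cellSize);
              }

              static EcPolicy read(DataInput in) throws IOException {
                  return new EcPolicy(in.readUTF(), in.readInt(),
                          in.readInt(), in.readInt());
              }
          }

          public class EcPolicyPersistenceSketch {
              public static void main(String[] args) throws IOException {
                  EcPolicy original = new EcPolicy("RS-6-3-64k", 6, 3, 64 * 1024);

                  // "Save" the policy as an fsimage writer would.
                  ByteArrayOutputStream buf = new ByteArrayOutputStream();
                  original.write(new DataOutputStream(buf));

                  // "Load" it back as an fsimage loader would after restart.
                  EcPolicy loaded = EcPolicy.read(new DataInputStream(
                          new ByteArrayInputStream(buf.toByteArray())));

                  System.out.println(loaded.name + " " + loaded.dataUnits
                          + "+" + loaded.parityUnits + ", cell=" + loaded.cellSize);
              }
          }
          ```

          The same write/read pair also describes what an editlog record for an "add policy" operation would carry, which is why the patch touches both the fsimage and editlog code paths.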

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          -1 patch 0m 6s HDFS-7859 does not apply to HDFS-7285. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help.



          Subsystem Report/Notes
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12729418/HDFS-7859-HDFS-7285.003.patch
          JIRA Issue HDFS-7859
          Powered by Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/14143/console

          This message was automatically generated.

          zhz Zhe Zhang added a comment -

          Cool! I just registered. Thanks for organizing it Allen.

          aw Allen Wittenauer added a comment -

          It means you haven't been paying attention to the bug bash emails.

          zhz Zhe Zhang added a comment -

          Allen Wittenauer Could you explain a bit what this BB2015-05-TBR label means?

          drankye Kai Zheng added a comment -

          Nicholas,

          This was already moved out of HDFS-7285, as you did, and there was no plan to commit this in phase I, AFAIK. I thought the patch updated here is good to have, so it is ready for follow-on work once we get the merge done.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          Please do not commit this JIRA to the HDFS-7285 branch since we won't support multiple schemas for the moment.

          hadoopqa Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 14m 37s Pre-patch HDFS-7285 compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 tests included 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          +1 whitespace 0m 0s The patch has no lines that end in whitespace.
          +1 javac 7m 34s There were no new javac warning messages.
          +1 javadoc 9m 40s There were no new javadoc warning messages.
          -1 release audit 0m 15s The applied patch generated 1 release audit warnings.
          -1 checkstyle 7m 48s The applied patch generated 10 additional checkstyle issues.
          +1 install 1m 32s mvn install still works.
          +1 eclipse:eclipse 0m 34s The patch built with eclipse:eclipse.
          -1 findbugs 3m 11s The patch appears to introduce 9 new Findbugs (version 2.0.3) warnings.
          +1 native 3m 15s Pre-build of native portion
          -1 hdfs tests 239m 34s Tests failed in hadoop-hdfs.
              288m 5s  



          Reason Tests
          FindBugs module:hadoop-hdfs
            Inconsistent synchronization of org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 90% of time. Unsynchronized access at DFSOutputStream.java:[line 142]
            Class org.apache.hadoop.hdfs.DataStreamer$LastException is not derived from an Exception, even though it is named as such. At DataStreamer.java:[lines 177-201]
            Dead store to offSuccess in org.apache.hadoop.hdfs.StripedDataStreamer.endBlock(). At StripedDataStreamer.java:[line 105]
            Result of integer multiplication cast to long in org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped.spaceConsumed(). At BlockInfoStriped.java:[line 208]
            Possible null pointer dereference of arr$ in org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long). Dereferenced at BlockInfoStripedUnderConstruction.java:[line 206]
            Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String, ECSchema): String.getBytes(). At ErasureCodingZoneManager.java:[line 116]
            Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath): new String(byte[]). At ErasureCodingZoneManager.java:[line 81]
            Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.constructInternalBlock(LocatedStripedBlock, int, int, int, int). At StripedBlockUtil.java:[line 85]
            Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.planReadPortions(int, int, long, int, int). At StripedBlockUtil.java:[line 167]
          Failed unit tests hadoop.hdfs.server.namenode.TestMetadataVersionOutput
            hadoop.hdfs.TestDFSClientRetries
            hadoop.hdfs.server.namenode.TestCheckpoint
            hadoop.hdfs.TestDFSOutputStream
            hadoop.hdfs.TestDFSRollback
            hadoop.hdfs.server.namenode.TestCreateEditsLog
            hadoop.hdfs.protocol.TestLayoutVersion
            hadoop.hdfs.TestDFSFinalize
            hadoop.hdfs.server.namenode.TestDeleteRace
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
            hadoop.hdfs.TestDFSStorageStateRecovery
            hadoop.hdfs.server.namenode.TestFSNamesystem
            hadoop.hdfs.qjournal.TestNNWithQJM
            hadoop.hdfs.web.TestWebHDFSAcl
            hadoop.hdfs.TestClose
            hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation
            hadoop.hdfs.TestRollingUpgrade
            hadoop.hdfs.server.namenode.TestEditLogRace
            hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
            hadoop.hdfs.TestBlockStoragePolicy
            hadoop.hdfs.server.datanode.TestBlockRecovery
            hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
            hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
            hadoop.hdfs.TestDFSUpgrade
            hadoop.hdfs.server.namenode.TestAuditLogs
            hadoop.hdfs.TestAppendSnapshotTruncate
            hadoop.hdfs.server.namenode.TestFileContextAcl
            hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
            hadoop.hdfs.server.namenode.TestStartup
            hadoop.hdfs.TestHDFSServerPorts
            hadoop.cli.TestHDFSCLI
            hadoop.hdfs.server.namenode.TestNameNodeAcl
            hadoop.hdfs.server.namenode.TestFileTruncate
            hadoop.hdfs.TestDFSStartupVersions
            hadoop.hdfs.TestCrcCorruption
            hadoop.hdfs.server.namenode.TestNameEditsConfigs
            hadoop.hdfs.TestMultiThreadedHflush
            hadoop.hdfs.TestDFSInotifyEventInputStream
            hadoop.hdfs.server.namenode.TestSaveNamespace
            hadoop.hdfs.TestQuota
            hadoop.hdfs.server.namenode.TestNameNodeRecovery
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyConsiderLoad
            hadoop.hdfs.TestFileLengthOnClusterRestart
          Timed out tests org.apache.hadoop.hdfs.TestDataTransferProtocol
            org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
            org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader
            org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks
            org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12729418/HDFS-7859-HDFS-7285.003.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision HDFS-7285 / 5a83838
          Release Audit https://builds.apache.org/job/PreCommit-HDFS-Build/10471/artifact/patchprocess/patchReleaseAuditProblems.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/10471/artifact/patchprocess/checkstyle-result-diff.txt
          Findbugs warnings https://builds.apache.org/job/PreCommit-HDFS-Build/10471/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HDFS-Build/10471/artifact/patchprocess/testrun_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/10471/testReport/
          Java 1.7.0_55
          uname Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/10471/console

          This message was automatically generated.

          xinwei Xinwei Qin added a comment -

          The 003 patch removes the MODIFY and REMOVE ECSchema editlog operations; these operations will be added in another JIRA (HDFS-8295) later, when they are supported.

          Show
          xinwei Xinwei Qin added a comment - The 003 patch removes MODIFY and REMOVE ECSchema editlog operations, these operations will be added by another JIRA( HDFS-8295 ) later when they are supported.
          Zhe Zhang added a comment -

          Allen Wittenauer Thanks again for bringing in the feature-branch pre-commit Jenkins functionality! It's really helpful. We just saw another successful run under HDFS-7678.

          Allen Wittenauer added a comment -

          P.S., thanks for letting me use this issue as a guinea pig.

          Allen Wittenauer added a comment -

          263m 35s

          Youch. Just under the wire.

          git revision HDFS-7285 / bc3091b

          So yes, it switched to the feature branch to run the tests, as was expected.

          Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 15m 2s Pre-patch HDFS-7285 compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 tests included 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          -1 whitespace 0m 0s The patch has 2 line(s) that end in whitespace.
          +1 javac 7m 42s There were no new javac warning messages.
          +1 javadoc 9m 49s There were no new javadoc warning messages.
          -1 release audit 0m 14s The applied patch generated 1 release audit warnings.
          -1 checkstyle 5m 48s The applied patch generated 10 additional checkstyle issues.
          +1 install 1m 35s mvn install still works.
          +1 eclipse:eclipse 0m 34s The patch built with eclipse:eclipse.
          -1 findbugs 3m 18s The patch appears to introduce 11 new Findbugs (version 2.0.3) warnings.
          +1 native 3m 16s Pre-build of native portion
          -1 hdfs tests 216m 12s Tests failed in hadoop-hdfs.
              263m 35s  



          Reason Tests
          FindBugs module:hadoop-hdfs
            Inconsistent synchronization of org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 89% of time. Unsynchronized access at DFSOutputStream.java:[line 142]
            Result of integer multiplication cast to long in org.apache.hadoop.hdfs.DFSStripedInputStream.planReadPortions(int, int, long, int, int) at DFSStripedInputStream.java:[line 95]
            Dead store to offSuccess in org.apache.hadoop.hdfs.StripedDataStreamer.endBlock() at StripedDataStreamer.java:[line 104]
            Result of integer multiplication cast to long in org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped.spaceConsumed() at BlockInfoStriped.java:[line 208]
            Possible null pointer dereference of arr$ in org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long); dereferenced at BlockInfoStripedUnderConstruction.java:[line 206]
            Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String, ECSchema): String.getBytes() at ErasureCodingZoneManager.java:[line 116]
            Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath): new String(byte[]) at ErasureCodingZoneManager.java:[line 81]
            org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddECSchemaOp.toString() makes inefficient use of keySet iterator instead of entrySet iterator at FSEditLogOp.java:[line 4552]
            org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$ModifyECSchemaOp.toString() makes inefficient use of keySet iterator instead of entrySet iterator at FSEditLogOp.java:[line 4624]
            org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.writeECSchema(DataOutputStream, ECSchema) makes inefficient use of keySet iterator instead of entrySet iterator at FSImageSerialization.java:[line 792]
            Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.constructInternalBlock(LocatedStripedBlock, int, int, int, int) at StripedBlockUtil.java:[line 75]
          Failed unit tests hadoop.hdfs.server.namenode.TestFileContextAcl
            hadoop.hdfs.server.namenode.TestFileTruncate
            hadoop.hdfs.web.TestWebHDFSAcl
            hadoop.hdfs.qjournal.TestNNWithQJM
            hadoop.hdfs.TestDFSRollback
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
            hadoop.hdfs.server.namenode.TestCreateEditsLog
            hadoop.hdfs.protocol.TestLayoutVersion
            hadoop.hdfs.server.namenode.TestEditLogRace
            hadoop.hdfs.TestDFSUpgrade
            hadoop.hdfs.TestDFSStorageStateRecovery
            hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
            hadoop.hdfs.TestDFSFinalize
            hadoop.hdfs.server.namenode.TestFSNamesystem
            hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
            hadoop.hdfs.server.datanode.TestDataNodeMetrics
            hadoop.hdfs.TestRollingUpgrade
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyConsiderLoad
            hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
            hadoop.hdfs.server.namenode.TestNameEditsConfigs
            hadoop.hdfs.TestBlockStoragePolicy
            hadoop.hdfs.TestDFSStartupVersions
            hadoop.hdfs.TestHDFSServerPorts
            hadoop.hdfs.server.namenode.TestStartup
            hadoop.hdfs.server.namenode.TestNameNodeRecovery
            hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
            hadoop.hdfs.server.namenode.TestMetadataVersionOutput
            hadoop.hdfs.server.namenode.TestSaveNamespace
            hadoop.hdfs.server.namenode.TestNameNodeAcl
            hadoop.hdfs.TestDFSInotifyEventInputStream
            hadoop.hdfs.server.namenode.TestCheckpoint
          Timed out tests org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
            org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks
            org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12728687/HDFS-7859-HDFS-7285.002.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision HDFS-7285 / bc3091b
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/10425/artifact/patchprocess/whitespace.txt
          Release Audit https://builds.apache.org/job/PreCommit-HDFS-Build/10425/artifact/patchprocess/patchReleaseAuditProblems.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/10425/artifact/patchprocess/checkstyle-result-diff.txt
          Findbugs warnings https://builds.apache.org/job/PreCommit-HDFS-Build/10425/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HDFS-Build/10425/artifact/patchprocess/testrun_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/10425/testReport/
          Java 1.7.0_55
          uname Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/10425/console

          This message was automatically generated.

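Three of the FindBugs warning classes in the report above recur across several files and have mechanical fixes. The sketch below is illustrative only (the class and method names are made up, not the actual HDFS code): iterate `entrySet()` instead of `keySet()` plus a lookup per key, pass an explicit charset instead of relying on the platform default, and widen one operand before an `int` multiplication whose result is stored in a `long`.

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the recurring FindBugs fixes; not HDFS source.
public class FindbugsFixes {

  // "inefficient use of keySet iterator" fix: entrySet() gives key and
  // value in one pass, with no extra get() lookup per key.
  static String renderOptions(Map<String, String> options) {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, String> e : options.entrySet()) {
      sb.append(e.getKey()).append('=').append(e.getValue()).append(' ');
    }
    return sb.toString().trim();
  }

  // "reliance on default encoding" fix: name the charset explicitly so
  // the bytes are the same on every platform.
  static byte[] encodeZoneKey(String schemaName) {
    return schemaName.getBytes(StandardCharsets.UTF_8);
  }

  // "integer multiplication cast to long" fix: widen an operand first,
  // so the product is computed in long arithmetic and cannot overflow int.
  static long cellOffset(int cellSize, int cellIndex) {
    return (long) cellSize * cellIndex;
  }

  public static void main(String[] args) {
    Map<String, String> opts = new LinkedHashMap<>();
    opts.put("k", "6");
    opts.put("m", "3");
    System.out.println(renderOptions(opts));                 // k=6 m=3
    System.out.println(encodeZoneKey("RS-6-3").length);      // 6
    System.out.println(cellOffset(1 << 20, 4096));           // 4294967296
  }
}
```

The last value exceeds Integer.MAX_VALUE, which is exactly the overflow the un-widened `int * int` version would silently truncate.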
          Hadoop QA added a comment -



          -1 overall



          Vote Subsystem Runtime Comment
          0 pre-patch 15m 23s Pre-patch HDFS-7285 compilation is healthy.
          +1 @author 0m 0s The patch does not contain any @author tags.
          -1 tests included 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
          -1 whitespace 0m 0s The patch has 2 line(s) that end in whitespace.
          +1 javac 7m 34s There were no new javac warning messages.
          +1 javadoc 9m 41s There were no new javadoc warning messages.
          -1 release audit 0m 16s The applied patch generated 1 release audit warnings.
          -1 checkstyle 4m 1s The applied patch generated 9 additional checkstyle issues.
          +1 install 1m 50s mvn install still works.
          +1 eclipse:eclipse 0m 33s The patch built with eclipse:eclipse.
          -1 findbugs 3m 13s The patch appears to introduce 11 new Findbugs (version 2.0.3) warnings.
          +1 native 3m 17s Pre-build of native portion
          -1 hdfs tests 217m 55s Tests failed in hadoop-hdfs.
              263m 53s  



          Reason Tests
          FindBugs module:hadoop-hdfs
            Inconsistent synchronization of org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 89% of time. Unsynchronized access at DFSOutputStream.java:[line 142]
            Result of integer multiplication cast to long in org.apache.hadoop.hdfs.DFSStripedInputStream.planReadPortions(int, int, long, int, int) at DFSStripedInputStream.java:[line 95]
            Dead store to offSuccess in org.apache.hadoop.hdfs.StripedDataStreamer.endBlock() at StripedDataStreamer.java:[line 104]
            Result of integer multiplication cast to long in org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStriped.spaceConsumed() at BlockInfoStriped.java:[line 208]
            Possible null pointer dereference of arr$ in org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long); dereferenced at BlockInfoStripedUnderConstruction.java:[line 206]
            Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String, ECSchema): String.getBytes() at ErasureCodingZoneManager.java:[line 116]
            Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath): new String(byte[]) at ErasureCodingZoneManager.java:[line 81]
            org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$AddECSchemaOp.toString() makes inefficient use of keySet iterator instead of entrySet iterator at FSEditLogOp.java:[line 4552]
            org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$ModifyECSchemaOp.toString() makes inefficient use of keySet iterator instead of entrySet iterator at FSEditLogOp.java:[line 4624]
            org.apache.hadoop.hdfs.server.namenode.FSImageSerialization.writeECSchema(DataOutputStream, ECSchema) makes inefficient use of keySet iterator instead of entrySet iterator at FSImageSerialization.java:[line 792]
            Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.constructInternalBlock(LocatedStripedBlock, int, int, int, int) at StripedBlockUtil.java:[line 75]
          Failed unit tests hadoop.hdfs.server.namenode.TestSaveNamespace
            hadoop.hdfs.server.blockmanagement.TestDatanodeManager
            hadoop.hdfs.qjournal.TestNNWithQJM
            hadoop.hdfs.server.namenode.TestNameEditsConfigs
            hadoop.hdfs.TestDFSRollback
            hadoop.hdfs.server.namenode.TestCreateEditsLog
            hadoop.hdfs.server.namenode.TestCheckpoint
            hadoop.hdfs.server.namenode.TestMetadataVersionOutput
            hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
            hadoop.hdfs.TestReadStripedFile
            hadoop.hdfs.TestBlockStoragePolicy
            hadoop.hdfs.server.namenode.TestEditLogRace
            hadoop.hdfs.TestDFSUpgrade
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithNodeGroup
            hadoop.hdfs.server.namenode.TestFileTruncate
            hadoop.hdfs.TestDFSFinalize
            hadoop.hdfs.TestDFSStorageStateRecovery
            hadoop.hdfs.web.TestWebHDFSAcl
            hadoop.hdfs.protocol.TestLayoutVersion
            hadoop.hdfs.TestHDFSServerPorts
            hadoop.hdfs.server.namenode.TestFileContextAcl
            hadoop.hdfs.TestRollingUpgrade
            hadoop.hdfs.TestDFSStartupVersions
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
            hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
            hadoop.hdfs.server.namenode.TestFSNamesystem
            hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
            hadoop.hdfs.server.namenode.TestValidateConfigurationSettings
            hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
            hadoop.hdfs.server.namenode.TestNameNodeAcl
            hadoop.hdfs.TestDFSInotifyEventInputStream
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicyConsiderLoad
            hadoop.hdfs.server.namenode.TestStartup
            hadoop.hdfs.server.namenode.TestNameNodeRecovery
          Timed out tests org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader
            org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks



          Subsystem Report/Notes
          Patch URL http://issues.apache.org/jira/secure/attachment/12728418/HDFS-7859-HDFS-7285.002.patch
          Optional Tests javadoc javac unit findbugs checkstyle
          git revision HDFS-7285 / bc3091b
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/10424/artifact/patchprocess/whitespace.txt
          Release Audit https://builds.apache.org/job/PreCommit-HDFS-Build/10424/artifact/patchprocess/patchReleaseAuditProblems.txt
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/10424/artifact/patchprocess/checkstyle-result-diff.txt
          Findbugs warnings https://builds.apache.org/job/PreCommit-HDFS-Build/10424/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          hadoop-hdfs test log https://builds.apache.org/job/PreCommit-HDFS-Build/10424/artifact/patchprocess/testrun_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/10424/testReport/
          Java 1.7.0_55
          uname Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/10424/console

          This message was automatically generated.

          Allen Wittenauer added a comment -

          FYI, there are now two of these running:

          https://builds.apache.org/job/PreCommit-HDFS-Build/10424/console
          https://builds.apache.org/job/PreCommit-HDFS-Build/10425/console

          It's still churning through the hadoop-hdfs unit tests on the one that Xinwei Qin submitted earlier. hadoop-hdfs is one of the slowest sets of unit tests we have. I have a hunch that you folks have added code in this branch which has made it even slower ... to the point that Jenkins will likely kill the test-patch job before it finishes.

          zhz Zhe Zhang added a comment -

          Submitting a duplicate patch to trigger Jenkins.

          aw Allen Wittenauer added a comment -

          test-patch.sh reads the name of the patch, not any of the JIRA metadata. So if the patch is named something generic, it thinks it is trunk. See HowToContribute for the official rules, but as you can see from the name of the patch above, it knows about a few different methods to name them.

          zhz Zhe Zhang added a comment -

          Allen Wittenauer I quickly went through HDFS-7285 sub tasks. If you'd like you can try with HDFS-8236.

          I actually tried with HDFS-8033 earlier, but it still tried to apply the patch against trunk. Maybe it's because I didn't set the target version to HDFS-7285 when submitting the patch.

          aw Allen Wittenauer added a comment -

          I did some playing with a test JIRA this morning. IIRC, it looks like Submit Patch is only available to the requester and the assignee when the JIRA is in the 'in progress' status, and that status can only be changed by the assignee and/or the requester. I then thought, well, I'll force it through Jenkins... but test-patch.sh is "smart" in that it will only process JIRAs that are in Patch Available status. So while I could have changed the meta info in the JIRA to force it to kick off, I didn't want to freak anyone out more than I already had by popping up in here. I thought it was going to be an easy/quick test.

          Running test-patch.sh as a developer against this JIRA # does run it against the HDFS-7285 branch, though, as expected. (I had tested patches against branch-2 but hadn't had a chance to test against a dev branch... this one was updated last night, so I thought it'd be a good guinea pig.)

          zhz Zhe Zhang added a comment -

          Thanks Allen. Do you know why "Submit Patch" isn't available here?

          aw Allen Wittenauer added a comment -

          (now we just need a submit button. lol)

          aw Allen Wittenauer added a comment -

          Same patch, but uploaded with the branch name to make test-patch.sh kick off against that branch instead of trunk.

          xinwei Xinwei Qin added a comment -

          Update the patch based on the latest HDFS-7285 branch and Kai's comments.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          HDFS-8062 does not require this, since the default schema can be hard-coded.

          drankye Kai Zheng added a comment -

          Tsz Wo Nicholas Sze I haven't had much time to sort out the complete list yet, but I thought HDFS-8062 would be one.

          drankye Kai Zheng added a comment -

          What do you mean? Could you give an example?

          Well, my last statement was poorly worded and inaccurate. After double-checking the related code, I saw that only striped blocks derived from the following hard-coded values are persisted in the image. So please disregard that statement.

          What are the subsequent issues?

          We do have some and will sort them out later. I have opened HDFS-8156 to resolve some dependencies caused by HDFS-7866, which was originally planned to be done here.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          > ... We had already persisted some hard-coded values that should be covered by a schema ...

          What do you mean? Could you give an example?

          > ... Please note this issue blocks many subsequent issues and I thought we still have enough time for them right before the merge happening.

          What are the subsequent issues?

          drankye Kai Zheng added a comment -

          Tsz Wo Nicholas Sze,

          Since we don't yet support add/delete/update/rename schema operations, we don't need to persist anything in the NN at this moment. We will support some of these schema operations down the road. We may persist schemas at that time. Sound good?

          Please note it's not true that "we don't need to persist anything in the NN at this moment." We have already persisted, in the image, some hard-coded values that should be covered by a schema. Without this, we will definitely need to revisit the image format change some time later. As I said above, the schema definition is flexible enough that if we persist the whole schema object in the image, we would not likely need to change the image later. Please note this issue blocks many subsequent issues, and I think we still have enough time for them before the merge happens.

          drankye Kai Zheng added a comment -

          Hi Tsz Wo Nicholas Sze,

          Per your request I updated the doc in HDFS-7337 accordingly. It entirely rewrites the schema section and mainly reflects the existing related discussions and even implementations. I hope it addresses your questions here well. Your further comments and questions are very welcome. Thanks in advance!

          drankye Kai Zheng added a comment -

          How can we be sure that the schema object format won't change?

          Good question. In the ECSchema class, in addition to the common parameters widely used by typical erasure codecs, an options map is included so that any complex codec can carry its own specific parameters as key-value pairs; those parameters are interpreted by the corresponding erasure coders. We try to make it flexible enough to avoid such a change, but in case a change is needed anyway, I think it is supported via the image layout version.
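The structure Kai describes — fixed common codec parameters plus an extensible options map — can be sketched roughly as below. This is a minimal illustration, not the actual HDFS ECSchema API; the class and method names are assumptions for the sake of the example.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an ECSchema-like class: common codec parameters
// plus an extensible options map for codec-specific key-value pairs that
// only the corresponding erasure coders need to interpret.
public class ECSchemaSketch {
    private final String schemaName;   // e.g. "RS-6-3"; doubles as the ID
    private final String codecName;    // e.g. "rs"
    private final int numDataUnits;
    private final int numParityUnits;
    // Extra parameters a complex codec may need, opaque to the NameNode.
    private final Map<String, String> extraOptions;

    public ECSchemaSketch(String schemaName, String codecName,
                          int numDataUnits, int numParityUnits,
                          Map<String, String> extraOptions) {
        this.schemaName = schemaName;
        this.codecName = codecName;
        this.numDataUnits = numDataUnits;
        this.numParityUnits = numParityUnits;
        this.extraOptions =
            Collections.unmodifiableMap(new HashMap<>(extraOptions));
    }

    public String getSchemaName() { return schemaName; }
    public String getCodecName() { return codecName; }
    public int getNumDataUnits() { return numDataUnits; }
    public int getNumParityUnits() { return numParityUnits; }
    public String getExtraOption(String key) { return extraOptions.get(key); }

    public static void main(String[] args) {
        Map<String, String> opts = new HashMap<>();
        opts.put("cellSize", "65536"); // illustrative codec-specific option
        ECSchemaSketch rs63 = new ECSchemaSketch("RS-6-3", "rs", 6, 3, opts);
        System.out.println(rs63.getSchemaName() + " tolerates "
            + rs63.getNumParityUnits() + " failures");
    }
}
```

The point of the options map is that the image format need not change when a new codec introduces extra parameters: they travel as opaque key-value pairs.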

          drankye Kai Zheng added a comment -

          Using schema name as ID

          Since we want to keep ECSchema lightweight and avoid fields like a description, a friendly name like RS-6-3 makes more sense than a numeric ID. Users should clearly understand a schema before using it to create a zone, and the name helps with identifying it.
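The appeal of a name like RS-6-3 is that it is self-describing: the codec, data-unit count, and parity-unit count can be read straight off the name, with no lookup table. A rough sketch of that idea (the helper class and its methods are invented for illustration, not HDFS code):

```java
// Hypothetical helper showing why "RS-6-3"-style names can double as IDs:
// the codec and the (data, parity) unit counts are encoded in the name.
public class SchemaNameSketch {
    public static String buildName(String codec, int dataUnits, int parityUnits) {
        return codec.toUpperCase() + "-" + dataUnits + "-" + parityUnits;
    }

    // Returns {dataUnits, parityUnits} parsed from a "CODEC-d-p" name.
    public static int[] parseUnits(String schemaName) {
        String[] parts = schemaName.split("-");
        if (parts.length != 3) {
            throw new IllegalArgumentException(
                "Unexpected schema name: " + schemaName);
        }
        return new int[] {
            Integer.parseInt(parts[1]), Integer.parseInt(parts[2])
        };
    }

    public static void main(String[] args) {
        String name = buildName("rs", 6, 3);
        int[] units = parseUnits(name);
        System.out.println(name + ": data=" + units[0]
            + ", parity=" + units[1]);
    }
}
```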

          We don't even need the xml file.

          Yeah, defining a schema through a command by specifying its parameters should also be OK. I don't have a strong preference; any file format, or no file at all, would also work, I guess. We talked about this in the meetup, and it looked like the XML file was agreed on.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          > ... We would persist the whole schema object ...

          How can we be sure that the schema object format won't change?

          Since we don't yet support add/delete/update/rename schema operations, we don't need to persist anything in the NN at this moment. We will support some of these schema operations down the road. We may persist schemas at that time. Sound good?

          szetszwo Tsz Wo Nicholas Sze added a comment -

          > ... schema name for the ID purpose. ...

          There are a few choices:

          1. Using schema name as ID
          2. A schema name and a separated numeric ID
          3. Multiple schema names and a numeric ID

          Why use #1?

          drankye Kai Zheng added a comment -

          we don't know what to persist. For example, should we persist schema ID? We are not able to answer this question since we don't even know if a schema should have an ID.

          That's not true. We have ECSchema defined, and it uses the schema name as the ID. We would persist the whole schema object. Although the ongoing work isn't reflected in the design doc, we did do that following our related discussions. In the meetup with Zhe Zhang and Jing Zhao, we covered this aspect, and even your questions, already. It's my mistake that I didn't write it down clearly and update the doc accordingly.

          szetszwo Tsz Wo Nicholas Sze added a comment -

          > The patch under this JIRA handles saving / loading these default schemas in fsimage. I think this is necessary even without loading custom schemas from XML. Otherwise we cannot guarantee the NameNode which loads the fsimage has the same default schemas as the NameNode which saved it. It is obviously even more necessary when we add custom schemas ...

          I think we should not persist anything to NN before we have a clear design since we don't know what to persist. For example, should we persist schema ID? We are not able to answer this question since we don't even know if a schema should have an ID.

          If we change the layout later on, it requires cluster upgrade for the new layout and we have to support the old layout for backward compatibility.

          For now, I suggest just hard coding the only (6,3) Reed-Solomon schema. We don't even need the xml file.

          drankye Kai Zheng added a comment -

          Hi Zhe Zhang,

          Thanks for taking care of this and your good suggestion. It looks reasonable to me. This will sound like a more solid base for the merge.

          To summarize further:
          1. This issue (HDFS-7859) will provide two system-defined schemas in Java code: the system default schema (rs-6-3), which is already there, and a new one, suggested as rs-10-4. It also ensures the two schemas are persisted in the image/editlog for later querying.
          2. The remaining gaps will be handled as follow-on work in HDFS-7866, mainly covering how to customize site-specific schemas through an XML file. The design doc will also be updated.

          zhz Zhe Zhang added a comment -

          Tsz Wo Nicholas Sze / Kai Zheng: The phasing plan I posted might be a little confusing with regard to schemas. My apologies.

          In the offline meetup on 03/31, we didn't reach a clear conclusion on how much of schema work to include before merging. Therefore I left it in phase I, but marked it as optional. My thought was that we could make a better decision after observing how fast the work could proceed. Up to this point I think this thread is going pretty well and it seems we can have a multi-schema implementation when other HDFS-7285 tasks are done (see details below).

          Good questions on schema design. I think we eventually need to answer them in the broader scope of HDFS-7337. IIUC, HDFS-7859 / HDFS-7866 are not touching most of the tricky scenarios. Based on Kai's latest comment, HDFS-7866 will mostly handle default schemas embedded in the ECSchemaManager code.

          The patch under this JIRA handles saving / loading these default schemas in the fsimage. I think this is necessary even without loading custom schemas from XML. Otherwise we cannot guarantee that the NameNode which loads the fsimage has the same default schemas as the NameNode which saved it. It is obviously even more necessary when we add custom schemas. The logic in the patch is quite straightforward; it's mostly about serializing / deserializing schemas.
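The serialize / deserialize idea can be sketched as a simple round trip: write every field of the schema, including the options map, so a restarted NameNode rebuilds exactly the same schema it saved. The record layout below is invented for illustration only; the class name and field order are assumptions, and the real HDFS fsimage uses protobuf-based sections rather than this ad-hoc format.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Collections;
import java.util.Map;

// Rough sketch of persisting a schema record: all fields go into the
// stream in a fixed order so loading can reconstruct them the same way.
public class SchemaPersistSketch {
    public static byte[] save(String name, int dataUnits, int parityUnits,
                              Map<String, String> options) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeUTF(name);
            out.writeInt(dataUnits);
            out.writeInt(parityUnits);
            out.writeInt(options.size());   // options map: size, then pairs
            for (Map.Entry<String, String> e : options.entrySet()) {
                out.writeUTF(e.getKey());
                out.writeUTF(e.getValue());
            }
        }
        return bytes.toByteArray();
    }

    public static String loadName(byte[] data) throws IOException {
        try (DataInputStream in =
                 new DataInputStream(new ByteArrayInputStream(data))) {
            return in.readUTF(); // remaining fields are read back in order
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] record = save("RS-6-3", 6, 3,
            Collections.singletonMap("cellSize", "65536"));
        System.out.println("round-tripped schema: " + loadName(record));
    }
}
```

Persisting the whole record, options map included, is what lets the restarted NameNode agree with the one that wrote the image even if the built-in defaults later change.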

          So here's my proposal:

          1. Shrink this patch to get rid of logics on modifying and removing schemas (ECSchemaManager#modifyECSchema and OP_MODIFY_EC_SCHEMA).
          2. Repurpose HDFS-7866 to focus on loading custom schemas from site xml files.

          Tsz Wo Nicholas Sze, Kai Zheng, Vinayakumar B: let me know if you agree with the above. If we are all synced on this, how about moving this JIRA back to HDFS-7285 and keeping HDFS-7866 under HDFS-8031?
