HDFS-11565: Use compact identifiers for built-in ECPolicies in HdfsFileStatus

    Details

    • Target Version/s:
    • Hadoop Flags:
      Incompatible change
    • Release Note:
      Some of the existing fields in ErasureCodingPolicyProto have changed from required to optional. For system EC policies, these fields are populated from hardcoded values.

      Description

      Discussed briefly on HDFS-7337 with Kai Zheng. Quoting our conversation:

      From looking at the protos, one other question I had is about the overhead of these protos when using the hardcoded policies. There are a bunch of strings and ints, which can be kind of heavy since they're added to each HdfsFileStatus. Should we make the built-in ones identified purely by an ID, with these fully specified protos used only for the pluggable policies?

      Sounds like this could be considered separately because, for either built-in or plugged-in policies, the full meta info is maintained either in the code or in the persisted fsimage, so identifying them purely by an ID should work fine. If you agree, we could refactor the code you mentioned above separately.
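The trade-off under discussion can be sketched as follows. This is a minimal illustration, not the actual HDFS code: the class, field, and wire-format names here are simplified stand-ins for ErasureCodingPolicyProto and friends.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the compact-identifier idea: built-in policies are serialized as
// a bare numeric id, while pluggable policies carry full metadata on the wire.
class CompactPolicySketch {
    // Simplified stand-in for ErasureCodingPolicy (name, cell size, id).
    record Policy(String name, int cellSize, byte id) {}

    // Hardcoded table of built-in policies, keyed by id.
    static final Map<Byte, Policy> BUILT_IN = new HashMap<>();
    static {
        BUILT_IN.put((byte) 1, new Policy("RS-6-3-64k", 64 * 1024, (byte) 1));
        BUILT_IN.put((byte) 2, new Policy("RS-3-2-64k", 64 * 1024, (byte) 2));
    }

    // "Serialize": a built-in policy needs only its id; others need everything.
    static String toWire(Policy p) {
        if (BUILT_IN.containsKey(p.id())) {
            return "id=" + p.id();                      // compact form
        }
        return "id=" + p.id() + ";name=" + p.name()
            + ";cellSize=" + p.cellSize();              // full form
    }

    // "Deserialize": a bare id resolves against the hardcoded table, which
    // also dedupes: every lookup returns the same canonical object.
    static Policy fromWire(String wire) {
        String[] fields = wire.split(";");
        byte id = Byte.parseByte(fields[0].substring(3));
        Policy builtIn = BUILT_IN.get(id);
        if (builtIn != null) {
            return builtIn;
        }
        String name = fields[1].substring(5);
        int cellSize = Integer.parseInt(fields[2].substring(9));
        return new Policy(name, cellSize, id);
    }
}
```

The payoff is on the serialization side: for the common case of a built-in policy attached to every HdfsFileStatus, only a single small number crosses the wire instead of a name, a schema, and a cell size.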

      1. HDFS-11565.001.patch
        7 kB
        Andrew Wang
      2. HDFS-11565.002.patch
        7 kB
        Andrew Wang
      3. HDFS-11565.003.patch
        9 kB
        Andrew Wang

        Issue Links

          Activity

          andrew.wang Andrew Wang added a comment -

          Patch attached. This depends on HDFS-11623.

          andrew.wang Andrew Wang added a comment -

          One optimization made by this patch is that resolving policies through ErasureCodingPolicies deduplicates the built-in ErasureCodingPolicy objects in memory. We've seen significant memory taken by FileStatus fields when profiling applications like Hive.

          Later, we should also consider deduplicating the pluggable EC policies. Should be relatively straightforward to do.
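The deduplication idea mentioned above can be sketched as an interning cache; this is a hypothetical illustration with simplified stand-in types, not the actual patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of interning policy objects so that thousands of FileStatus entries
// share one canonical instance per policy id, instead of each holding its own.
class PolicyInterner {
    // Simplified stand-in for ErasureCodingPolicy.
    record Policy(String name, int cellSize, byte id) {}

    private static final Map<Byte, Policy> CACHE = new ConcurrentHashMap<>();

    // Return the canonical instance for this policy's id, caching on first use.
    static Policy intern(Policy p) {
        return CACHE.computeIfAbsent(p.id(), ignored -> p);
    }
}
```

For built-in policies the hardcoded table already plays this role; the sketch shows how the same approach could extend to pluggable policies later.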

          jojochuang Wei-Chiu Chuang added a comment - edited

          Andrew Wang thanks for working on it. The patch itself looks reasonable. Let's review it after HDFS-11623 is checked in.

          One issue I saw is

          +    if (policy == null) {
          +      return new ErasureCodingPolicy(proto.getName(),
          +          convertECSchema(proto.getSchema()),
          +          proto.getCellSize(), id);
          +    }
          

          This means a new ErasureCodingPolicy object is allocated each time it is called. Shouldn't it be cached, like SYSTEM_POLICIES_BY_NAME?

          andrew.wang Andrew Wang added a comment -

          Thanks for taking a look Wei-chiu! Like I said in my previous comment, I'd prefer deduping pluggable EC policies once we make more progress on that work. I can file a follow-on JIRA.

          andrew.wang Andrew Wang added a comment -

          Patch attached. I also reverted the change to the PB tag numbers, which makes it a little less incompatible.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 15s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 7s Maven dependency ordering for branch
          +1 mvninstall 12m 53s trunk passed
          +1 compile 1m 42s trunk passed
          +1 checkstyle 0m 40s trunk passed
          +1 mvnsite 1m 23s trunk passed
          +1 mvneclipse 0m 25s trunk passed
          +1 findbugs 3m 8s trunk passed
          +1 javadoc 1m 0s trunk passed
          0 mvndep 0m 6s Maven dependency ordering for patch
          -1 mvninstall 0m 30s hadoop-hdfs-client in the patch failed.
          -1 mvninstall 0m 48s hadoop-hdfs in the patch failed.
          +1 compile 1m 19s the patch passed
          +1 cc 1m 19s the patch passed
          +1 javac 1m 19s the patch passed
          +1 checkstyle 0m 38s the patch passed
          +1 mvnsite 1m 19s the patch passed
          +1 mvneclipse 0m 20s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 3m 18s the patch passed
          +1 javadoc 0m 55s the patch passed
          +1 unit 1m 11s hadoop-hdfs-client in the patch passed.
          -1 unit 71m 55s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 19s The patch does not generate ASF License warnings.
          105m 31s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
            hadoop.hdfs.protocolPB.TestPBHelper
          Timed out junit tests org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:a9ad5d6
          JIRA Issue HDFS-11565
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12862546/HDFS-11565.002.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 914f51276636 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / e8bdad7
          Default Java 1.8.0_121
          findbugs v3.0.0
          mvninstall https://builds.apache.org/job/PreCommit-HDFS-Build/19018/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt
          mvninstall https://builds.apache.org/job/PreCommit-HDFS-Build/19018/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19018/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19018/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19018/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          andrew.wang Andrew Wang added a comment -

          Could a reviewer try running TestPBHelper for me as well? It works for me locally, so I suspect precommit doesn't handle the PB change properly. cc Allen Wittenauer.

          jojochuang Wei-Chiu Chuang added a comment - edited

          Hi Andrew! Thanks for updating the patch and I think it's mostly good. I've got one question though:

          The test code

                // Convert proto back to an object and check for equality.
                ErasureCodingPolicy convertedPolicy = PBHelperClient
                    .convertErasureCodingPolicy(proto);
                assertEquals("Converted policy not equal", policy, convertedPolicy);
          

          asserts that the two ECPs are equal. So I looked at ErasureCodingPolicy#equals too:

          ErasureCodingPolicy#equals
          @Override
            public boolean equals(Object o) {
              if (this == o) {
                return true;
              }
              if (o == null || getClass() != o.getClass()) {
                return false;
              }
              ErasureCodingPolicy that = (ErasureCodingPolicy) o;
          
              return that.getName().equals(name) &&
                  that.getCellSize() == cellSize &&
                  that.getSchema().equals(schema);
            }
          

          It only compares name, cell size, and schema. Should it also compare the ECP id?

          This is not directly related to your patch, but given that an ECP id is not guaranteed to map to a unique ErasureCodingPolicy (say a client connects to two different clusters running different custom ErasureCodingPolicies), it seems reasonable to add one more check.
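The behavior in question can be checked with a small stand-in (simplified fields, not the real class): because equals deliberately omits the id, two policies that agree on name, cell size, and schema compare equal even with different ids.

```java
import java.util.Objects;

// Simplified stand-in mirroring ErasureCodingPolicy#equals as quoted above.
class EqualsSketch {
    static final class Policy {
        final String name;
        final int cellSize;
        final String schema;   // stand-in for the ECSchema object
        final byte id;

        Policy(String name, int cellSize, String schema, byte id) {
            this.name = name;
            this.cellSize = cellSize;
            this.schema = schema;
            this.id = id;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) {
                return true;
            }
            if (o == null || getClass() != o.getClass()) {
                return false;
            }
            Policy that = (Policy) o;
            return that.name.equals(name)
                && that.cellSize == cellSize
                && that.schema.equals(schema);   // id intentionally omitted
        }

        @Override
        public int hashCode() {
            return Objects.hash(name, cellSize, schema);
        }
    }
}
```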

          jojochuang Wei-Chiu Chuang added a comment -

          Ah, never mind. I figured it out. Two ECPs with the same cell size, schema, and name but different ids still represent the same ECP.

          aw Allen Wittenauer added a comment -

          It works for me locally, so I suspect precommit doesn't handle the PB change properly.

          Based upon a quick glance, it looks like maven is complaining about a dependency ordering problem which precommit is correctly passing along.

          andrew.wang Andrew Wang added a comment -

          There seems to be a versioning issue where the build is picking up both a timestamp-versioned hadoop-common dependency and the SNAPSHOT version:

          Dependency convergence error for org.apache.hadoop:hadoop-common:3.0.0-alpha3-20170408.000249-344 paths to dependency are:
          +-org.apache.hadoop:hadoop-hdfs:3.0.0-alpha3-SNAPSHOT
            +-org.apache.hadoop:hadoop-common:3.0.0-alpha3-20170408.000249-344
          and
          +-org.apache.hadoop:hadoop-hdfs:3.0.0-alpha3-SNAPSHOT
            +-org.apache.hadoop:hadoop-common:3.0.0-alpha3-20170408.000249-344
          and
          +-org.apache.hadoop:hadoop-hdfs:3.0.0-alpha3-SNAPSHOT
            +-org.apache.hadoop:hadoop-kms:3.0.0-alpha3-20170408.000302-344
              +-org.apache.hadoop:hadoop-common:3.0.0-alpha3-SNAPSHOT
          and
          +-org.apache.hadoop:hadoop-hdfs:3.0.0-alpha3-SNAPSHOT
            +-org.apache.hadoop:hadoop-kms:3.0.0-alpha3-20170408.000302-344
              +-org.apache.hadoop:hadoop-common:3.0.0-alpha3-SNAPSHOT
          

          Are we sure this isn't some kind of precommit build issue? This doesn't happen when I do mvn install from the root locally.

          aw Allen Wittenauer added a comment -

          This doesn't happen when I do mvn install from the root locally.

          These types of problems rarely happen on local installs because a) almost everyone installs from root and b) the m2 cache already has everything in it from development.

          jojochuang Wei-Chiu Chuang added a comment -

          Given that name, schema, and cell size are all optional, should we do extra checks to make sure these fields are properly populated in PBHelperClient#convertErasureCodingPolicy? Currently it assumes that if it's not a system built-in ECP, the name, schema, and cell size hold valid values, but that is not necessarily true. I want to make sure that if an invalid ECP is passed, the code throws an exception right away.

              if (policy == null) {
                return new ErasureCodingPolicy(proto.getName(),
                    convertECSchema(proto.getSchema()),
                    proto.getCellSize(), id);
              }
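One way to fail fast on a malformed proto is to validate field presence before constructing the policy. This is a hypothetical sketch with simplified stand-in types; the checks in the actual patch may differ:

```java
// Sketch of validating optional proto fields before building a non-built-in
// policy. Proto and Policy are simplified stand-ins, with nullable boxed
// fields modeling optional proto fields that may be absent.
class ConvertSketch {
    record Proto(Byte id, String name, String schema, Integer cellSize) {}
    record Policy(String name, String schema, int cellSize, byte id) {}

    static Policy convert(Proto proto) {
        // A built-in id would be resolved from a hardcoded table here. For a
        // pluggable policy, every field must have been populated explicitly.
        if (proto.name() == null || proto.schema() == null
                || proto.cellSize() == null) {
            throw new IllegalArgumentException(
                "Missing field in ErasureCodingPolicyProto for non-built-in id "
                + proto.id());
        }
        return new Policy(proto.name(), proto.schema(),
            proto.cellSize(), proto.id());
    }
}
```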
          
          andrew.wang Andrew Wang added a comment -

          Sure, good idea Wei-chiu, added some Precondition checks and new unit tests. Let's hope precommit works this time.

          hadoopqa Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 19s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          0 mvndep 0m 7s Maven dependency ordering for branch
          +1 mvninstall 14m 32s trunk passed
          +1 compile 1m 26s trunk passed
          +1 checkstyle 0m 40s trunk passed
          +1 mvnsite 1m 27s trunk passed
          +1 mvneclipse 0m 27s trunk passed
          +1 findbugs 3m 14s trunk passed
          +1 javadoc 1m 2s trunk passed
          0 mvndep 0m 7s Maven dependency ordering for patch
          +1 mvninstall 1m 23s the patch passed
          +1 compile 1m 23s the patch passed
          +1 cc 1m 23s the patch passed
          +1 javac 1m 23s the patch passed
          -0 checkstyle 0m 38s hadoop-hdfs-project: The patch generated 1 new + 99 unchanged - 0 fixed = 100 total (was 99)
          +1 mvnsite 1m 21s the patch passed
          +1 mvneclipse 0m 23s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 3m 23s the patch passed
          +1 javadoc 0m 56s the patch passed
          +1 unit 1m 9s hadoop-hdfs-client in the patch passed.
          -1 unit 64m 19s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 19s The patch does not generate ASF License warnings.
          100m 0s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting



          Subsystem Report/Notes
          Docker Image:yetus/hadoop:612578f
          JIRA Issue HDFS-11565
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12862930/HDFS-11565.003.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc
          uname Linux 9911dc0d171d 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 3a91376
          Default Java 1.8.0_121
          findbugs v3.0.0
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19053/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19053/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19053/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19053/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

          jojochuang Wei-Chiu Chuang added a comment -

          +1 after fixing the checkstyle warning. Thanks Andrew!

          andrew.wang Andrew Wang added a comment -

          I'll fix the unused import on commit, thanks for reviewing Wei-chiu!

          andrew.wang Andrew Wang added a comment -

          Committed to trunk, thanks again Wei-chiu for reviewing.

          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11580 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11580/)
          HDFS-11565. Use compact identifiers for built-in ECPolicies in (wang: rev 966b1b5b44103f3e3952da45da264d76fb3ac384)

          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
          hudson Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11591 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11591/)
          HDFS-11565. Use compact identifiers for built-in ECPolicies in (wang: rev 966b1b5b44103f3e3952da45da264d76fb3ac384)

          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto

            People

            • Assignee:
              andrew.wang Andrew Wang
              Reporter:
              andrew.wang Andrew Wang
            • Votes:
              0
              Watchers:
              6

              Dates

              • Created:
                Updated:
                Resolved:

                Development