Hadoop HDFS / HDFS-11956

Do not require a storage ID or target storage IDs when writing a block

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 3.0.0-alpha4
    • Fix Version/s: 3.0.0-alpha4
    • Component/s: None
    • Labels:
      None
    • Target Version/s:
    • Release Note:
      Hadoop 2.x clients do not pass the storage ID or target storage IDs when writing a block. For backwards compatibility, the DataNode will not require the presence of these fields. This means older clients are unable to write to a particular storage as chosen by the NameNode (e.g. HDFS-9806).

      Description

      Seems like HDFS-9807 broke backwards compatibility with Hadoop 2.x clients. When talking to a 3.0.0-alpha4 DN with security on:

      2017-06-06 23:27:22,568 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block token verification failed: op=WRITE_BLOCK, remoteAddress=/172.28.208.200:53900, message=Block token with StorageIDs [DS-c0f24154-a39b-4941-93cd-5b8323067ba2] not valid for access with StorageIDs []
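The fix relaxes the check that produces this failure. Below is a minimal, illustrative Java sketch of the relaxed verification (the class and method names here are assumptions for illustration, not the actual BlockTokenSecretManager source): if the request carries no target storage IDs, as is the case for a 2.x client, the comparison is skipped; otherwise every requested ID must be covered by the token.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative stand-in for the storage-ID portion of the block token check.
public class StorageIdCheckSketch {
  static void checkStorageIds(String[] tokenStorageIds,
                              String[] requestStorageIds) throws IOException {
    if (requestStorageIds == null || requestStorageIds.length == 0) {
      // A Hadoop 2.x client never sets these fields, so there is
      // nothing to verify; allow the write for backwards compatibility.
      return;
    }
    Set<String> allowed = new HashSet<>(Arrays.asList(tokenStorageIds));
    for (String id : requestStorageIds) {
      if (!allowed.contains(id)) {
        throw new IOException("Block token with StorageIDs "
            + Arrays.toString(tokenStorageIds)
            + " not valid for access with StorageIDs "
            + Arrays.toString(requestStorageIds));
      }
    }
  }
}
```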
      
      1. HDFS-11956.004.patch
        4 kB
        Ewan Higgs
      2. HDFS-11956.003.patch
        17 kB
        Ewan Higgs
      3. HDFS-11956.002.patch
        16 kB
        Ewan Higgs
      4. HDFS-11956.001.patch
        14 kB
        Ewan Higgs


          Activity

Hudson added a comment -

          SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11925 (See https://builds.apache.org/job/Hadoop-trunk-Commit/11925/)
HDFS-11956. Do not require a storage ID or target storage IDs when writing a block. (wang: rev 2c367b464c86a7d67a2b8dd82ae804d169957573)

          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/token/block/TestBlockToken.java
          • (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java
Andrew Wang added a comment -

          Thanks for working on this Ewan, committed to trunk! Thanks also to Wei-chiu for reviewing.

Andrew Wang added a comment -

          +1 LGTM, I'll adjust the release notes as well.

Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 1m 33s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
          +1 mvninstall 14m 35s trunk passed
          +1 compile 0m 57s trunk passed
          +1 checkstyle 0m 39s trunk passed
          +1 mvnsite 1m 4s trunk passed
          +1 findbugs 1m 49s trunk passed
          +1 javadoc 0m 42s trunk passed
          +1 mvninstall 0m 57s the patch passed
          +1 compile 0m 48s the patch passed
          +1 javac 0m 48s the patch passed
          +1 checkstyle 0m 36s the patch passed
          +1 mvnsite 1m 2s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 2m 3s the patch passed
          +1 javadoc 0m 43s the patch passed
          -1 unit 75m 43s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 19s The patch does not generate ASF License warnings.
          104m 56s



          Reason Tests
          Failed junit tests hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
            hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
            hadoop.hdfs.TestPread
            hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.web.TestWebHdfsTimeouts



          Subsystem Report/Notes
Docker Image: yetus/hadoop:14b5c93
          JIRA Issue HDFS-11956
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12874469/HDFS-11956.004.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 9eaf573c5f1b 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 379f19a
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/20041/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/20041/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/20041/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

Ewan Higgs added a comment -

Attaching a version of the patch that doesn't use a config switch.

Andrew Wang added a comment -

          Maybe the forward compatibility of StorageTypes is another JIRA I should raise?

Sure, it would certainly be great to support this if possible. If not, there is precedent for requiring a client upgrade to use new features, like encryption or EC.

          What's your time frame for tagging alpha4?

          I'm going to be travelling for a while starting July 8th, so my hope was Monday June 26th so there's some slack in the schedule. Since reverting seems like an okay solution, we don't need to feel pressured for this JIRA.

Ewan Higgs added a comment -

          Hi Andrew

          IIUC, we know the storage type even for an old client since it passes it in the writeBlock request. Can an old client correctly pass along an unknown StorageType (e.g. PROVIDED)?

          I think you understood correctly. I don't think an old client will be able to deserialise a PROVIDED StorageType from the protobuf, so it will fail to pass along that StorageType (though I have not yet done the cross-version testing with Hadoop 2.6). I think this is the same as would be the case any time a new StorageType is introduced (e.g. if we hypothetically added StorageType.NVME, StorageType.SMR, etc.). Maybe the forward compatibility of StorageTypes is another JIRA I should raise?
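(To illustrate the deserialisation point: in proto2, an enum value the decoder doesn't recognise is treated as an unknown field, so a 2.x client cannot surface it as a StorageType. The excerpt below is in the shape of hdfs.proto's StorageTypeProto, but the exact names and field numbers here are assumptions.)

```proto
enum StorageTypeProto {
  DISK = 1;
  SSD = 2;
  ARCHIVE = 3;
  RAM_DISK = 4;
  PROVIDED = 5; // New in 3.x; a proto2 decoder generated before this value
                // existed treats it as an unknown field, not an enum value.
}
```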

          If so, then I see how this works; essentially, only require storageIDs when writing to provided storage.

          Yes.

          For 3.0.0-alpha4 I can also revert HDFS-9807 while we figure out this JIRA. We did this internally to unblock testing.

          I'm traveling today so I won't be able to furnish a patch just yet. What's your time frame for tagging alpha4?

Andrew Wang added a comment -

          Thanks Ewan. I'm new to this feature, so IIUC, we know the storage type even for an old client since it passes it in the writeBlock request. Can an old client correctly pass along an unknown StorageType (e.g. PROVIDED)? If so, then I see how this works; essentially, only require storageIDs when writing to provided storage.

          For 3.0.0-alpha4 I can also revert HDFS-9807 while we figure out this JIRA. We did this internally to unblock testing.

Ewan Higgs added a comment -

          Hi,
Another idea is to just ignore the BlockTokenIdentifier if the storageId list in the request is empty. The current intention of the storageId in the message is just a suggestion for the datanode in most cases, but in the case of provided storage (HDFS-9806) it will be the storageId of the provided storage system. If the storageId list is empty, the write to the provided storage will simply fail, since the datanode won't know where or how to write it.

Andrew Wang added a comment -

          Hey folks, could we close on this issue this week? I'm planning to cut alpha4 next week.

Wei-Chiu Chuang added a comment -

Hey Ewan, could you please elaborate a little bit more on this config key?
For example, instead of "will allow older clients to access the system", maybe you can be more precise and say that this will allow old clients (Hadoop 2.x) to access a Hadoop 3 cluster.
Also, for "but will prevent some newer features from working", it might be better to mention that you mean features added in Hadoop 3. By the way, what are the new features that would not work? Looking at HDFS-9807, it looks like disabling it would break the HSM block placement policy.

Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 19s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 14m 39s trunk passed
          +1 compile 0m 57s trunk passed
          +1 checkstyle 0m 54s trunk passed
          +1 mvnsite 1m 9s trunk passed
          +1 findbugs 1m 57s trunk passed
          +1 javadoc 0m 47s trunk passed
          +1 mvninstall 0m 56s the patch passed
          +1 compile 0m 44s the patch passed
          +1 javac 0m 44s the patch passed
          -0 checkstyle 0m 43s hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 711 unchanged - 0 fixed = 714 total (was 711)
          +1 mvnsite 0m 55s the patch passed
          -1 whitespace 0m 0s The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
          +1 xml 0m 1s The patch has no ill-formed XML file.
          +1 findbugs 1m 49s the patch passed
          +1 javadoc 0m 38s the patch passed
          -1 unit 93m 39s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 20s The patch does not generate ASF License warnings.
          121m 54s



          Reason Tests
          Failed junit tests hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
            hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes



          Subsystem Report/Notes
Docker Image: yetus/hadoop:14b5c93
          JIRA Issue HDFS-11956
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12873187/HDFS-11956.003.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml
          uname Linux 60f266674f7c 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / fb68980
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19922/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
          whitespace https://builds.apache.org/job/PreCommit-HDFS-Build/19922/artifact/patchprocess/whitespace-eol.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19922/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19922/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19922/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

Ewan Higgs added a comment -

Attaching a patch with an hdfs-default.xml entry and some checkstyle fixes.

Andrew Wang added a comment -

LGTM, though we should also add an entry to hdfs-default.xml as documentation for this new option, and some of the checkstyle issues look fixable. I'd appreciate it if you could validate that the failed unit tests are unrelated (sadly, there are a lot).

          Chris Douglas do you want to review as well?

Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 18s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 14m 38s trunk passed
          +1 compile 0m 58s trunk passed
          +1 checkstyle 0m 50s trunk passed
          +1 mvnsite 1m 6s trunk passed
          +1 findbugs 2m 0s trunk passed
          +1 javadoc 0m 48s trunk passed
          +1 mvninstall 0m 58s the patch passed
          +1 compile 0m 57s the patch passed
          +1 javac 0m 57s the patch passed
          -0 checkstyle 0m 50s hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 711 unchanged - 0 fixed = 718 total (was 711)
          +1 mvnsite 1m 0s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 55s the patch passed
          +1 javadoc 0m 40s the patch passed
          -1 unit 93m 38s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 20s The patch does not generate ASF License warnings.
          122m 21s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.datanode.TestDataNodeMXBean
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140
            hadoop.tools.TestHdfsConfigFields
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
            hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
            hadoop.hdfs.server.blockmanagement.TestReplicationPolicy
            hadoop.hdfs.server.namenode.ha.TestPipelinesFailover



          Subsystem Report/Notes
Docker Image: yetus/hadoop:14b5c93
          JIRA Issue HDFS-11956
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12873046/HDFS-11956.002.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux d3243376df8b 3.13.0-108-generic #155-Ubuntu SMP Wed Jan 11 16:58:52 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 999c8fc
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19909/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19909/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19909/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19909/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

Ewan Higgs added a comment -

Attaching an updated patch with a unit test. In the test, the strict BlockTokenSecretManager (strictSM) will fail when the passed storage IDs are wrong, but the permissive one (permissiveSM) will allow the access. strictSM corresponds to having the config value enabled, while permissiveSM corresponds to it being disabled for legacy clients.
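A minimal, self-contained sketch of the idea behind this test (illustrative only: it models the strict/permissive split with a plain helper rather than the real BlockTokenSecretManager constructor and checkAccess signatures):

```java
import java.io.IOException;

public class StrictVsPermissiveSketch {
  // Models checkAccess: the strict manager compares storage IDs,
  // the permissive one ignores them entirely.
  static void checkAccess(boolean strict, String[] tokenIds,
                          String[] requestIds) throws IOException {
    if (!strict) {
      return; // permissiveSM: storage IDs are not considered
    }
    for (String id : requestIds) {
      boolean found = false;
      for (String t : tokenIds) {
        found = found || t.equals(id);
      }
      if (!found) {
        throw new IOException("StorageID " + id + " not in block token");
      }
    }
  }

  public static void main(String[] args) {
    String[] tokenIds = {"DS-c0f24154"};
    String[] wrongIds = {"DS-deadbeef"};
    try {
      checkAccess(true, tokenIds, wrongIds); // strictSM: must throw
      System.out.println("unexpected: strict check passed");
    } catch (IOException expected) {
      System.out.println("strict check rejected wrong StorageIDs");
    }
    try {
      checkAccess(false, tokenIds, wrongIds); // permissiveSM: must pass
      System.out.println("permissive check allowed legacy access");
    } catch (IOException e) {
      System.out.println("unexpected: permissive check failed");
    }
  }
}
```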

Hadoop QA added a comment -
          -1 overall



          Vote Subsystem Runtime Comment
          0 reexec 0m 14s Docker mode activated.
          +1 @author 0m 0s The patch does not contain any @author tags.
          +1 test4tests 0m 0s The patch appears to include 2 new or modified test files.
          +1 mvninstall 13m 31s trunk passed
          +1 compile 0m 50s trunk passed
          +1 checkstyle 0m 44s trunk passed
          +1 mvnsite 1m 0s trunk passed
          +1 findbugs 1m 43s trunk passed
          +1 javadoc 0m 42s trunk passed
          +1 mvninstall 0m 48s the patch passed
          +1 compile 0m 45s the patch passed
          +1 javac 0m 45s the patch passed
          -0 checkstyle 0m 41s hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 711 unchanged - 0 fixed = 714 total (was 711)
          +1 mvnsite 0m 51s the patch passed
          +1 whitespace 0m 0s The patch has no whitespace issues.
          +1 findbugs 1m 44s the patch passed
          +1 javadoc 0m 37s the patch passed
          -1 unit 70m 15s hadoop-hdfs in the patch failed.
          +1 asflicense 0m 19s The patch does not generate ASF License warnings.
          96m 0s



          Reason Tests
          Failed junit tests hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
            hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
            hadoop.hdfs.web.TestWebHDFS
            hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
            hadoop.hdfs.server.namenode.TestDecommissioningStatus
            hadoop.tools.TestHdfsConfigFields



          Subsystem Report/Notes
Docker Image: yetus/hadoop:14b5c93
          JIRA Issue HDFS-11956
          JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12873002/HDFS-11956.001.patch
          Optional Tests asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle
          uname Linux 14c8f2a52ed8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
          Build tool maven
          Personality /testptch/hadoop/patchprocess/precommit/personality/provided.sh
          git revision trunk / 999c8fc
          Default Java 1.8.0_131
          findbugs v3.1.0-RC1
          checkstyle https://builds.apache.org/job/PreCommit-HDFS-Build/19906/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
          unit https://builds.apache.org/job/PreCommit-HDFS-Build/19906/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
          Test Results https://builds.apache.org/job/PreCommit-HDFS-Build/19906/testReport/
          modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
          Console output https://builds.apache.org/job/PreCommit-HDFS-Build/19906/console
          Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org

          This message was automatically generated.

Andrew Wang added a comment -

          Thanks for working on this Ewan. Is it possible to add a unit test for this?

Ewan Higgs added a comment -

Introduce dfs.block.access.token.storageid.enable, which will be false by default. When it's turned on, BlockTokenSecretManager.checkAccess will consider the storage ID when verifying the request. This allows for backwards compatibility all the way back to 2.6.x.

Ewan Higgs added a comment -

Attaching a patch that introduces dfs.block.access.token.storageid.enable, which will be false by default. When it's turned on, BlockTokenSecretManager.checkAccess will consider the storage ID when verifying the request.

Ewan Higgs added a comment -

I took a look and saw that this fails when writing blocks, e.g.:

          hadoop-2.6.5/bin/hdfs dfs -copyFromLocal hello.txt /
          

This comes from the fact that the BlockTokenIdentifier has the StorageID in there, but the StorageID is an optional field in the request that is new in 3.0. This means that it isn't passed in. Defaulting to 'null' and allowing this would of course defeat the purpose of the BlockTokenIdentifier, so I think this should be fixed with a boolean flag (e.g. dfs.block.access.token.storageid.enable) which defaults to false and makes the BlockTokenSecretManager only use the storage ID in the checkAccess call if it's enabled. This will allow old clients to work, but it won't allow the system to take advantage of new features enabled by using the storage ID in the write calls.
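For reference, a sketch of what the corresponding hdfs-default.xml entry might look like (the key name and default come from this comment; the description wording is an assumption, and note the switch was dropped again in patch 004):

```xml
<!-- Hypothetical hdfs-default.xml entry for the proposed switch. -->
<property>
  <name>dfs.block.access.token.storageid.enable</name>
  <value>false</value>
  <description>
    If true, BlockTokenSecretManager.checkAccess also verifies the storage
    IDs carried in the block token. Leave false so that Hadoop 2.x clients,
    which do not send storage IDs, can still write blocks.
  </description>
</property>
```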

Andrew Wang added a comment -

          Assigning to Chris since he expressed interest over on HDFS-9807. Thanks Chris!


            People

• Assignee:
  Ewan Higgs
  Reporter:
  Andrew Wang
• Votes:
  0
  Watchers:
  8

              Dates

              • Created:
                Updated:
                Resolved:
