Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0
    • Fix Version/s: 2.6.0
    • Component/s: namenode, security
    • Labels:
      None

      Description

      FileStatus should have a 'boolean isEncrypted()' method. (This came up in the context of a discussion with Andrew about FileStatus being a Writable.)

      Having this method would allow the MR JobSubmitter to do the following:


      boolean intermediateEncryption = false
      IF jobConf.contains("mr.intermediate.encryption") THEN
          intermediateEncryption = jobConf.getBoolean("mr.intermediate.encryption")
      ELSE
          IF (I/O)Format INSTANCEOF File(I/O)Format THEN
              intermediateEncryption = ANY File(I/O)Format HAS a Path with status isEncrypted() == TRUE
          FI
          jobConf.setBoolean("mr.intermediate.encryption", intermediateEncryption)
      FI
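The pseudocode above can be sketched as plain Java. This is a minimal illustration only: `JobConf`, `FileStatus`, and the config key are simplified stand-ins, not the real Hadoop classes.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in for org.apache.hadoop.fs.FileStatus with the proposed method.
class FileStatus {
    private final boolean encrypted;
    FileStatus(boolean encrypted) { this.encrypted = encrypted; }
    boolean isEncrypted() { return encrypted; }
}

// Stand-in for a job configuration.
class JobConf {
    private final Map<String, String> props = new HashMap<>();
    boolean contains(String key) { return props.containsKey(key); }
    boolean getBoolean(String key) { return Boolean.parseBoolean(props.get(key)); }
    void setBoolean(String key, boolean v) { props.put(key, Boolean.toString(v)); }
}

class Submitter {
    static final String KEY = "mr.intermediate.encryption";

    // If the user set the flag explicitly, honor it; otherwise enable
    // intermediate encryption when any input/output path is encrypted.
    static boolean resolve(JobConf conf, List<FileStatus> pathStatuses) {
        boolean intermediateEncryption = false;
        if (conf.contains(KEY)) {
            intermediateEncryption = conf.getBoolean(KEY);
        } else {
            for (FileStatus st : pathStatuses) {
                if (st.isEncrypted()) {
                    intermediateEncryption = true;
                    break;
                }
            }
            conf.setBoolean(KEY, intermediateEncryption);
        }
        return intermediateEncryption;
    }
}
```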

      1. HDFS-6843.001.patch
        11 kB
        Charles Lamb
      2. HDFS-6843.002.patch
        9 kB
        Charles Lamb
      3. HDFS-6843.003.patch
        10 kB
        Charles Lamb
      4. HDFS-6843.004.patch
        22 kB
        Charles Lamb
      5. HDFS-6843.005.patch
        25 kB
        Charles Lamb
      6. HDFS-6843.005.patch
        25 kB
        Charles Lamb
      7. HDFS-6843.006.patch
        23 kB
        Charles Lamb
      8. HDFS-6843.007.patch
        25 kB
        Charles Lamb
      9. HDFS-6843.008.patch
        25 kB
        Charles Lamb
      10. HDFS-6843.009.patch
        25 kB
        Charles Lamb
      11. HDFS-6843.010.patch
        25 kB
        Charles Lamb

        Issue Links

          Activity

          Charles Lamb added a comment -

          I've attached a patch for this.

          Charles Lamb added a comment -

          I'm going to "take back" this patch since we've decided that FileStatus.isEncrypted() needs to work on directories.

          Steve Loughran added a comment -

          What about making it an enum, in case encryption policies change in the future?

          Andrew Wang added a comment -

          I looked into how we'd do this, and one major issue is Writable compatibility. We can't add a new field to FileStatus without breaking compat. ACLs took the approach of re-using an unused bit in the permissions short, and we'd have to do something similar.

          An enum would involve reserving more of our precious unused bits for this purpose. Steve, do you mind laying out your usecase in a little more detail? An enum by itself isn't very expressive. I figured if users want more information, we could add a new API that returns an EncryptedFileStatus with all the gory details.
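The bit-reuse approach described above can be sketched as follows. The bit position and field layout here are purely illustrative, not HDFS's actual encoding.

```java
// Pack a boolean flag into an otherwise unused bit of a 16-bit permissions
// field, so no new Writable field is needed.
class PermissionBits {
    // Classic rwxrwxrwx + setuid/setgid/sticky occupy the low 12 bits,
    // leaving bits 12-15 of a short free for extra flags (illustrative).
    static final int ENCRYPTED_BIT = 1 << 12;

    static short setEncrypted(short perm, boolean encrypted) {
        return encrypted ? (short) (perm | ENCRYPTED_BIT)
                         : (short) (perm & ~ENCRYPTED_BIT);
    }

    static boolean isEncrypted(short perm) {
        return (perm & ENCRYPTED_BIT) != 0;
    }
}
```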

          Steve Loughran added a comment -

          ...no actual need; it's just that if you are adding stuff, enums tend to save you from realising 12 months later that you need to go from a simple flag to something more nuanced. If it's just going to be a perms bit, that's consistent with Unix's over-abused permission flags.

          Andrew Wang added a comment -

          +1 to that sentiment in the abstract. The other one I like is always including some flag bits with an API, just in case

          For this though, I think we're going to stick with the boolean for now.

          Charles Lamb added a comment -

          Having the isEncrypted() method sit on FileStatus could create TOCTOU conditions, so it has instead been implemented on FSDataInputStream and FSDataOutputStream.

          Charles Lamb added a comment -

          Patch attached.

          Andrew Wang added a comment -

          Overall LGTM, just one nit: let's not copy-paste the Javadoc outside of the new interface.

          Alejandro Abdelnur can you confirm that this is sufficient for your needs in MR land? This is slightly different from what we discussed earlier with FileStatus bits.

          Charles Lamb added a comment -

          Updated the javadoc to use {@inheritDoc}. Alejandro Abdelnur, let us know if it's ok as is.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12664773/HDFS-6843.002.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          -1 javac. The patch appears to cause the build to fail.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7795//console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12662753/HDFS-6843.001.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7793//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7793//console

          This message is automatically generated.

          Charles Lamb added a comment -

          The test failure appears to be unrelated. I ran the test on my local machine and it passed.

          Steve Loughran added a comment -

          As this is extending the core FS APIs:

          1. Can you update the specification of those FS APIs to add the new attribute to a file status?
          2. The contract tests need to be extended too; we just need to verify that isEncrypted() returns "something" ... that could be integrated with one of the existing tests.

          This may need to be mentioned in a HADOOP- JIRA even if the patch is in here ... this is broader than just HDFS.
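The kind of contract check being asked for ("isEncrypted returns 'something'") can be sketched as below. `FileStatusStub` is a stand-in for the real FileStatus, and this is only an illustration of the contract-test shape, not the actual AbstractFSContract harness.

```java
import java.util.List;

// Stand-in for org.apache.hadoop.fs.FileStatus.
class FileStatusStub {
    private final boolean encrypted;
    FileStatusStub(boolean encrypted) { this.encrypted = encrypted; }
    boolean isEncrypted() { return encrypted; }
}

class ContractCheck {
    // A contract test only needs "something" back: a plain boolean for every
    // status in a listing, never an exception. Returns true iff every status
    // answered without throwing; the value itself is unconstrained.
    static boolean verifyIsEncryptedAnswers(List<FileStatusStub> listing) {
        for (FileStatusStub st : listing) {
            try {
                st.isEncrypted();
            } catch (RuntimeException e) {
                return false;
            }
        }
        return true;
    }
}
```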

          jay vyas added a comment -

          as per Steve Loughran's comment, definitely important to put this specification in HADOOP-... so that the other HCFS implementations can move it into the contract, so we can have unit tests for it, etc.

          Andrew Wang added a comment -

          Steve, are we now requiring all API changes to also update the spec? I'm fine with filing a follow-on JIRA for this type of thing, but requiring it seems kind of onerous and also counter to past precedent.

          jay vyas added a comment -

          Hi Andrew. My two cents is that I think Steve only means the two following tasks:

          • modifying hadoop-common/src/site/markdown/filesystem/fileinputstream.md to describe the isEncrypted semantics, and
          • adding a unit test to hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractOpenTest.java.

          So, although we call it a *specification*, it's just an .md file and a unit test... probably most patches add a unit test, so it's really just one extra unit of work: updating the .md file. Hope that makes it sound less burdensome?
          Andrew Wang added a comment -

          I'm aware, I did review the original FS contract patch. The spec explicitly says that it can be incomplete or incorrect, and that the in-code HDFS implementation is the true spec. That indicated to me that updating the spec was not a requirement for changes. Yes, in this case we just need to check a boolean, but it could be a lot more work for other changes, and I don't want to set a precedent that updating the spec is necessary.

          Let's just take this to a follow-on JIRA?

          Steve Loughran added a comment -

          Andrew ... it's what, 5-10 lines? If we don't do it, then the spec dates to the point of unusability, and 18 months from now someone else (maybe even you) is going to have to do what I did: go through every method, look at what HDFS does, look at the other uses, write tests to see which diverge, fix. Or worse: someone else is going to implement the interface, wrongly.

          The spec explicitly says that it can be incomplete or incorrect

          That was more an observation on the current state... I didn't want to make it worse.

          What is being proposed here is an extension to the existing interfaces, the specification is changing, and I'd like that to be reflected in the documentation. So that we can set a precedent: change the specification of the interfaces and you get to update the documentation and contract tests to go with the change, rather than just add a few javadocs around the place.

          Andrew Wang added a comment -

          If we're going to start requiring this, it's something that should be decided on common-dev / hdfs-dev, not unilaterally on this JIRA. As I stated above, I have the opposite view about requiring this in the main patch. I want to make it as easy as possible for external people to contribute code, and it's definitely more work to have to understand the spec and contract tests, update the site, build it, check it, add a new test, and run the contract tests.

          Steve, you're also the person best equipped to review these changes anyway. Additionally, if you're interested in making them yourself, I'm happy to review too (as I've demonstrated previously).

          For all these reasons, this feels like precisely the kind of thing that should go into a follow-on JIRA.

          Let me also stress that I helped review the contract tests in the first place because I think they're important and valuable, but again, I don't think that their introduction should mean an additional prerequisite for check-in.

          Charles Lamb added a comment -

          I've attached diffs for the code changes. I've opened HADOOP-11026 for the follow-on spec/test changes.

          Stephen Watt added a comment -

          Andrew, to your point about lowering the on-ramp for external folks to commit code: I'd argue that the community would explicitly want a high on-ramp, with commensurate scrutiny, for any modification to the Hadoop FileSystem Interface, given that the entire Hadoop ecosystem might be impacted by any change to that interface.

          I maintain glusterfs-hadoop (the hadoop filesystem plugin for glusterfs) and I am a huge fan of having up-to-date filesystem contract tests. Yes, it requires additional effort, but the advantage that you get is a broad alternative Hadoop filesystem ecosystem that can validate their implementation of the Hadoop FileSystem Interface contract. I think it's unlikely that people will voluntarily ensure the contract documentation and contract tests are up to date whenever a change is made unless they are incented somehow to do so. I think the question before the community is, "what process can we use to always ensure the Hadoop FS contracts and tests are up to date?"

          Andrew Wang added a comment -

          Hi Steve W,

          I do not want to raise the bar to code contributions. I'm grateful that people manage to find time to file a JIRA and post a patch in the first place. Requiring another couple hours and patch revs might be the difference between getting something committed vs. having the person get discouraged and disappear. I see this happen all the time as it is.

          Let me also step through why I think follow-ons are good:

          • It's easy for interested parties to track. The JIRA summary will almost certainly say "update filesystem contract", and we could also have a JIRA label like "fscontract" or "hcfs" which people can watch.
          • The follow-on JIRA will focus solely on the contract, so it should be easier to review.
          • I don't know if your average committer is able to properly review spec changes anyway. Most of us do not maintain a HCFS, so do not necessarily have the same understanding of the concerns as someone who does. Getting a rubberstamp +1 to the spec change isn't that meaningful.

          I'd appreciate if we could take any further discussion to common-dev / hdfs-dev, as this JIRA is not the right forum.

          jay vyas added a comment -

          Hi andrew and charles:

          • For this JIRA, we all certainly agree that (1) an update to the unit tests should be a part of this patch, and (2) a great place for such a test would be hadoop-common/src/test/java/org/apache/hadoop/fs/contract. So, for now, can we agree to that?
          • Meanwhile, I can start the mailing list thread about this subject.

          Thanks a lot Andrew Wang for being clear and transparent about your position. Looking forward to hearing from the overall community on this (imo) *very important* topic.

          Charles Lamb added a comment -

          FYI, HADOOP-11026 now has the unit test and some minimal changes to the contract.

          jay vyas added a comment -

          Cool Charles Lamb! It looks like for now we are all happy and we can leave the mailing list thread for another day

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12665387/HDFS-6843.003.patch
          against trunk revision c686aa3.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.ha.TestZKFailoverControllerStress
          org.apache.hadoop.hdfs.TestEncryptionZones
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7857//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7857//console

          This message is automatically generated.

          Alejandro Abdelnur added a comment -

          errrhhhh, don't like it for the following reasons:

          • it requires opening a file to see if it is encrypted
          • it does not work on directories
          • it requires the HDFS client to contact the KMS to decrypt the encryption key (which won't be used)

          Regarding TOCTOU concerns, this is no different than computing the splits for an MR job and then running the MR job; if the files change between the split computation and the task execution, the job is doomed. We are already under immutability assumptions.

          I would put it in FileStatus; most of the time I'll have a FileStatus at hand, so checking whether it is encrypted will be 0 cost (no extra RPC).

          Charles Lamb added a comment -

          Based on Tucu's comments, the patch is being cancelled.

          Charles Lamb added a comment -

          Revamped to be FileStatus.isEncrypted() rather than FSDataIn/OutputStream.isEncrypted(). Also, FileStatus.isEncrypted() returns true for directories that are in an EZ (encryption zone).

          Submitting this for a test-patch run.
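The directory behavior described above (a directory inside an encryption zone reports isEncrypted() == true) can be illustrated with a simple prefix check. The zone roots and path handling here are plain strings for illustration; in HDFS the NameNode tracks the actual zones.

```java
import java.util.List;

// Illustrative check for "is this path in an encryption zone": a path is
// encrypted if it sits at or under an EZ root. Applies equally to files
// and directories inside the zone.
class EzChecker {
    private final List<String> zoneRoots;

    EzChecker(List<String> zoneRoots) { this.zoneRoots = zoneRoots; }

    boolean isEncrypted(String path) {
        for (String root : zoneRoots) {
            // Exact match covers the EZ root directory itself; the prefix
            // match covers everything beneath it.
            if (path.equals(root) || path.startsWith(root + "/")) {
                return true;
            }
        }
        return false;
    }
}
```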

          Steve Loughran added a comment -

          Given that HADOOP-11026 is related purely to this, I'd like them to be pulled in.

          1. Without this, the semantics of "isEncrypted" aren't really being defined, are they? We're adding a new field, but not tying it up to the rest of the system.
          2. We need to specify the transitivity of this: if a directory has isEncrypted() == true, is it true that the property holds for all subdirectories? And all files in the directory?
          3. Once we have that specification, we can derive the tests to verify that the logic being implemented here matches it.
          Charles Lamb added a comment -

          Steve Loughran,

Now that we've eliminated FSData{Input,Output}Stream.isEncrypted and changed that to FileStatus.isEncrypted, the contract spec change becomes quite simple. The .005 patch includes changes to filesystem.md. Please take a look.

          Also, hopefully this will kick off a testpatch run.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12666797/HDFS-6843.005.patch
          against trunk revision 45efc96.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.web.TestWebHDFS
          org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
          org.apache.hadoop.hdfs.web.TestHttpsFileSystem
          org.apache.hadoop.hdfs.web.TestJsonUtil
          org.apache.hadoop.hdfs.web.TestWebHdfsTokens
          org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
          org.apache.hadoop.hdfs.TestEncryptionZones
          org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
          org.apache.hadoop.hdfs.TestLease
          org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
          org.apache.hadoop.hdfs.server.namenode.TestAuditLogs
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.web.TestWebHDFSAcl
          org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter

          The following test timeouts occurred in hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7913//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7913//console

          This message is automatically generated.

          Charles Lamb added a comment -

          Resubmitting to see if weird testpatch errors go away. Previous run was as if the patch never got applied.

          Andrew Wang added a comment -

          Hi Charles, thanks for sticking with this. Hopefully we're finally closing in on the right solution.

          • FileStatus#isEncrypted, the javadoc is incorrect in that directories can be encrypted too.
          • When accessed from within /.reserved/raw, I think things should still show up as encrypted. It's a little inconsistent right now, since files wouldn't show up as isEncrypted, while dirs would. This would be a good thing to have in a unit test.
          • Would like a similar test as to what's in FSAclBaseTest that makes sure we can't set the isEncrypted bit
          • GNU ls uses * to indicate that a file is executable. I'd prefer not to overload this meaning in our webui.
          • Can you comment on manual testing done for the webUI? I think we used to have unit testing for the webui, but that might have gone away with the JS rewrite.
          Steve Loughran added a comment -

          Regarding the specification, I think you've dropped back to english too early, which leaves ambiguities around —exactly the kind of thing we are trying to avoid

          Here's my proposal

          1. we declare a new inEncryptionZone(FS, path) predicate saying "the FS implementation declares this to be encrypted"
          2. we add an invariant saying all files and dirs under a directory that is in an encryption zone are also in an encryption zone
          3. the value FileStatus.isEncrypted for a path is the value of inEncryptionZone(FS, path).

          In the specification, the extra bits (not placed into the source itself), are:

          
          ### `inEncryptionZone(FS, path)`
          
          A predicate which returns true iff the encryption mechanism of the FS implementation has marked the path as encrypted, and for any file for which `inEncryptionZone(FS, path)` holds the contents of `data(FS, path)` is encrypted such that only holders of the keys may read the unencrypted data. 
          
          ### `isEncrypted(data)` 
          is a predicate which returns true if the data itself is encrypted.
          
          The nature of the encryption, and the mechanism for creating an encryption zone are implementation details not covered in this specification. No guarantees are therefore made as to the quality of the encryption.
          
          ### Invariant 1: all files and directories under a directory in an encryption zone are also in an encryption zone
          
    forall d in directories(FS): inEncryptionZone(FS, d) implies
      forall c in children(FS, d) where (isFile(FS, c) or isDir(FS, c)) :
        inEncryptionZone(FS, c)
          
          ### Invariant 2: for all files in an encrypted zone, the data is (somehow encrypted)
          
      forall f in files(FS) where inEncryptionZone(FS, f):
        isEncrypted(data(FS, f))
          
          And for the FileStatus definition
          
                  FileStatus.isEncrypted() = inEncryptionZone(FS, path)
                  
          

          This leaves open the question of whether a file can be marked as encrypted in a directory which is not in an encryption zone.

If it is a requirement, then that can be added as a new invariant (for all files, inEncryptionZone(FS, path) implies that the predicate also holds for their parent, which must be a directory, obviously).

          I don't know if that invariant should be defined, as on NTFS (and perhaps other native filesystems) you can tag an individual file as encrypted —something which the specification above does implicitly permit.

It also assumes that the metadata of a file is not encrypted. Is this the case for HDFS? That is, you can list the dir and any attached data on a file other than the contents of data(FS, path)? Invariant 2 covers this by declaring that if a file is in the zone, it is its data which is encrypted.

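To check that the proposed predicates hang together, here is a small executable model (Python; the dict-based filesystem and field names such as `in_ez` are assumptions of this sketch, not real HDFS structures) of Invariant 1 and the FileStatus.isEncrypted definition:

```python
# Toy model of the spec above. A filesystem is a dict: path -> node, where a
# node records whether it is a directory, whether it is in an encryption zone,
# and its children. None of this mirrors real HDFS internals.

def in_encryption_zone(fs, path):
    """inEncryptionZone(FS, path) from the spec."""
    return fs[path]["in_ez"]

def is_encrypted(fs, path):
    """FileStatus.isEncrypted() == inEncryptionZone(FS, path)."""
    return in_encryption_zone(fs, path)

def holds_invariant_1(fs):
    """Every child of a directory in an encryption zone is itself in one."""
    return all(
        in_encryption_zone(fs, child)
        for path, node in fs.items()
        if node["is_dir"] and node["in_ez"]
        for child in node["children"]
    )

fs = {
    "/":       {"is_dir": True,  "in_ez": False, "children": ["/zone", "/plain"]},
    "/zone":   {"is_dir": True,  "in_ez": True,  "children": ["/zone/f"]},
    "/zone/f": {"is_dir": False, "in_ez": True,  "children": []},
    "/plain":  {"is_dir": False, "in_ez": False, "children": []},
}
```

Here `holds_invariant_1(fs)` is true, and flipping the flag on `/zone/f` would violate it, which is exactly the property the contract tests should pin down.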
          Charles Lamb added a comment -

          Andrew Wang, Steve Loughran,

          When accessed from within /.reserved/raw, I think things should still show up as encrypted. It's a little inconsistent right now, since files wouldn't show up as isEncrypted, while dirs would. This would be a good thing to have in a unit test.

          Good point. Just to clarify, only /.r/r files/dirs that are in an EZ will return true from isEncrypted.

          Would like a similar test as to what's in FSAclBaseTest that makes sure we can't set the isEncrypted bit

          Good catch. I believe that my earlier patch actually messed up this test by passing false to the new ctor. I've changed that to true and also changed the isEnc bit to true for the test.

          Per our offline discussion, the UI stuff has been removed.

          For the doc/spec changes, I've taken a stab at it. If I'm still
          too far off then we should probably move this back to a separate
          jira.

          Andrew Wang added a comment -

          Charles, I think we're missing the new FsPermissionExtension class in the patch. Also, could you hit "submit patch" so we can get a test run? Thanks.

          Andrew Wang added a comment -

          I looked through the patch assuming that FsPermissionExtension hasn't changed. Only review comment:

          • In the test, it seems we could extract a function, and run the function with the normal base path, then with the /.r/r base path. Save some lines? If you put the path being tested in the assert message, we don't lose any info.

          +1 pending this and Jenkins, thanks Charles.

          Charles Lamb added a comment -

          Andrew Wang,

          The .007 patch includes FsPermissionExtension and fixes the test.

          Thanks!

          Andrew Wang added a comment -

          I just kicked Jenkins to get another run. It seems to have run, but encountered some weird errors, and apparently it didn't post a JIRA comment.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12668762/HDFS-6843.007.patch
          against trunk revision 24d920b.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
          org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractOpen
          org.apache.hadoop.ha.TestZKFailoverControllerStress
          org.apache.hadoop.hdfs.server.namenode.TestAuditLogger
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
          org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8030//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8030//console

          This message is automatically generated.

          Charles Lamb added a comment -

          .008 fixes Test*ContractOpen failures. I ran the other tests and they were spurious or unrelated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12668862/HDFS-6843.008.patch
          against trunk revision 88e329f.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
          org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractOpen
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8035//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8035//console

          This message is automatically generated.

          Andrew Wang added a comment -

          The contract tests still fail for me locally; Charles, could you debug?

          Charles Lamb added a comment -

          Andrew Wang,

That's strange. They (TestLocalFSContractOpen and TestRawlocalContractOpen) both pass for me with the patch applied, both on the CLI and in an IDE. I've removed the rm and changed the createFile to be overwrite=true and am submitting this for a jenkins run to see what happens.

          TestPipelinesFailover and TestWebHdfsFileSystemContract fail with and without the patch.

          -------------------------------------------------------
          T E S T S
          -------------------------------------------------------
          Running org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractOpen
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.903 sec - in org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractOpen
          Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
          Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.695 sec - in org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen

          Results :

          Tests run: 12, Failures: 0, Errors: 0, Skipped: 0

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12669407/HDFS-6843.009.patch
          against trunk revision c0c7e6f.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
          org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractOpen
          org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8052//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8052//console

          This message is automatically generated.

          Charles Lamb added a comment -

Fixed test to not use absolute pathnames with AbstractFSContractTestBase.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12669453/HDFS-6843.010.patch
          against trunk revision c0c7e6f.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 3 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

          org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
          org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
          org.apache.hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter
          org.apache.hadoop.yarn.server.resourcemanager.TestRM

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8056//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8056//console

          This message is automatically generated.

          Colin Patrick McCabe added a comment -

          +1. Thanks, Charles and Andrew.

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Yarn-trunk #684 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/684/)
          HDFS-6843. Create FileStatus isEncrypted() method (clamb via cmccabe) (cmccabe: rev e3803d002c660f18a5c2ecf32344fd6f3f491a5b)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractOpenTest.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FsPermissionExtension.java
          • hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FsAclPermission.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
            HDFS-6843. Add to CHANGES.txt (cmccabe: rev f24ac429d102777fe021e9852cfff38312643512)
          • hadoop-common-project/hadoop-common/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1900 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1900/)
          HDFS-6843. Create FileStatus isEncrypted() method (clamb via cmccabe) (cmccabe: rev e3803d002c660f18a5c2ecf32344fd6f3f491a5b)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FsAclPermission.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractOpenTest.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FsPermissionExtension.java
          • hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
            HDFS-6843. Add to CHANGES.txt (cmccabe: rev f24ac429d102777fe021e9852cfff38312643512)
          • hadoop-common-project/hadoop-common/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #1875 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1875/)
          HDFS-6843. Create FileStatus isEncrypted() method (clamb via cmccabe) (cmccabe: rev e3803d002c660f18a5c2ecf32344fd6f3f491a5b)

          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FsAclPermission.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
          • hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/FsPermissionExtension.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
          • hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractOpenTest.java
          • hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
          • hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
            HDFS-6843. Add to CHANGES.txt (cmccabe: rev f24ac429d102777fe021e9852cfff38312643512)
          • hadoop-common-project/hadoop-common/CHANGES.txt

            People

            • Assignee: Charles Lamb
            • Reporter: Charles Lamb