Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0, 2.3.0
    • Fix Version/s: 2.6.0
    • Component/s: security
    • Labels:
      None

      Description

      Because of privacy and security regulations, for many industries, sensitive data at rest must be in encrypted form. For example: the healthcare industry (HIPAA regulations), the card payment industry (PCI DSS regulations), or the US government (FISMA regulations).

      This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can be used transparently by any application accessing HDFS via the Hadoop FileSystem Java API, the Hadoop libhdfs C library, or the WebHDFS REST API.

      The resulting implementation should be usable in compliance with different regulatory requirements.
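      For illustration, a minimal sketch of what transparent access means for the Java API, assuming a cluster already configured for encryption and a made-up path inside an encryption zone: reading the file looks exactly like reading any other HDFS file.

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.FSDataInputStream;
          import org.apache.hadoop.fs.FileSystem;
          import org.apache.hadoop.fs.Path;

          public class ReadEncryptedFile {
            public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              FileSystem fs = FileSystem.get(conf);
              // "/zone/data.txt" is a hypothetical path inside an encryption zone;
              // decryption happens in the client, so the application code is unchanged.
              try (FSDataInputStream in = fs.open(new Path("/zone/data.txt"))) {
                byte[] buf = new byte[4096];
                int n = in.read(buf);
                System.out.println("read " + n + " bytes of plaintext");
              }
            }
          }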

      Attachments

      1. fs-encryption.2014-08-18.patch
        653 kB
        Andrew Wang
      2. fs-encryption.2014-08-19.patch
        653 kB
        Andrew Wang
      3. HDFS-6134_test_plan.pdf
        147 kB
        Stephen Chu
      4. HDFS-6134.001.patch
        566 kB
        Charles Lamb
      5. HDFS-6134.002.patch
        562 kB
        Yi Liu
      6. HDFSDataatRestEncryption.pdf
        357 kB
        Charles Lamb
      7. HDFSDataatRestEncryptionProposal_obsolete.pdf
        219 kB
        Alejandro Abdelnur
      8. HDFSEncryptionConceptualDesignProposal-2014-06-20.pdf
        86 kB
        Alejandro Abdelnur

        Issue Links

        1. HDFS Encryption Zones (Sub-task, Resolved, Charles Lamb)
        2. HDFS integration with KeyProvider (Sub-task, Resolved, Charles Lamb)
        3. Wire crypto streams for encrypted files in DFSClient (Sub-task, Resolved, Charles Lamb)
        4. Protocol and API for Encryption Zones (Sub-task, Resolved, Charles Lamb)
        5. Print out the KeyProvider after finding KP successfully on startup (Sub-task, Resolved, Juan Yu)
        6. CryptoCode.generateSecureRandom should be a static method (Sub-task, Resolved, Charles Lamb)
        7. HDFS CLI admin tool for creating & deleting an encryption zone (Sub-task, Resolved, Charles Lamb)
        8. Get the Key/IV from the NameNode for encrypted files in DFSClient (Sub-task, Resolved, Andrew Wang)
        9. Rename restrictions for encryption zones (Sub-task, Resolved, Charles Lamb)
        10. Client server negotiation of cipher suite (Sub-task, Resolved, Andrew Wang)
        11. Remove the Delete Encryption Zone function (Sub-task, Resolved, Charles Lamb)
        12. List of Encryption Zones should be based on inodes (Sub-task, Resolved, Charles Lamb)
        13. Test Crypto streams in HDFS (Sub-task, Resolved, Yi Liu)
        14. Namenode needs to get the actual keys and iv from the KeyProvider (Sub-task, Resolved, Andrew Wang)
        15. Clean up encryption-related tests (Sub-task, Resolved, Andrew Wang)
        16. Fix the keyid format for generated keys in FSNamesystem.createEncryptionZone (Sub-task, Resolved, Charles Lamb)
        17. Not able to create symlinks after HDFS-6516 (Sub-task, Resolved, Uma Maheswara Rao G)
        18. Refactor encryption zone functionality into new EncryptionZoneManager class (Sub-task, Resolved, Andrew Wang)
        19. Update usage of KeyProviderCryptoExtension APIs on NameNode (Sub-task, Resolved, Andrew Wang)
        20. Remove EncryptionZoneManager lock (Sub-task, Resolved, Andrew Wang)
        21. Remove unnecessary getEncryptionZoneForPath call in EZManager#createEncryptionZone (Sub-task, Resolved, Uma Maheswara Rao G)
        22. Remove KeyProvider in EncryptionZoneManager (Sub-task, Resolved, Andrew Wang)
        23. Decrypt EDEK before creating CryptoInputStream/CryptoOutputStream (Sub-task, Resolved, Andrew Wang)
        24. Creating encryption zone results in NPE when KeyProvider is null (Sub-task, Resolved, Charles Lamb)
        25. Create a special /.reserved/raw directory for raw access to encrypted data (Sub-task, Resolved, Charles Lamb)
        26. Create a .RAW extended attribute namespace (Sub-task, Resolved, Charles Lamb)
        27. Add more HDFS encryption tests (Sub-task, Resolved, Andrew Wang)
        28. Should not be able to create encryption zone using path to a non-directory file (Sub-task, Resolved, Charles Lamb)
        29. Require specification of an encryption key when creating an encryption zone (Sub-task, Resolved, Andrew Wang)
        30. Batch the encryption zones listing API (Sub-task, Resolved, Andrew Wang)
        31. DFSClient should use IV generated based on the configured CipherSuite with codecs used (Sub-task, Resolved, Uma Maheswara Rao G)
        32. Cannot remove directory within encryption zone to Trash (Sub-task, Resolved, Unassigned)
        33. Fix TestReservedRawPaths failures (Sub-task, Resolved, Charles Lamb)
        34. Mistakenly dfs.namenode.list.encryption.zones.num.responses configured as boolean (Sub-task, Resolved, Uma Maheswara Rao G)
        35. HDFS encryption documentation (Sub-task, Resolved, Andrew Wang)
        36. Fix findbugs and other warnings (Sub-task, Resolved, Yi Liu)
        37. Improve the configuration guidance in DFSClient when there are no Codec classes found in configs (Sub-task, Resolved, Uma Maheswara Rao G)
        38. Fix TestCLI to expect new output (Sub-task, Resolved, Charles Lamb)
        39. Add non-superuser capability to get the encryption zone for a specific path (Sub-task, Resolved, Charles Lamb)
        40. Constants in CommandWithDestination should be static (Sub-task, Resolved, Charles Lamb)

          Activity

          Alejandro Abdelnur added a comment -

          This proposal (PDF attached) discusses 4 possible designs for HDFS file encryption.

          Avik Dey added a comment -

          Alejandro Abdelnur, do you mind updating the possible design options PDF attached here based on the latest patch on HADOOP-10150? After you have done that, maybe we can discuss the following:

          • do we need a new proposal for the work already being done on HADOOP-10150?
          • are there design choices in this proposal that are superior to the patch already provided on HADOOP-10150?
          • do you have additional requirements listed in this JIRA that could be incorporated into HADOOP-10150, so we can collaborate and not duplicate?
          Steve Loughran added a comment -
          1. It would be nice to link this JIRA to the relevant docs.
          2. In the comparison matrix I'd add something about 'cost of breach': a design where the NN has access to all the keys makes it a security SPOF, while with client-side decryption a breach is limited to the data that client can access.
          Alejandro Abdelnur added a comment -

          Avik Dey, I'll look at the stuff posted today in HADOOP-10150 and I'll report back.

          [~stevel@hortonworks.com], on #1, what do you refer to? On #2, yes, good idea; I'll update the PDF accordingly.

          Steve Loughran added a comment -

          Alejandro, sorry, what I meant to say is that the PDF refers to other JIRAs; they should be added as links to this JIRA.

          Alejandro Abdelnur added a comment -

          (Cross-posting HADOOP-10150 & HDFS-6134)

          Avik Dey, I’ve just looked at the MAR/21 proposal in HADOOP-10150 (the patches uploaded on MAR/21 do not apply cleanly on trunk, so I cannot look at them easily; it seems to be missing pieces, like getXAttrs() and wiring to the KeyProvider API. Would it be possible to rebase them so they apply to trunk?)

          do we need a new proposal for the work already being done on HADOOP-10150?

          HADOOP-10150 aims to provide encryption for any filesystem implementation as a decorator filesystem, while HDFS-6134 aims to provide encryption for HDFS itself.

          The two approaches differ in the level of transparency you get. The comparison table in the "HDFS Data at Rest Encryption" attachment (https://issues.apache.org/jira/secure/attachment/12635964/HDFSDataAtRestEncryption.pdf) highlights the differences.

          In particular, the things that concern me the most about HADOOP-10150 are:

          • All clients (doing encryption/decryption) must have access to the key management service.
          • Secure key propagation to tasks running in the cluster (i.e. mapper and reducer tasks).
          • Use of AES-CTR (instead of an authenticated encryption mode such as AES-GCM).
          • It is not clear how hflush() would be handled.

          are there design choices in this proposal that are superior to the patch already provided on HADOOP-10150?

          IMO, a consolidated access/distribution of keys by the NN (as opposed to every client) improves the security of the system.

          do you have additional requirements listed in this JIRA that could be incorporated into HADOOP-10150,

          They are enumerated in the "HDFS Data at Rest Encryption" attachment. The ones I don’t see addressed in HADOOP-10150 are #6 and #8.A, and it is not clear how #4 & #5 can be achieved.

          so we can collaborate and not duplicate?

          Definitely, I want to work together with you guys to leverage as much as possible, either by unifying the two proposals or by sharing common code if we think both approaches have merits and we decide to move forward with both.

          Happy to jump on a call to discuss things and then report back to the community if you think that will speed up the discussion.

          ----------
          By looking at the latest design doc of HADOOP-10150 I can see that things have been modified a bit (from the original design doc) bringing it a bit closer to some of the HDFS-6134 requirements.

          Still, it is not clear how transparency will be achieved for existing applications: the HDFS URI changes, clients must connect to the key store to retrieve the encryption key (clients will need key store principals), and the encryption key must be propagated to job tasks (i.e. Mapper/Reducer processes).

          Requirement #4, "Can decorate HDFS and all other file systems in Hadoop, and will not modify existing structure of file system, such as namenode and datanode structure if the wrapped file system is HDFS.", is contradicted by the design: the "Storage of IV and data key" section states "So we implement extended information based on INode feature, and use it to store data key and IV."

          Requirement #5, "Admin can configure encryption policies, such as which directory will be encrypted.", seems driven by the HDFS client configuration file (hdfs-site.xml). This is not really admin driven, as clients could break it by changing their own hdfs-site.xml.

          Restrictions on move operations for files within an encrypted directory: the original design had something about this (not entirely correct), but now it is gone.

          (Mentioned before) How will flush() operations be handled, given that the encryption block will be cut short? How is this handled on writes? How is this handled on reads?

          Explicit auditing on encrypted files access does not seem handled.

          Larry McCay added a comment -

          Hi Alejandro Abdelnur - I like what I see here. We should file JIRAs for the KeyProvider API work that you mention in your document and discuss some of those aspects there. We have a number of common interests in that area.

          Alejandro Abdelnur added a comment -

          Larry McCay, great. I have already done some work in this area while prototyping; I'll create a few JIRAs later tonight and put up patches for the stuff I already have.

          Alejandro Abdelnur added a comment -

          Larry McCay, I've just opened the following JIRAs for the KeyProvider API improvements and a KeyProvider server: HADOOP-10427, HADOOP-10428, HADOOP-10429, HADOOP-10430, HADOOP-10431, HADOOP-10432 & HADOOP-10433. I've already posted patches for all but the last one (I'll try to get it ready by EOW).

          Benoy Antony added a comment -

          Alejandro Abdelnur

          In the selected option (option 3), HDFS (the NN, to be exact) fetches the keys. As long as the client is authenticated and authorized by HDFS, encrypted data can be read.
          The process is equivalent to a party trying to get into a house.
          The party is validated by a security person (NN) based on the party's identity card (auth token) and the security person's list of authorized persons (ACLs on the file/directory).
          If the party passes these checks, the security person hands the key to the party.
          (Option 4 is slightly different: the security person doesn't hand over the key to the party, but opens the house for the party.)
          Also note that the security person (NN) needs to have access to the keys of all the houses.

          The plus point is simplicity. But it sets a low hurdle for security breach.

          A malicious party can get access to the keys by impersonating an authorized party. In the HDFS case, this is possible by:

          1) Stealing a TGT/delegation token
          2) An admin impersonating a user by giving himself the ability to impersonate.

          Now if the same key can decrypt other files, then a malicious party can decrypt other files without going through further authn/authz checks. Option 4 doesn't have this specific vulnerability since the keys are handed over to the client.

          In some cases, this risk is acceptable.

          But in some cases it is not acceptable, and more protection is needed by requiring that the client obtain the key itself.

          In the house example, the party is validated by the security person based on the party's identity card (auth token) and the security person's list of authorized persons (ACLs on the file/directory).
          If the party passes these checks, the security person lets the party proceed to the house. But the party needs to have the key.
          Thus even if one party can impersonate another party and fool the security person, the impersonator cannot enter the house because he doesn't have the key.

          This additional hurdle is one of the reasons for clients to obtain the key themselves.
          There could be other reasons:

          1) In some cases, another entity (the NN) cannot have access to keys which it doesn't own.
          2) In some cases, the NN may be in a location which has no connectivity to the key store.

          So my question:

          Is it possible to make the key provisioning part customizable, so that depending upon the requirement the key can be obtained by clients themselves or obtained by HDFS?
          (support option 2 and option 3)
          If so, it may also make sense to have both options supported on the same cluster instance, as the level of security varies based on the data.

          Yi Liu added a comment -

          Thanks Alejandro Abdelnur, I replied to your comments: https://issues.apache.org/jira/browse/HADOOP-10150?focusedCommentId=13946828&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13946828
          Tsz Wo Nicholas Sze added a comment -

          Nice doc!

          For #3 HDFS Encryption (Client Side), if the keys are available in the NN, are the keys encrypted? Otherwise, a malicious admin could possibly dump the NN memory in order to obtain the keys. I think we should clarify in the doc how the keys in the NN are to be encrypted.

          For #4, the keys are available in the DNs for encrypting the data, i.e. the keys cannot themselves be encrypted. A malicious admin definitely could obtain the keys by dumping the DN memory. So #4 does not work.

          Tsz Wo Nicholas Sze added a comment -

          BTW, it looks like this is going to be a large amount of work. How about creating a branch for it?

          Tsz Wo Nicholas Sze added a comment -

          Actually, if we allow job tasks to encrypt/decrypt data on a cluster node, then a malicious admin could again obtain the keys by dumping the memory of the cluster node.

          Alejandro Abdelnur added a comment -

          [Cross-posting with HADOOP-10150, closing this JIRA as duplicate, discussion to continue in HADOOP-10150]

          Larry, Steve, Nicholas, thanks for your comments.

          Todd Lipcon and I had an offline discussion with Andrew Purtell, Yi Liu and Avik Dey to see if we could combine HADOOP-10150 and HDFS-6134 into one proposal while supporting both encryption for multiple filesystems and transparent encryption for HDFS.

          Also, following Steve’s suggestion, I’ve put together an Attack Vectors Matrix for all approaches.

          I think both documents, the proposal and the attack vectors, address most if not all the questions/concerns raised in the JIRA.

          Please look for the documents in HADOOP-10150.

          Alejandro Abdelnur added a comment -

          [cross-posting with HADOOP-10150]

          Reopening HDFS-6134

          After some offline discussions with Yi, Tianyou, ATM, Todd, Andrew and Charles, we think it makes more sense to implement encryption for HDFS directly in the DistributedFileSystem client, and to use CryptoFileSystem to support encryption for FileSystems that don’t support native encryption.

          The reasons for this change of course are:

          • If we ever want to add support for transparent HDFS compression, the compression should be done before the encryption (encrypted data has high entropy and compresses poorly). If compression is to be handled by the HDFS DistributedFileSystem, then the encryption has to be handled afterwards (in the write path).
          • The proposed CryptoSupport abstraction significantly complicates the implementation of CryptoFileSystem and the wiring in HDFS FileSystem client.
          • Building it directly into HDFS FileSystem client may allow us to avoid an extra copy of data.

          Because of this, the idea is now:

          • A common set of Crypto Input/Output streams. They would be used by CryptoFileSystem, HDFS encryption, MapReduce intermediate data and spills. Note we cannot use the JDK Cipher Input/Output streams directly because we need to support the additional interfaces that the Hadoop FileSystem streams implement (Seekable, PositionedReadable, ByteBufferReadable, HasFileDescriptor, CanSetDropBehind, CanSetReadahead, HasEnhancedByteBufferAccess, Syncable); see the sketch at the end of this comment.
          • CryptoFileSystem.
            To support encryption in arbitrary FileSystems.
          • HDFS client encryption. To support transparent HDFS encryption.

          Both CryptoFileSystem and HDFS client encryption implementations would be built using the Crypto Input/Output streams, xAttributes and the KeyProvider API.
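          A minimal sketch of the stream requirement above, assuming the standard JCE "AES/CTR/NoPadding" transformation: with CTR mode a stream can seek to an arbitrary offset by re-initializing the cipher with the counter block for that offset, which is what the JDK Cipher streams do not expose. This only illustrates the idea; it is not the actual Hadoop crypto stream implementation.

            import java.util.Arrays;
            import javax.crypto.Cipher;
            import javax.crypto.spec.IvParameterSpec;
            import javax.crypto.spec.SecretKeySpec;

            public class CtrSeekSketch {
              // Counter block for a byte offset: the low 8 bytes of the IV act as a
              // big-endian block counter, one increment per 16-byte AES block.
              static byte[] ivAtOffset(byte[] iv, long offset) {
                byte[] block = Arrays.copyOf(iv, 16);
                long counter = offset / 16;
                for (int i = 15; i >= 8 && counter != 0; i--) {
                  long sum = (block[i] & 0xffL) + (counter & 0xffL);
                  block[i] = (byte) sum;
                  counter = (counter >>> 8) + (sum >>> 8);
                }
                return block;
              }

              public static void main(String[] args) throws Exception {
                byte[] key = new byte[16];   // demo-only key and IV (all zeros)
                byte[] iv = new byte[16];
                byte[] plain = new byte[100];
                for (int i = 0; i < plain.length; i++) plain[i] = (byte) i;

                Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
                enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
                byte[] cipherText = enc.doFinal(plain);

                // "Seek" to offset 48: a fresh cipher initialized with the shifted IV
                // decrypts the tail of the ciphertext without touching earlier bytes.
                int offset = 48;
                Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
                dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
                    new IvParameterSpec(ivAtOffset(iv, offset)));
                byte[] tail = dec.doFinal(Arrays.copyOfRange(cipherText, offset, cipherText.length));
                System.out.println(tail[0] == plain[offset]);   // prints true
              }
            }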

          Owen O'Malley added a comment -

          What are the use cases this is trying to address? What are the attacks?

          Do users or administrators set the encryption?

          Can different directories have different keys or is it one key for the entire filesystem?

          When you rename a directory does it need to be re-encrypted?

          How are backups handled? Does it require the encryption key? What is the performance impact on distcp when not using native libraries?

          For release in the Hadoop 2.x line, you need to preserve both forward and backwards wire compatibility. How do you plan to address that?

          It seems that the additional datanode and client complexity is prohibitive. Making changes to the HDFS write and read pipeline is extremely touchy.

          Alejandro Abdelnur added a comment -

          Owen, I think the docs posted in HADOOP-10150 will address most if not all your questions/concerns:
          https://issues.apache.org/jira/secure/attachment/12640388/HDFSDataAtRestEncryptionAlternatives.pdf
          https://issues.apache.org/jira/secure/attachment/12640389/HDFSDataatRestEncryptionProposal.pdf
          https://issues.apache.org/jira/secure/attachment/12640390/HDFSDataatRestEncryptionAttackVectors.pdf
          Andrew Purtell added a comment -

          Because of this, the idea is now:

          • A common set of Crypto Input/Output streams. [ ... ]
          • CryptoFileSystem. [ ... ]
          • HDFS client encryption. [ ... ]

          It would be great if the work is structured this way. A filtering CryptoFileSystem is needed for filesystem-agnostic client-side use cases, but e.g. if we want to push compression and encryption in HBase down into HDFS (which I think is desirable), or Hive or Pig or really any HDFS-hosted Hadoop application, then doing so is far simpler if the DFS client supports transparent encryption directly.

          Steve Loughran added a comment -

          Some side comments

          1. The ASF don't want to be distributing cryptography libraries; the design has to address that distribution model.
          2. HADOOP-9565 proposes a BlobStore extending FileSystem to let users know that the FS doesn't have the semantics of a real filesystem; for blobstore security we'd need the same marker, so apps using a wrapped blobstore will know that some operations are non-atomic and potentially O(files).
          Alejandro Abdelnur added a comment -

          Steve, we are not planning to implement cryptographic libraries, but to use existing ones. This is not different from what we are already doing with SASL for RPC encryption.

          Todd Lipcon added a comment -

          I think two of Owen's questions may not be addressed in the docs. I'll do my best to answer them here:

          For release in the Hadoop 2.x line, you need to preserve both forward and backwards wire compatibility. How do you plan to address that?

          For data which has been marked encrypted, we obviously can't provide backwards-compatibility. I think the most sane behavior is probably that, if an old client tries to access encrypted data, they should receive the ciphertext instead of the decrypted plaintext. Another option might be to return an error. Either would be achievable by having the new client provide some flag in the OP_READ_BLOCK request which indicates "I am reading encrypted data and I am aware of it." If the new server sees that a client is reading encrypted data and does not have that flag, it could respond appropriately with either of the above two options.

          A new client accessing an old cluster should not be problematic, as we would only add new fields to RPCs. The NN RPCs to set up encryption zones, etc, would fail with the usual "not implemented" type exceptions (same as any other new feature).

          It seems that the additional datanode and client complexity is prohibitive. Making changes to the HDFS write and read pipeline is extremely touchy.

          I think prohibitive is a strong word. Adding new features may add complexity, but per the design docs that Alejandro pointed to, we think the advantages are worth it. There are several experienced HDFS developers working on this branch (alongside the newer folks), so you can be sure we understand the areas of code being worked on and the associated risks. Having done much of the work required to support the checksum type changeover in Hadoop 2, I feel it's pretty likely the complexity of encryption is actually less than that project.

          Owen O'Malley added a comment -

          I still have two very strong concerns with this work:

          • A critical use case is that distcp (and other backup/disaster recovery tools) must be able to accurately copy files without access to the encryption keys. There are many cases when the automated backup tools are not permitted the encryption keys. Obviously, it also has the benefit of being both safer and faster if the data is moved in its original encrypted form.
          • The client needs to get the key material directly and not use the NameNode as a proxy. This is critical from a security point of view.
            • The security (including the audit log) on the key server is much stronger if there are no proxies between the user and the key server.
            • Security bugs in HDFS or mistakes in setting permissions are a critical use case for requiring encryption.

          Doing all of the work on the client (including getting the key) makes the entire system much more secure.

          Alejandro Abdelnur added a comment -

          Owen O'Malley, thanks for bringing this up (in person, last week at the Hadoop Summit, and following up here in the JIRA).

          On the distcp not accessing the keys (not decrypting/encrypting), yes, that is the idea.

          On the client getting the key material directly instead of via the NN: I think you have a point there. I’ve been thinking about how that would work and discussed the idea with Andrew and Charles earlier today, and we came up with the following approach:

          • The NN returns the keyVersionID for the file & the IV as part of the create()/open() call.
          • The KMS must add support for delegation tokens and proxyuser. Delegation tokens are required for Yarn containers to be able to get key material. Proxyuser is required to support systems like Oozie/Knox that act on behalf of other users.
          • The KeyProvider API must add a getDelegationToken(String renewer) method with a default implementation that returns null; the KMS client will implement it (see the sketch after this list).
          • The HDFS client getDelegationToken(), if a KeyProvider is configured on the client, will also call KeyProvider#getDelegationToken(). The token will then be propagated, together with the HDFS tokens, to the credentials file.
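          A rough sketch of the delegation-token idea above, assuming a null default in the provider and the usual Credentials plumbing. The class names here are illustrative only, not the final API.

            import java.io.IOException;
            import org.apache.hadoop.security.Credentials;
            import org.apache.hadoop.security.token.Token;
            import org.apache.hadoop.security.token.TokenIdentifier;

            abstract class SketchKeyProvider {
              /** Default: no delegation token support; a KMS-backed provider would override this. */
              public Token<? extends TokenIdentifier> getDelegationToken(String renewer)
                  throws IOException {
                return null;
              }
            }

            class SketchHdfsClient {
              private final SketchKeyProvider keyProvider;   // null if no provider configured

              SketchHdfsClient(SketchKeyProvider keyProvider) {
                this.keyProvider = keyProvider;
              }

              /** Collect the key-provider token alongside the HDFS delegation tokens. */
              void addDelegationTokens(String renewer, Credentials creds) throws IOException {
                // ... the usual HDFS delegation token is added to creds here ...
                if (keyProvider != null) {
                  Token<? extends TokenIdentifier> t = keyProvider.getDelegationToken(renewer);
                  if (t != null) {
                    creds.addToken(t.getService(), t);   // propagated with the HDFS tokens
                  }
                }
              }
            }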
          Yi Liu added a comment -

          Hi Alejandro Abdelnur and Owen O'Malley, I agree with getting key material on the client side instead of via the NN.
          If we go this way, we need to do authorization in the KMS to check that the correct user is accessing the key, right? The authorization in the KMS could be simple: we can specify users/groups access rights for the key of an encryption zone through configuration or the command line.

          Furthermore, I agree with the proposal of adding a getDelegationToken(String renewer) method to the KeyProvider interface.

          Charles Lamb added a comment -

          I have broken the unit tests for this into a new Jira (HDFS-6523). The .5 patch is the same as the .4 patch, but without the unit test.

          Owen O'Malley added a comment -

          The right way to do this is to have the Yarn job submission get the appropriate keys from the KMS, like it currently gets delegation tokens. Both the delegation tokens and the keys should be put into the job's credential object. That way you don't have all 100,000 containers hitting the KMS at once. It does mean we need a new interface for filesystems so that, given a list of paths, you can ensure the keys are in a credential object. FileInputFormat and FileOutputFormat should check whether the FileSystem implements that interface and pass in the job's credential object.
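          As a rough sketch of the interface described above (the name and shape below are hypothetical, just to make the idea concrete):

            import java.io.IOException;
            import org.apache.hadoop.fs.Path;
            import org.apache.hadoop.security.Credentials;

            /**
             * Hypothetical interface: a FileSystem that knows how to fetch the encryption
             * keys needed for the given paths and store them in a credentials object at
             * job-submission time, so individual containers never contact the KMS themselves.
             */
            interface KeyCollectingFileSystem {
              void addKeysForPaths(Path[] paths, Credentials credentials) throws IOException;
            }

            // FileInputFormat/FileOutputFormat would then do something like:
            //   if (fs instanceof KeyCollectingFileSystem) {
            //     ((KeyCollectingFileSystem) fs).addKeysForPaths(inputPaths, job.getCredentials());
            //   }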

          Owen O'Malley added a comment -

          A follow-up on that: of course the KMS will need proxy users so that Oozie will be able to get keys for the users (if that is desired).

          Alejandro Abdelnur added a comment -

          Owen O'Malley, Yarn job submission does not necessarily know what files the Yarn app will access, which of those are encrypted, and what keys to fetch; that is the whole point of transparent encryption. The KMS caches keys and easily scales horizontally behind a VIP, so it will be able to handle a very large number of requests.

          Owen O'Malley added a comment -

          Alejandro, which use cases don't know their inputs or outputs? Clearly the main ones do know their inputs and outputs:

          • MapReduce
          • Hive
          • Pig

          It is important for the standard cases that we get the encryption keys up front instead of letting the horde of containers do it.

          Larry McCay added a comment -

          Hmmm, I agree with Owen. For use cases where these are not inherently known, metadata or some other packaging mechanism will need to identify the keys, or the files for which keys are required. Additionally, adding getDelegationToken to the KeyProvider API leaks specific provider implementations through the KeyProvider abstraction and should be avoided.

          Alejandro Abdelnur added a comment -

          i.e.: if an M/R task opens a side file from HDFS that is not part of the input or output of the MR job. I've seen this quite often.

          Larry McCay added a comment -

          Alejandro Abdelnur - that is a good example of where additional metadata would have to indicate that a resource requiring a key is needed by this deployed "application". The idea is to avoid the KMS having to deal with Hadoop runtime-level scale when it can be accommodated at submit time. It is also much better to fail at submit time if the key is not available than at runtime.

          Owen O'Malley added a comment -

          Alejandro, this is exactly equivalent to the delegation token case. If a job is opening side files, it needs to make sure it has the right delegation tokens and keys. For delegation tokens, we added an extra config option for listing the extra file systems. The same solution (or listing the extra key versions) would make sense.

          Alejandro Abdelnur added a comment -

          Mike Yoder cornered me and brought up the point that, given that we are using AES-CTR, we have to be extremely careful not to repeat IVs for a given encryption key. He then explained how we could run into that scenario with the current implementation we are working on:

          • 1. All files in an encryption zone using the same keyVersion material share the same encryption key.
          • 2. All files in #1 have different IVs.
          • 3. In AES-CTR, the 8 lower bytes of the IV are treated as a counter that is incremented every AES block (16 bytes).
          • 4. #3 ensures an IV is not repeated within a file (the biggest possible file, Long.MAX_VALUE bytes, consumes 1/16 of the IV counter domain).
          • 5. IVs are public, and predictable based on the initial IV and the file offset.
          • 6. Because of #5, a possible attack would be to scan the files in #1 for IVs whose 8 higher bytes match, then fast-forward them to a common counter point (assuming the files are long enough); you would then have more than one ciphertext using the same encryption key and the same IV. The chance of this is 1/2^64, but in cryptographic terms this is considered a high chance.

          A known solution to address this is:

          • A. Each file should use a unique data encryption key (DEK).
          • B. The unique DEK is encrypted with the EZ keyVersion and stored as one of the file xAttributes.
          • C. The unique DEK is generated by the KeyProvider and encrypted before leaving the KeyProvider. The NN never sees the DEK decrypted.
          • D. The NN gives the HDFS client the encrypted DEK and the keyVersion ID.
          • E. The HDFS client sends the encrypted DEK and the keyVersion ID to the KeyProvider and gets (if authorized to use the keyVersion) the decrypted DEK for the file.
          • F. The HDFS client uses the DEK to encrypt/decrypt the file.

          This solution requires the KeyProvider to have 2 new methods:

          • KeyVersion generateEncryptedKey(String keyVersionName, byte[] iv)
          • KeyVersion decryptEncryptedKey(String keyVersionName, byte[] iv, KeyVersion encryptedKey)

          Since the IV would be the file IV, we don't have to store a new IV just for this. The implementation would do a known transformation on the IV (i.e. xor the original IV with 0xff); see the sketch at the end of this comment.

          The key materials (EZ key materials) to encrypt the encryption keys for files never leave the KeyProvider. They are not known to HDFS clients. This means that a compromised encryption key only compromises a file, not all the files in an EZ using the same key version. Because of this, a side effect of this change is a more secure solution.
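          A sketch of the two methods proposed above, plus the IV transformation mentioned. The interface name here is illustrative; KeyVersion refers to Hadoop's existing KeyProvider.KeyVersion key-material holder.

            import java.io.IOException;
            import java.security.GeneralSecurityException;
            import org.apache.hadoop.crypto.key.KeyProvider.KeyVersion;

            interface EncryptedKeyOperations {
              /** Generate a new DEK and return it wrapped (encrypted) with the EZ key version. */
              KeyVersion generateEncryptedKey(String keyVersionName, byte[] iv)
                  throws IOException, GeneralSecurityException;

              /** Unwrap a DEK; succeeds only for callers authorized to use the EZ key version. */
              KeyVersion decryptEncryptedKey(String keyVersionName, byte[] iv, KeyVersion encryptedKey)
                  throws IOException, GeneralSecurityException;
            }

            final class KeyWrapIvs {
              /** Derive the key-wrapping IV from the file IV by xor-ing every byte with 0xff. */
              static byte[] flip(byte[] fileIv) {
                byte[] out = new byte[fileIv.length];
                for (int i = 0; i < fileIv.length; i++) {
                  out[i] = (byte) (fileIv[i] ^ 0xff);
                }
                return out;
              }
            }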

          Larry McCay added a comment -

          I can buy the overall approach and agree that it is more secure.
          However, I'm not so sure that we need to add these methods to the KeyProvider API.

          Follow up questions:

          • do we need these methods for any other use cases?
          • does/can the HDFS client have access to the EZ key at the same time that it has to decrypt the DEK?
          • if this particular key provider always returns an encrypted DEK then can't the client know to always decrypt it with the EZ key?

          thoughts?

          Alejandro Abdelnur added a comment -

          Larry McCay, it is great that you agree that the proposed changes make things more secure.

          Answering your bullets:

          #1. this is the first use case in Hadoop
          #2. no, the client does not have access to the EZ keyVersion
          #3. the KeyProvider owns the decryption of the DEK; the client does not have the EZ key

          What is being proposed is the equivalent of KMIP's key-wrapping functionality. What makes the system more secure is the fact that the EZ keyVersion is never exposed to HDFS, nor to HDFS clients. And the DEK for the file is never exposed to the HDFS NN.

          Larry McCay added a comment -

          Alejandro Abdelnur - I realize that it is the first use case - that doesn't make it the only one that we have in mind or in the works. The fact that you have run into an issue with the EZ key granularity while using CTR mode is a problem with the use-case design, not necessarily with the abstraction of key providers. The question is whether wrapped keys will be required by other use cases where either the key usage pattern or the encryption modes in use may not require them.

          Currently, the KeyProvider API doesn't do any encryption itself - I just want to make sure that adding the additional complexity and responsibility to this interface is really necessary.

          Additional questions:

          • how does the keyprovider know what EZ key to use - is it the key that is referenced by the keyVersionName?
          • how do we keep HDFS clients from asking for the EZ key - if it is stored by the passed-in keyVersionName?
            • will this require special access control protection for EZ keys?
          • would the unique DEK be stored in the provider as well or only in the extended attributes of the file?
            • if stored in the provider what is the keyVersionName for it?
          Alejandro Abdelnur added a comment -

          Larry McCay, thanks for following up.

          The proposed approach would be applicable to cipher modes other than CTR (i.e., CBC and XTS, and even GCM if we handle the offset-correction changes required by the GCM tag). In all cases it would keep the EZ keyVersion material unknown to HDFS and its clients, and expose the DEK only to the client actually accessing the file.

          In the case of CTR, the proposed approach also helps avoid the IV-reuse scenario.

          Trying to answer your questions:

          how does the keyprovider know what EZ key to use - is it the key that is referenced by the keyVersionName?

            // creates EDEK using specified version name
            KeyVersion generateEncryptedKey(String versionName, byte[] iv) 
          
            // receives EDEK returns DEK using specified version name
            KeyVersion decryptEncryptedKey(String versionName, byte[] iv, KeyVersion encryptedKey) 
          

          The callers of both methods have the versionName at hand

          how do we keep HDFS clients from asking for the EZ key - if it is stored by the passed-in keyVersionName?

          The file iNode will store (EZ-keyVersionName, IV, EDEK); that info is passed to the client on create()/open(). Using that info, the client can go to the KeyProvider to get the DEK for the file.

          will this require special access control protection for EZ keys?

          KeyProviders could implement special controls. For example, KMS allows, via ACLs, getting a KeyVersion without its key material. This effectively prevents EZ keys from leaving the KMS.

          would the unique DEK be stored in the provider as well or only in the extended attributes of the file? if stored in the provider what is the keyVersionName for it?

          The unique EDEKs (encrypted with the EZ keyVersion) are not stored by the KeyProvider but in the xAttr of the file.
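          To make the create()/open() flow concrete, a hypothetical client-side sketch follows. The FileCryptoInfo holder and KeyService interface are invented names for this illustration, not the real DFSClient or KeyProvider classes; the real stream wiring (seek support, positional reads, etc.) would be more involved.

            import java.io.InputStream;
            import javax.crypto.Cipher;
            import javax.crypto.CipherInputStream;
            import javax.crypto.spec.IvParameterSpec;
            import javax.crypto.spec.SecretKeySpec;

            // Hypothetical sketch only -- not the actual DFSClient/KeyProvider code.
            public class ClientOpenSketch {

                /** What the NN returns on open(): the crypto xAttr contents of the file. */
                public static final class FileCryptoInfo {
                    final String ezKeyVersionName;
                    final byte[] iv;
                    final byte[] edek;
                    FileCryptoInfo(String ezKeyVersionName, byte[] iv, byte[] edek) {
                        this.ezKeyVersionName = ezKeyVersionName;
                        this.iv = iv;
                        this.edek = edek;
                    }
                }

                /** The single key-service call the client needs: unwrap the EDEK into a DEK. */
                public interface KeyService {
                    byte[] decryptEncryptedKey(String keyVersionName, byte[] iv, byte[] edek) throws Exception;
                }

                /** Wraps the raw (ciphertext) stream from the DataNodes with an AES-CTR decryptor. */
                public static InputStream decryptingStream(InputStream raw, FileCryptoInfo info, KeyService kms)
                        throws Exception {
                    byte[] dek = kms.decryptEncryptedKey(info.ezKeyVersionName, info.iv, info.edek);
                    Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
                    cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(dek, "AES"), new IvParameterSpec(info.iv));
                    return new CipherInputStream(raw, cipher);
                }
            }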

          Sanjay Radia added a comment -

          On the distcp not accessing the keys (not decrypting/encrypting), yes, that is the idea.

          Alejandro, I'm not sure I understand what you mean by the above. Are you saying that distcp and other tools/applications that copy and back up data will have to be changed to do something different when the file is encrypted?
          In a sense, this Jira's attempt to provide transparent encryption is breaking existing transparency.

          Two other questions:

          • Are you relying on the Kerberos credentials OR delegation tokens to obtain the keys? Isn't using the delegation token to obtain keys reducing security?
          • Looks like the proposal relies on file ACLs to hand out keys - part of the motivation for using encryption is that ACLs are often not correctly set.
          Andrew Purtell added a comment -

          A known solution to address this is:
          A. Each file should use a unique data encryption key (DEK).

          FWIW, this is what HBase transparent encryption does, unless the admin is explicitly providing their own DEKs.

          Alejandro Abdelnur added a comment -

          Sanjay Radia, thanks for jumping in.

          Things have changed a bit since the latest design doc, based on the feedback received, mostly from Owen O'Malley and Mike Yoder. (I will update the design doc to reflect these changes.)

          On distcp:

          Vanilla distcp will just work with transparent encryption. Data will be decrypted on read and encrypted on write, assuming both source and target are in encrypted zones.

          The proposal for changing distcp is to enable a second use case: copying data from one cluster to another without having to decrypt/encrypt the data while doing the copy. This is useful when doing copies for disaster recovery; HDFS admins could do the copy without having access to the encryption keys.

          On relying on kerberos credentials OR delegation tokens to obtain keys:

          It works exactly like HDFS. KMS will support both Kerberos and delegation tokens. A Kerberized client can request a KMS delegation token, which is serialized with the rest of the credentials to be used by containers running in the cluster. It is assumed you are using network encryption as well to avoid delegation-token sniffing.

          On relying on file-ACLs to hand out keys:

          No, file ACLs give you access to the data in HDFS. You also need access to the key; that is the responsibility of the KeyProvider.

          Sanjay Radia added a comment -
          • distcp and such tools and applications

            Vanilla distcp will just work with transparent encryption.

            This is not what one wants - distcp will not necessarily have permission to decrypt.

          • delegation tokens - KMS will accept delegation tokens - again I don't think this is what one wants - can the keys be obtained at job submission time?
          • File ACLs

            The NN gives the HDFS client the encrypted DEK and the keyVersion ID.

            I assume the NN will hand this out based on the file ACL. Does the above reduce the security?

          Sanjay Radia added a comment -

          There are a complex set of issues to be addressed. I know that a bunch of you have had some private meetings discussing the various options and tradeoffs. Can we please have a short, more public meeting next week? I can organize and host this at Hortonworks, along with Google Plus for those who are remote. How about next Thursday at 1:30pm?

          Aaron T. Myers added a comment -

          This is not what one wants - distcp will not necessarily have permission to decrypt.

          I disagree - this is exactly what one wants. This is no different than today's distcp which may be run by a user that doesn't have permissions on all the files under the source directory.

          delegation tokens - KMS will accept delegation tokens - again I don't think this is what one wants - can the keys be obtained at job submission time?

          Owen and Tucu have already discussed this quite a bit above.

          I assume the NN will hand this out based on the file ACL. Does the above reduce the security?

          I don't see how this reduces security. The intention of adding transparent encryption support is just that - to provide encryption, not to provide another, additional authorization mechanism.

          There are a complex set of issues to be addressed. I know that a bunch of you have had some private meetings discussing the various options and tradeoffs. Can we please have a short more public meeting next week? I can organize and host this at Hortonworks along with Google plus for those that are remote. How about next thursday at 1:30pm?

          I think those working on this project have been very open about all of these designs and discussions from the beginning dating back to March, and I think Tucu and others have been doing a great job of answering questions, accepting feedback, and modifying the design accordingly. Not sure where the assertion about private meetings is coming from - everything that's been discussed off-JIRA has been reiterated back on JIRA. What questions do you have remaining that would require a meeting?

          Alejandro Abdelnur added a comment -

          Sanjay Radia, all,

          I'm attaching a one-pager with the current Conceptual Design for HDFS encryption. I've intentionally left out details like IVs, key versions, key rotation support, etc.

          Later I'll put together a detailed Technical Design document.

          Yi Liu added a comment -

          1.

          • 6. Because of #5, a possible attack would be to scan #1 files for IVs where the 8 higher bytes match. Then, fast-forward them to a common counter point (assuming files are long enough), then you’ll have more than one cypher-text using the same encryption key and the same IV. The chances of this are 1/2^64, but in cryptographic terms this is considered a high chance.

          The CTR attack is not about finding two existing ciphertexts that use the same encryption key and IV; it is about the ability to construct a file and have it encrypted with the same data key and IV. The principle is as follows (suppose we have two plaintexts, P1 and P2, and ciphertexts C1 and C2):

            P1 XOR F(Key, IV) = C1
            P2 XOR F(Key, IV) = C2
          

          C1 and C2 are, of course, known. To guess P2, knowing only the IV (the key is secret) but not P1 is not enough. But if we can construct P1 (so we know it) and have it encrypted with the same key and IV, then we can easily recover P2 through:

            P2 =  C2 XOR F(Key, IV) = C2 XOR (C1 XOR P1)
          

          2.
          +1 for having two layers of keys: EZ key and DEK. Three points:

          • It's a proven approach in other traditional filesystems and databases, such as Oracle transparent encryption. I fully agree with Alejandro Abdelnur that "it's a more secure solution that the EZ key can't be accessed by the client". It's also the initial idea behind two-layer keys in HADOOP-10050.
          • It truly supports key rotation, which is very important. If we only have the EZ key, even with key versions, rotation can use new version keys for new files, but old files remain encrypted with the old version keys (an unexpected user can still decrypt them). If we have two layers of keys, we can re-encrypt the DEKs with a new EZ key version when necessary, without decrypting and re-encrypting the whole file (see the sketch after this list).
          • Easier management.
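          A minimal sketch of that rotation idea, under the assumption that the re-wrap happens entirely inside the key service so neither the DEK nor the EZ key material is ever exposed. The reencryptEncryptedKey call is invented for this illustration and is not one of the two methods proposed above.

            // Hypothetical sketch only: rotating the EZ key re-wraps each file's EDEK;
            // the file data itself is never decrypted or re-encrypted.
            public class KeyRotationSketch {

                /** Invented key-service call: unwrap with the old EZ key version and
                 *  re-wrap with the new one, entirely server-side. */
                public interface KeyService {
                    byte[] reencryptEncryptedKey(String oldEzKeyVersion, String newEzKeyVersion,
                                                 byte[] iv, byte[] edek) throws Exception;
                }

                /** Re-wraps one file's EDEK; the result replaces the EDEK in the file's xAttr. */
                public static byte[] rotateFileKey(KeyService kms, String oldVersion, String newVersion,
                                                   byte[] iv, byte[] edek) throws Exception {
                    return kms.reencryptEncryptedKey(oldVersion, newVersion, iv, edek);
                }
            }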

          3.

          KeyVersion generateEncryptedKey(String keyVersionName, byte[] iv)
          KeyVersion decryptEncryptedKey(String keyVersionName, byte[] iv, KeyVersion encryptedKey)
          

          Do we need the iv? What is the iv parameter for?

          Alejandro Abdelnur added a comment -

          Yi Liu, on why we need the IV in the proposed new methods: we need an IV because EDEKs are transient from the KeyProvider's perspective.

          Mike Yoder added a comment -

          Yi Liu - regarding your first point - it's actually worse than that. Have a look at http://en.wikipedia.org/wiki/Stream_cipher_attack. The attack is to xor C1 and C2. Doing the math,

          C1 xor C2 = P1 xor F(Key,IV) xor P2 xor F(Key,IV)
          F(Key,IV) xor F(Key,IV) = 0
          so
          C1 xor C2 = P1 xor P2
          

          XORing two plaintexts together is actually really easy to crack. As an example, have a look at the images here - the author XORs two images together to get a third image in which both are plainly visible: http://stackoverflow.com/questions/8504882/searching-for-a-way-to-do-bitwise-xor-on-images
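          For a self-contained illustration of the point, here is a small standalone Java demo (not HDFS code, assuming nothing beyond the standard JCE): two messages encrypted with the same AES-CTR key and IV let anyone who knows one plaintext recover the other from the ciphertexts alone.

            import java.nio.charset.StandardCharsets;
            import javax.crypto.Cipher;
            import javax.crypto.spec.IvParameterSpec;
            import javax.crypto.spec.SecretKeySpec;

            // Standalone demo of the stream-cipher key/IV reuse attack described above.
            public class CtrIvReuseDemo {
                public static void main(String[] args) throws Exception {
                    byte[] key = new byte[16];   // all-zero key and IV, purely for illustration
                    byte[] iv  = new byte[16];
                    byte[] p1 = "ATTACK AT DAWN!!".getBytes(StandardCharsets.UTF_8);
                    byte[] p2 = "ATTACK AT DUSK!!".getBytes(StandardCharsets.UTF_8);

                    Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
                    c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
                    byte[] c1 = c.doFinal(p1);
                    c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
                    byte[] c2 = c.doFinal(p2);   // same keystream as c1 -- the fatal mistake

                    // C1 xor C2 = P1 xor P2, so knowing P1 yields P2 without ever touching the key.
                    byte[] recovered = new byte[p2.length];
                    for (int i = 0; i < recovered.length; i++) {
                        recovered[i] = (byte) (c2[i] ^ c1[i] ^ p1[i]);
                    }
                    System.out.println(new String(recovered, StandardCharsets.UTF_8));  // prints P2
                }
            }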

          Regarding point 2 - quite happy you agree. This is what ecryptfs does; it's a good model.

          Charles Lamb added a comment -

          Revised one-pager.

          Alejandro Abdelnur added a comment -

          Re-uploading the previous proposal as I've mistakenly deleted it. (using obsolete in the name to avoid confusion).

          Yi Liu added a comment -

          Mike Yoder, isn't that the same as what I stated?
          Also, doesn't the result

          C1 xor C2 = P1 xor P2

          give

          P2 = C2 xor (C1 xor P1)

          ?
          My point is that to guess P2, we need to know P1 (we obviously know C1 and C2).
          The CTR attack is not about finding two existing ciphertexts that use the same encryption key and IV; it is about the ability to construct a file and have it encrypted with the same data key and IV. If we can construct P1, then we know it.

          Mike Yoder added a comment -

          If you know P1 you can trivially get to P2, of course. My point was that we don't necessarily have to know P1 or P2 - if we only know (P1 xor P2), it's also generally easy to crack - much, much less work than breaking AES. Have a look at the Wikipedia link above (my source of all knowledge).

          Yi Liu added a comment -

          My point was that we don't necessarily have to know P1 or P2 - if we only know (P1 xor P2)

          The pictures make the point well, using an apple on a white background. I agree that if we can get information about P2 from (P1 xor P2), then exposing (P1 xor P2) is not good, just as the pictures show.

          Sanjay Radia added a comment -

          Aaron said:

          distcp... I disagree - this is exactly what one wants ..

          So you are saying that distcp should decrypt and re-encrypt data as it copies it... most backup tools do not do this as they copy data - it costs extra CPU and adds unneeded vulnerability. There are customer use cases where distcp is not run over an encrypted channel; hence if one of the files being copied is encrypted, one may not want the file to be transparently sent decrypted. Further, a sensitive file in a subtree may have been encrypted because the subtree is readable by a larger group, and hence the distcp user may not have access to the keys.

          delegation tokens - KMS ... Owen and Tucu have already discussed this quite a bit above

          Turns out this issue came up in a discussion with Owen, and he shares the concern and suggested that I post it. Besides, even if Alejandro and Owen are in agreement, my question is relevant and has not been raised so far above: encryption is used to overcome limitations of authorization and authentication in the system. It is relevant to ask if the use of delegation tokens to obtain keys adds weakness.

          meeting ...

          Aaron .. you are misunderstanding my point. I am not saying that the discussion on this jira have not been open.

          • See Alejandro's comments: "Todd Lipcon and I had an offline discussion with Andrew Purtell, Yi Liu and Avik Dey" and "After some offline discussions with Yi, Tianyou, ATM, Todd, Andrew and Charles" ...
            • there have been such meetings, and I have no objection to such private meetings because I know that the bandwidth helps. I am merely asking for one more meeting where I can quickly come up to speed on the context that Alejandro, Todd, Yi, Tianyou, Andrew, and ATM share. It will help me and others better understand the viewpoint that some of you share due to previous high-bandwidth meetings.
            • There is a precedent of HDFS meetings in spite of open jira discussion - higher bandwidth to progress faster.
            • Perhaps I should have worded the "private meetings" differently ... sorry if it came across the wrong way.
          Steve Loughran added a comment -

          Maybe the issue with distcp is "sometimes you want to get at the raw data" - backups and copying being examples. This lets admins work on the data without needing access to the keys, just as today I can back up the underlying native OS disks without understanding HDFS (or any future encryption).

          Aaron T. Myers added a comment -

          Sanjay, Steve - regarding distcp, Alejandro has already said the following, which I think addresses what both of you are getting at. Note the second paragraph:

          Vanilla distcp will just work with transparent encryption. Data will be decrypted on read and encrypted on write, assuming both source and target are in encrypted zones.

          The proposal for changing distcp is to enable a second use case: copying data from one cluster to another without having to decrypt/encrypt the data while doing the copy. This is useful when doing copies for disaster recovery; HDFS admins could do the copy without having access to the encryption keys.

          Sanjay:

          Turns out this issue came up in a discussion with Owen, and he shares the concern and suggested that I post it. Besides, even if Alejandro and Owen are in agreement, my question is relevant and has not been raised so far above: encryption is used to overcome limitations of authorization and authentication in the system. It is relevant to ask if the use of delegation tokens to obtain keys adds weakness.

          Transparent at-rest encryption is used to address other possible attack vectors, for example an admin removing hard drives from the cluster and looking at the data offline, or various attack vectors if network communication can be intercepted.

          I was under the impression that Owen's concern was mostly around performance, i.e. that he didn't want all of the many tasks/containers in an MR/YARN job to each request the same encryption key(s) from the KMS at startup. I think that's quite reasonable, but it doesn't need to be an either/or thing - YARN jobs can request the appropriate keys upfront to address performance concerns and the KMS can accept DTs for authentication to enable other use cases.

          Regardless, I don't see how being able to request encryption keys via DTs adds any weakness. The DTs can only be granted via Kerberos-authenticated channels, and they expire, so they allow no more access than one can get via Kerberos. Could you perhaps elaborate on the specific concern there?

          Aaron .. you are misunderstanding my point. I am not saying that the discussion on this jira have not been open.<snip>

          OK, good to hear. Sorry if I misinterpreted what you were saying.

          I am merely asking for one more meeting where I can quickly come up to speed on the context that Alejandro, Todd, Yi, Tianyou, Andrew, and ATM share. It will help me and others better understand the viewpoint that some of you share due to previous high-bandwidth meetings.

          I'm certainly open to another meeting in the abstract to bring folks up to speed, but I'd still like to know what questions you have that haven't been addressed so far on the JIRA. So far I think that most of the questions you've been asking have already been discussed.

          Sanjay Radia added a comment -

          I believe the transparent encryption will break the HAR file system.

          Sanjay Radia added a comment -

          Vanilla distcp will just work with transparent encryption. Data will be decrypted on read and encrypted on write, assuming both source and target are in encrypted zones. ...The proposal for changing distcp is to enable a second use case.

          Alejandro, Aaron, the general practice is not to give the admins running distcp access to keys. Hence, as you suggest, we could change distcp so that it does not use transparent decryption by default; however, there may be other such backup tools and applications that customers and other vendors have written, and we would be breaking them. This may also break the HAR filesystem.

          Aaron, you took a very strong position that transparent decryption/re-encryption "is exactly what one wants". I am missing this - what are the use cases for distcp where one wants transparent decryption/re-encryption?

          Alejandro Abdelnur added a comment -

          Sanjay Radia,

          Can you be a bit more specific on HAR breaking?

          Regarding distcp, we want to support both modes: raw copies, without decrypt/encrypt, for admins running distcp; and regular copies, with encrypt/decrypt, to copy data into or out of an encryption zone, or to another encryption zone, whether within or across clusters.

          Owen O'Malley added a comment -

          I'm still -1 to adding this to HDFS. Having a layered file system is a much cleaner approach.

          Issues:

          • The user needs to be able to move, copy, and distribute the directories without the key. I should be able to set up a Falcon or Oozie job that copies directories where the user doing the copy has NO potential access to the key material. This is a critical security constraint.
          • A critical use case for encryption is when hdfs admins should not have access to the contents of some files. Encryption is the only way to implement that since the hdfs admins always have file permissions to both the hdfs files and the underlying block files.
          • We shouldn't change the filesystem API to deal with encryption, because we have a solution that doesn't require the change and will be far less confusing to users. In particular, we shouldn't add hacks to read/write unencrypted bytes to HDFS.
          • Each file needs to record the key version and original IV as written up in the CFS design document. The IV should be incremented for each block, but must start at a random number. As Alejandro pointed out this is required for strong security.
          Owen O'Malley added a comment -

          As Sanjay proposed, I think it would be great to get together and discuss the issues in person. Would a meeting this week work for you Alejandro?

          Alejandro Abdelnur added a comment -

          Owen O'Malley,

          "I’m still -1"

          I don’t see a previous -1 in any of the related JIRAs.

          During the Hadoop Summit, while talking in person, you advocated for the layered approach because of some concerns about the design. Since then, the design has been changed to address those specific concerns.

          Having a layered file system is a much cleaner approach.

          The main drawback of the layered approach is that it is not transparent: it will break, and require modifications to, a lot of existing applications and projects that assume HDFS file URIs are hdfs://.

          Also, it will break applications and projects that downcast FileSystem to DistributedFileSystem.

          Issues:

          I think all the issues you are bringing up are being addressed.

          Let me try to recap the current status of each of them.

          The user needs to be able move, copy, and distribute the directories without the key.

          Yes, this is possible in the current design. FileSystem will have new create()/open() signatures to support this; if you have access to the file but not the key, you can use the new signatures to copy files as per the use case you are mentioning.

          A critical use case for encryption is when hdfs admins should not have access to the contents of some files.

          Correct, the current design addresses this. HDFS admin has access to files but not the keys.

          We shouldn't change the filesystem API to deal with encryption.

          We are doing minor changes to enable the use cases you previously indicated. In the base FileSystem these operations delegate to the existing methods, so no existing filesystem implementation breaks.

          BTW, it is not a hack, but a way to enable new use cases that were not a requirement before. For example, if we ever do transparent compression in HDFS, you would need these new versions of create()/open() to be able to copy files without decompressing/compressing them. One hypothetical shape such raw-copy signatures could take is sketched below.
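          Purely to make that concrete, here is one possible (and entirely hypothetical) shape for such raw-copy signatures; the names are invented for illustration and are not the signatures that were actually proposed or committed. Assumes hadoop-common on the classpath.

            import java.io.IOException;
            import org.apache.hadoop.fs.FSDataInputStream;
            import org.apache.hadoop.fs.FSDataOutputStream;
            import org.apache.hadoop.fs.Path;

            // Hypothetical sketch only -- not the API that was actually added.
            public interface RawCopyCapableFileSystem {

                /** Open the stored bytes as-is (ciphertext for encrypted files), skipping decryption. */
                FSDataInputStream openRaw(Path path) throws IOException;

                /** Write pre-encrypted bytes, preserving the crypto xAttrs captured from the source file. */
                FSDataOutputStream createRaw(Path path) throws IOException;
            }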

          Each file needs to record the key version and original IV as written up in the CFS design document.

          This is happening already.

          Sanjay Radia added a comment -

          Can you be a bit more specific on HAR breaking?

          HAR copies subtree data into a tar-like structure. HAR lets you access the individual files transparently - all the work is done on the client side; the NN is not involved and hence will not be able to hand out the encrypted keys or key versions. It is possible that HAR can be changed to work, but I am merely pointing out that I don't think HAR will work as-is with the changes proposed in this Jira.

          Alejandro Abdelnur added a comment -

          Sanjay Radia, AFAIK a HAR is, from the HDFS perspective, a single file; the HAR file itself will be encrypted by HDFS. I don't see how this will be broken. What am I missing?

          Sanjay Radia added a comment -

          Alejandro - sorry, I should have explained the HAR example better: consider a subtree that has a file called E which is encrypted, while the rest are normal. Now the user decides to HAR the subtree. The file E needs to remain encrypted inside the HAR; also, when E is accessed from the HAR, it needs to be transparently decrypted. BTW, this might be fixable by changing HAR.

          Sanjay Radia added a comment -

          The NN gives the HDFS client the encrypted DEK [unique data encryption key of the file] and the keyVersion ID

          Alejandro - isn't it sufficient to hand out a keyname rather than the encrypted DEK?

          Alejandro Abdelnur added a comment -

          Sanjay Radia, now I see what you mean. In that case, all encrypted files will be decrypted at reading time and written decrypted to the HAR file. If the HAR file is being created within an EZ then the whole HAR file will be encrypted.

          Or, as you suggest, modifying the HAR format to copy the raw encrypted stream (plus storing the necessary crypto material) would be another option. On HAR reading, the files would then be decrypted using the embedded crypto material.

          Alejandro Abdelnur added a comment -

          Sanjay Radia, on " isn't it sufficient to hand out a keyname rather than the encrypted DEK?"

          Encrypted DEKs are not stored in the KeyProvider; the KeyProvider creates the DEK, encrypts it (EDEK), and returns the EDEK to the NN. The NN stores the EDEK in an xAttr of the file. The EDEK is handed to the HDFS client, which hands it to the KeyProvider to get the decrypted DEK. Only the KeyProvider has the key to encrypt/decrypt DEKs.
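          A small sketch of that create-time flow, with interface names invented for illustration (not the real NameNode or KeyProvider classes): the NN only ever handles the EDEK, never the plaintext DEK.

            // Hypothetical sketch only: the NN asks the key service for an EDEK for a new
            // file and stores it in the file's xAttr; the plaintext DEK never exists on the NN.
            public class CreateFlowSketch {

                /** Invented key-service call matching generateEncryptedKey() above. */
                public interface KeyService {
                    byte[] generateEncryptedKey(String ezKeyVersionName, byte[] fileIv) throws Exception;
                }

                /** Invented stand-in for the NN's xAttr storage. */
                public interface XAttrStore {
                    void setCryptoXAttr(String path, String ezKeyVersionName, byte[] iv, byte[] edek);
                }

                public static void onCreate(String path, String ezKeyVersionName, byte[] fileIv,
                                            KeyService kms, XAttrStore nn) throws Exception {
                    byte[] edek = kms.generateEncryptedKey(ezKeyVersionName, fileIv);
                    nn.setCryptoXAttr(path, ezKeyVersionName, fileIv, edek);
                }
            }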

          Aaron T. Myers added a comment -

          As Sanjay proposed, I think it would be great to get together and discuss the issues in person. Would a meeting this week work for you Alejandro?

          OK, seems like it could be beneficial. How about this Thursday 6/26 from 2pm-4pm in Cloudera's SF office? We'll of course set up WebEx, etc. for others to be able to join remotely if they'd like.

          Owen O'Malley added a comment -

          I don’t see a previous -1 in any of the related JIRAs.

          I had consistently stated objections and some of them have been addressed, but the fundamentals have become clear through this jira. I am always hesitant to use a -1 and I certainly don't do so lightly. Through the discussion, my opinion is that transparent encryption in HDFS is a really bad idea. Let's run through the case:

          The one claimed benefit of integrating encryption into HDFS is that the user doesn't need to change the URLs that they use. I believe this to be a disadvantage because it hides the fact that these files are encrypted. That said, a better approach, if that is the desired goal, is to create a NEW filter filesystem that the user can configure to respond to hdfs URLs and that does silent encryption. This imposes NO penalty on people who don't want encryption and does not require hacks to the FileSystem API.

          FileSystem will have new create()/open() signatures to support this; if you have access to the file but not the key, you can use the new signatures to copy files as per the use case you are mentioning.

          This will break every backup application. Some of them, such as HAR and DistCp, you can hack to handle HDFS as a special case, but this kind of special casing always comes back to haunt us as a project. Changing the FileSystem API is a really bad idea, and inducing more differences between the various implementations will create many more problems than you are trying to avoid.

          Todd Lipcon added a comment -

          The one claimed benefit of integrating encryption into HDFS is that the user doesn't need to change the URLs that they use. I believe this to be a disadvantage because it hides the fact that these files are encrypted

          This is the "transparent" part of the design, and it's billed as a positive feature in many products in the storage market. For example, from the "NetApp Storage Encryption (NSE)" datasheet:

          While higher level SAN and NAS fabric encryption solutions provide more flexibility, they can also present a challenge to everyday operations. Data encrypted before it is sent to the storage module cannot be compressed, deduplicated, or scanned for viruses, and it might need to be decrypted before it can be replicated to a backup site or archived to tape. Contrast this with NSE, which transparently supports these NetApp® storage efficiency features. NSE can help you lower your overall storage costs, while preventing old data from being accessed if a drive is repurposed.

          The same advantages hold for HDFS – if we add features such as transparent compression, it's crucial that the encryption be done after compression.
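          A tiny sketch of why that ordering matters, using plain JDK streams rather than the HDFS crypto classes (the file name and payload are made up): the compressor has to sit above the cipher, because high-entropy ciphertext does not compress.

          import java.io.FileOutputStream;
          import java.io.OutputStream;
          import java.util.zip.DeflaterOutputStream;
          import javax.crypto.Cipher;
          import javax.crypto.CipherOutputStream;
          import javax.crypto.KeyGenerator;
          import javax.crypto.SecretKey;
          import javax.crypto.spec.IvParameterSpec;

          // Sketch only: real code would use a random IV and the HDFS crypto streams.
          public class CompressThenEncrypt {
            public static void main(String[] args) throws Exception {
              KeyGenerator gen = KeyGenerator.getInstance("AES");
              gen.init(128);
              SecretKey key = gen.generateKey();

              Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
              cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(new byte[16]));

              // Writer -> Deflater (compress) -> Cipher (encrypt) -> disk.
              // Swapping the two layers would feed incompressible ciphertext to the compressor.
              byte[] chunk = "some repetitive payload ".getBytes();
              try (OutputStream out = new DeflaterOutputStream(
                  new CipherOutputStream(new FileOutputStream("data.enc"), cipher))) {
                for (int i = 0; i < 1000; i++) {
                  out.write(chunk);
                }
              }
            }
          }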

          The other point that this datasheet makes is that transparent at-rest encryption "acts as a backstop in case an administrator forgets to configure or misconfigures higher-level encryption". That is to say, users may still use encrypted file formats on top of HDFS using a scheme like you're proposing, but many regulations require that all data at rest be encrypted. Asking users to configure and use wrapper filesystems leaves it quite possible (even likely) that data will land on HDFS without being encrypted.

          Owen O'Malley added a comment -

          Todd, it is still transparent encryption if you use cfs:// instead of hdfs://. The important piece is that the application doesn't need to change to access the decrypted storage.

          My problem is that, by refusing to layer the change over the storage layer, this jira is making many disruptive and unnecessary changes to the critical infrastructure and its API.

          NSE is whole-disk encryption and is equivalent to using dm-crypt to encrypt the block files. That level of encryption is always very transparent and is already available in HDFS without a code change.

          Aaron, I can't do a meeting tomorrow afternoon. How about tomorrow morning? Say 10am-noon?

          Owen O'Malley added a comment -

          I'll also point out that I've provided a solution that doesn't change the HDFS core and still lets you use your hdfs urls with encryption...

          Finally, adding compression to the crypto file system would be a great addition and still not require any changes to HDFS or its API.

          Alejandro Abdelnur added a comment -

          Todd, it is still transparent encryption if you use cfs:// instead of hdfs://.

          Owen, that is NOT transparent.

          Owen O'Malley added a comment -

          Owen, that is NOT transparent.

          Transparent means that you shouldn't have to change your application code. Hacking HDFS to add encryption is transparent for one set of apps, but completely breaks others. Changing URLs requires no code changes to any apps.

          Aaron T. Myers added a comment -

          Aaron, I can't do a meeting tomorrow afternoon. How about tomorrow morning? Say 10am-noon?

          Sounds good. Here's the address of Cloudera's SF Office:

          433 California Street, Floor 6
          San Francisco, CA 94104

          I'll post the remote meeting details later today on this JIRA once I get those figured out.

          See you tomorrow!

          Aaron T. Myers added a comment -

          Here's the WebEx information for those who are planning on joining remotely tomorrow from 10am-noon Pacific Time:

          ------------------------------------------------------- 
          To start or join the online meeting 
          ------------------------------------------------------- 
          Go to https://cloudera.webex.com/cloudera/j.php?MTID=me67e0b50829b1dc39077ac5ca323038a 
          
          ------------------------------------------------------- 
          Audio Only conference information 
          ------------------------------------------------------- 
          Call-in toll number (US/Canada): 1-650-479-3208 
          
          Access code: 627 373 149 
          Global call-in numbers: https://cloudera.webex.com/cloudera/globalcallin.php?serviceType=MC&ED=321024932&tollFree=0
          
          Owen O'Malley added a comment -

          Any chance for the PA office? Otherwise I'll be dialing in.

          Aaron T. Myers added a comment -

          Unfortunately not; Tucu, Andrew, Charlie, Colin, Todd, and I are all based out of the SF office and it's quite a hike for us to get down there. Sure you can't come up to SF? I'll buy you lunch after the meeting.

          Owen O'Malley added a comment -

          Alejandro, you don't need and shouldn't implement any of the DEK stuff. AES-CTR is more than adequate. Rather than use 16 bytes of randomness and 16 bytes of counter, use 32 bytes of randomness and just add the counter to it rather than concatenate.

          Let's take the extreme case of 1 million files with the same key version. If you have 32 bits of randomness, that leads you to a collision chance that is basically 100%. With 64 bits of randomness that drops to 2.7e-8, which is close enough to 0.

          Owen O'Malley added a comment -

          Sorry, I messed up my math. Assuming that you have 1 million files per key and 8 bytes of randomness, you get 2.7e-8, which is close enough to 0. At 16 bytes or 32 bytes of randomness, doubles underflow when calculating the percentage.
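          For anyone who wants to check those figures, a short sketch using the standard birthday-bound approximation p ~= 1 - exp(-n(n-1)/(2d)), with the 1 million files and 8 bytes of randomness taken from the comment above:

          // Birthday-bound check for the collision figures quoted above.
          public class IvCollisionOdds {
            public static void main(String[] args) {
              double files = 1_000_000d;             // files sharing one key version
              double space64 = Math.pow(2, 64);      // 8 bytes of randomness
              double p64 = 1 - Math.exp(-(files * (files - 1)) / (2 * space64));
              System.out.printf("8 bytes:  p ~= %.1e%n", p64);    // ~2.7e-8

              // With 16 (or 32) bytes of randomness the exponent is so small that
              // 1 - exp(-x) rounds to 0 in double precision, matching the comment above.
              double space128 = Math.pow(2, 128);
              double p128 = 1 - Math.exp(-(files * (files - 1)) / (2 * space128));
              System.out.printf("16 bytes: p ~= %.1e%n", p128);   // 0.0
            }
          }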

          Alejandro Abdelnur added a comment -

          Owen, my understanding is that by cryptographic standards those probabilities are considered high.

          Also, having a DEK per file adds more security to the overall system because a compromised DEK compromises a single file, as opposed to many files within the EZ (all the files written with the same keyVersion).

          BTW, it is not that we came up with a novelty here; it is how many solutions work today.

          Mike Yoder added a comment -

          Owen - I can't claim to know much about Hadoop, but I have been working with crypto and file systems for a while now. Cryptographers are very paranoid people, and with good reason - they have seen things that used to be secure turn out to be insecure; they are well aware of Moore's law; and they know there are legions of governments (including our own) and hackers out to get them.

          One way cryptographers think about crypto strength is in 'bits of security'. It relates to how big the key space is to brute-force an attack. For symmetric keys, that's simply the key length - AES 128 provides 128 bits of security, or 2^128 different combinations required to brute force it. For asymmetric keys, it's a little different - RSA1024 has 80 bits of security; RSA2048 has 112 bits of security.

          • 56 bits of security is the original DES - today crackable as a cloud service in under a day for a few bucks
          • 80 bits of security is generally believed to be crackable by the NSA. You don't want to touch this with a ten foot pole; this is why everyone is fleeing RSA 1024.
          • 128 bits is classified by our government as acceptable to protect "SECRET" documents
          • 192 bits is acceptable for "TOP SECRET"

          The original scheme, which used AES-CTR and handed the EZ key out to every client, had some serious problems. The first is the use of CTR stream cipher mode. For reasons also discussed above, reusing a key+IV pair basically reveals anything ever encrypted with that key+IV pair. With one key (the EZ key) and a 128-bit IV, you might think you've got 128 bits of security - but you don't. Because of the birthday paradox, you expect an IV collision after roughly 2^64 IVs - so you've only got 64 bits of security. This is 16 bits below the "don't touch it with a ten foot pole" range.

          With a DEK per file, and using AES256, your key + IV is 256 bits + 128 bits = 384 bits - and with the birthday paradox, this becomes 192 bits of security, which is good enough for "TOP SECRET" - which is where we should shoot for IMHO.

          The other problem with handing the EZ key out to every client is... you're handing the EZ key out to every client. Having one DEK per file dramatically reduces the potential scope for compromise. Plus, it makes for super-easy key rotation - and this is a really big deal.

          The one-key-per-file design is used (amongst other places) in ecryptfs, the linux encrypting file system. See http://ecryptfs.sourceforge.net/ecryptfs.pdf.

          Owen O'Malley added a comment -

          Mike, I remember you from when I interviewed you.

          You are talking about collisions between IVs, not key space. By using 32 bytes of randomness (if someone is worried about crypto attacks there is no excuse not to use AES256), there is NO possibility of collision even assuming the insanely bad practice of using a single key version for a huge number of files. I obviously understand and applied the birthday paradox to get the numbers.

          Note that we already have key rolling and the key is already a random string of bytes. Adding additional layers of randomness just gives the appearance of more security. That may be wonderful in the closed source security world, but it is actively harmful in open source. In open source, having a clear implementation that is open for inspection is by far the best protection.

          Note that the other issue with not using the keys as intended is that many Hadoop users launch jobs that read millions of files. We can't afford to have the client fetch a different key for each of those millions of files.

          Alejandro Abdelnur added a comment -

          Mike, I remember you from when I interviewed you.

          Owen, that was not needed at all. It does not add any value to the technical discussion we are having here.

          Owen O'Malley added a comment -

          Alejandro, I was just trying to say that I'd met him and was familiar with his work history. If it sounded rude or dismissive, that was unintended. I'm sorry.

          Mike Yoder added a comment -

          Hey, no offense taken - didn't think you intended it as such. I remember you, too.

          The problem is that the IV size for AES-CTR (or any cipher mode using AES) is 16 bytes. You actually can't use 32 bytes of randomness - assuming that I understand what you're saying, namely that you would use a 32-byte IV instead of a 16-byte IV.

          It is not unlikely that customers will have one encryption zone cover a huge number of files. (This comes directly from experience at Vormetric - we called Encryption Zones "Guard Points" and it was the same idea.) Many customers just want to "encrypt stuff", and don't want to worry about the crypto implementation details. That's our job.

          I confess I don't understand your reference to the openness of the solution - in my opinion, it means that we have to do everything that we can to ensure that we get the crypto right. Since the source is available for viewing by everyone, we really have to be extra careful. In my opinion, having one key per file does not significantly add to the complexity of the solution, and ratchets up the security a few notches. And key rolling is lots and lots easier with the one-key-per-file approach.

          I'll let Alejandro Abdelnur address the scalability issues!

          Owen O'Malley added a comment -

          In the discussion today, we covered lots of ground. Todd proposed that Alejandro add a virtual ".raw" directory to the top level of each encryption zone. This would give processes that want to read or write the data within the encryption zone an access path that doesn't require modifying the FileSystem API. With that change, I'm -0 on adding encryption into HDFS. I still think that our users would be far better served by adding encryption/compression layers above HDFS rather than baking them into HDFS, but I'm not going to block the work. By adding the work directly into HDFS, Alejandro and the others working on this are signing up for a high level of QA at scale before this is committed.

          A couple of other points came up:

          • symbolic links in conjunction with cryptofs would allow users to use hdfs urls to access encrypted hdfs files.
          • there must be an hdfs admin command to list the crypto zones to support auditing
          • There are significant scalability concerns about each task requesting decryption of each file key. In particular, if a job has 100,000 tasks and each opens 1000 files, that is 100 million key requests. The current design is unlikely to scale correctly.
          • the kms needs its own delegation tokens and hooks so that yarn will renew and cancel them.
          • there are three levels of key rolling:
            • leaving old data alone and writing new data with the new key
            • re-writing the data with the new key
            • re-encoding the per file key (personally this seems pointless)
          Sanjay Radia added a comment -

          Noticed the rename restriction for encryption zones. In the past, rename was one of the main objections to volumes (ie volumes should not restrict renames). I think we should bite the bullet and introduce the notion of volumes and use encryption as the first use case for volumes (ie an encryption zone becomes an encrypted volume). Snapshots can also benefit from the volume-rename restriction because rename across snapshots is very hard to support.

          Tsz Wo Nicholas Sze added a comment -

          Correct me if I am wrong: the current design does not prevent a malicious admin who has root access to a node, since one can

          1. dump the memory of a running task to find out a plain decryption key of a file;
          2. sudo as a hdfs admin and read the encrypted file in raw format;
          3. decrypt the file with the key.
          Alejandro Abdelnur added a comment -

          afaik, the only way you can protect against a root attack is for all encryption/decryption to happen in sealed hardware (an HSM), with the keys never leaving it.

          Andrew Purtell added a comment -

          The current design does not prevent a malicious admin who has root access for a node

          At-rest encryption doesn't address memory protection. Hence, at rest. Someone who has root access can read decrypted plaintext out of memory directly, no need for steps 2 and 3 above. It's meant to provide assurance that, should a disk be improperly disposed of or HDFS permissions be improperly set for a given set of files, no sensitive information can leak in those cases. This is still important. It's commonly viewed as valuable (and required) in various regulatory regimes.

          the only way you can protect from root attack is for all encryption/decryption to happen in sealed hardware (an HSM) and the keys never leaving such.

          You also have to somehow get your program logic onto that hardware to perform useful work on the decrypted data, and do it in a way that you can attest to the integrity of the execution environment.

          Alejandro Abdelnur added a comment -

          and even then, without getting the keys, root could harvest from user memory all the info necessary to get the HSM to decrypt anything.

          wondering if there is a way other than eliminating root access.

          Todd Lipcon added a comment -

          I think the "solution" to this issue is by administrative policy - eg no single person has root access to the machine. Two admins each know half of the password, and thus neither can log in without the other one watching over their shoulder. (or some equivalent thereof using split key based access, etc).

          Tsz Wo Nicholas Sze added a comment -

          > ..., at rest. Someone who has root access can read decrypted plaintext out of memory directly, no need for steps 2 and 3 above.

          There is a distinction – without steps 2 and 3, the malicious admin can ONLY obtain the data being decrypted on a node. With steps 2 and 3, the admin could read the entire file.

          Ideally, the malicious admin should ONLY obtain the data being decrypted on a node to which he/she has root access. Such a requirement is achievable – suppose files are encrypted with a stream cipher. When a task needs to access a range of data in a file, only the corresponding range of the keystream is sent to the task.
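          A minimal sketch of the mechanics behind this idea, assuming plain AES-CTR: since the counter block for byte offset pos is just the file IV plus pos/16, a key server could compute and hand out only the keystream bytes for a requested range without ever releasing the key. The class and method names below are illustrative, not part of the patch.

          import java.math.BigInteger;
          import javax.crypto.Cipher;
          import javax.crypto.spec.IvParameterSpec;
          import javax.crypto.spec.SecretKeySpec;

          // Illustrative only: compute the AES-CTR keystream for a byte range.
          public class CtrKeystreamRange {

            static byte[] keystream(byte[] key, byte[] fileIv, long pos, int len) throws Exception {
              // Counter block for byte offset pos is fileIv + pos/16 (mod 2^128).
              BigInteger ctr = new BigInteger(1, fileIv).add(BigInteger.valueOf(pos / 16));
              byte[] iv = toIv16(ctr);

              Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
              c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));

              int skip = (int) (pos % 16);                  // offset within the first block
              byte[] ks = c.doFinal(new byte[skip + len]);  // encrypting zeros yields the keystream
              byte[] out = new byte[len];
              System.arraycopy(ks, skip, out, 0, len);
              return out;                                   // the task XORs this with the ciphertext range
            }

            private static byte[] toIv16(BigInteger v) {
              byte[] raw = v.mod(BigInteger.ONE.shiftLeft(128)).toByteArray();
              byte[] iv = new byte[16];
              int n = Math.min(raw.length, 16);
              System.arraycopy(raw, raw.length - n, iv, 16 - n, n);
              return iv;
            }
          }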

          Andrew Purtell added a comment -

          Ideally, the malicious admin should ONLY obtain the data being decrypted on a node to which he/she has root access. Such a requirement is achievable – suppose files are encrypted with a stream cipher. When a task needs to access a range of data in a file, only the corresponding range of the keystream is sent to the task.

          That is an interesting idea.

          You might also be interested in reading the Horus paper (or perhaps already have): http://www.ssrc.ucsc.edu/pub/rajendran11-pdsw.html I'm not sure of the scalability properties or implementation complexity. My guesses as to the answers are: unproven, and significant. It's not up to me, but I'd imagine the consensus would be that this could be follow-up work.

          Tsz Wo Nicholas Sze added a comment -

          Andrew Purtell, thanks for the pointer.

          > ..., without getting the keys, root could harvest from user mem all necessary info to get the hms to decrypt anything.

          Alejandro Abdelnur, could you explain it in more detail?

          Alejandro Abdelnur added a comment -

          Let's assume you have an HSM that keeps the encryption key inside its own memory and root cannot access that. This means the HSM will provide an API along the lines of encrypt/decrypt(KEY_ID, INPUT, OUTPUT) to be used from user space. Root can get the KEY_ID from user memory and then ask the HSM to encrypt/decrypt any stream using that key. Root never gets hold of the key itself, but can force the HSM to encrypt/decrypt anything.
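          Stated as code, with a hypothetical HsmClient interface invented purely for illustration: even though the key stays inside the HSM, anything that can present a harvested KEY_ID gets plaintext back.

          // Hypothetical interface, for illustration only; not a real HSM API.
          interface HsmClient {
            byte[] decrypt(String keyId, byte[] ciphertext);   // key material never leaves the HSM
          }

          class RootReplaySketch {
            static byte[] replay(HsmClient hsm, String harvestedKeyId, byte[] ciphertext) {
              // Root never sees the key, but after scraping KEY_ID from user memory
              // it can simply ask the HSM to decrypt on its behalf.
              return hsm.decrypt(harvestedKeyId, ciphertext);
            }
          }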

          Andrew Wang added a comment -

          Charles posted a design doc for how distcp will work with encryption at HDFS-6509. Sanjay Radia and Owen O'Malley, I think this is essentially the "raw" directory discussed earlier, but it'd be appreciated if you gave it a once over. Thanks!

          Stephen Chu added a comment -

          I've attached a test plan we will execute for this feature. Feel free to comment and make suggestions.

          Charles Lamb added a comment -

          Submitting a first cut of the branch merge patch just to get a jenkins run going.

          Charles Lamb added a comment -

          submitting patch to get a jenkins run.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12659689/HDFS-6134.001.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 38 new or modified test files.

          -1 javac. The applied patch generated 1262 javac compiler warnings (more than the trunk's current 1259 warnings).

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 9 new Findbugs (version 2.0.3) warnings.

          -1 release audit. The applied patch generated 1 release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

          org.apache.hadoop.ha.TestZKFailoverControllerStress
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7558//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7558//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7558//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7558//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Javac warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7558//artifact/trunk/patchprocess/diffJavacWarnings.txt
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7558//console

          This message is automatically generated.

          Yi Liu added a comment -

          Second cut of the branch merge patch; submitting to check jenkins.

          Show
          Yi Liu added a comment - Second cut of the branch merge patch and check the jenkins.
          Hide
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12660055/HDFS-6134.002.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 38 new or modified test files.

          -1 javac. The applied patch generated 1262 javac compiler warnings (more than the trunk's current 1259 warnings).

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          -1 release audit. The applied patch generated 1 release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient:

          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7567//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7567//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
          Javac warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7567//artifact/trunk/patchprocess/diffJavacWarnings.txt
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7567//console

          This message is automatically generated.

          Charles Lamb added a comment -

          I've attached a document that discusses the general design of this feature.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12660368/HDFSDataatRestEncryption.pdf
          against trunk revision .

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7580//console

          This message is automatically generated.

          Sanjay Radia added a comment - - edited

          Charles posted a design doc for how distcp will work with encryption at HDFS-6509.

          I did a quick glance over it. We also need to do the same for har. I think the same .raw should work ...

          Sanjay Radia added a comment -

          Wrt webhdfs, the document says that the decryption/encryption will happen in the DataNode.

          • Will the DN be able to access the key necessary to do this?
          • The data will be transmitted in the clear - is that what we want? For the normal HDFS API the decryption/encryption happens at the client side.
          • There are two aspects to Webhdfs: the rest client and the webhdfs Filesystem. Have you considered both use cases?
          • Will distcp work via webhdfs? Customers often use webhdfs instead of hdfs for cross-cluster copies.
          Sanjay Radia added a comment -

          One of the items raised at the meeting and summarized by Owen in his meeting minutes comment (june 26) is the scalability concern. How is that being addressed? Can a job client get the keys prior to job submission?

          Andrew Wang added a comment -

          Hey Sanjay, thanks for reviewing things,

          Regarding HAR, could you lay out the use case you have in mind? When the user makes the HAR, they'll need access to all the input files (encrypted or unencrypted), and then if they write it within an EZ it'll be encrypted; otherwise, unencrypted. This behavior seems reasonable to me.

          Regarding webhdfs, it's not a recommended deployment. I'm going to doc this additionally in HDFS-6824. It requires giving the DNs (thus the HDFS superuser) access to EZ keys, which is not particularly secure. There is HTTPS transport via swebhdfs, but that doesn't fix the key access issue. The recommended access method is instead HttpFS, which runs as a non-superuser. So, yes distcp will work too. This will definitely be covered during our testing.

          Regarding scalability, you can put the KMS behind a load balancer, which should make scalability a non-issue. Tucu can comment better on this than me since he's done some KMS benchmarking, but I think a single instance should be able to handle O(1000s) of req/s.

          Tsz Wo Nicholas Sze added a comment -

          > ... The recommended access method is instead HttpFS, which runs as a non-superuser. ...

          Could you give more details? Do you mean that each user has to run a HttpFS server for their application?

          Alejandro Abdelnur added a comment -

          Tsz Wo Nicholas Sze, just a regular HttpFS setup. HttpFS runs as user 'httpfs', which is not an HDFS superuser, just an HDFS proxyuser. Because of this, the 'hdfs' user does not have access to decrypted keys, which would be the case with WebHDFS running out of the NN/DNs.

          Sanjay Radia added a comment -

          Regarding webhdfs, it's not a recommended deployment.

          The design document in this jira already states that webhdfs just works:

          • "This Jira provides encryption for HDFS data at rest and allows any application to access it via the Hadoop Filesystem Java API, Hadoop libhdfs C library, or WebHDFS REST API."
          • "For HDFS WebHDFS, the DataNodes act as the HDFS client reading/writing files since that is where encryption/decryption will happen. For HttpFS, the HttpFS server acts as the HDFS client reading/writing files, since that is where encryption/decryption will happen."

          Webhdfs not working is worrying because REST is used by many users who do not want to deploy hadoop binaries or who want to use a non-java client.
          Also, I do not understand why httpfs works and webhdfs "breaks". Neither will be running as the end-user and hence neither will allow transparent encryption. Am I missing something?

          Sanjay Radia added a comment -

          Regarding HAR, could you lay out the usecase ...

Alejandro summarized the problem, and also the solution of modifying har, in his comment of June 24th: https://issues.apache.org/jira/browse/HDFS-6134?focusedCommentId=14042797&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14042797
Andrew, you are missing one of the usage models of HAR: the user creating the har is not the only user accessing it - har is a general tool used by an admin to compact files and replace the originals.

I can think of at least the following use cases so far:

• A subtree being har'ed has a subtree that is an EZ - some files in the har will be encrypted and some will not. The reader should be able to transparently read each of the two kinds.
• A subtree being har'ed is part of a subtree that is an EZ - the whole har should be encrypted and transparently decrypted when its contents are read.
• A user har's a non-EZ subtree and copies it into an EZ - this should just work: as you suggest, the whole thing is encrypted, and reading the har requires that the user has access to the keys.
          Alejandro Abdelnur added a comment -

          Also I do not understand why httpfs works and webhdfs "breaks". Neither will be running as the end-user and hence neither will allow transparent encryption. Am I missing something?

Both httpfs and webhdfs will work just fine. When reading/writing a file, webhdfs (the DN) and httpfs (the httpfs server) will need to get the file encryption key in decrypted form. httpfs runs as the 'httpfs' user; webhdfs runs as the 'hdfs' user (embedded in the NN/DNs). Typically the KMS would be configured not to decrypt keys for the 'hdfs' user (one of the goals is that the hdfs user should not have access to the keys, so it cannot decrypt files). For webhdfs to work, the 'hdfs' user must not be blacklisted in the KMS, and thus the 'hdfs' user has access to the decrypted keys for files.

The point is, if webhdfs is enabled, then the KMS has to be configured in a way that lets the 'hdfs' user decrypt any encrypted file. And this could fail some security audits users may have to do in their clusters.

          Sanjay Radia added a comment - - edited

Alejandro - for both webhdfs and httpfs to work, your proposal is that the users "hdfs" and "httpfs" have access to any key (you mention only webhdfs in your comment, but I suspect you meant both). However, with this approach webhdfs and httpfs will give access to ALL EZ files to users that have read access.
          Correct? This would be unacceptable.

          I believe the better solution is for webhdfs and httpfs to access the file by doing a doAs(endUser).
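
(Editorial note: a minimal sketch of the doAs pattern being proposed here, using the standard Hadoop UserGroupInformation API. The class name, end-user name, and path are placeholders, not code from this patch.)

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserReadSketch {
  public static void main(String[] args) throws Exception {
    final Configuration conf = new Configuration();
    // The service's own login identity, e.g. 'httpfs' (or 'hdfs' for the DN).
    UserGroupInformation realUgi = UserGroupInformation.getLoginUser();
    // Impersonate the end user; HDFS permission checks then apply to that user.
    UserGroupInformation proxyUgi =
        UserGroupInformation.createProxyUser("endUser", realUgi);
    proxyUgi.doAs(new PrivilegedExceptionAction<Void>() {
      @Override
      public Void run() throws Exception {
        FileSystem fs = FileSystem.get(conf);
        // For a file in an encryption zone, the DFS client fetches the EDEK,
        // asks the KMS to decrypt it, and returns a decrypting stream.
        try (FSDataInputStream in = fs.open(new Path("/ez/file"))) {
          System.out.println(in.read());
        }
        return null;
      }
    });
  }
}

Note that, as discussed further down in this thread, the KMS blacklist is checked against the real UGI, so running such a doAs from a blacklisted 'hdfs' login would still be refused the decrypted key.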

          Larry McCay added a comment -

          Sanjay Radia that sounds right to me.
          In fact, that would be the only way for Knox to be able to access files in HDFS through webhdfs.
          IMO, relegating webhdfs to being an audit violation should be a showstopper.

The hdfs user should not be able to access the keys, but an end user with appropriate permissions should be given access through webhdfs.

          Alejandro Abdelnur added a comment -

httpfs accesses files using doAs() today; I would assume webhdfs does the same. Still, in the case of webhdfs, it is the 'hdfs' user authenticating against the KMS. If the 'hdfs' user can be a KMS proxyuser, then an 'hdfs' admin can retrieve file encryption keys and gain access to the decrypted content of encrypted files.

          Larry McCay added a comment -

Hey Alejandro Abdelnur - I need a little more clarification here. When you describe webhdfs authenticating as 'hdfs' while it is accessing a file on behalf of an end user - are you referring to the fact that the services authenticate to one another even though the effective user (via doAs) will be the end user, and therefore the authorization will be checking the end user's permissions? If so, isn't this the same for httpfs?

          What keeps an admin from using httpfs to gain access to decrypt encrypted files? If an admin can authenticate as an end user to either proxy then it seems they will be able to gain access.

          I must be missing some nuance about webhdfs and hdfs user.
          That doesn't lessen my concern about webhdfs not being considered a trusted API to encrypted files though.

          Sanjay Radia added a comment -

Larry, I don't completely get the difference between webhdfs and httpfs, but I think the cause of the difference is that the user hdfs is the superuser (note the DN runs as hdfs, and the webhdfs code is executed on behalf of the end-user inside the DN after checking permissions). Hence I think this would potentially open up access to all encrypted files that are readable. However, that should NOT happen if doAs is used (correct?).

I agree it would be unacceptable to say that if one enables transparent encryption then one should disable webhdfs because it would become insecure. Andrew says that "Regarding webhdfs, it's not a recommended deployment", but Alejandro says "Both httpfs and webhdfs will work just fine" and then, in the same paragraph, says "this could fail some security audits".

          Larry McCay added a comment -

I guess if webhdfs (running as 'hdfs') is allowed to doAs the end user, then that can be a problem.
          But again, I don't see what keeps an admin from doing that with httpfs as well.

It seems as though the KMS needs the ability to not let the 'hdfs' user gain keys through any trusted proxy, but still allow a trusted proxy that is running as a superuser to doAs other users.

          Alejandro Abdelnur added a comment -

          Let me try to explain things a different way.

          When setting up filesystem encryption in HDFS (forget about webhdfs and httpfs for now), things will be configured so the HDFS superuser cannot retrieve decrypted 'file encryption keys'. Because the HDFS superuser has access to the encrypted versions of the files, having access to the decrypted 'file encryption keys' would allow the HDFS superuser to get access to the decrypted file. One of the goals of HDFS encryption is to prevent that.

This is achieved by blacklisting the HDFS superuser from retrieving decrypted 'file encryption keys' from the KMS. This blacklist must be enforced on the real UGI hitting the KMS (regardless of whether it is doing a doAs or not).

If you set up httpfs, it runs as the 'httpfs' user, a regular HDFS user configured as a proxyuser to interact with HDFS and the KMS via doAs calls.

If you set up webhdfs, it runs as the 'hdfs' user, the HDFS superuser, and this user will have to be configured as a proxyuser in the KMS to work with doAs calls. Also, the 'hdfs' user will have to be removed from the KMS decrypt-keys blacklist (and this is the problem).

Even if you audit the webhdfs code running in the DNs to ensure things are always done using doAs, and that there is no foul play in the DN code, there is still an issue:

• An HDFS admin logs in to a DN in the cluster as 'hdfs'
• Then he kinits as 'hdfs/HOST'
• Then he curls the KMS, asking it to decrypt keys as user X via doAs
• Because he has access to the encrypted file, and now has the decrypted key, he gets access to the file in the clear

Hope this clarifies.
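
(Editorial note: a sketch of the configuration being described, using the standard KMS property names; the values and the httpfs host are placeholders, and exact file placement may differ slightly between versions.)

<!-- kms-acls.xml: keep the HDFS superuser from ever decrypting EDEKs.
     The blacklist is checked against the real caller UGI, even for doAs requests. -->
<property>
  <name>hadoop.kms.blacklist.DECRYPT_EEK</name>
  <value>hdfs</value>
</property>

<!-- kms-site.xml: allow the 'httpfs' service user to proxy end users to the KMS. -->
<property>
  <name>hadoop.kms.proxyuser.httpfs.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.kms.proxyuser.httpfs.hosts</name>
  <value>httpfs-host.example.com</value>
</property>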

          Larry McCay added a comment -

Thanks Alejandro Abdelnur, that is pretty clear.

          The question that remains for me is why this same scenario isn't achievable by the admin kinit'ing as httpfs/HOST or Oozie or some other trusted proxy and then issuing a request with a doAs user X.

          We have to somehow fix this for webhdfs - it is an expected and valuable API and should remain so with encrypted files without introducing a vulnerability.

          Even if we have to do something like use another proxy (like Knox) and a shared secret to ensure that there is additional verification of the origin of a KMS request from webhdfs. This would enable proxies to access webhdfs resources with a signed/encrypted token - if KMS gets a signed request from webhdfs that it can verify then it can proceed. The shared secret can be made available through the credential provider API and webhdfs itself would just see it as an opaque token that needs to be passed in the KMS request. Requiring an extra hop for this access would be unfortunate too but if it is for additional security of the data it may be acceptable.

          Anyway, that's just a thought for keeping webhdfs as a first class citizen. We have to do something.

          Alejandro Abdelnur added a comment -

Larry, if the httpfs admin is a different person from the hdfs admin, you don't have the problem.

          Sanjay Radia added a comment - - edited

Alejandro, a potential solution: treat user "hdfs" as a special user such that the HDFS system will NOT accept any client connections from "hdfs". An admin will not be able to connect as user "hdfs", but can connect as, say, "ClarkKent", where "ClarkKent" is in the superuser group of hdfs so that the admin can do his job as superuser. It does mean that we are trusting the HDFS code to be correct in not abusing its access to keys, since it has proxy authority with the KMS (this was not required so far).

          Larry McCay added a comment -

          And that is ensured by file permissions on the keytab?

          Alejandro Abdelnur added a comment -

If httpfs and the NN or DNs run on the same box, yes. However, in a prod environment that would not commonly be the case.

          Sanjay Radia added a comment -

          If you set up httpfs, it runs using the 'httpfs' user, a HDFS regular user configured as proxyuser to interact with HDFS and KMS doing doAs calls

Alejandro, we modified the original design in this Jira so that the NN is not a proxy for the keys; instead the client gets the keys directly from the KMS, because the best practice in encryption is to eliminate proxies (see Owen's comment of June 11). With your proposal for httpfs, the httpfs server is a proxy to get the keys. Perhaps we are approaching the problem wrong. Consider the following alternative: let webhdfs and httpfs simply send the encrypted raw data to the client. For the HDFS-native filesystem, encryption and decryption happen on the client side; we should consider the same for the REST protocol. Clearly it requires more code on the REST client side.

BTW the webhdfs FileSystem implementation (as opposed to the raw REST protocol being discussed) has a client-side library that can mimic the HDFS filesystem's client side.

          Alejandro Abdelnur added a comment -

          Sanjay,

HttpFS is a service that needs to be configured as a proxyuser in HDFS. Different from the 'hdfs' user, the 'httpfs' user does not have blanket access to all HDFS files, only to the files of the users it can proxy as, with HDFS permissions being enforced. Also, the 'httpfs' user does not have access to all encrypted files, which the 'hdfs' user does. The same holds for Oozie, Templeton, HiveServer2, Knox and any other service that needs a proxyuser config in HDFS.

Regarding returning encrypted data back to the HTTP client: that would mean you cannot simply use tools/libraries like curl/libcurl to integrate via the WebHDFS protocol anymore. You'll need a client library that interacts with the KMS to decrypt the encrypted key and uses libopenssl to decrypt the data. And if you are accessing file ranges, you'll have to know how to manipulate the IV. IMO, going down this path completely defeats the motivation out of which WebHDFS came to be.

          Sanjay Radia added a comment -

          I get your point about client-side code for webhdfs. I do agree that httpfs is a proxy but do you want it to have blanket access to all keys?

My main concern is that this jira completely breaks webhdfs. Do you find that acceptable? There are so many users of this protocol.
BTW, did you see my earlier attempt at a solution (13.06 today) - does that work?

          Sanjay Radia added a comment -

Alejandro, can you please summarize your explanation for why, during file creation, the NN requests the KMS to create a new EDEK rather than having the client do it? Suresh raised the same concern that I did at our meeting yesterday. Thanks.

          Alejandro Abdelnur added a comment -

On create(), the NN creates the INode for the file and sets the IV and the EDEK as xattrs, then returns them as part of the create response. A single client-to-NN RPC is done for the create().

If the client is responsible for creating the new EDEK, then you need 2 RPCs on create(): the first one to 'create' the file, and the second one (now that you know from the first RPC's response that your file is to be encrypted) to set the EDEK in the INode.

With the current behavior, keep in mind that the call to the KMS to get an EDEK is not done during file create(); the NN has a warm cache of EDEKs that is replenished asynchronously, outside of the file create() calls.
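
(Editorial note: a rough sketch of the NN-side flow just described, written against the public KeyProviderCryptoExtension API rather than the actual FSNamesystem code; the class and method names other than the Hadoop API calls are placeholders.)

import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;

public class EdekSourceSketch {
  private final KeyProviderCryptoExtension kp;

  EdekSourceSketch(KeyProvider provider) {
    this.kp = KeyProviderCryptoExtension.createKeyProviderCryptoExtension(provider);
  }

  // Done off the create() path: ask the KMS to pre-compute EDEKs for a zone key.
  void warm(String ezKeyName) throws Exception {
    kp.warmUpEncryptedKeys(ezKeyName);
  }

  // On create(): hand out an EDEK to store in the file's xattr and return to the client.
  EncryptedKeyVersion edekFor(String ezKeyName) throws Exception {
    return kp.generateEncryptedKey(ezKeyName);
  }
}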

          Sanjay Radia added a comment -

Had a chat with Owen over the webhdfs issue and the solution I had proposed in my earlier comment. He said that restricting client connections from user hdfs is not necessary: the DN does a doAs(user). The KMS is configured for hdfs to be a proxy, but it also blacklists hdfs (and other superusers). That is, the DN as a proxy cannot get a key for hdfs but it can get the keys for other users. So this brings the httpfs and webhdfs solutions to be the same.

          Owen proposed another solution where the httpfs or DN daemons do not need to be trusted proxies for the KMS. The user simply passes a KMS delegation token in the REST request (we already pass HDFS delegation tokens).

          Suresh Srinivas added a comment -

Finally got some time to catch up on this jira. Nice work, team!

          Some comments:

          • Is there a document that covers aspects of how authentication and authorization works with KMS?
          • Also consolidating how the extended attributes is used by the feature into the main design document or one design document will help.
• Now a file has ownership, which decides access, and keys, which decide permission to decrypt. An admin can change file ownership. Now the new owner may not be able to decrypt it, and the original owner who can decrypt may not be able to access the file. Since these kinds of scenarios can be a management nightmare, we need the following (not sure they already exist):
  • Ability to list key ID information for a given file. For that given key ID, the ability to get ownership information from the KMS (not sure whether it is part of the standard interface or we need a tool that goes to HDFS, followed by the KMS).
  • Ability to run some kind of audit to detect conditions where key ownership and file permissions have discrepancies.
• Only a user with permissions to the key can create a file under an encryption zone. This could be an issue for distcp that is run by a user. Do we need to think about this restriction, or would this be solved by the /.reserved/raw scheme?

These issues can be addressed post-merge. We need to see what the pending work is and come up with a list of what needs to be done before the merge and what can be moved to after the merge, with due dates. This avoids the situation where a patch is committed to trunk and then people get very busy and do not work on follow-up jiras.
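
(Editorial note: on the distcp point above, the /.reserved/raw mechanism that eventually shipped lets an admin copy the raw encrypted bytes plus the xattrs carrying the EDEKs, so no key access is needed. A sketch with placeholder cluster names and paths:)

# Copy an encryption zone between clusters without decrypting it (superuser only).
# -px preserves the xattrs (including the EDEKs); cluster names/paths are placeholders.
hadoop distcp -px \
  hdfs://srcCluster/.reserved/raw/zone \
  hdfs://dstCluster/.reserved/raw/zone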

          Sanjay Radia added a comment -

          Context: making things work for cp, distcp, har, etc.
          Is the following true:
the EZ master key (EZKey) is only needed for file creation in an EZ subtree. After that, for reading or appending to a file, one simply needs the file's individual key. If that is true, then one can copy raw encrypted files and their keys from an EZ to tape, har, tar, etc. and then restore them later, and things would just work. Also, can one copy raw encrypted files and their keys from an EZ to another EZ which has a different EZKey, and again things would work?

          Charles Lamb added a comment -

          the EZ master key (EZKey) is only needed for file creation in EZ subtree. After that for reading or appending to a file, one simple needs the file's individual key. If that is true then one can copy raw encrypted files and their keys from an EZ to tape, har, tar, etc and then restore them later and things would just work. Also can one copy raw encrypted files and their keys from an EZ to another EZ which has a different EZKey and again things would work?

          Not exactly. Each file has an EDEK associated with it. That's a key (the DEK) which has been encrypted with the ez-key. To read the file, you need to turn the EDEK into a DEK by decrypting the EDEK with the ez-key.

That said, you can still read back tape, har, tar, etc. later as long as you still have access to the ez-key (which presumably you do).
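
(Editorial note: the reader-side step Charles describes, sketched against the public KeyProviderCryptoExtension API; the helper class is a placeholder. The client hands the EDEK to the KMS and, if authorized for the EZ key, gets back the DEK material used to set up the decrypting stream.)

import org.apache.hadoop.crypto.key.KeyProvider.KeyVersion;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension;
import org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.EncryptedKeyVersion;

public class EdekDecryptSketch {
  // The EZ key itself never leaves the KMS; only the per-file DEK is returned.
  static byte[] dekFor(KeyProviderCryptoExtension kms, EncryptedKeyVersion edek)
      throws Exception {
    KeyVersion dek = kms.decryptEncryptedKey(edek);
    return dek.getMaterial();
  }
}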

          Suresh Srinivas added a comment -

          To read the file, you need to turn the EDEK into a DEK by decrypting the EDEK with the ez-key.

          FileEncryptionInfo stores EZKeyID right along with EDEK. So using EZKeyID and EDEK one can get at DEK right?

So I think Sanjay's assertion is right. When you want to create a file, you need the encryption zone's EZKey. For reading, the FileEncryptionInfo is self-sufficient. Am I missing something?

          Sanjay Radia added a comment - - edited

          Some thoughts on the Har use cases and possible outcomes:
          1) Har a subtree and the subtree contains an EZ.
          2) Har a subtree rooted at the EZ
          3) Har a subtree within an EZ
          Typically the subtree is replaced by the har itself, though not required. The Har is read only.
          The operation can be performed by an admin or by a user.

Use case 1 - copy the raw files and the keys into the HAR (i.e. the files inside the HAR remain encrypted). When files are accessed from the Har filesystem, the same machinery as for HDFS EZs should come into play to allow transparent decryption of the files. A user with no KMS permission will not be able to decrypt. Someone with read access to the HAR will be able to get to the raw files and their keys (how does this compare to the normal HDFS EZ?).
Use case 2 - same as 1.
Use case 3 - if the har is copied elsewhere (i.e. it does not replace the subtree), then it is the same as 1. If it does replace the subtree, the HAR will be encrypted once again (i.e. double encryption).

          Charles Lamb added a comment -

          FileEncryptionInfo stores EZKeyID right along with EDEK. So using EZKeyID and EDEK one can get at DEK right?

          So I think Sanjay's assertion is right. When you want to create a file, you need encryption zone's EZKey. For reading the FileEncruptionInfo is self sufficient. Am I missing something?

I might be misunderstanding what you're saying. You are correct that to create a file you need the ez-keyid (actually you need the credentials to access the ez-key, since the KMS is the only one who touches the ezkey on the user's behalf – the actual ezkey never leaves the KMS). The ezkey-id is required because you need to turn an EDEK into a DEK. But it is the EDEK, not a DEK, that is stored in the xattr. After the file has been created (and encrypted), to read it back you need the EDEK to be decrypted and turned into a DEK. The client asks the KMS to do that decryption using the EZ-key (the id of which is also stored in the FEInfo). But the FEInfo does not have a DEK – it has an EDEK, and to do any encryption/decryption you need the DEK. Remember that we never want to store a DEK in the xattr, because that would be giving the unencrypted key to the NN.

          The FEInfo is sufficient as long as you still have the KMS around to do the decryption of the EDEK into a DEK. You will always need the KMS to make that conversion and the KMS will do that based on whatever ezkeyid you hand it.

          Suresh Srinivas added a comment -

Charles Lamb, we are saying the same thing. For any file creation, one needs to get the EZKey from the directory marked as an encryption zone. However, to read an encrypted file, the FileEncryptionInfo has all the required information. All of this of course assumes that the user has credentials to access the KMS.

          Charles Lamb added a comment -

          Charles Lamb, we are saying the same thing. For any file creation in one needs to get EZKey from the directory marked as encryption zone. However to read an encrypted file FileEncryptionInfo has all the required information. All of this of course assumes that user has credentials to access KMS.

          Thanks for clarifying. Yes, we are in agreement. Sorry to belabor - just trying to be precise and make sure we are all on the same page.

          Sanjay Radia added a comment -

          Had a chat with Owen over the wehbhdfs issue and the solution I had proposed in comment . He said that restricting the client connections from user hdfs are not necessary: the DN does a doAs(user) . KMS is configured for hdfs to be proxy but it also blacklists hdfs (and other superusers). That is the DN as a proxy cannot get a key for hdfs but it can get the keys for other users. So this brings the httpfs and webhdfs solutions to be the same.

The above does not work: an admin can log in as "hdfs" and then pretend to be the NN/DN and use the proxy privilege to get DEKs from EDEKs (an admin can read EDEKs easily). (Alejandro - thanks for the explanation - I finally get the distinction between webhdfs and httpfs.)

          Suresh Srinivas added a comment - - edited

We had a conversation about the finer details of the feature and follow-up work items. Here are some of my comments and points we discussed in that meeting:

1. Need a consolidated document on what extended attributes are introduced, how they are used, and how they are restricted.
2. Need a way to turn off the encryption feature.
3. Need a description of how the KMS credential check and decryption work for Hadoop jobs. Not sure if there is a design document that covers delegation tokens from the KMS.
4. All files in an encryption zone must be encrypted. We need to clarify in the design whether encrypted files can only be in an encryption zone and are not allowed outside of one. There are pros and cons to this decision. It would be good to decide what to do in this regard and capture it in the design.
5. The sequence related to file creation in the design document mentions the namenode as a client. It may be confused with the DFS client. Also, it is a good idea to discuss why this was chosen as opposed to doing all the KMS interaction in the DFS client. Need a discussion of how the namenode handles an unresponsive KMS and how that affects service availability.
6. Is EDEK creation idempotent? How the related editlog operations are logged should be discussed in the design.
7. The KMS must be global across all the clusters in the current phase. This should be documented in the design as an assumption. In the future we could have multiple KMSs, but that can be enabled in a backward-compatible way.

Some of the actionable items above can be addressed after merging to trunk. I have +1'ed the merge vote.

          Sanjay Radia added a comment -

          We have made very good progress over the last few days. Thanks for taking the time for the offline technical discussions. Below is a summary of the concerns I have raised previously in this Jira.

1. Fix distcp and cp to automatically deal with EZs using /r/r (/.reserved/raw) internally. Initially we need to support only row 1 and row 4 in the table I attached in HADOOP-10919.
2. Fix webhdfs to use KMS delegation tokens so that webhdfs can be used with transparent encryption without giving the "hdfs" user KMS proxy permission (and, as a result, giving it to admins). REST is a key protocol for HDFS and for many Hadoop use cases; an admin should not have access to the keys of encrypted files.
3. Further work on specifying what HAR should do (I have listed some use cases and proposed solutions), and then follow it up with a fix to har.
4. Some work on understanding the availability and scalability of the KMS for medium to large clusters. Perhaps we need to explore getting the keys ahead of time when a job is submitted.

Let's complete items 1 and 2 promptly. Before we publish transparent encryption in a 2.x release for public consumption, let us at least complete item 1 (i.e. distcp and cp) and the flag to turn this feature on/off.

          Sanjay Radia added a comment -

Alejandro, wrt the subtle difference between webhdfs vs httpfs: can an admin grab the EDEKs and raw files, then log into the httpfs machine, become user "httpfs", and then trick the KMS into decrypting the keys because httpfs has the proxy setting?

          Andrew Wang added a comment -

          New consolidated patch attached. The merge vote has passed, so if this passes Jenkins, I'll merge the branch into trunk.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12662536/fs-encryption.2014-08-18.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 47 new or modified test files.

          -1 javac. The applied patch generated 1262 javac compiler warnings (more than the trunk's current 1259 warnings).

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          -1 findbugs. The patch appears to introduce 2 new Findbugs (version 2.0.3) warnings.

          -1 release audit. The applied patch generated 1 release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-tools/hadoop-distcp:

          org.apache.hadoop.ha.TestActiveStandbyElector
          org.apache.hadoop.ipc.TestDecayRpcScheduler
          org.apache.hadoop.ha.TestZKFailoverController
          org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat
          org.apache.hadoop.tools.TestOptionsParser
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7670//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7670//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7670//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
          Javac warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7670//artifact/trunk/patchprocess/diffJavacWarnings.txt
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7670//console

          This message is automatically generated.

          Charles Lamb added a comment -

          I have run all of the failed unit tests on my local machine and either observed them to pass or fail identically on a pre-patch sandbox. The one exception is TestOptionsParser which is a real failure. I will create a diff and patch for that.

          Charles Lamb added a comment -

          wrt the Release audit warnings, I believe this is ok:

          !????? /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          Lines that start with ????? in the release audit report indicate files that do not have an Apache license header.

          Charles Lamb added a comment -

          It looks like the 3 new javac warnings are these:

          > [WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java:[39,36] sun.nio.ch.DirectBuffer is Sun proprietary API and may be removed in a future release
          > [WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java:[40,20] sun.misc.Cleaner is Sun proprietary API and may be removed in a future release
          > [WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java:[41,22] sun.nio.ch.DirectBuffer is Sun proprietary API and may be removed in a future release
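For context, these warnings come from code that explicitly frees direct ByteBuffers through Sun-internal classes, so that crypto buffers are released promptly instead of waiting for garbage collection. A minimal sketch of the pattern that triggers the warnings (illustrative class and method names, not necessarily the exact CryptoStreamUtils code):

import java.nio.ByteBuffer;

import sun.misc.Cleaner;          // proprietary API -> javac warning
import sun.nio.ch.DirectBuffer;   // proprietary API -> javac warning

public class DirectBufferFree {

  // Free a direct buffer immediately rather than waiting for GC.
  public static void freeDirectBuffer(ByteBuffer buffer) {
    if (buffer instanceof DirectBuffer) {
      Cleaner cleaner = ((DirectBuffer) buffer).cleaner();
      if (cleaner != null) {
        cleaner.clean();
      }
    }
  }

  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.allocateDirect(8192);
    freeDirectBuffer(buf);
  }
}

Since these classes are not part of the supported JDK API, javac emits the "Sun proprietary API" warnings at compile time; the code itself compiles and runs on the JDK 7/8 toolchains used here.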

          Charles Lamb added a comment -

          Suresh Srinivas,

          Need a way to turn off encryption feature.

Do you mean a config switch to completely disable encryption in the NN? Wouldn't someone just not create an encryption zone to achieve the same thing?

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12662536/fs-encryption.2014-08-18.patch
          against trunk revision .

          -1 patch. The patch command could not apply the patch.

          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7678//console

          This message is automatically generated.

          Suresh Srinivas added a comment -

Do you mean a config switch to completely disable encryption in the NN? Wouldn't someone just not create an encryption zone to achieve the same thing?

Actually, you are right. We decided the same thing for snapshots: if no snapshottable directories are created (similar to encryption zones), the snapshot feature is not in use. So I think it is okay not to have a feature flag to turn the feature on or off. A related question: how do the CLIs introduced in this feature behave when no encryption zones are created?

          Andrew Wang added a comment -

New consolidated patch; Charles cleared out the Jenkins issues.

          Charles Lamb added a comment -

          Suresh,

There are only two CLI commands under 'hdfs crypto': createEncryptionZone and listEncryptionZones. If you don't use the former, then the latter will simply return without displaying anything.
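For completeness, the same two operations are also exposed programmatically via the HdfsAdmin client class (see HdfsAdmin.java in the merge below). A rough sketch, assuming the API names from the merged branch; the NameNode URI, zone path, and key name are placeholders:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.client.HdfsAdmin;
import org.apache.hadoop.hdfs.protocol.EncryptionZone;

public class EncryptionZoneExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode:8020"), conf);

    // Create an encryption zone rooted at /secure, backed by an existing
    // key named "mykey" in the configured KeyProvider.
    admin.createEncryptionZone(new Path("/secure"), "mykey");

    // List all encryption zones; if none have been created this simply
    // returns an empty iterator, matching the CLI behaviour above.
    RemoteIterator<EncryptionZone> zones = admin.listEncryptionZones();
    while (zones.hasNext()) {
      System.out.println(zones.next());
    }
  }
}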

          Andrew Wang added a comment -

          I've filed MAPREDUCE-6040 to address Sanjay's point about automatically using /.reserved/raw if running distcp as root.

I also filed HADOOP-10983 to address Suresh's point about auditing FS vs. KMS permissions; we basically just need a new KMS API to fetch the ACLs for a key, after which such an auditing tool could be built.

          Hopefully this will take care of the most pressing feedback. I would like to backport this to branch-2 relatively soon after the trunk merge to keep the diff down, since it is a fairly large patch. Of course, we will address the rest of the feedback before an actual branch-2 release.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12662944/fs-encryption.2014-08-19.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 47 new or modified test files.

          -1 javac. The applied patch generated 1262 javac compiler warnings (more than the trunk's current 1259 warnings).

          +1 javadoc. There were no new javadoc warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

          -1 release audit. The applied patch generated 1 release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-tools/hadoop-distcp:

          org.apache.hadoop.ha.TestActiveStandbyElector
          org.apache.hadoop.ha.TestZKFailoverControllerStress
          org.apache.hadoop.ha.TestZKFailoverController
          org.apache.hadoop.hdfs.tools.TestDFSAdminWithHA
          org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
          org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7686//testReport/
          Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7686//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
          Javac warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7686//artifact/trunk/patchprocess/diffJavacWarnings.txt
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7686//console

          This message is automatically generated.

          Andrew Wang added a comment -

          These tests all passed locally for me, so I'm going to proceed with the merge.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #6089 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6089/)
          HDFS-6134 and HADOOP-10150 subtasks. Merge fs-encryption branch to trunk. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619197)

          • /hadoop/common/trunk
          • /hadoop/common/trunk/BUILDING.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Decryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Encryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/JceAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileEncryptionInfo.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsForLocalFS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsNormal.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestOpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZone.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithId.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithIdIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/ExtendedAttributes.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/TransparentEncryption.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCryptoAdminCLI.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CLICommandCryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CryptoAdminCmdExecutor.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestXAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReservedRawPaths.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/crypto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCryptoConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testXAttrConf.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/conf
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BackupStore.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Merger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/LocalFetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/OnDiskMapOutput.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/DistCp.md.vm
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRIntermediateDataEncryption.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
          • /hadoop/common/trunk/hadoop-project-dist/pom.xml
          • /hadoop/common/trunk/hadoop-project/pom.xml
          • /hadoop/common/trunk/hadoop-project/src/site/site.xml
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithRawXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/DistCpTestUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
          Andrew Wang added a comment -

          I've merged this to trunk and moved out all unresolved subtasks to a new umbrella JIRA for tracking, HDFS-6891. Let's try to take further discussion there.

          Thanks again to everyone for their contributions to the development of this feature.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #6090 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6090/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619203)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #653 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/653/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619203)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
            HDFS-6134 and HADOOP-10150 subtasks. Merge fs-encryption branch to trunk. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619197)
          • /hadoop/common/trunk
          • /hadoop/common/trunk/BUILDING.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Decryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Encryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/JceAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileEncryptionInfo.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsForLocalFS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsNormal.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestOpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZone.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithId.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithIdIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/ExtendedAttributes.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/TransparentEncryption.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCryptoAdminCLI.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CLICommandCryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CryptoAdminCmdExecutor.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestXAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReservedRawPaths.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/crypto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCryptoConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testXAttrConf.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/conf
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BackupStore.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Merger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/LocalFetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/OnDiskMapOutput.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/DistCp.md.vm
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRIntermediateDataEncryption.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
          • /hadoop/common/trunk/hadoop-project-dist/pom.xml
          • /hadoop/common/trunk/hadoop-project/pom.xml
          • /hadoop/common/trunk/hadoop-project/src/site/site.xml
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithRawXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/DistCpTestUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
          Konstantin Shvachko added a comment -

          Guys, it looks like you ignored the javac compiler warnings from Jenkins. I see unused members and imports in FSNamesystem after the merge. Maybe you can fix this in one of the follow-up JIRAs rather than creating a separate one.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #1844 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1844/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619203)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
            HDFS-6134 and HADOOP-10150 subtasks. Merge fs-encryption branch to trunk. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619197)
          • /hadoop/common/trunk
          • /hadoop/common/trunk/BUILDING.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Decryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Encryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/JceAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileEncryptionInfo.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsForLocalFS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsNormal.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestOpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZone.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithId.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithIdIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/ExtendedAttributes.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/TransparentEncryption.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCryptoAdminCLI.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CLICommandCryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CryptoAdminCmdExecutor.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestXAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReservedRawPaths.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/crypto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCryptoConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testXAttrConf.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/conf
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BackupStore.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Merger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/LocalFetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/OnDiskMapOutput.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/DistCp.md.vm
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRIntermediateDataEncryption.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
          • /hadoop/common/trunk/hadoop-project-dist/pom.xml
          • /hadoop/common/trunk/hadoop-project/pom.xml
          • /hadoop/common/trunk/hadoop-project/src/site/site.xml
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithRawXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/DistCpTestUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1870 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1870/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619203)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
            HDFS-6134 and HADOOP-10150 subtasks. Merge fs-encryption branch to trunk. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619197)
          • /hadoop/common/trunk
          • /hadoop/common/trunk/BUILDING.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Decryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Encryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/JceAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileEncryptionInfo.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsForLocalFS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsNormal.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestOpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZone.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithId.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithIdIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/ExtendedAttributes.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/TransparentEncryption.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCryptoAdminCLI.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CLICommandCryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CryptoAdminCmdExecutor.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestXAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReservedRawPaths.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/crypto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCryptoConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testXAttrConf.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/conf
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BackupStore.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Merger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/LocalFetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/OnDiskMapOutput.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/DistCp.md.vm
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRIntermediateDataEncryption.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
          • /hadoop/common/trunk/hadoop-project-dist/pom.xml
          • /hadoop/common/trunk/hadoop-project/pom.xml
          • /hadoop/common/trunk/hadoop-project/src/site/site.xml
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithRawXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/DistCpTestUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
          Charles Lamb added a comment -

          Konstantin Shvachko,

          Guys, it looks like you ignored the javac compiler warnings from Jenkins. I see unused members and imports in FSNamesystem after the merge. Maybe you can fix it in some follow-up JIRA, rather than creating a separate one.

          I've created HDFS-6938 to cover this issue. Thanks for catching this.

          Alejandro Abdelnur added a comment -

          Merged to branch-2.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #663 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/663/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (tucu: rev d9a7404c389ea1adffe9c13f7178b54678577b56)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #1854 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1854/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (tucu: rev d9a7404c389ea1adffe9c13f7178b54678577b56)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-mapreduce-project/CHANGES.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1880 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1880/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (tucu: rev d9a7404c389ea1adffe9c13f7178b54678577b56)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-mapreduce-project/CHANGES.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #6163 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6163/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (arp: rev 2ca93d1fbf0fdcd6b4b5a151261052ac106ac9e1)

          • hadoop-mapreduce-project/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/698/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (arp: rev 2ca93d1fbf0fdcd6b4b5a151261052ac106ac9e1)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (arp: rev 2ca93d1fbf0fdcd6b4b5a151261052ac106ac9e1)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (arp: rev 2ca93d1fbf0fdcd6b4b5a151261052ac106ac9e1)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-mapreduce-project/CHANGES.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt

            People

            • Assignee: Charles Lamb
            • Reporter: Alejandro Abdelnur
            • Votes: 2
            • Watchers: 58
