Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0
    • Fix Version/s: 2.6.0
    • Component/s: security
    • Labels:

      Description

      There is an increasing need to secure data when Hadoop customers use upper-layer applications such as MapReduce, Hive, Pig and HBase.

      HADOOP CFS (HADOOP Cryptographic File System) secures data by using HADOOP’s “FilterFileSystem” to decorate DFS or other file systems, and is transparent to upper-layer applications. It is configurable, scalable and fast.

      High level requirements:
      1. Transparent to upper-layer applications; no modification of them is required.
      2. “Seek” and “PositionedReadable” are supported on CFS input streams if the wrapped file system supports them.
      3. Very high encryption and decryption performance, so that crypto does not become a bottleneck.
      4. Can decorate HDFS and all other Hadoop file systems without modifying the wrapped file system’s existing structure (for example, the namenode and datanode structures when the wrapped file system is HDFS).
      5. Admins can configure encryption policies, such as which directories are encrypted.
      6. A robust key management framework.
      7. Pread and append operations are supported if the wrapped file system supports them.
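
      A minimal sketch of the decoration idea (requirements 1, 2 and 4 above), assuming the CryptoCodec and CryptoFSDataInputStream/CryptoFSDataOutputStream classes introduced by this work; the key/IV lookup is a hypothetical stub standing in for the key management framework:

{code:java}
// Hypothetical sketch only: a FilterFileSystem-style decorator that
// encrypts on write and decrypts on read. CryptoCodec and the
// CryptoFSData{Input,Output}Stream wrappers come from this issue's
// fs-encryption work; keyFor/ivFor below are placeholder stubs.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.CryptoCodec;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.crypto.CryptoFSDataInputStream;
import org.apache.hadoop.fs.crypto.CryptoFSDataOutputStream;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.util.Progressable;

public class CryptoFilterFileSystem extends FilterFileSystem {
  private final CryptoCodec codec;

  public CryptoFilterFileSystem(FileSystem wrapped, Configuration conf) {
    super(wrapped);
    this.codec = CryptoCodec.getInstance(conf); // e.g. an AES-CTR codec
  }

  @Override
  public FSDataInputStream open(Path f, int bufferSize) throws IOException {
    // Seek/PositionedReadable still work (requirement 2): with a CTR
    // codec the stream recomputes the counter from the byte offset.
    return new CryptoFSDataInputStream(fs.open(f, bufferSize), codec,
        keyFor(f), ivFor(f));
  }

  @Override
  public FSDataOutputStream create(Path f, FsPermission permission,
      boolean overwrite, int bufferSize, short replication, long blockSize,
      Progressable progress) throws IOException {
    FSDataOutputStream raw = fs.create(f, permission, overwrite, bufferSize,
        replication, blockSize, progress);
    return new CryptoFSDataOutputStream(raw, codec, keyFor(f), ivFor(f));
  }

  // Placeholders: a real deployment would resolve per-file keys and IVs
  // from the key management framework and the admin's encryption policy.
  private byte[] keyFor(Path f) { return new byte[16]; }
  private byte[] ivFor(Path f)  { return new byte[16]; }
}
{code}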

      1. cfs.patch
        104 kB
        Yi Liu
      2. CryptographicFileSystem.patch
        287 kB
        Yi Liu
      3. extended information based on INode feature.patch
        128 kB
        Yi Liu
      4. HADOOP cryptographic file system.pdf
        561 kB
        Yi Liu
      5. HADOOP cryptographic file system-V2.docx
        103 kB
        Yi Liu
      6. HDFSDataAtRestEncryptionAlternatives.pdf
        321 kB
        Alejandro Abdelnur
      7. HDFSDataatRestEncryptionAttackVectors.pdf
        131 kB
        Alejandro Abdelnur
      8. HDFSDataatRestEncryptionProposal.pdf
        219 kB
        Alejandro Abdelnur

        Issue Links

        1. Crypto input and output streams implementing Hadoop stream interfaces (Sub-task, Resolved, Yi Liu)
        2. Tests for Crypto input and output streams using fake streams implementing Hadoop streams interfaces (Sub-task, Resolved, Yi Liu)
        3. Javadoc and few code style improvement for Crypto input and output streams (Sub-task, Resolved, Yi Liu)
        4. Minor improvements to Crypto input and output streams (Sub-task, Closed, Yi Liu)
        5. Add a method to CryptoCodec to generate SRNs for IV (Sub-task, Closed, Yi Liu)
        6. Add a new constructor for CryptoInputStream that receives current position of wrapped stream (Sub-task, Resolved, Yi Liu)
        7. NullPointerException in CryptoInputStream while wrapped stream is not ByteBufferReadable. Add tests using normal stream (Sub-task, Resolved, Yi Liu)
        8. Implementation of AES-CTR CryptoCodec using JNI to OpenSSL (Sub-task, Resolved, Yi Liu)
        9. Refactor CryptoCodec#generateSecureRandom to take a byte[] (Sub-task, Resolved, Andrew Wang)
        10. Implement high-performance secure random number sources (Sub-task, Resolved, Yi Liu)
        11. Fall back AesCtrCryptoCodec implementation from OpenSSL to JCE if non native support (Sub-task, Resolved, Yi Liu)
        12. UnsatisfiedLinkError in cryptocodec tests with OpensslCipher#initContext (Sub-task, Resolved, Uma Maheswara Rao G)
        13. Update OpensslCipher#getInstance to accept CipherSuite#name format (Sub-task, Resolved, Yi Liu)
        14. Refactor get instance of CryptoCodec and support create via algorithm/mode/padding (Sub-task, Resolved, Yi Liu)
        15. Failed to load OpenSSL cipher error logs on systems with old openssl versions (Sub-task, Resolved, Colin Patrick McCabe)
        16. incorrect prototype in OpensslSecureRandom.c (Sub-task, Resolved, Colin Patrick McCabe)
        17. CryptoCodec#getCodecclasses throws NPE when configurations not loaded (Sub-task, Closed, Uma Maheswara Rao G)

          Activity

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1914 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1914/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (arp: rev 2ca93d1fbf0fdcd6b4b5a151261052ac106ac9e1)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #1889 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1889/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (arp: rev 2ca93d1fbf0fdcd6b4b5a151261052ac106ac9e1)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #698 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/698/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (arp: rev 2ca93d1fbf0fdcd6b4b5a151261052ac106ac9e1)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-mapreduce-project/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #6163 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6163/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (arp: rev 2ca93d1fbf0fdcd6b4b5a151261052ac106ac9e1)

          • hadoop-mapreduce-project/CHANGES.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          Terence Spies added a comment -

          After looking at the encryption proposal, I’m very concerned about the security of the chosen mechanism. As I understand it, the idea is to use AES in counter mode to encrypt the data (which offers no integrity protection) and rely on the existing CRC32 checksums to detect data tampering. The problem here is that the CRC32 checksums are unkeyed, and are quite easy to defeat by an active attacker. The net result is that the attacker, even though they cannot know the value of individual bits, can trivially flip the value of any bit they desire in the file. This may be detected by the CRC32 checksum, but it’s not difficult to defeat this checksum mechanism by making trial bit flips to compensate.
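
          To make the malleability concrete, here is a minimal sketch using plain JCE (not the Hadoop codec); the key, IV and message are throwaway demo values:

{code:java}
// In CTR mode, flipping bit i of the ciphertext flips exactly bit i
// of the decrypted plaintext, with no other corruption.
import java.nio.charset.StandardCharsets;

import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CtrBitFlip {
  public static void main(String[] args) throws Exception {
    byte[] key = new byte[16]; // demo only
    byte[] iv = new byte[16];  // demo only

    Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
    enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
        new IvParameterSpec(iv));
    byte[] ct = enc.doFinal(
        "pay $100 to mallory".getBytes(StandardCharsets.UTF_8));

    ct[4] ^= 0x01; // attacker flips one ciphertext bit, blind to content

    Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
    dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
        new IvParameterSpec(iv));
    // Prints "pay %100 to mallory": '$' (0x24) became '%' (0x25).
    System.out.println(new String(dec.doFinal(ct), StandardCharsets.UTF_8));
  }
}
{code}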

          At a bare minimum, the checksum mechanism should be replaced with an HMAC/CMAC-based keyed checksum mechanism. (The one thing I could not find is whether the checksum file is encrypted; if it isn’t, the plaintext checksums leak information about the underlying plaintext. HMAC/CMACs would prevent that from happening.)
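
          A minimal sketch of such a keyed checksum, assuming HMAC-SHA256 and an illustrative per-chunk layout:

{code:java}
// Unlike CRC32, an attacker without the MAC key cannot recompute a
// valid tag after flipping bits. Chunking/keying policy is illustrative.
import java.security.MessageDigest;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class KeyedChecksum {
  private final byte[] macKey; // should be distinct from the encryption key

  public KeyedChecksum(byte[] macKey) {
    this.macKey = macKey;
  }

  public byte[] tag(byte[] chunk) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(macKey, "HmacSHA256"));
    return mac.doFinal(chunk); // 32-byte tag stored in place of the CRC32
  }

  public boolean verify(byte[] chunk, byte[] expected) throws Exception {
    // Constant-time comparison, so timing does not leak the match length.
    return MessageDigest.isEqual(tag(chunk), expected);
  }
}
{code}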

          In general, not using an authenticated encryption mode is pretty dangerous here. Separate MACs enable detection of tampering, but the mechanism needs to be carefully implemented to prevent attackers from using error or timing information to insert tampered data into files.

          I understand the desire to keep the file seekable, and also to not change the size of the underlying file. My suggestion would be to stay with a mode that encrypts and decrypts as a block cipher, but keep the block size small enough that you can seek with a smallish buffer. In terms of file size, most of the block modes support ciphertext stealing, which allows the ciphertext length to match whatever byte size is required.

          One suggestion would be to look at a mode like OCB, which gives an authenticated mode with very little overhead, and also supports associated data. The associated data feature would enable the block position and IV to be incorporated, giving seekability. As an example, if the file were encrypted with a 128-byte block size, the associated data (similar to the CTR mode index) would be the position of the block within the file and the IV for the file.
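
          Since OCB is not available in the stock JCE, here is a sketch of the same idea using AES-GCM as a stand-in AEAD mode, binding the block position and file IV in as associated data (all names and the nonce scheme are assumptions):

{code:java}
// Each fixed-size block is bound to its position and the file IV, so a
// block cannot be moved, replayed or tampered with undetected.
import java.nio.ByteBuffer;

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class AeadBlockSketch {
  static byte[] encryptBlock(byte[] key, byte[] fileIv, long blockIndex,
      byte[] plaintextBlock) throws Exception {
    // Per-block nonce derived from the block index; a real scheme must
    // also guarantee nonce uniqueness across files sharing a key.
    byte[] nonce = new byte[12];
    ByteBuffer.wrap(nonce, 4, 8).putLong(blockIndex);

    Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
    c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"),
        new GCMParameterSpec(128, nonce));
    c.updateAAD(fileIv); // associated data: the file IV...
    c.updateAAD(ByteBuffer.allocate(8).putLong(blockIndex).array()); // ...and position

    // Output is the ciphertext plus a 16-byte tag; the tag would live in
    // block metadata so the data file itself keeps its size.
    return c.doFinal(plaintextBlock);
  }
}
{code}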

          This would also have the upside of producing an authentication tag for each block, which could at some point be added to some metadata to give cryptographic integrity. Note that we would also want to turn off the CRC32 checksums, as they would leak data about the underlying plaintext.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Mapreduce-trunk #1880 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1880/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (tucu: rev d9a7404c389ea1adffe9c13f7178b54678577b56)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-mapreduce-project/CHANGES.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #1854 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1854/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (tucu: rev d9a7404c389ea1adffe9c13f7178b54678577b56)

          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-mapreduce-project/CHANGES.txt
          • hadoop-common-project/hadoop-common/CHANGES.txt
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #663 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/663/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following merge to branch-2 (tucu: rev d9a7404c389ea1adffe9c13f7178b54678577b56)

          • hadoop-common-project/hadoop-common/CHANGES.txt
          • hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • hadoop-mapreduce-project/CHANGES.txt
          Alejandro Abdelnur added a comment -

          merged to branch-2.

          Hudson added a comment -

          SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1870 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1870/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619203)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
            HDFS-6134 and HADOOP-10150 subtasks. Merge fs-encryption branch to trunk. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619197)
          • /hadoop/common/trunk
          • /hadoop/common/trunk/BUILDING.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Decryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Encryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/JceAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileEncryptionInfo.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsForLocalFS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsNormal.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestOpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZone.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithId.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithIdIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/ExtendedAttributes.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/TransparentEncryption.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCryptoAdminCLI.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CLICommandCryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CryptoAdminCmdExecutor.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestXAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReservedRawPaths.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/crypto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCryptoConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testXAttrConf.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/conf
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BackupStore.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Merger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/LocalFetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/OnDiskMapOutput.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/DistCp.md.vm
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRIntermediateDataEncryption.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
          • /hadoop/common/trunk/hadoop-project-dist/pom.xml
          • /hadoop/common/trunk/hadoop-project/pom.xml
          • /hadoop/common/trunk/hadoop-project/src/site/site.xml
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithRawXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/DistCpTestUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Hdfs-trunk #1844 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1844/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619203)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
            HDFS-6134 and HADOOP-10150 subtasks. Merge fs-encryption branch to trunk. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619197)
          • /hadoop/common/trunk
          • /hadoop/common/trunk/BUILDING.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Decryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Encryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/JceAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileEncryptionInfo.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsForLocalFS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsNormal.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestOpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZone.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithId.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithIdIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/ExtendedAttributes.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/TransparentEncryption.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCryptoAdminCLI.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CLICommandCryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CryptoAdminCmdExecutor.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestXAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReservedRawPaths.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/crypto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCryptoConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testXAttrConf.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/conf
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BackupStore.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Merger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/LocalFetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/OnDiskMapOutput.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/DistCp.md.vm
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRIntermediateDataEncryption.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
          • /hadoop/common/trunk/hadoop-project-dist/pom.xml
          • /hadoop/common/trunk/hadoop-project/pom.xml
          • /hadoop/common/trunk/hadoop-project/src/site/site.xml
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithRawXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/DistCpTestUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
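
For readers skimming the change list above: the merged branch makes encryption transparent to clients, so ordinary FileSystem calls need no modification. The following is a minimal illustrative sketch, not part of the committed patches; it assumes a running cluster with a KMS provider configured and an encryption key named "key1" already created, and the paths and class name are hypothetical.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class EncryptionZoneExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Admin side: mark an empty directory as an encryption zone
    // keyed by the (assumed pre-created) KMS key "key1".
    HdfsAdmin admin = new HdfsAdmin(FileSystem.getDefaultUri(conf), conf);
    Path zone = new Path("/secure");
    fs.mkdirs(zone);
    admin.createEncryptionZone(zone, "key1");

    // Client side: plain create/open calls; the crypto streams wrap the
    // underlying streams, so encryption and decryption are transparent.
    Path file = new Path(zone, "data.txt");
    try (FSDataOutputStream out = fs.create(file)) {
      out.writeUTF("stored encrypted at rest");
    }
    try (FSDataInputStream in = fs.open(file)) {
      System.out.println(in.readUTF());
    }
  }
}
{code}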
          Hudson added a comment -

          FAILURE: Integrated in Hadoop-Yarn-trunk #653 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/653/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619203)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
            HDFS-6134 and HADOOP-10150 subtasks. Merge fs-encryption branch to trunk. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619197)
          • /hadoop/common/trunk
          • /hadoop/common/trunk/BUILDING.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Decryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Encryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/JceAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileEncryptionInfo.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsForLocalFS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsNormal.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestOpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZone.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithId.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithIdIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/ExtendedAttributes.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/TransparentEncryption.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCryptoAdminCLI.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CLICommandCryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CryptoAdminCmdExecutor.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestXAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReservedRawPaths.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/crypto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCryptoConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testXAttrConf.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/conf
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BackupStore.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Merger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/LocalFetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/OnDiskMapOutput.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/DistCp.md.vm
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRIntermediateDataEncryption.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
          • /hadoop/common/trunk/hadoop-project-dist/pom.xml
          • /hadoop/common/trunk/hadoop-project/pom.xml
          • /hadoop/common/trunk/hadoop-project/src/site/site.xml
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithRawXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/DistCpTestUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
          Yi Liu added a comment -

          Thanks Andrew Wang.

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #6090 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6090/)
          Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619203)

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          Andrew Wang added a comment -

          I've committed this to trunk as part of merging fs-encryption. Thanks for all the work from all contributors here, especially Yi Liu!

          Hudson added a comment -

          FAILURE: Integrated in Hadoop-trunk-Commit #6089 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6089/)
          HDFS-6134 and HADOOP-10150 subtasks. Merge fs-encryption branch to trunk. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619197)

          • /hadoop/common/trunk
          • /hadoop/common/trunk/BUILDING.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Decryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Encryptor.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/JceAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStream.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileEncryptionInfo.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsForLocalFS.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsNormal.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestOpensslCipher.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/random
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeCodeLoader.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZone.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithId.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithIdIterator.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/ExtendedAttributes.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/TransparentEncryption.apt.vm
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCryptoAdminCLI.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CLICommandCryptoAdmin.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CryptoAdminCmdExecutor.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestXAttr.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReservedRawPaths.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/crypto
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCryptoConf.xml
          • /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testXAttrConf.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
          • /hadoop/common/trunk/hadoop-mapreduce-project/conf
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BackupStore.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Merger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/LocalFetcher.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/OnDiskMapOutput.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/DistCp.md.vm
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFile.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRIntermediateDataEncryption.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
          • /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
          • /hadoop/common/trunk/hadoop-project-dist/pom.xml
          • /hadoop/common/trunk/hadoop-project/pom.xml
          • /hadoop/common/trunk/hadoop-project/src/site/site.xml
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithRawXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithXAttrs.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/DistCpTestUtils.java
          • /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
          Show
          Hudson added a comment - FAILURE: Integrated in Hadoop-trunk-Commit #6089 (See https://builds.apache.org/job/Hadoop-trunk-Commit/6089/ ) HDFS-6134 and HADOOP-10150 subtasks. Merge fs-encryption branch to trunk. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1619197 ) /hadoop/common/trunk /hadoop/common/trunk/BUILDING.txt /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES-fs-encryption.txt /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/CMakeLists.txt /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/AesCtrCryptoCodec.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoCodec.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoOutputStream.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoStreamUtils.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Decryptor.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/Encryptor.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/JceAesCtrCryptoCodec.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslAesCtrCryptoCodec.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/OpensslCipher.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/random /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStream.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileEncryptionInfo.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/crypto /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandWithDestination.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CopyCommands.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeCodeLoader.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCodeLoader.c 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/FileSystemShell.apt.vm /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/core /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoCodec.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsForLocalFS.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsNormal.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestOpensslCipher.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/random /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeCodeLoader.java /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/testConf.xml /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES-fs-encryption.txt /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/Hdfs.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/fs/XAttr.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/UnknownCipherSuiteException.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZone.java 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneIterator.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithId.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/EncryptionZoneWithIdIterator.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsLocatedFileStatus.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlocks.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshottableDirectoryStatus.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionFaultInjector.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RetryStartFileException.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/XAttrPermissionFilter.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CryptoAdmin.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/ClientNamenodeProtocol.proto /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/encryption.proto /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/xattr.proto /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/ExtendedAttributes.apt.vm
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/apt/TransparentEncryption.apt.vm
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/TestCryptoAdminCLI.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CLICommandCryptoAdmin.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/cli/util/CryptoAdminCmdExecutor.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestXAttr.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSClientRetries.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUtil.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileCreation.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestLease.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReservedRawPaths.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/crypto
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSXAttrBaseTest.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAddBlockRetry.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNamenodeRetryCache.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestRetryCacheWithHA.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestJsonUtil.java
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testCryptoConf.xml
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testXAttrConf.xml
/hadoop/common/trunk/hadoop-mapreduce-project
/hadoop/common/trunk/hadoop-mapreduce-project/CHANGES-fs-encryption.txt
/hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt
/hadoop/common/trunk/hadoop-mapreduce-project/conf
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/BackupStore.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/MapTask.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Merger.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmitter.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/Fetcher.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/LocalFetcher.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/MergeManagerImpl.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/reduce/OnDiskMapOutput.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown/DistCp.md.vm
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestIFile.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMRIntermediateDataEncryption.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestReduceTask.java
/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/pipes/TestPipeApplication.java
/hadoop/common/trunk/hadoop-project-dist/pom.xml
/hadoop/common/trunk/hadoop-project/pom.xml
/hadoop/common/trunk/hadoop-project/src/site/site.xml
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithRawXAttrs.java
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDistCpWithXAttrs.java
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestOptionsParser.java
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/DistCpTestUtils.java
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
          Yi Liu added a comment -

          Thanks Alejandro Abdelnur for creating these sub-tasks, let's use them.

          Alejandro Abdelnur added a comment -

Yi Liu, I've created sub-tasks 7, 8 & 9. They somewhat duplicate existing ones; would you please do a cleanup pass and keep the ones that make sense based on the current proposal?

          Alejandro Abdelnur added a comment -

          [cross-posting with HDFS-6134]

          Reopening HDFS-6134

After some offline discussions with Yi, Tianyou, ATM, Todd, Andrew and Charles, we think it makes more sense to implement encryption for HDFS directly in the DistributedFileSystem client, and to use CryptoFileSystem to support encryption for FileSystems that don't support native encryption.

          The reasons for this change of course are:

• If we want to add support for HDFS transparent compression, the compression should be done before the encryption (encrypted data has high entropy and compresses poorly). If compression is to be handled by the HDFS DistributedFileSystem, then the encryption has to be handled afterwards (in the write path).
• The proposed CryptoSupport abstraction significantly complicates the implementation of CryptoFileSystem and the wiring in the HDFS FileSystem client.
• Building it directly into the HDFS FileSystem client may allow us to avoid an extra copy of data.

          Because of this, the idea is now:

• A common set of Crypto Input/Output streams. They would be used by CryptoFileSystem, HDFS encryption, and MapReduce intermediate data and spills. Note we cannot use the JDK Cipher Input/Output streams directly because we need to support the additional interfaces that the Hadoop FileSystem streams implement (Seekable, PositionedReadable, ByteBufferReadable, HasFileDescriptor, CanSetDropBehind, CanSetReadahead, HasEnhancedByteBufferAccess, Syncable).
• CryptoFileSystem. To support encryption in arbitrary FileSystems.
• HDFS client encryption. To support transparent HDFS encryption.

Both CryptoFileSystem and the HDFS client encryption implementation would be built using the Crypto Input/Output streams, xAttributes and the KeyProvider API.
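For illustration, here is a minimal sketch of such a crypto input stream. Seekable, PositionedReadable and FSDataInputStream are the real Hadoop types; the class itself and the decryption hooks (shown as comments) are hypothetical, assuming an AES-CTR cipher whose counter can be repositioned from any byte offset.

{code:java}
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.PositionedReadable;
import org.apache.hadoop.fs.Seekable;

// Sketch only: a decrypting stream that preserves the Hadoop stream interfaces
// by delegating positioning to the wrapped (encrypted) stream.
public class SketchCryptoInputStream extends InputStream
    implements Seekable, PositionedReadable {

  private final FSDataInputStream in; // wrapped encrypted stream

  public SketchCryptoInputStream(FSDataInputStream in) {
    this.in = in;
  }

  @Override
  public int read() throws IOException {
    int b = in.read();
    // XOR b with the CTR keystream byte for offset (getPos() - 1) -- omitted
    return b;
  }

  @Override
  public void seek(long pos) throws IOException {
    in.seek(pos);
    // re-position the cipher: counter = pos / 16, skip pos % 16 keystream bytes
  }

  @Override
  public long getPos() throws IOException {
    return in.getPos();
  }

  @Override
  public boolean seekToNewSource(long targetPos) throws IOException {
    return in.seekToNewSource(targetPos);
  }

  @Override
  public int read(long position, byte[] buffer, int offset, int length)
      throws IOException {
    int n = in.read(position, buffer, offset, length);
    // decrypt buffer[offset .. offset + n) with a cipher positioned at 'position'
    return n;
  }

  @Override
  public void readFully(long position, byte[] buffer, int offset, int length)
      throws IOException {
    in.readFully(position, buffer, offset, length);
    // decrypt buffer[offset .. offset + length) with a cipher positioned at 'position'
  }

  @Override
  public void readFully(long position, byte[] buffer) throws IOException {
    readFully(position, buffer, 0, buffer.length);
  }
}
{code}

Because CTR decryption is position-independent, seek and pread only need to delegate to the wrapped stream and re-derive the counter; no read-ahead or re-encryption is required.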

          Yi Liu added a comment -

          Steve, thank you for the comments.

About blobstores, I remember you brought this up before; it's very good and can inspire us. A few shortcomings are: 1) it's a third-party, standalone service, which increases deployment and management effort; 2) authentication/authorization issues, for example integration and management; 3) it relies on the maturity of the blobstore. I'm not saying it's not good; just that, compared with xattrs, the latter has more merits.

          generates a per-file key, encrypts it in the public key of users and admins.

Agreed, having two layers of keys is necessary, for example to make key rotation convenient.
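To make the rotation argument concrete, here is a hedged two-tier ("envelope") key sketch using only standard JCE calls; the class and key names are illustrative, not part of any patch here. Rotating the master key only re-wraps the small data key; the file data is never touched.

{code:java}
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class EnvelopeKeySketch {
  public static void main(String[] args) throws Exception {
    KeyGenerator kg = KeyGenerator.getInstance("AES");
    kg.init(128);
    SecretKey masterKeyV1 = kg.generateKey(); // lives in the KMS
    SecretKey dataKey = kg.generateKey();     // encrypts the actual file data

    // Wrap the data key with the master key (the wrapped bytes are what
    // would be persisted, e.g. in an xattr).
    Cipher cipher = Cipher.getInstance("AESWrap");
    cipher.init(Cipher.WRAP_MODE, masterKeyV1);
    byte[] wrappedDataKey = cipher.wrap(dataKey);

    // Key rotation: unwrap with the old master key, re-wrap with the new one.
    SecretKey masterKeyV2 = kg.generateKey();
    cipher.init(Cipher.UNWRAP_MODE, masterKeyV1);
    SecretKey recovered =
        (SecretKey) cipher.unwrap(wrappedDataKey, "AES", Cipher.SECRET_KEY);
    cipher.init(Cipher.WRAP_MODE, masterKeyV2);
    byte[] rewrapped = cipher.wrap(recovered); // file data untouched
    System.out.println("re-wrapped key is " + rewrapped.length + " bytes");
  }
}
{code}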

          It'd be good if the mechanism to store/retrieve keys worked with all the filesystems -even if they didn't have full xattr support. Maybe this could be done if the design supported a very simple adapter for each FS, which only handled the read/write of crypto keys

Agreed, supporting all filesystems is one of the targets, and decoupling is a basic rule of programming.
Xattrs are widely supported across different OSes/filesystems. If some underlying file system doesn't have full xattr support, we can fall back in different ways while keeping xattr-style interfaces; only the implementation differs, and one option is to back it with a blobstore. This guarantees that CFS works efficiently, and is easy to manage, on top of most filesystems that support xattrs. A sketch of such an adapter follows.
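Here is a hedged sketch of that per-FS adapter idea; the interface and attribute name are hypothetical, while FileSystem.setXAttr/getXAttr are the real HDFS-2006 APIs.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// The "very simple adapter": each FileSystem only needs to implement
// read/write of the crypto metadata, however it chooses to store it.
interface CryptoMetadataAdapter {
  void writeCryptoMetadata(Path file, byte[] metadata) throws IOException;
  byte[] readCryptoMetadata(Path file) throws IOException;
}

// Default adapter for filesystems with xattr support; a blobstore-backed
// adapter could implement the same interface for filesystems without it.
class XAttrAdapter implements CryptoMetadataAdapter {
  private static final String XATTR_NAME = "user.cfs.crypto"; // hypothetical
  private final FileSystem fs;

  XAttrAdapter(FileSystem fs) {
    this.fs = fs;
  }

  @Override
  public void writeCryptoMetadata(Path file, byte[] metadata) throws IOException {
    fs.setXAttr(file, XATTR_NAME, metadata);
  }

  @Override
  public byte[] readCryptoMetadata(Path file) throws IOException {
    return fs.getXAttr(file, XATTR_NAME);
  }
}
{code}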

          Yi Liu added a comment -

          Owen, thanks a lot for the comments and ideas. Thanks Andrew for the explanation too.

          We have two metadata items that we need for each file:

          the key name and version
          the iv
          Note that the current patches only store the iv, but we really need to store the key name and version. The version is absolutely critical because if you roll a new key version you don't want to re-write all of the current data.

Right, I agree. It's also included in the latest doc posted by Alejandro Abdelnur.

          It seems to me there are three reasonable places to store the small amount of metadata:
          at the beginning of the file
          in a side file
          encoded using a filename mangling scheme

• At the beginning of the file: as you said, it has some weaknesses. We did use this approach in the earliest patch, and you also commented that it was not good enough.
• A side file: it does double the amount of traffic and storage.
• Encoded using a filename mangling scheme: you brought out this idea in a previous comment, and I did think about it carefully and tried it, but found a few issues. The transformation only works one way: we can create a crypto file and encrypt easily, since we know the IV/key and can encode them into the file name; but there is a problem while decrypting. When an upper-layer application reads a transparently encrypted file, it's hard for the crypto file system to map the requested name to the encoded file name, since the crypto file system doesn't know the IV/key. One possible way is to iterate over the directory to find candidate file names, but that is inefficient and the mapping is not accurate enough. Furthermore, the mangled file name is longer than, and different from, the original one, which may not work well in some cases.

As Andrew explained, we will use the filesystem extended attributes feature (HDFS-2006), which is a common feature in traditional OSes/filesystems and is well suited to storing extended information about a file/directory, especially short attributes like this small amount of crypto metadata.
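As a hedged illustration of what that per-file metadata could look like in an xattr (attribute name and byte layout are hypothetical; setXAttr/getXAttr are the real HDFS-2006 APIs):

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CryptoXAttrSketch {
  private static final String XATTR = "user.crypto.meta"; // hypothetical name

  static void writeMeta(FileSystem fs, Path file, String keyName,
      int keyVersion, byte[] iv) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bos);
    out.writeUTF(keyName);    // which key encrypts this file
    out.writeInt(keyVersion); // critical for key rolling: old data stays readable
    out.writeInt(iv.length);
    out.write(iv);
    fs.setXAttr(file, XATTR, bos.toByteArray());
  }

  static byte[] readMeta(FileSystem fs, Path file) throws IOException {
    return fs.getXAttr(file, XATTR); // parse with the mirror-image DataInputStream
  }
}
{code}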

          Steve Loughran added a comment -

the blobstores normally support some form of metadata, which could be used for this data, as do things like NTFS and HFS+. Indeed, this is how NTFS encryption works: it generates a per-file key, encrypts it with the public key of each user and admin, and attaches them all as independent metadata entries.

          It'd be good if the mechanism to store/retrieve keys worked with all the filesystems -even if they didn't have full xattr support. Maybe this could be done if the design supported a very simple adapter for each FS, which only handled the read/write of crypto keys

          Andrew Wang added a comment -

          Hey Owen,

          I think the plan here is to use xattrs to store this additional data. Is that satisfactory? This means it wouldn't be a pure wrapper since it'd require the underlying filesystem to implement xattrs (HDFS-2006 is linked as "requires"). The upside is that the design is nicer, and we can do tighter integration with HDFS.

          Owen O'Malley added a comment -

          I've been working through this. We have two metadata items that we need for each file:

• the key name and version
• the iv

Note that the current patches only store the iv, but we really need to store the key name and version. The version is absolutely critical because if you roll a new key version you don't want to re-write all of the current data.

          It seems to me there are three reasonable places to store the small amount of metadata:

          • at the beginning of the file
          • in a side file
          • encoded using a filename mangling scheme

          The beginning of the file creates trouble because it throws off the block calculations that are done by mapreduce. (In other words, if we slide all of the data down by 1k, then each input split will always cross HDFS block boundaries.) On the other hand, it doesn't add any load to the namenode and will always be consistent with the file.

          A side file doesn't change the offsets into the file, but does double the amount of traffic and storage required on the namenode.

          Doing name mangling means the underlying HDFS file names are more complicated, but it doesn't mess with either the file offsets or increase the load on the namenode.

          I think we should do the name mangling. What do others think?

          Andrew Purtell added a comment -

          there's one more layer to consider: virtualized hadoop clusters.

An interesting paper on this topic is http://eprint.iacr.org/2014/248.pdf, which discusses side-channel attacks on AES on Xen and VMware platforms. JCE ciphers were not included in the analysis but should be considered suspect until proven otherwise. JRE >= 8 will accelerate AES using AES-NI instructions. Since AES-NI performs each full round of AES in hardware registers, all known side-channel attacks are prevented.

          Alejandro Abdelnur added a comment -

          Steve, that's good, we should make sure that ends up in the final documentation as well.

          Steve Loughran added a comment -

          I like the document on attack vectors, including that on hardware and networking.

          If we're going down to that level, there's one more layer to consider: virtualized hadoop clusters.

1. memory could be swapped out by the host OS, even if the guest is configured not to swap
2. pagefile secrets could be preserved after VM destruction
3. disks may not be wiped

          Fixes

          1. Don't give a transient cluster access to keys needed to decrypt persistent data other than that needed by specific jobs
          2. explore with your virtualization/cloud service provider what their VM and virtual disk security policies are: when do the virtual disks get wiped, and how rigorously.

          Other things to worry about

1. Malicious DNs joining the cluster. Again, it's hard to block this in a cloud, as hostnames aren't known in advance (so you can't have them on the included-hosts list). Fix: use a VPN rather than any datacentre-wide network.
2. Fundamental security holes in core dependency libraries (OS and JVM layer). Keep your machines up to date, and have mechanisms for renewing and revoking certificates, ...
          Alejandro Abdelnur added a comment -

          [Cross-posting with HDFS-6134, closed HDFS-6134 as duplicate, discussion to continue here]

          Larry, Steve, Nicholas, thanks for your comments.

Todd Lipcon and I had an offline discussion with Andrew Purtell, Yi Liu and Avik Dey to see if we could combine HADOOP-10150 and HDFS-6134 into one proposal supporting both encryption for multiple filesystems and transparent encryption for HDFS.

Also, following Steve's suggestion, I've put together an Attack Vectors Matrix for all approaches.

          I think both documents, the proposal and the attack vectors, address most if not all the questions/concerns raised in the JIRA.

          Attaching 3 documents:

          • Alternatives: the original doc posted in HDFS-6134
          • Proposal: the combined proposal
          • Attack Vectors: a matrix with the different attacks for the alternatives and the proposal
Yi Liu added a comment - edited

          Todd, thanks for your comments.

          A few questions here...
          First, let me confirm my understanding of the key structure and storage:
          Client master key: this lives on the Key Management Server, and might be different from application to application.

          Yes.

          In many cases there may be just one per cluster, though in a multitenant cluster, perhaps we could have one per tenant.

It depends on the KeyProvider implementation; these details can be encapsulated in the KeyProvider implementation, which is pluggable in CFS. Thus, users can apply their own strategy and deploy one master key or multiple master keys, by application, by user group, etc.

          Data key: this is set per encrypted directory. This key is stored in the directory xattr on the NN, but encrypted by the client master key (which the NN doesn't know).

          Yes.

          So, when a client wants to read a file, the following is the process:
          1) Notices that the file is in an encrypted directory. Fetches the encrypted data key from the NN's xattr on the directory.
2) Somehow associates this encrypted data key with the master key that was used to encrypt it (perhaps it's tagged with some identifier). Fetches the appropriate master key from the key store.
          2a) The keystore somehow authenticates and authorizes the client's access to this key
          3) The client decrypts the data key using the master key, and is now able to set up a decrypting stream for the file itself. (I've ignored the IV here, but assume it's also stored in an xattr)

          Yes.

          In terms of attack vectors:
          let's say that the NN disk is stolen. The thief now has access to a bunch of keys, but they're all encrypted by various master keys. So we're OK.

          Yes.

          let's say that a client is malicious. It can get whichever master keys it has access to from the KMS. If we only have one master key per cluster, then the combination of one malicious client plus stealing the fsimage will give up all the keys

When a client gets access to both the master key and the fsimage, there is nothing we can do to protect that data. The separation of the data encryption key and the master key is for master key rotation, so that one does not need to decrypt all data files and re-encrypt them with the new key.

          let's say that a client has escalated to root access on one of the slave nodes in the cluster, or otherwise has malicious access to a NodeManager process. By looking at a running MR task, it could steal whatever credentials the task is using to access the KMS, and/or dump the memory of the client process in order to give up the master key above.

When a client has root access, all information can be dumped from any process, right? I remember Nicholas asked a similar question on HDFS-6134. If a client has escalated to root access on slave nodes, how can we assume the namenode and standby/secondary namenode are secure in the same cluster? On the other hand, as long as data keys remain in encrypted form in the process memory of the NameNode and DataNodes, and those daemons don't have access to the wrapping keys, there is no attack vector there.

          How does the MR task in this context get the credentials to fetch keys from the KMS? If the KMS accepts the same authentication tokens as the NameNode, then is there any reason that this is more secure than having the NameNode supply the keys? Or is it just that decoupling the NameNode and the key server allows this approach to work for non-HDFS filesystems, at the expense of an additional daemon running a key distribution service?

It is a good question. Securely distributing secrets among the cluster nodes, as you mentioned, will always be a hard problem to solve. Without adequate hardware support, it could be a weak point during operations like key unwrapping. We want to leave options to the KeyProvider implementation, to decouple the key protection mechanism from the data encryption mechanism, and to make both work on top of any filesystem. It is possible to have a KeyProvider implementation that uses the NN as the KMS, as we already discussed, while leaving room for other parties to plug in their own solutions.

          Todd Lipcon added a comment -

          A few questions here...

          First, let me confirm my understanding of the key structure and storage:

          • Client master key: this lives on the Key Management Server, and might be different from application to application. In many cases there may be just one per cluster, though in a multitenant cluster, perhaps we could have one per tenant.
          • Data key: this is set per encrypted directory. This key is stored in the directory xattr on the NN, but encrypted by the client master key (which the NN doesn't know).

          So, when a client wants to read a file, the following is the process:
          1) Notices that the file is in an encrypted directory. Fetches the encrypted data key from the NN's xattr on the directory.
          2) Somehow associates this encrypted data key with the master key that was used to encrypt it (perhaps it's tagged with some identifier). Fetches the appropriate master key from the key store.
          2a) The keystore somehow authenticates and authorizes the client's access to this key
          3) The client decrypts the data key using the master key, and is now able to set up a decrypting stream for the file itself. (I've ignored the IV here, but assume it's also stored in an xattr)
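A hedged sketch of steps 1-3 above; the xattr names and the two private helpers are hypothetical placeholders, while KeyProvider is the real HADOOP-10141 interface.

{code:java}
import java.io.IOException;
import java.security.GeneralSecurityException;

import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadPathSketch {
  static FSDataInputStream openEncrypted(FileSystem fs, Path file,
      KeyProvider kms) throws IOException, GeneralSecurityException {
    // 1) fetch the encrypted data key and IV from the xattrs
    byte[] wrappedKey = fs.getXAttr(file, "user.crypto.key"); // hypothetical
    byte[] iv = fs.getXAttr(file, "user.crypto.iv");          // hypothetical
    String masterKeyVersion =
        new String(fs.getXAttr(file, "user.crypto.keyversion"), "UTF-8");

    // 2) fetch the tagged master key from the key store; the KMS is the
    //    point where the client's access is authenticated and authorized
    KeyProvider.KeyVersion master = kms.getKeyVersion(masterKeyVersion);

    // 3) decrypt the data key with the master key, then set up the
    //    decrypting stream over the raw encrypted bytes
    byte[] dataKey = decryptDataKey(master.getMaterial(), wrappedKey);
    return wrapWithDecryptingStream(fs.open(file), dataKey, iv);
  }

  private static byte[] decryptDataKey(byte[] masterKey, byte[] wrapped) {
    throw new UnsupportedOperationException("sketch only");
  }

  private static FSDataInputStream wrapWithDecryptingStream(
      FSDataInputStream in, byte[] key, byte[] iv) {
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}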

          In terms of attack vectors:

          • let's say that the NN disk is stolen. The thief now has access to a bunch of keys, but they're all encrypted by various master keys. So we're OK.
          • let's say that a client is malicious. It can get whichever master keys it has access to from the KMS. If we only have one master key per cluster, then the combination of one malicious client plus stealing the fsimage will give up all the keys
          • let's say that a client has escalated to root access on one of the slave nodes in the cluster, or otherwise has malicious access to a NodeManager process. By looking at a running MR task, it could steal whatever credentials the task is using to access the KMS, and/or dump the memory of the client process in order to give up the master key above.

          Does the above look right? It would be nice to add to the design doc a clear description of the threat model here. Do we assume that the adversary will never have root on the cluster? Do we assume the adversary won't have access to the "mapred" user (or whoever runs the NM?)

          How does the MR task in this context get the credentials to fetch keys from the KMS? If the KMS accepts the same authentication tokens as the NameNode, then is there any reason that this is more secure than having the NameNode supply the keys? Or is it just that decoupling the NameNode and the key server allows this approach to work for non-HDFS filesystems, at the expense of an additional daemon running a key distribution service?

          Yi Liu added a comment -

Alejandro Abdelnur, thanks for the comments.

          Regarding hflush, hsync. Unless I’m missing something, if the hflush/hsync is done at an offset which is not MOD of 16, things will break as the IV advancing is done on per encryption block (16 bytes).

Hflush/hsync will work well in CFS. The key point is that in CTR mode the cipher behaves like a stream cipher: encryption can be done for any size of data, and we can decrypt starting from any byte offset; the counter is calculated using the formula in our design doc.
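For reference, a hedged sketch of that positioning math (this mirrors the standard AES-CTR construction; the class itself is illustrative): the counter block for byte offset p is initIV + p/16, treating the 16-byte IV as a big-endian 128-bit integer, and decryption then skips p % 16 bytes of that block's keystream.

{code:java}
public class CtrCounterSketch {
  // Add 'counter' into 'initIV' as a big-endian 128-bit integer, writing
  // the resulting counter block into 'iv' (both arrays are 16 bytes).
  static void calculateIV(byte[] initIV, long counter, byte[] iv) {
    int i = iv.length; // 16
    int j = 0;
    int sum = 0;
    while (i-- > 0) {
      // (sum >>> 8) is the carry from the previous (lower-order) byte
      sum = (initIV[i] & 0xff) + (sum >>> 8);
      if (j++ < 8) { // the low 8 bytes absorb the 64-bit counter
        sum += (byte) counter & 0xff;
        counter >>>= 8;
      }
      iv[i] = (byte) sum;
    }
  }

  public static void main(String[] args) {
    byte[] initIV = new byte[16]; // per-file IV, normally random
    byte[] iv = new byte[16];
    long offset = 12345L;                 // arbitrary stream position
    calculateIV(initIV, offset / 16, iv); // counter block for this offset
    int padding = (int) (offset % 16);    // keystream bytes to skip
    System.out.println("skip " + padding + " keystream bytes");
  }
}
{code}

This is why a flush at an offset that is not a multiple of 16 is harmless: the reader can always rebuild the counter and intra-block padding from the plain byte offset.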

          The Cfs.getDataKey(), it is not clear how the master key is to be fetched by clients and by job tasks. Plus, it seems that the idea is that every client job task will get hold of the master key (which could decrypt all stored keys).

cfs.getDataKey() could be refactored to use Owen's HADOOP-10141 KeyProvider interface, thus decoupling it from the underlying KMS. In the attached patch, we wanted to show that the master key served from the client side can be used to decrypt the data encryption key. This client master key can differ from user to user. The master key can also be retrieved from a KMS and served via Owen's HADOOP-10141 KeyProvider interface; it is pluggable, and end users can provide their own implementation. A similar approach can be seen in HADOOP-9333 and MAPREDUCE-4491, which we discussed at length with Benoy Antony.
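A hedged sketch of what that pluggable lookup could look like (the key name is hypothetical; KeyProviderFactory/KeyProvider are the real HADOOP-10141 classes):

{code:java}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.crypto.key.KeyProviderFactory;

public class KeyProviderSketch {
  static byte[] masterKeyMaterial(Configuration conf) throws IOException {
    // Providers are configured via hadoop.security.key.provider.path,
    // so the backing KMS can be swapped without touching CFS code.
    List<KeyProvider> providers = KeyProviderFactory.getProviders(conf);
    if (providers.isEmpty()) {
      throw new IOException("no KeyProvider configured");
    }
    KeyProvider.KeyVersion kv =
        providers.get(0).getCurrentKey("cfs.master.key"); // hypothetical name
    return kv.getMaterial();
  }
}
{code}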

          Also, there is no provision to allow master key rotation.

Since the client master key is controlled by the client, the client is responsible for key rotation.

          Alejandro Abdelnur added a comment -

          Yi Liu, thanks for the detailed answers.

          I’ll answer in more detail later, just a couple of things now that jumped out after a quick look at the patches.

          I like the use of xAttr.

          Regarding hflush, hsync. Unless I’m missing something, if the hflush/hsync is done at an offset which is not MOD of 16, things will break as the IV advancing is done on per encryption block (16 bytes).

          The Cfs.getDataKey(), it is not clear how the master key is to be fetched by clients and by job tasks. Plus, it seems that the idea is that every client job task will get hold of the master key (which could decrypt all stored keys). Also, there is no provision to allow master key rotation.

          More later.

          Yi Liu added a comment -

          Thanks Alejandro Abdelnur for your comment.
We are less concerned about the internal use of the HDFS client; on the contrary, we care more about making encrypted data easy for clients to use. Incidentally, we found that webhdfs should use DistributedFileSystem as well, to remove the symlink issue stated in HDFS-4933 (the issue we found is "Throwing UnresolvedPathException when getting HDFS symlink file through HDFS REST API"; there is also no "statistics" support for the HDFS REST interface, which is inconsistent with the behavior of DistributedFileSystem; presumably that JIRA will resolve it).

          “Transparent” or “at rest” encryption usually means that the server handles encrypting data for persistence, but does not manage keys for particular clients or applications, nor require applications to even be aware that encryption is in use. Hence how it can be described as transparent. This type of solution distributes secret keys within the secure enclave (not to clients), or might employ a two tier key architecture (data keys wrapped by the cluster secret key) but with keys managed per application typically. E.g. in a database system, per table. The goal here is to avoid data leakage from the server by universally encrypting data “at rest”.

          Other cryptographic application architectures handle use cases where clients or applications want to protect data with encryption from other clients or applications. For those use cases encryption and decryption is done on the client, and the scope of key sharing should be minimized to where the cryptographic operations take place. In this type of solution the server becomes an unnecessary central point of compromise for user or application keys, so sharing there should be avoided. This isn’t really an “at rest” solution because the client may or may not choose to encrypt, and because key sharing is minimized, the server cannot and should not be able to distinguish encrypted data from random bytes, so cannot guarantee all persisted data is encrypted.

          Therefore we have two different types of solutions useful for different reasons, with different threat models. Combinations of the two must be carefully done (or avoided) so as not to end up with something combining the worst of both threat models.

          HDFS-6134 and HADOOP-10150 are orthogonal and complimentary solutions when viewed in this light. HDFS-6134, as described at least by the JIRA title, wants to introduce transparent encryption within HDFS. In my opinion, it shouldn’t attempt “client side encryption on the server” for reasons mentioned above. HADOOP-10150 wants to make management of partially encrypted data easy for clients, for the client side encryption use cases, by presenting a filtered view over base Hadoop filesystems like HDFS.

          in the "Storage of IV and data key" is stated "So we implement extended information based on INode feature, and use it to store data key and IV. "

We assume HDFS-2006 could help; that's why we posted separate patches. In the CFS patch, it is decoupled from the underlying filesystem when xattrs are present. And it can be the end user's choice whether to store the key alias or the data encryption key.

(Mentioned before), how will flush() operations be handled, as the encryption block will be cut short? How is this handled on writes? How on reads?

For hflush/hsync it's actually very simple. In the cryptographic output stream of CFS, we buffer the plaintext and encrypt only once the data size reaches the buffer length, to improve performance. So for hflush/hsync, we just flush the buffer and encrypt immediately, and then call FSDataOutputStream.hflush/hsync, which handles the rest. A sketch follows.
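A hedged sketch of that flush behavior (class and field names are hypothetical; FSDataOutputStream.hflush is the real Hadoop API):

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;

public class CryptoFlushSketch {
  private final FSDataOutputStream out;         // wrapped stream
  private final byte[] buffer = new byte[8192]; // plaintext buffer
  private int buffered = 0;                     // bytes not yet encrypted

  public CryptoFlushSketch(FSDataOutputStream out) {
    this.out = out;
  }

  public void hflush() throws IOException {
    if (buffered > 0) {
      // CTR can encrypt any length, so a partial buffer is fine
      byte[] ciphertext = encrypt(buffer, 0, buffered);
      out.write(ciphertext);
      buffered = 0;
    }
    out.hflush(); // the underlying stream handles the durability semantics
  }

  private byte[] encrypt(byte[] b, int off, int len) {
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}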

          Still, it is not clear how transparency will be achieved for existing applications: HDFS URI changes, clients must connect to the Key store to retrieve the encryption key (clients will need key store principals). The encryption key must be propagated to jobs tasks (i.e. Mapper/Reducer processes)

There is no URI change; please see the latest design doc and test cases.
We have considered HADOOP-9534 and HADOOP-10141; encryption of key material can be handled by the KeyProvider implementation according to the customer's environment.

          Use of AES-CTR (instead of an authenticated encryption mode such as AES-GCM)

AES-GCM introduces additional CPU cycles for GHASH: 2.5x additional cycles on Sandy Bridge and Ivy Bridge, 0.6x additional cycles on Haswell. Data integrity is already ensured by the underlying filesystem (e.g. HDFS checksums) in this scenario, so we decided to use AES-CTR for best performance.
Furthermore, AES-GCM mode is not available as a JCE cipher in Java 6. Java 6 may be EOL, but plenty of Hadoopers are still running it. It's not even listed in the Java 7 Sun provider document (http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html), but that may be an omission.

          By looking at the latest design doc of HADOOP-10150 I can see that things have been modified a bit (from the original design doc) bringing it a bit closer to some of the HDFS-6134 requirements.

Actually we designed it this way much earlier, before the doc update; just look at the patch.

          Definitely, I want to work together with you guys to leverage as much as posible. Either by unifying the 2 proposal or by sharing common code if we think both approaches have merits and we decide to move forward with both.

          I agree.

          Restrictions of move operations for files within an encrypted directory. The original design had something about it (not entirely correct), now is gone

Rename is an atomic operation in Hadoop, so we only allow a move between one directory/file and another if they share the same data key; then no decryption is required. Please see my Mar 21 patch.
Actually we had not mentioned rename in the earlier doc; we only discussed it in review comments, since Steve had the same questions, and we covered it in the discussion with him.

          Explicit auditing on encrypted files access does not seem handled

Auditing could be another topic we need to address, especially when discussing client-side encryption. One possible way is to add a pluggable point so that customers can route audit events to their existing auditing system. Per the conclusions of the discussion above, we will consider this point later.

          Show
          Yi Liu added a comment - Thanks Alejandro Abdelnur for your comment. We less concern the internal use of HDFS client, on the contrary we care more about encrypted data easy for clients. Even though we found that in webhdfs it should use DistributedFileSystem as well to remove the symlink issue as HDFS-4933 stated(The issue we found is “Throwing UnresolvedPathException when getting HDFS symlink file through HDFS REST API”, and there is no “statistics” for HDFS REST which is inconsistent with behavior of DistributedFileSystem, suppose this JIRA will resolve it). “Transparent” or “at rest” encryption usually means that the server handles encrypting data for persistence, but does not manage keys for particular clients or applications, nor require applications to even be aware that encryption is in use. Hence how it can be described as transparent. This type of solution distributes secret keys within the secure enclave (not to clients), or might employ a two tier key architecture (data keys wrapped by the cluster secret key) but with keys managed per application typically. E.g. in a database system, per table. The goal here is to avoid data leakage from the server by universally encrypting data “at rest”. Other cryptographic application architectures handle use cases where clients or applications want to protect data with encryption from other clients or applications. For those use cases encryption and decryption is done on the client, and the scope of key sharing should be minimized to where the cryptographic operations take place. In this type of solution the server becomes an unnecessary central point of compromise for user or application keys, so sharing there should be avoided. This isn’t really an “at rest” solution because the client may or may not choose to encrypt, and because key sharing is minimized, the server cannot and should not be able to distinguish encrypted data from random bytes, so cannot guarantee all persisted data is encrypted. Therefore we have two different types of solutions useful for different reasons, with different threat models. Combinations of the two must be carefully done (or avoided) so as not to end up with something combining the worst of both threat models. HDFS-6134 and HADOOP-10150 are orthogonal and complimentary solutions when viewed in this light. HDFS-6134 , as described at least by the JIRA title, wants to introduce transparent encryption within HDFS. In my opinion, it shouldn’t attempt “client side encryption on the server” for reasons mentioned above. HADOOP-10150 wants to make management of partially encrypted data easy for clients, for the client side encryption use cases, by presenting a filtered view over base Hadoop filesystems like HDFS. in the "Storage of IV and data key" is stated "So we implement extended information based on INode feature, and use it to store data key and IV. " We assume HDFS-2006 could help, that’s why we put separate patches. In that the CFS patch it was decoupled with underlying filesystem if xattr present. And it could be end user’s choice to decide whether store key alias or data encryption key. (Mentioned before), how thing flush() operations will be handled as the encryption block will be cut short? How this is handled on writes? How this is handled on reads? For hflush, hsync, actually it's very simple. In cryptographic output stream of CFS, we buffer the plain text in cache and do encryption until data size reaches buffer length to improve performance. 
So for hflush /hsync, we just need to flush the buffer and do encryption immediately, and then call FSDataOutputStream.hfulsh/hsync which will handle the remaining thing. Still, it is not clear how transparency will be achieved for existing applications: HDFS URI changes, clients must connect to the Key store to retrieve the encryption key (clients will need key store principals). The encryption key must be propagated to jobs tasks (i.e. Mapper/Reducer processes) There is no URL changed, please see latest design doc and test case. We have considered HADOOP-9534 and HADOOP-10141 , encryption of key material could be handled by the implementation of key providers according to customers environment. Use of AES-CTR (instead of an authenticated encryption mode such as AES-GCM) AES-GCM was introduce addition CPU cycles by GHASH - 2.5x additional cycles in Sandy-Bridge and Ivy-Bridge, 0.6x additional cycle in Haswell. Data integrity was ensured by underlying filesystem like HDFS in this scenario. We decide to use AES-CTR for best performance. Furthermore, AES-GCM mode is not available as a JCE cipher in Java 6. It may be EOL but plenty of Hadoopers are still running it. It's not even listed on the Java 7 Sun provider document ( http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html ) but that may be an omission. By looking at the latest design doc of HADOOP-10150 I can see that things have been modified a bit (from the original design doc) bringing it a bit closer to some of the HDFS-6134 requirements. Actually we designed like this much earlier before we updated, just look at the patch. Definitely, I want to work together with you guys to leverage as much as posible. Either by unifying the 2 proposal or by sharing common code if we think both approaches have merits and we decide to move forward with both. I agree. Restrictions of move operations for files within an encrypted directory. The original design had something about it (not entirely correct), now is gone Rename is atomic operation in Hadoop, so we only allow move between one directory/file and another directory/file if they share same data key, then no decryption is required. Please see my MAR/21 patch. Actually we have not mentioned rename in the earlier doc, we just discussed it in review comments, since @Steve had the same questions, and we covered this in the comments of discussion with him. Explicit auditing on encrypted files access does not seem handled The auditing could be another topic we need to address especially when discussing the client side encryption. One possible way is to add a pluggable point that customer can route audit event to their existing auditing system. On that above points discussion conclusion we think on this point later.
          Hide
          Yi Liu added a comment -

          We less concern the internal use of HDFS client, on the contrary we care more about encrypted data easy for clients. Even though we found that in webhdfs it should use DistributedFileSystem as well to remove the symlink issue as HDFS-4933 stated(The issue we found is “Throwing UnresolvedPathException when getting HDFS symlink file through HDFS REST API”, and there is no “statistics” for HDFS REST which is inconsistent with behavior of DistributedFileSystem, suppose this JIRA will resolve it).

          “Transparent” or “at rest” encryption usually means that the server handles encrypting data for persistence, but does not manage keys for particular clients or applications, nor require applications to even be aware that encryption is in use. Hence how it can be described as transparent. This type of solution distributes secret keys within the secure enclave (not to clients), or might employ a two tier key architecture (data keys wrapped by the cluster secret key) but with keys managed per application typically. E.g. in a database system, per table. The goal here is to avoid data leakage from the server by universally encrypting data “at rest”.

          Other cryptographic application architectures handle use cases where clients or applications want to protect data with encryption from other clients or applications. For those use cases encryption and decryption is done on the client, and the scope of key sharing should be minimized to where the cryptographic operations take place. In this type of solution the server becomes an unnecessary central point of compromise for user or application keys, so sharing there should be avoided. This isn’t really an “at rest” solution because the client may or may not choose to encrypt, and because key sharing is minimized, the server cannot and should not be able to distinguish encrypted data from random bytes, so cannot guarantee all persisted data is encrypted.

          Therefore we have two different types of solutions useful for different reasons, with different threat models. Combinations of the two must be carefully done (or avoided) so as not to end up with something combining the worst of both threat models.

          HDFS-6134 and HADOOP-10150 are orthogonal and complimentary solutions when viewed in this light. HDFS-6134, as described at least by the JIRA title, wants to introduce transparent encryption within HDFS. In my opinion, it shouldn’t attempt “client side encryption on the server” for reasons mentioned above. HADOOP-10150 wants to make management of partially encrypted data easy for clients, for the client side encryption use cases, by presenting a filtered view over base Hadoop filesystems like HDFS..

          { in the "Storage of IV and data key" is stated "So we implement extended information based on INode feature, and use it to store data key and IV. "}

          We assume HDFS-2006 could help; that is why we posted separate patches. In the CFS patch, this is decoupled from the underlying filesystem if xattrs are present. It can also be the end user’s choice whether to store the key alias or the data encryption key.

          {(Mentioned before) How will flush() operations be handled, given that the encryption block will be cut short? How is this handled on writes? How is this handled on reads?}

          For hflush and hsync, it's actually very simple. In the cryptographic output stream of CFS, we buffer the plain text in a cache and only encrypt once the data size reaches the buffer length, to improve performance. So for hflush/hsync, we just flush the buffer and do the encryption immediately, and then call FSDataOutputStream.hflush/hsync, which handles the rest.
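          To make that concrete, here is a minimal sketch of the idea; the Encryptor interface, field names, and buffer size are illustrative assumptions, not the actual patch code:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;

/** Minimal sketch only; not the actual CFS output stream. */
public class CryptoOutputStreamSketch {
  interface Encryptor { byte[] encrypt(byte[] b, int off, int len); }

  private final FSDataOutputStream underlying; // wrapped stream
  private final Encryptor encryptor;
  private final byte[] buffer = new byte[8192]; // plaintext cache
  private int bufUsed = 0;

  CryptoOutputStreamSketch(FSDataOutputStream out, Encryptor enc) {
    this.underlying = out;
    this.encryptor = enc;
  }

  public void hflush() throws IOException {
    if (bufUsed > 0) {
      // Encrypt the partial buffer immediately, so nothing is ever
      // written to the wrapped stream in the clear.
      underlying.write(encryptor.encrypt(buffer, 0, bufUsed));
      bufUsed = 0;
    }
    underlying.hflush(); // the wrapped FSDataOutputStream handles the rest
  }
}
{code}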

          {Still, it is not clear how transparency will be achieved for existing applications: HDFS URI changes, clients must connect to the key store to retrieve the encryption key (clients will need key store principals). The encryption key must be propagated to job tasks (i.e. Mapper/Reducer processes).}

          There is no URI change; please see the latest design doc and test cases.
          We have considered HADOOP-9534 and HADOOP-10141; encryption of key material can be handled by the key provider implementation according to the customer's environment.

          {Use of AES-CTR (instead of an authenticated encryption mode such as AES-GCM)}

          AES-GCM introduces additional CPU cycles for GHASH: 2.5x additional cycles on Sandy Bridge and Ivy Bridge, 0.6x additional cycles on Haswell. Data integrity is already ensured by the underlying filesystem (e.g. HDFS) in this scenario, so we decided to use AES-CTR for best performance.
          Furthermore, AES-GCM mode is not available as a JCE cipher in Java 6. It may be EOL, but plenty of Hadoopers are still running it. It's not even listed in the Java 7 Sun provider document (http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html), but that may be an omission.
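          For illustration, a minimal JCE sketch of the AES-CTR choice (key and IV are zero placeholders, not production values), showing that CTR produces ciphertext of exactly the plaintext length, which is what preserves the 1:1 byte correspondence discussed elsewhere in this thread:

{code:java}
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CtrLengthDemo {
  public static void main(String[] args) throws Exception {
    byte[] key = new byte[16]; // placeholder 128-bit AES key
    byte[] iv  = new byte[16]; // placeholder 128-bit counter block

    Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE,
        new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));

    byte[] plain = "hello, cfs".getBytes("UTF-8");
    byte[] enc = cipher.doFinal(plain);
    // CTR is a stream mode: no padding, no authentication tag, so the
    // ciphertext is exactly as long as the plaintext.
    System.out.println(enc.length == plain.length); // true
  }
}
{code}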

          {By looking at the latest design doc of HADOOP-10150 I can see that things have been modified a bit (from the original design doc) bringing it a bit closer to some of the HDFS-6134 requirements.}

          Actually, we designed it this way well before we updated the doc; just look at the patch.

          {Definitely, I want to work together with you guys to leverage as much as possible, either by unifying the two proposals or by sharing common code if we think both approaches have merit and we decide to move forward with both.}

          I agree.

          {Restrictions of move operations for files within an encrypted directory. The original design had something about it (not entirely correct), but now it is gone.}

          Rename is an atomic operation in Hadoop, so we only allow a move between one directory/file and another if they share the same data key; then no decryption is required. Please see my MAR/21 patch.
          Actually, we did not mention rename in the earlier doc; we only discussed it in review comments. @Steve had the same questions, and we covered this in the discussion with him there.

          {Explicit auditing of encrypted file access does not seem to be handled.}

          Auditing could be another topic we need to address, especially when discussing client-side encryption. One possible approach is to add a pluggable point so that customers can route audit events to their existing auditing system. Per the conclusion of the points discussed above, we will consider this later.

          Avik Dey added a comment -

          Alejandro Abdelnur, there are two patches posted by Yi Liu. If you apply the xattrs patch first, the cfs patch should then apply cleanly:
          https://issues.apache.org/jira/secure/attachment/12636026/extended%20information%20based%20on%20INode%20feature.patch

          In the posted cfs patch you will see there is no need to change the HDFS URI, for example. I thought that was in the latest doc, but I guess not. Anyway, let me know if you are still unable to apply the patches; that, I think, may help clear up a few of the questions you have posted.

          Alejandro Abdelnur added a comment -

          (Cross-posting HADOOP-10150 & HDFS-6134)

          Avik Dey, I’ve just looked at the MAR/21 proposal in HADOOP-10150 (the patches uploaded on MAR/21 do not apply cleanly on trunk, so I cannot look at them easily. They seem to be missing pieces, like getXAttrs() and wiring to the KeyProvider API. Would it be possible to rebase them so they apply to trunk?)

          do we need a new proposal for the work already being done on HADOOP-10150?

          HADOOP-10150 aims to provide encryption for any filesystem implementation as a decorator filesystem. While HDFS-6134 aims to provide encryption for HDFS.

          The 2 approaches differ on the level of transparency you get. The comparison table in the "HDFS Data at Rest Encryption" attachment (https://issues.apache.org/jira/secure/attachment/12635964/HDFSDataAtRestEncryption.pdf) highlights the differences.

          In particular, the things I’m most concerned about with HADOOP-10150 are:

          • All clients (doing encryption/decryption) must have access to the key management service.
          • Secure key propagation to tasks running in the cluster (i.e. mapper and reducer tasks).
          • Use of AES-CTR (instead of an authenticated encryption mode such as AES-GCM).
          • Not clear how hflush() will be handled.

          are there design choices in this proposal that are superior to the patch already provided on HADOOP-10150?

          IMO, a consolidated access/distribution of keys by the NN (as opposed to every client) improves the security of the system.

          do you have additional requirement listed in this JIRA that could be incorporated in to HADOOP-10150,

          They are enumerated in the "HDFS Data at Rest Encryption" attachment. The ones I don’t see addressed in HADOOP-10150 are: #6, #8.A. And it is not clear how #4 & #5 can be achieved.

          so we can collaborate and not duplicate?

          Definitely, I want to work together with you guys to leverage as much as possible, either by unifying the two proposals or by sharing common code if we think both approaches have merit and we decide to move forward with both.

          Happy to jump on a call to discuss things and then report back to the community, if you think that will speed up the discussion.

          ----------
          By looking at the latest design doc of HADOOP-10150 I can see that things have been modified a bit (from the original design doc) bringing it a bit closer to some of the HDFS-6134 requirements.

          Still, it is not clear how transparency will be achieved for existing applications: HDFS URI changes, clients must connect to the key store to retrieve the encryption key (clients will need key store principals). The encryption key must be propagated to job tasks (i.e. Mapper/Reducer processes).

          Requirement #4 "Can decorate HDFS and all other file systems in Hadoop, and will not modify existing structure of file system, such as namenode and datanode structure if the wrapped file system is HDFS." This is contradicted by the design: the "Storage of IV and data key" section states "So we implement extended information based on INode feature, and use it to store data key and IV."

          Requirement #5 "Admin can configure encryption policies, such as which directory will be encrypted." This seems driven by the HDFS client configuration file (hdfs-site.xml). It is not really admin driven, as clients could break it by changing their own hdfs-site.xml file.

          Restrictions of move operations for files within an encrypted directory. The original design had something about it (not entirely correct), but now it is gone.

          (Mentioned before) How will flush() operations be handled, given that the encryption block will be cut short? How is this handled on writes? How is this handled on reads?

          Explicit auditing of encrypted file access does not seem to be handled.

          Yi Liu added a comment -

          The update includes two patches.
          Add “fs.encryption” and “fs.encryption.dirs” properties in core-site.xml. If “fs.encryption=true”, the filesystem is encrypted; “fs.encryption.dirs” indicates which directories are configured to be encrypted. The URL (fs.defaultFS) in core-site.xml is not modified, and CFS is transparent to upper-layer applications.
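          A minimal sketch of setting the same properties programmatically; the property names come from this comment, while the directory values and class name are made-up examples:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CfsConfigSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // loads core-site.xml
    conf.setBoolean("fs.encryption", true);
    conf.set("fs.encryption.dirs", "/warehouse/secure,/user/alice/private");

    // fs.defaultFS is untouched, so callers keep using plain paths and
    // stay unaware of the encryption underneath.
    FileSystem fs = FileSystem.get(conf);
    System.out.println(fs.exists(new Path("/warehouse/secure/part-00000")));
  }
}
{code}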

          Each encrypted file has a separate IV, and each configured encryption directory has a data key. HDFS-2006 is the expected place to save the IV and data key, but it is not ready yet. So we implement extended information based on the INode feature and use it to store the data key and IV. In our case, only directories and files configured to be encrypted need this feature; if 1,000,000 files are encrypted, about 8MB of memory is required. This information is stored in the NN's memory and serialized to the edit log and finally to the FSImage.

          For key management, we use the key provider API from HADOOP-10141. For key rotation, the data key is decrypted using the original master key and then encrypted using the new master key.
          For more information, please refer to the updated design doc.
          The first patch is the “extended information” based on the INode feature, used to save the IV and data key. The second patch is the cfs patch.
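          A minimal sketch of that rotation step, assuming JCE key wrapping; the AESWrap transformation and method shape are illustrative, not the actual key provider code:

{code:java}
import java.security.Key;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;

public class KeyRotationSketch {
  static byte[] rotate(byte[] wrappedDataKey,
                       SecretKey oldMaster, SecretKey newMaster)
      throws Exception {
    Cipher c = Cipher.getInstance("AESWrap");
    // Decrypt (unwrap) the data key with the original master key...
    c.init(Cipher.UNWRAP_MODE, oldMaster);
    Key dataKey = c.unwrap(wrappedDataKey, "AES", Cipher.SECRET_KEY);
    // ...then encrypt (wrap) it again under the new master key. The
    // file data itself is never re-encrypted.
    c.init(Cipher.WRAP_MODE, newMaster);
    return c.wrap(dataKey);
  }
}
{code}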

          I’m splitting these patches to the sub JIRAs.

          Yi Liu added a comment -

          Larry, the patch attached to HADOOP-10156 as a subtask of HADOOP-10150 is a pure Java implementation without any external dependencies. The first patch we put up did contain hadoop-crypto, a crypto codec framework that included some non-Java code implemented in C. However, the latest patch on HADOOP-10156 instead provides ciphers using the standard javax.crypto.Cipher interface, with cipher implementations that ship with the JRE by default, instead of hadoop-crypto. Java itself provides the mechanism for supplying additional Cipher implementations: the JCE (Java Cryptography Extension).

          Because the default JCE providers shipped with common JREs do not utilize the hardware acceleration (AES-NI) that has been available for years, we have also developed a pure open source, Apache 2 licensed JCE provider named Diceros to mitigate the performance penalties. Our initial tests show a 20x improvement over the ciphers shipped with JRE 7. We would like to contribute Diceros as well, but to simplify review, for now we are hosting Diceros on GitHub. The code submitted for HADOOP-10156 allows the end user to configure any kind of JCE provider - for example, the default JCE provider shipped with JREs, Diceros ("DC"), or BouncyCastle ("BC"). Please let me know if you have any other concerns about this approach. Thanks.
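          A sketch of how such provider pluggability typically looks with the JCE; the method and fallback behavior are illustrative, not the actual HADOOP-10156 code:

{code:java}
import javax.crypto.Cipher;

public class ProviderSelectionSketch {
  static Cipher newAesCtrCipher(String providerName) throws Exception {
    if (providerName == null) {
      // Fall back to whichever registered provider the JRE resolves first.
      return Cipher.getInstance("AES/CTR/NoPadding");
    }
    // The named provider (e.g. "DC" or "BC") must already be registered,
    // via java.security.Security.addProvider(...) or the java.security file.
    return Cipher.getInstance("AES/CTR/NoPadding", providerName);
  }
}
{code}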

          Larry McCay added a comment -

          Hi Yi -
          I am a bit confused by this latest comment.
          Can you please clarify "hadoop-crypto component was removed from latest patch as a result of Diceros emerging. "?

          Are you saying that initially you had a cipher provider implementation but have decided not to provide one, since there is one available in yet another non-Apache project? I don't believe these sorts of external references are really appropriate. Neither Rhino nor Diceros is a TLP or incubation project at Apache. Since it appears to be an Intel-specific implementation, it seems appropriate to remove it from the patch, though.

          Do you plan to provide an all java implementation for this work?

          Yi Liu added a comment -

          Created a sub-task: HADOOP-10156.
          This JIRA defines Encryptor and Decryptor, buffer-based interfaces for encryption and decryption. The standard javax.crypto.Cipher interface is employed to provide the AES/CTR encryption/decryption implementation. In this way, one can replace the javax.crypto.Cipher implementation by plugging in another JCE provider such as Diceros. Diceros is an open source project under the Rhino project that implements a set of Cipher interfaces providing high-performance encryption/decryption compared to the default JCE provider. Initial performance test results show a 20x speedup in CTR mode compared to the default JCE provider in JDK 1.7_u45.

          Moreover, the Encryptor/Decryptor implementations use an internal buffer to further improve performance over javax.crypto.Cipher.

          The hadoop-crypto component was removed from the latest patch as a result of Diceros emerging.
          One can use "cfs.cipher.provider" to specify the JCE provider, for example, ....

          Diceros project link: https://github.com/intel-hadoop/diceros

          Yi Liu added a comment -

          Hi Owen, I have filed 5 sub-tasks, and initial patches will be attached later. I want to use HADOOP-10149 to attach the ByteBufferCipher API patch.

          Yi Liu added a comment -

          Thanks Uma, I am working on breaking down the patches and creating sub-task JIRAs. I will convert this JIRA to the common project.

          Yi Liu added a comment -

          Hi Owen, thanks for bringing it up here. I am working on breaking down the patches and creating sub-task JIRAs as already mentioned in my previous response. Rest of your comment seems to be about a different JIRA and is probably best discussed on that JIRA.

          • HADOOP-10149: since I have that patch already implemented, do you mind assigning it to me? I will take that piece of code and apply it there for review.
          • Since HADOOP-10141 tries to improve on HADOOP-9333, why not provide your feedback on HADOOP-9333 instead of opening a JIRA that duplicates part of that work?
          Uma Maheswara Rao G added a comment -

          Hi Yi Liu, I think you can file sub-tasks under this JIRA. From the patch you submitted, much of the code goes under the common package, so I think you should move this JIRA to the common project, right? Splitting it into tasks would help people review/comment on the patches more easily.

          Owen O'Malley added a comment -

          We need to break this work down in to smaller units of work. Jiras with a tighter focus will provide a more focused discussion and allow us to make progress and accomplish our shared goal of enabling Hadoop users to use encryption in their applications without changing each individual input and output format.

          • The key management needs to be much more flexible and I've created HADOOP-10141 to work on it.
          • The ByteBufferCipher API should be a separate jira, so I've created HADOOP-10149.
          • Once HADOOP-10149 is resolved, we can work together on a jni-based implementation of it.
          Avik Dey added a comment -

          @Owen If you don't think you are misquoting me, then you must be confused. Don't confuse our discussion on CFS with the other discussion on encryption support in various file formats. Just because we are working on the latter does not mean one should conclude we are not working on the former. As you can see, a patch of this size could hardly have been produced overnight.

          I don't think I can add any more to what I have said, so this will be my last post on the topic.

          Owen O'Malley added a comment -

          Yi Liu In the design document, the IV was always 0, but in the comments you are suggesting putting a random IV at the start of the underlying file. I think the security advantage of having a random IV is relatively small and we'd do better without it. It only protects against having multiple files with the same key and the same plaintext at the same location in the file.

          I think that putting it at the front of the file has a couple of disadvantages:

          • Any read of the file has to read the beginning 16 bytes of the file.
          • Block boundaries are offset from the expectation. This will cause MapReduce input splits to straddle blocks in cases that wouldn't otherwise require it.

          I think we should always have an IV of 0, or alternatively encode it in the underlying filesystem's filenames. In particular, we could base64-encode the IV and append it to the filename. If we add 16 characters of base64, that would give us 96 bits of IV, and it would be easy to strip off. It would look like:

          cfs://hdfs@nn/dir1/dir2/file -> hdfs://nn/dir1/dir2/file_1234567890ABCDEF
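          A sketch of that encoding, assuming a 12-byte (96-bit) IV, which base64-encodes to exactly 16 URL-safe characters with no padding (java.util.Base64 and the helper names are used purely for illustration):

{code:java}
import java.util.Base64;

public class IvInFileNameSketch {
  static String encryptedName(String plainName, byte[] iv96) {
    if (iv96.length != 12) throw new IllegalArgumentException("need 96-bit IV");
    // 12 bytes -> exactly 16 URL-safe base64 characters, no padding.
    String suffix = Base64.getUrlEncoder().withoutPadding().encodeToString(iv96);
    return plainName + "_" + suffix;
  }

  static byte[] ivFromName(String encName) {
    // Strip the 16-character suffix back off to recover the IV.
    return Base64.getUrlDecoder().decode(encName.substring(encName.lastIndexOf('_') + 1));
  }
}
{code}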

          Owen O'Malley added a comment -

          Avik Dey I'm not misquoting you. You were very clear that you weren't planning on working on this in the immediate future and that instead you wanted to change all of the file formats.

          Avik Dey added a comment -

          @Owen - Not only did we talk at Strata we talked last night as well. In both of those, I confirmed that Yi would make the patch available shortly. Don't misquote me please. Thanks for assigning it to Yi.

          Owen O'Malley added a comment -

          It should only be marked Patch Available when Yi thinks it is ready to commit.

          Owen O'Malley added a comment -

          It wasn't assigned and no one seemed to be working on this. Talking to Avik at Strata, he said no one was going to be working on this for 9 months. I'm glad to see that Yi has posted a patch.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12613629/CryptographicFileSystem.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-common-project/hadoop-crypto hadoop-dist hadoop-hdfs-project/hadoop-hdfs:

          org.apache.hadoop.conf.TestConfiguration

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5422//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5422//console

          This message is automatically generated.

          Andrew Purtell added a comment -

          Shouldn't this issue be assigned to the reporter, who has done all the work and submitted the patch for consideration?

          Yi Liu added a comment -

          This patch is an initial version (still to be refined) of the cryptographic file system implementation, aligned with the design doc discussed in this JIRA:
          1) Basic functionality of the cryptographic file system, including transparently reading/writing data to HDFS (currently only HDFS has been tested) using the filesystem API, transparently using the cryptographic filesystem in upper-layer applications (MapReduce has been tested), hdfs command support (ls, du, etc.) and so on.
          2) Currently a different IV is used for each encrypted file to enhance security. The IV has a fixed length of 16 bytes and is stored at the beginning of the encrypted file (see the sketch below).
          3) In the patch, a crypto policy interface is defined; developers/users can implement their own crypto policy to decide how and when files/directories will be encrypted. By default, a simple crypto policy is implemented: the admin can configure the encrypted directory list and encrypted file list, each encrypted directory has a different encryption key, and files stored into such a directory are automatically encrypted.
          4) For key management, a key management protocol interface is defined, with a default implementation; users/developers can supply their own. The patch implements a simple key management server which uses a Java keystore to store keys. The key management server is still under development.
          5) The patch includes a mvn project, hadoop-crypto, which uses OpenSSL to implement a Cipher that is much faster than the Java cipher, especially when AES-NI is enabled.
          6) The patch also includes Encryptor/Decryptor interfaces and other encryption facilities, such as buffered EncryptorStream and DecryptorStream.
          7) fs.default.name is “cfs://hdfs@hostname:9000” when the cryptographic filesystem is used on HDFS, and additionally “cfs-site.xml” needs to be configured.

          This is an all-in-one patch; later I will create several sub-JIRAs and split the patch up for convenience of code review. I will stabilize the patch and extend the functionality in further steps.
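          To illustrate item 2 above, a sketch of how a 16-byte IV header shifts positions between the plaintext view and the underlying file; the class shape is illustrative, not the patch's actual code:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;

public class IvHeaderSketch {
  static final int IV_LENGTH = 16;
  private final FSDataInputStream underlying;

  IvHeaderSketch(FSDataInputStream in) { this.underlying = in; }

  byte[] readIv() throws IOException {
    byte[] iv = new byte[IV_LENGTH];
    underlying.readFully(0, iv); // the header occupies bytes [0, 16)
    return iv;
  }

  void seek(long plainPos) throws IOException {
    underlying.seek(plainPos + IV_LENGTH); // skip past the header
  }

  long getPos() throws IOException {
    return underlying.getPos() - IV_LENGTH; // report plaintext offsets
  }
}
{code}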

          Todd Lipcon added a comment -

          Hi folks. I read through the design document and the discussion above.

          One question: is there an industry standard key management system that would plug in, here? I imagine that key management for symmetric encryption schemes is a common problem that must be solved in some other storage systems, databases, etc, and if we can plug into some existing software that enterprises already have in place, that would be preferable to building Yet Another Daemon.

          Binglin Chang added a comment -

          Hi Yi Liu,
          Nice document. I think one drawback of the 16-byte header is that it breaks the original block boundary assumptions: a user who thinks they are visiting block 0 may be visiting block 1 instead. I wonder, if each file has a different key in the key store, whether we could just put the 16-byte header in the key store as well? I see you may want to manage keys at the per-directory or per-file level; how about making the file header optional, so that when the key is per file, we can omit the 16-byte header?

          Steve Loughran added a comment -

          Yi Liu - thanks; having a fixed size increase of only 16 bytes would make the mapping from encrypted length to actual length trivial, and it is so small that quotas shouldn't be a problem. Even if you encrypt a directory with 1M files, it's only 16MB of extra data.

          Storing the data in the NN tends to meet resistance from anyone who runs a large NN; an extra 16 bytes/file would massively increase the cost of storing file metadata, and so reduce the max # of files you can store in a single HDFS instance. And as you say, it's not portable - yet encrypting data at rest in a blobstore is a use case I'd consider important. Maybe we need some FS-independent get/set metadata interface that blobstores could support with in-blobstore metadata ops (which S3 & Swift both support), raw filesystems could support with metadata file(s) alongside the data files, and HDFS could support however it chose.

          Dilli Arumugam added a comment -

          Thanks Yi for the response.

          <context>
          >>> Wait, you still have to propagate the encryption key into the mapper/reducer tasks to let them read the file from the file system. Right?

          We don’t need to propagate the encryption key into the mapper/reducer tasks. When upper-layer applications use CFS interfaces to write data, CFS gets the key from the key management service, which authenticates the user first. So mapper/reducer tasks are unaware of encryption; the procedure of getting the encryption key is done in CFS.
          </context>

          Many map and reduce tasks would need to create, write, and read files.

          One use case:
          The content of the file is generated in the reduce task.
          The reduce task would have to write to CFS.

          Now the reduce task has to consult the 'CFS configuration provider service' to detect whether the file or the directory implies that the file has to be encrypted?

          If it detects that the file has to be encrypted, it has to get hold of the encryption key? That is why I think the key has to be propagated to the map/reduce task.

          Either the flag indicating whether a file is encrypted and the encryption key have to be propagated to the map/reduce task, OR the map/reduce task has to call into the 'Configuration Provider Service' and the 'Key lookup service'.

          If we decide that the client would propagate the encryption key and encryption flag, we would have a problem with files that do not exist now but are created while the map/reduce task is running. The filename would not be known to the client in advance.

          ----------------------------------------------------
          <context>
          >>> How is the client supposed to choose plain HDFS protocol versus CFS? In other words, how would the client detect whether the file is encrypted?

          It’s in the configuration. The admin configures whether a file/directory is encrypted in the configuration file. CFS will choose the plain HDFS protocol if a file/directory is not configured to be encrypted.
          </context>

          As seen from the client, this would be a service, say, 'CFS Configuration Provider Service'?

          Being a service, it can provide rich configuration options, with specifications at the directory level and inheritable, overridable configurations. You can get very fancy or very simple. In the simplest case it can be a simple table of filenames requiring encryption.

          Would this be distinct from key lookup service or would be one part of key look up service?

          Yi Liu added a comment -

          Thanks Dilli, your comments and questions are very good:

          >>> Wait, you still have to propagate the encryption key into the mapper/reducer tasks to let them read the file from the file system. Right?

          We don’t need to propagate the encryption key into the mapper/reducer tasks. When upper-layer applications use CFS interfaces to write data, CFS gets the key from the key management service, which authenticates the user first. So mapper/reducer tasks are unaware of encryption; the procedure of getting the encryption key is done in CFS.

          >>> How is the client supposed to choose plain HDFS protocol versus CFS? In other words, how would the client detect whether the file is encrypted?

          It’s in the configuration. The admin configures whether a file/directory is encrypted in the configuration file. CFS will choose the plain HDFS protocol if a file/directory is not configured to be encrypted.

          >>> Would this play nicely with hadoop command line: “hadoop fs –cat File1”, “hadoop fs –cat File2”

          Yes, that would play nicely. The plain text content will be shown if the user has the right to access the encrypted data; otherwise, the cipher text content will be shown.

          >>> I am wondering whether we should consider adding metadata to filesystem namespace…

          This is very good; actually, we discussed this carefully internally beforehand. As I replied to Steve, if we put the “encryption” flag and IV in the namenode, we don’t need to store the key name (alias) in the namenode, since we can derive the key name from the file name. That would be great for HDFS, but many people may not like the idea of modifying namenode inodes and code. Furthermore, CFS can decorate file systems other than HDFS, so we are proposing not to modify the structure of the namenode.
          In addition, we can wait and see other comments about whether to do some modification in the namenode. In our design, no modification is required for the namenode, but if many people support it, we can add it too.

          Yi Liu added a comment -

          Steve, thanks for your comments.

          >>> Is there going to be a difference between the listable length of a file (FileSystem.listStatus()) and the user-code visible length of a file

          The user will see no difference between the two in our design; both will be the same as the original file length.

          As you know, for most encryption modes of the various encryption algorithms, the length of the cipher text differs from the length of the original plain text. In our design, however, the cipher text has the same length as the plain text and, more importantly, the bytes are in 1:1 correspondence.

          To make the encryption more secure, we use a different IV (Initialization Vector) per file in the encryption algorithm, and the IV has a fixed size of 16 bytes. We store the IV in the header of the encrypted file, so length of encrypted file = length of original file + 16 bytes. However, we will implement listStatus/getFileStatus and the other related FileSystem interfaces in CFS to ensure the length returned is always the original length of the file.
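          A minimal sketch of that length adjustment, as a fragment of the hypothetical CryptoFileSystem above (IV_HEADER_LEN is just the 16-byte header from the design; isEncrypted() is the assumed policy check):

              private static final int IV_HEADER_LEN = 16;

              @Override
              public FileStatus getFileStatus(Path f) throws IOException {
                FileStatus s = fs.getFileStatus(f);
                if (s.isFile() && isEncrypted(f)) {
                  // Report the plain text length by subtracting the fixed IV header.
                  return new FileStatus(s.getLen() - IV_HEADER_LEN, false,
                      s.getReplication(), s.getBlockSize(), s.getModificationTime(),
                      s.getAccessTime(), s.getPermission(), s.getOwner(), s.getGroup(),
                      s.getPath());
                }
                return s;
              }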

          The key points are that the length of the encrypted file equals the length of the plain text file + 16 bytes, that the bytes are in 1:1 correspondence, and that our design allows random access during decryption. So we can easily obtain the length of the plain text file and easily handle the other file system operations (the offset arithmetic is sketched below).
          Actually, if we put the “encryption” flag and IV in the namenode, then the length of the encrypted file equals the length of the plain text file. That would be great for HDFS, but many people may not like the idea of modifying the namenode inodes and code. Furthermore, CFS can decorate file systems other than HDFS, so we propose not to modify the namenode structure.
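          The offset arithmetic behind the 1:1 correspondence and the random access property can be sketched as follows (helper names are invented for illustration; a CTR-style mode with 16-byte AES blocks is assumed):

              // With a CTR-style mode, the counter block covering any plain text offset
              // is derived arithmetically from the IV, so decryption can start at an
              // arbitrary position without reading any earlier bytes.
              static final int AES_BLOCK_SIZE = 16;
              static final int IV_HEADER_LEN = 16;

              // Index of the cipher block that covers this plain text offset.
              static long counterIndex(long plainOffset) {
                return plainOffset / AES_BLOCK_SIZE;
              }

              // Position of the offset within that cipher block.
              static int offsetInBlock(long plainOffset) {
                return (int) (plainOffset % AES_BLOCK_SIZE);
              }

              // Physical position in the underlying encrypted file: everything is
              // shifted by the 16-byte IV header, then bytes correspond 1:1.
              static long physicalPosition(long plainOffset) {
                return plainOffset + IV_HEADER_LEN;
              }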

          >>> Is it that the cfs:// view is consistent across all file stat operations, seek() etc.?

          Right, it’s consistent. All of them refer to the plain text file, since upper layer applications should be unaware of the encryption, which is transparent.

          Furthermore, for du, df and other related file system commands: since length of encrypted file = length of original file + 16 bytes, “du” will count the plain text file size, which is consistent with the file size listed by “ls”, whereas “df” will count the encrypted file size.

          >>> I’m curious about how this interacts with quotas.

          This is a good question. HDFS quotas include name quotas and space quotas, and only space quotas need discussion here. As described above, the length of an encrypted file equals the length of the plain text file + 16 bytes, so an encrypted directory requires slightly more space than an unencrypted one. I don’t think this affects usage: when copying files from an unencrypted directory to an encrypted one, if the space quota is insufficient, we will prompt with a message like “The directory contains encrypted files; since 16 additional bytes are required per encrypted file, the space quota for the target directory is insufficient”.
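          The extra space is easy to bound; a one-line illustration of the arithmetic (the method name is invented):

              // Space needed in the encrypted target directory: the plain text total
              // plus one 16-byte IV header per file.
              static long requiredSpace(long totalPlainTextBytes, long fileCount) {
                return totalPlainTextBytes + 16L * fileCount;
              }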

          >>> Are all operations that are atomic today, e.g. renaming one directory under another going to remain atomic?

          It depends. If one directory is renamed under another and both the source and the target are unencrypted directories, the operation remains atomic. However, we do not intend to allow renaming an unencrypted directory to an encrypted one; instead, the user should create the encrypted directory first and then copy the files into it.

          Dilli Arumugam added a comment -

          Couple of questions:

          Use case:
          A client program has to read two files, File1 and File2.
          File1 is encrypted.
          File2 is not encrypted.

          How is the client supposed to choose plain HDFS protocol versus CFS?
          In other words, how would the client detect whether the file is encrypted?

          Would this play nicely with the hadoop command line?

          hadoop fs -cat File1
          hadoop fs -cat File2

          I am wondering whether we should consider adding metadata to the filesystem namespace: attributes such as encrypted:Boolean and encryptionKeyAlias:String. With this approach, the namenode could return these attributes to an authenticated and authorized client. The client could then look up the key from the key lookup service, passing the keyAlias. The key lookup service would perform the required authentication and authorization checks on the client before returning the key. This of course requires changes to core Hadoop and has to be weighed carefully against the pros and cons. A sketch of the flow follows.
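          (All types below are hypothetical illustrations of this proposal, not existing Hadoop API.)

              // Proposed per-file attributes returned by the namenode with the metadata.
              class EncryptionAttrs {
                final boolean encrypted;   // encrypted:Boolean
                final String keyAlias;     // encryptionKeyAlias:String
                EncryptionAttrs(boolean encrypted, String keyAlias) {
                  this.encrypted = encrypted;
                  this.keyAlias = keyAlias;
                }
              }

              interface KeyLookupService {
                // Authenticates and authorizes the caller before releasing the key.
                byte[] lookupKey(String keyAlias);
              }

              class KeyResolver {
                // The client resolves the key itself, using the alias from the namenode.
                static byte[] resolveKey(EncryptionAttrs attrs, KeyLookupService keys) {
                  return attrs.encrypted ? keys.lookupKey(attrs.keyAlias) : null;
                }
              }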

          Thanks

          Dilli Arumugam added a comment -

          Thanks Yi for the clarification that encryption/decryption happens at the client.

          That is good: the encryption key does not have to be propagated from the client into any other layer of Hadoop.
          Wait, you still have to propagate the encryption key into the mapper/reducer task to let it read the file from the file system. Right?

          Steve Loughran added a comment -

          I'm confused about one issue: is there going to be a difference between the listable length of a file (FileSystem.listStatus()) and the user-code visible length of a file:

          For example, if the decorated file system is HDFS, the file length stored in the namenode should be the length of the encrypted file, but when getting FileStatus through the CFS API in Map-Reduce, the file length should be the length of the decrypted file.

          Or is it that the cfs:// view is consistent across all file stat operations, seek() etc.?

          Either way, I'm curious about how this interacts with quotas. Presumably the HDFS quota for a specific storage tier applies. This could lead to some failures converting/copying a file from an unencrypted directory to an encrypted one.

          Finally, are all operations that are atomic today, e.g. renaming one directory under another, going to remain atomic?

          Yi Liu added a comment -

          Dilli, thanks for your interest.

          Encryption happens when upper layer applications use the OutputStream of CFS to write data, and decryption happens when upper layer applications use the InputStream of CFS to read data. In other words, encryption/decryption happens on the client side whenever upper layer applications use the Hadoop filesystem API. Since CFS can decorate file systems other than HDFS, encryption/decryption is not intended to be a datanode process or another process fronting the datanode.
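          For symmetry with the read path sketched earlier, a minimal sketch of the client-side write path as a fragment of the hypothetical CryptoFileSystem (CryptoOutputStream, kms, keyAliasFor and generateIv are invented names; the actual stream classes are defined by the patch):

              @Override
              public FSDataOutputStream create(Path f, boolean overwrite) throws IOException {
                FSDataOutputStream raw = fs.create(f, overwrite);
                byte[] key = kms.getKey(keyAliasFor(f));  // hypothetical KMS call
                raw.write(generateIv());                  // the 16-byte IV header from the design
                // Bytes are encrypted on the client before reaching the wrapped
                // filesystem, so no datanode or server-side process is involved.
                return new FSDataOutputStream(new CryptoOutputStream(raw, key), statistics);
              }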

          Dilli Arumugam added a comment -

          Reviewed the attached pdf.
          Sounds good and interesting.

          Could you clarify where exactly the encryption/decryption happens?

          Is it in the datanode process? Or do you have another process fronting the datanode to do encryption/decryption?

          Yi Liu added a comment -

          Add the design document.


            People

            • Assignee: Yi Liu
            • Reporter: Yi Liu
            • Votes: 0
            • Watchers: 58
