Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.98.0
    • Component/s: HFile, io
    • Labels:
      None
    • Hadoop Flags:
      Reviewed
    • Release Note:
      This change introduces a transparent encryption feature for protecting HFile and WAL data at rest. For detailed information, including configuration examples, see the Security section of the HBase manual.

      Description

      Introduce transparent encryption of HBase on disk data.

      Depends on a separate contribution of an encryption codec framework to Hadoop core and an AES-NI (native code) codec. This is work done in the context of MAPREDUCE-4491, but I gather there will be additional JIRAs for the common and HDFS parts of it.

      Requirements:

      • Transparent encryption at the CF or table level
      • Protect against all data leakage from files at rest
      • Two-tier key architecture for consistency with best practices for this feature in the RDBMS world
      • Built-in key management
      • Flexible and non-intrusive key rotation
      • Mechanisms not exposed to or modifiable by users
      • Hardware security module integration (via Java KeyStore)
      • HBCK support for transparently encrypted files (+ plugin architecture for HBCK)
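
      The two-tier requirement above can be illustrated with plain JCE key wrapping: a randomly generated data key (per table or CF) is persisted only after being encrypted under the master key, so rotating the master key means re-wrapping small key blobs rather than re-encrypting data files. Below is a minimal sketch using the standard "AESWrap" (RFC 3394) cipher; the class and method names are illustrative only, not the patch's actual EncryptionUtil API:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Arrays;

public class TwoTierKeySketch {

    // Encrypt ("wrap") a per-CF data key under the cluster master key.
    // Only the wrapped form would ever be written to disk.
    public static byte[] wrapKey(SecretKey masterKey, SecretKey dataKey) throws Exception {
        Cipher c = Cipher.getInstance("AESWrap"); // RFC 3394 AES key wrap
        c.init(Cipher.WRAP_MODE, masterKey);
        return c.wrap(dataKey);
    }

    // Recover the data key from its wrapped form using the master key.
    public static SecretKey unwrapKey(SecretKey masterKey, byte[] wrapped) throws Exception {
        Cipher c = Cipher.getInstance("AESWrap");
        c.init(Cipher.UNWRAP_MODE, masterKey);
        return (SecretKey) c.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey master = kg.generateKey();
        SecretKey data = kg.generateKey();

        byte[] wrapped = wrapKey(master, data);
        SecretKey recovered = unwrapKey(master, wrapped);
        System.out.println(Arrays.equals(data.getEncoded(), recovered.getEncoded()));
    }
}
```

      Note that rotating the master key under this scheme only requires unwrapping each data key with the old master key and re-wrapping it with the new one, which is what makes rotation "flexible and non-intrusive".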

      Additional goals:

      • Shell support for administrative functions
      • Avoid performance impact for the null crypto codec case
      • Play nicely with other changes underway: HFile format, block encoding, etc.

      We're aiming for rough parity with Oracle's transparent tablespace encryption feature, described in http://www.oracle.com/technetwork/database/owp-security-advanced-security-11gr-133411.pdf as

      “Transparent Data Encryption uses a 2-tier key architecture for flexible and non-intrusive key rotation and least operational and performance impact: Each application table with at least one encrypted column has its own table key, which is applied to all encrypted columns in that table. Equally, each encrypted tablespace has its own tablespace key. Table keys are stored in the data dictionary of the database, while tablespace keys are stored in the header of the tablespace and additionally, the header of each underlying OS file that makes up the tablespace. Each of these keys is encrypted with the TDE master encryption key, which is stored outside of the database in an external security module: either the Oracle Wallet (a PKCS#12 formatted file that is encrypted using a passphrase supplied either by the designated security administrator or DBA during setup), or a Hardware Security Module (HSM) device for higher assurance […]”
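
      The keystore-backed master key storage described above maps onto the standard java.security.KeyStore API (a JCEKS keystore can hold secret keys, and HSMs can be fronted through the same interface via a PKCS#11 provider). A minimal sketch of the store/load round trip a KeyProvider implementation would perform; the alias and password handling here are assumptions for illustration, not the patch's KeyStoreKeyProvider code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyStoreSketch {
    static final String ALIAS = "cluster-master-key"; // hypothetical alias

    // Serialize a JCEKS keystore containing the master key entry.
    public static byte[] storeKey(SecretKey key, char[] pass) throws Exception {
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, pass); // initialize an empty keystore
        ks.setEntry(ALIAS, new KeyStore.SecretKeyEntry(key),
                    new KeyStore.PasswordProtection(pass));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, pass);
        return out.toByteArray();
    }

    // Reload the keystore and fetch the master key by alias.
    public static SecretKey loadKey(byte[] keystoreBytes, char[] pass) throws Exception {
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(new ByteArrayInputStream(keystoreBytes), pass);
        return (SecretKey) ks.getKey(ALIAS, pass);
    }

    public static void main(String[] args) throws Exception {
        char[] pass = "changeit".toCharArray();
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey master = kg.generateKey();

        byte[] stored = storeKey(master, pass);
        SecretKey fetched = loadKey(stored, pass);
        System.out.println(java.util.Arrays.equals(master.getEncoded(), fetched.getEncoded()));
    }
}
```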

      Further design details forthcoming in a design document and patch as soon as we have all of the clearances in place.

      1. historical-7544.patch
        259 kB
        Andrew Purtell
      2. historical-7544.pdf
        1.06 MB
        Andrew Purtell
      3. historical-shell.patch
        4 kB
        Andrew Purtell
      4. 7544p2.patch
        38 kB
        Andrew Purtell
      5. 7544p3.patch
        68 kB
        Andrew Purtell
      6. 7544p1.patch
        79 kB
        Andrew Purtell
      7. 7544p1.patch
        81 kB
        Andrew Purtell
      8. 7544p2.patch
        43 kB
        Andrew Purtell
      9. 7544p3.patch
        93 kB
        Andrew Purtell
      10. 7544p4.patch
        54 kB
        Andrew Purtell
      11. 7544.patch
        259 kB
        Andrew Purtell
      12. 7544.patch
        267 kB
        Andrew Purtell
      13. 7544.patch
        314 kB
        Andrew Purtell
      14. 7544.patch
        313 kB
        Andrew Purtell
      15. 7544.patch
        312 kB
        Andrew Purtell
      16. 7544.patch
        313 kB
        Andrew Purtell
      17. latency-single.7544.xlsx
        121 kB
        Andrew Purtell
      18. 7544-final.patch
        309 kB
        Andrew Purtell
      19. 7544-addendum-1.patch
        0.6 kB
        Andrew Purtell

        Issue Links

          Activity

          Andrew Purtell added a comment -

          For an upcoming talk on security features I went back and looked at the impact of WAL encryption on more recent JVMs, after the changes to the WAL threading model that went into 0.98+. For now I had to resort to a dual-core mobile CPU with hyperthreading from ~2010 (with cpufreq locked at max), since Amazon HVMs don't give access to the hardware performance counters, but I plan to retest on bare Haswell server hardware.

          Three runs averaged, HLogPerformanceEvaluation -keySize 50 -valueSize 100 -threads 100 -iterations 1000000 ( -encryption AES )
          VM flags: -XX:+UseG1GC -XX:+UseAES -XX:+UseAESIntrinsics (AES flags where supported)

          Test                                           Throughput (ops/sec)   Total cycles    Insns per cycle
          Oracle Java 1.7.0_45-b18, no encryption        52658.302              8878179986750   0.47
          Oracle Java 1.7.0_45-b18, AES WAL encryption   48045.834              9911748458387   0.57
          OpenJDK 1.8.0_20-b09, no encryption            54874.125              8662634367005   0.46
          OpenJDK 1.8.0_20-b09, AES WAL encryption       50659.507              9668111259270   0.61

          What is interesting is the relative difference of the later test cases from the first. Although encryption by definition adds work per edit, in this microbenchmark the throughput of 8u20 with WAL encryption and AES intrinsics enabled is only ~4% below 7u45 with no WAL encryption, thanks to native code generation improvements on AES-NI capable hardware. Ops/sec measurements vary ~1.5% from run to run.
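
          For context, the WAL encryption being measured is enabled purely through configuration. The hbase-site.xml fragment below shows the shape of such a setup, per the 0.98 security documentation; the key provider and its parameters are deployment-specific and shown only as an example choice:

```xml
<!-- Use the secure WAL reader/writer and encrypt WAL entries -->
<property>
  <name>hbase.regionserver.hlog.reader.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader</value>
</property>
<property>
  <name>hbase.regionserver.hlog.writer.impl</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter</value>
</property>
<property>
  <name>hbase.regionserver.wal.encryption</name>
  <value>true</value>
</property>
<!-- Master key material is resolved through a KeyProvider,
     e.g. one backed by a Java KeyStore -->
<property>
  <name>hbase.crypto.keyprovider</name>
  <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
</property>
```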

          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #853 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/853/)
          Amend HBASE-7544. Fix javadoc typo for Cipher#createDecryptionStream (apurtell: rev 1545790)

          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Cipher.java
          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK #4699 (See https://builds.apache.org/job/HBase-TRUNK/4699/)
          Amend HBASE-7544. Fix javadoc typo for Cipher#createDecryptionStream (apurtell: rev 1545790)

          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Cipher.java
            HBASE-7544. Transparent CF encryption (apurtell: rev 1545536)
          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/security/EncryptionUtil.java
          • /hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/security
          • /hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/security/TestEncryptionUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Cipher.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/CipherProvider.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Context.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Decryptor.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/DefaultCipherProvider.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryption.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryptor.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/KeyProvider.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/KeyStoreKeyProvider.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/aes
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/aes/AES.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/aes/AESDecryptor.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/aes/AESEncryptor.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultEncodingContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/KeyProviderForTesting.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestCipherProvider.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestEncryption.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyProvider.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyStoreKeyProvider.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/aes
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/aes/TestAES.java
          • /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngestWithEncryption.java
          • /hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/EncryptionProtos.java
          • /hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HFileProtos.java
          • /hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
          • /hbase/trunk/hbase-protocol/src/main/protobuf/Encryption.proto
          • /hbase/trunk/hbase-protocol/src/main/protobuf/HFile.proto
          • /hbase/trunk/hbase-protocol/src/main/protobuf/WAL.proto
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV3.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogReader.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureProtobufLogReader.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureProtobufLogWriter.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureWALCellCodec.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WriterBase.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/RandomSeek.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestReseekTo.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionKeyRotation.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionRandomKeying.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/HLogPerformanceEvaluation.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogWriter.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestSecureHLog.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestSecureWALReplay.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckEncryption.java
          • /hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb
          Andrew Purtell added a comment - edited

          Committed trivial javadoc typo fix. Attached as '7544-addendum-1.patch'. Thanks for spotting it, Ted.

          Ted Yu added a comment -

          From https://builds.apache.org/job/PreCommit-HBASE-Build/7997/artifact/trunk/patchprocess/patchJavadocWarnings.txt :

          [WARNING] /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Cipher.java:119: warning - @param argument "encryptor" is not a parameter name.
          
          Hudson added a comment -

          SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #852 (See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/852/)
          HBASE-7544. Transparent CF encryption (apurtell: rev 1545536)

          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
          • /hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/security/EncryptionUtil.java
          • /hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/security
          • /hbase/trunk/hbase-client/src/test/java/org/apache/hadoop/hbase/security/TestEncryptionUtil.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Cipher.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/CipherProvider.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Context.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Decryptor.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/DefaultCipherProvider.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryption.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryptor.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/KeyProvider.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/KeyStoreKeyProvider.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/aes
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/aes/AES.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/aes/AESDecryptor.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/aes/AESEncryptor.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultDecodingContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultEncodingContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java
          • /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/KeyProviderForTesting.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestCipherProvider.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestEncryption.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyProvider.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyStoreKeyProvider.java
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/aes
          • /hbase/trunk/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/aes/TestAES.java
          • /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngestWithEncryption.java
          • /hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/EncryptionProtos.java
          • /hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HFileProtos.java
          • /hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/WALProtos.java
          • /hbase/trunk/hbase-protocol/src/main/protobuf/Encryption.proto
          • /hbase/trunk/hbase-protocol/src/main/protobuf/HFile.proto
          • /hbase/trunk/hbase-protocol/src/main/protobuf/WAL.proto
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV3.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogReader.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureProtobufLogReader.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureProtobufLogWriter.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureWALCellCodec.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WriterBase.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java
          • /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/RandomSeek.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileSeek.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestReseekTo.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionKeyRotation.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionRandomKeying.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/HLogPerformanceEvaluation.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/SequenceFileLogWriter.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestSecureHLog.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestSecureWALReplay.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/LoadTestTool.java
          • /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckEncryption.java
          • /hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb
          Andrew Purtell added a comment -

          Got a good run on Hadoop 2 here: http://jenkins-public.iridiant.net/job/HBase-TRUNK-Hadoop-2/529/testReport/
          Seems like an unrelated failure (TestLogRolling.testLogRollOnDatanodeDeath) on Hadoop 1 here: http://jenkins-public.iridiant.net/job/HBase-TRUNK/535/testReport/
          Running again on Hadoop 1 here: http://jenkins-public.iridiant.net/job/HBase-TRUNK/536/testReport/

          Andrew Purtell added a comment -

          I resolved this as fixed after commit but will of course reopen for addendums if Jenkins is unhappy. Running some trunk builds on EC2 Jenkins now as well.

          Andrew Purtell added a comment -

          Checked the QA output. There were no test failures or zombies. No test timed out either though a couple executions overlapped. No test failures locally. Committing. Let's see what happens.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12615745/7544-final.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 99 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 2 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 site. The patch appears to cause mvn site goal to fail.

          -1 core tests. The patch failed these unit tests:

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7992//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7992//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7992//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7992//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7992//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7992//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7992//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7992//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7992//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7992//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7992//console

          This message is automatically generated.

          Andrew Purtell added a comment -

          Final patch. Passes all unit and integration tests locally. Submitting to HadoopQA.

          Andrew Purtell added a comment -

          Client-observed latency analysis, parallel minicluster load test

          Andrew Purtell added a comment -

          I have addressed the latest round of review comments, and rebased on latest trunk. Running unit tests now. Will submit for HadoopQA if the results are good locally and then commit after. Thanks Ram, Anoop, and Stack for your reviews.

          Andrew Purtell added a comment -

          The WAL encryption is per WAL file. Selective edit encryption on a per family basis is future work. I haven't done it initially to keep things simple for the first cut.
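The per-WAL-file keying described above follows the two-tier scheme from the issue description: each file gets its own random data key, wrapped by a master key and carried in the file header. The sketch below illustrates that idea using standard JCE key wrapping only; the class and method names are illustrative and are not the actual HBase API (HBase's own key wrapping lives in EncryptionUtil and differs in detail).

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Conceptual sketch of two-tier keying: a fresh data key per file,
// wrapped by a master key for storage in the file header.
public class PerFileKeying {

    // Wrap (encrypt) the per-file data key with the master key.
    public static byte[] wrapDataKey(SecretKey masterKey, SecretKey dataKey) throws Exception {
        Cipher cipher = Cipher.getInstance("AESWrap");
        cipher.init(Cipher.WRAP_MODE, masterKey);
        return cipher.wrap(dataKey);
    }

    // Unwrap (decrypt) the data key read back from a file header.
    public static SecretKey unwrapDataKey(SecretKey masterKey, byte[] wrapped) throws Exception {
        Cipher cipher = Cipher.getInstance("AESWrap");
        cipher.init(Cipher.UNWRAP_MODE, masterKey);
        return (SecretKey) cipher.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey master = kg.generateKey();
        SecretKey perFile = kg.generateKey();   // a fresh key per WAL file

        byte[] headerBlob = wrapDataKey(master, perFile);     // stored in the header
        SecretKey recovered = unwrapDataKey(master, headerBlob);

        System.out.println(Arrays.equals(recovered.getEncoded(), perFile.getEncoded()));
    }
}
```

Because only the wrapped key changes per file, rotating the master key means re-wrapping small header blobs rather than re-encrypting file data, which is what makes key rotation non-intrusive.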

          ramkrishna.s.vasudevan added a comment -

          Andrew Purtell
          Added some small review comments on the RB.
          Is the WAL encryption per CF or per WAL file? I'm asking because we write the encryption info in the WAL header, right? Sorry if I am missing something here.

          Andrew Purtell added a comment -

          Thanks Anoop Sam John. The latest patch is up on a test cluster for performance and stability analysis. Looking good. Will commit and post the results of same later this week.

          Anoop Sam John added a comment -

          +1 for an updated patch addressing minor comments in RB. Great work Andy!

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12613751/7544.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 82 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 3 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7855//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7855//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7855//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7855//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7855//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7855//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7855//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7855//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7855//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7855//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7855//console

          This message is automatically generated.

          Andrew Purtell added a comment -

          Updated patch rebased on latest trunk, addressing review comments; it also supports alternate SecureRandom providers for AES via Configuration.
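Selecting an alternate SecureRandom source by name, as the comment above describes, can be sketched with plain JCA lookups. This is a minimal illustration of the mechanism, not the patch's actual code; the configuration key name shown is hypothetical.

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

// Sketch: resolve a SecureRandom implementation from a configured algorithm
// name, as a Configuration-driven crypto codec might. Falls back to the
// platform default when the named algorithm is unavailable.
public class RngSelection {

    // Hypothetical config key, for illustration only.
    static final String RNG_ALGORITHM_KEY = "crypto.algorithm.rng";

    static SecureRandom selectRng(String algorithm) {
        try {
            return SecureRandom.getInstance(algorithm);
        } catch (NoSuchAlgorithmException e) {
            // Unknown algorithm name: use the platform default instead.
            return new SecureRandom();
        }
    }

    public static void main(String[] args) {
        // "SHA1PRNG" ships with standard JDKs; a site could configure
        // a hardware-backed provider's algorithm name here instead.
        SecureRandom rng = selectRng("SHA1PRNG");
        byte[] iv = new byte[16];   // e.g. a fresh AES initialization vector
        rng.nextBytes(iv);
        System.out.println("generated " + iv.length + " random bytes");
    }
}
```

Making the RNG pluggable this way lets deployments swap in hardware or FIPS-certified entropy sources without code changes.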

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12612322/7544.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 82 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 2 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 5 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 site. The patch appears to cause mvn site goal to fail.

          +1 core tests. The patch passed unit tests in .

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7745//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7745//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7745//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7745//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7745//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7745//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7745//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7745//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7745//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7745//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7745//console

          This message is automatically generated.

          Andrew Purtell added a comment -

          Remove an unwanted change in TestStripeCompactor and resubmit.

I checked the FindBugs report here, and locally prior to patch submission, and didn't see new items on account of this patch.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12612313/7544.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 82 new or modified tests.

          +1 hadoop1.0. The patch compiles against the hadoop 1.0 profile.

          +1 hadoop2.0. The patch compiles against the hadoop 2.0 profile.

          -1 javadoc. The javadoc tool appears to have generated 2 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          -1 findbugs. The patch appears to introduce 5 new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 lineLengths. The patch does not introduce lines longer than 100

          -1 site. The patch appears to cause mvn site goal to fail.

          -1 core tests. The patch failed these unit tests:
          org.apache.hadoop.hbase.regionserver.TestStripeCompactor

          Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/7742//testReport/
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7742//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7742//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7742//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7742//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7742//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7742//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7742//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7742//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
          Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/7742//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
          Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/7742//console

          This message is automatically generated.

          Andrew Purtell added a comment -

Fix a FindBugs warning and kick off Hadoop QA.

          Andrew Purtell added a comment -

          Changes in latest patch:

          • Support random HFile keying.
          • Check the results of key unwrapping with a CRC.
          • Introduce a new configuration value for holding an alternate master key alias. If the current master key fails to unwrap, and an alternate is available, try it. Allows for gradual master key rotation.
          • Plumb configuration down to the HFileV3 reader so we can avoid parsing the site file when creating a reader.
          • Update KeyProviderForTesting for verifying configuration plumbing.
          • Additional test cases and changes to existing test cases to confirm new functionality.

          No more changes are planned.
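The CRC check on key unwrapping and the alternate-master-key fallback described above can be sketched as follows. This is a toy illustration, not the actual HBase API: a simple XOR "cipher" stands in for the real key-wrap algorithm, and all function and variable names here are invented.

```python
import zlib

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real wrap cipher such as AES.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def wrap_key(cf_key: bytes, master_key: bytes) -> bytes:
    # Append a CRC32 of the plaintext key so unwrapping can be verified.
    crc = zlib.crc32(cf_key).to_bytes(4, "big")
    return xor_bytes(cf_key + crc, master_key)

def unwrap_key(wrapped: bytes, master_key: bytes, alternate_key: bytes = None) -> bytes:
    # Try the current master key first; if the CRC check fails and an
    # alternate master key is configured, try that one before giving up.
    for key in [k for k in (master_key, alternate_key) if k is not None]:
        plain = xor_bytes(wrapped, key)
        cf_key, crc = plain[:-4], plain[-4:]
        if zlib.crc32(cf_key).to_bytes(4, "big") == crc:
            return cf_key  # CRC matched: this master key unwrapped it
    raise ValueError("no configured master key could unwrap the CF key")

old_master, new_master = b"old-master-key", b"new-master-key"
wrapped = wrap_key(b"per-family-key-16", old_master)
# After rotation the current master key fails the CRC check, but the
# alternate (previous) master key succeeds: gradual rotation works.
recovered = unwrap_key(wrapped, new_master, alternate_key=old_master)
```

The CRC here is an integrity check on the unwrap result, not a security mechanism; it simply distinguishes "wrong master key" from "correct master key" without needing a trial decryption of actual data.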

          Andrew Purtell added a comment -

          Posted updated patch to RB.

          Adds a missing license header.

Adds shell support for testing this feature: new CF attributes 'ENCRYPTION' (the algorithm name, as a string) and 'ENCRYPTION_KEY' (a string that will be hashed into a 128-bit key).

          Adds caching of instantiated key providers.

          Adds LoadTestTool support for enabling transparent encryption on a CF.
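The "hashed into a 128-bit key" step can be sketched like this. Assumption flagged: the comment does not say which hash is used; MD5 is shown only because it is a natural 128-bit digest, and the function name is invented for illustration.

```python
import hashlib

def key_from_passphrase(passphrase: str) -> bytes:
    # Hash the ENCRYPTION_KEY string down to 16 bytes (128 bits).
    # MD5 is assumed here purely because its digest is 128 bits wide.
    return hashlib.md5(passphrase.encode("utf-8")).digest()

key = key_from_passphrase("my cf secret")
assert len(key) == 16  # 128 bits, suitable as an AES-128 key
```

In the shell this would then look something like `create 't1', {NAME => 'cf', ENCRYPTION => 'AES', ENCRYPTION_KEY => 'my cf secret'}` (attribute names per the comment above; the exact shell syntax is illustrative).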

          Andrew Purtell added a comment -

          Attached patch '7544.patch' pulls it all together with a new integration test and bug fixes. Also see review https://reviews.apache.org/r/14769/

I've left the split-out patches attached, as they still illustrate how the changes are grouped.

          Andrew Purtell added a comment -

          Rebase against latest trunk.

Patch '7544p4.patch' adds an encrypting protobuf WAL. It is currently missing support for dictionary compression, but I will add that after more testing. Another easy planned addition here is selective encryption of only the WALEdits for encrypted families.

Added some unit tests, notably one confirming that if hbck is run in the secure enclave with access to key material (implicitly using the same configuration as the regionservers), it can handle encrypted HFiles.

Also note that an ASL-licensed open source accelerated JCE codec for AES in CTR mode is available at https://github.com/intel-hadoop/project-diceros . It will be used if installed and hbase.crypto.algorithm.aes.provider="DC". This is not required for HBASE-7544, but it will substantially reduce the latency and CPU cost introduced by encryption compared to the default AES codec that ships with the Oracle/OpenJDK JRE.
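For context on why a CTR-mode codec fits here: CTR turns a block transform into a seekable stream cipher in which encryption and decryption are the same XOR operation, so blocks can be decrypted independently. A stdlib-only sketch of the mode's shape follows; SHA-256 stands in for AES (Python's standard library has no AES), so this shows the construction, not the Diceros implementation.

```python
import hashlib

def ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Keystream = keyed function of (nonce, counter), counter incrementing
    # per block. SHA-256 is a toy stand-in for the AES block transform.
    out = b""
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out += block
        counter += 1
    return out[:length]

def ctr_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call both encrypts and decrypts.
    ks = ctr_keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

ct = ctr_crypt(b"key", b"nonce-01", b"hfile block bytes")
# CTR is symmetric: applying the same keystream again decrypts.
assert ctr_crypt(b"key", b"nonce-01", ct) == b"hfile block bytes"
```

Because each counter block is computed independently, CTR also parallelizes well, which is what makes an accelerated (AES-NI) implementation pay off.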

          Andrew Purtell added a comment -

          Additional patches will be forthcoming shortly for WAL encryption and the remaining bits of the historical work not yet ported over.

          Andrew Purtell added a comment -

Provisional patch '7544p3', which depends on uncommitted work on HBASE-8496 (HFile V3), implements HFile V3 reader and writer support for transparently encrypted HFiles using the new cipher framework in hbase-common. It still needs more unit tests, shell support, cluster testing, and the performance impact evaluation that was done for the historical patch. This work is ongoing; I am attaching the patch now so you can get a sense of the work if you are so inclined.

          Andrew Purtell added a comment -

Patch '7544p2' introduces new protos for encryption, plus support in hbase-client (which hbase-server will also have available, pulled in as a dependency) for protecting key material in an algorithm-agnostic way. This is used for protecting CF keys in table schema and in HFiles.
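A hypothetical sketch of what an algorithm-agnostic wrapped-key message could look like; every field name below is invented for illustration, and the actual .proto in the patch may differ.

```proto
// Illustrative only -- not the actual HBase Encryption.proto.
message WrappedKey {
  required string algorithm = 1;  // e.g. "AES"; keeps the wrapper cipher-agnostic
  required uint32 length = 2;     // plaintext key length in bits
  required bytes data = 3;        // key bytes encrypted with the master key
  optional bytes iv = 4;          // IV used by the wrapping cipher, if any
  optional uint32 hash = 5;       // integrity check over the plaintext key
}
```

Carrying the algorithm name and an integrity check alongside the encrypted key bytes is what lets the schema and HFile metadata store CF keys without hard-coding a particular cipher.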

          Andrew Purtell added a comment -

          Patch '7544p1' provides an encryption scaffold and AES algorithm support in hbase-common.

The API design on the HBase side is similar to the historical patch, but we were not bound by any legacy, so where it could be improved it has been improved. I'm much happier with this result; there are no dependencies on anything but standard Java security APIs. This patch wraps and hides the JCE javax.crypto.Cipher type so as to allow future addition of HBase-optimized/accelerated encryption algorithm implementations not based on the JCE, which in the case of at least the Oracle JRE requires encryption algorithm providers to be signed with a restricted code signing key not obtainable by an open source project. (Unsigned JCE providers are allowed by the OpenJDK JRE, but OpenJDK is not recommended for production.) The name of this type can be trivially changed if there is a concern about how it hides javax.crypto.Cipher. I initially used the name 'Algorithm', but that seemed too generic and might be confused with Compression.Algorithm.

          Implementing a native optimized/accelerated AES cipher for HBase is ongoing work that should be completed shortly.

          Andrew Purtell added a comment -

          Reattaching previous work as files named 'historical-*'. On account of radio silence over in Hadoop on a proposed encryption algorithm framework for Hadoop common, I have redone this work to remove any external dependencies, based on HFile v3 and targeting 0.98.

          Andrew Purtell added a comment -

I recently added some simple shell support for testing this to 0.94-ish code; dropping the patch here for later. It should work modulo a minor fix-up. A proper new patch requires a fair amount of rebasing.

I'm not sure what the long-term disposition of the Hadoop-side patches this issue depends on will be. It might be worth trying this with JRE ciphers.

          Andrew Purtell added a comment -

          I think another improvement, once HBASE-5699 is in place, is to only encrypt WAL for table(s) where encryption is turned on.

Thanks for the feedback. BTW, I did try this already: encrypting WALEdits case by case, instead of using SequenceFile record encryption (aka "compression"), currently hurts performance significantly. Agreed that with HBASE-5699 it may be possible to segregate WALEdits for tables with encryption turned on into an encrypted container while leaving the others alone.

          Ted Yu added a comment (edited) -

          Future work is planned on optimizing WAL encryption.

          I think another improvement, once HBASE-5699 is in place, is to only encrypt WAL for table(s) where encryption is turned on.

          Andrew Purtell added a comment -

          Updated patch rebased to current trunk (SVN r1457042)

          Andrew Purtell added a comment -

          w.r.t. defining hadoop-two.version as 2.0.4-SNAPSHOT, I proposed doing this for trunk over in HBASE-7904
          What do you think ?

          IIRC I changed my vote to +0 on that.
I'd rather see someone who knows Maven better than I do fix the build, so that you can specify a hadoop-two version on the command line rather than only in the root POM.

          Andrew Purtell added a comment -

I still couldn't get past the compilation error. This is because a change similar to the following is missing in TestStoreFile.java at line 602.

          Thanks for trying it out. The build is passing here. Let me make a new patch, maybe the one attached to this issue is incorrect.

          Ted Yu added a comment -

          w.r.t. defining hadoop-two.version as 2.0.4-SNAPSHOT, I proposed doing this for trunk over in HBASE-7904

          What do you think ?

          Ted Yu added a comment -

I still couldn't get past the compilation error.
This is because a change similar to the following is missing in TestStoreFile.java at line 602.

          diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
          index 81a313a..4225771 100644
          --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
          +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
          ...
          @@ -478,7 +474,7 @@ public class TestStoreFile extends HBaseTestCase {
               writer.close();
          
               StoreFile.Reader reader = new StoreFile.Reader(fs, f, cacheConf,
          -        DataBlockEncoding.NONE);
          +        DataBlockEncoding.NONE, null);
          

          Will try out the above Hadoop branch-2 git.

          Andrew Purtell added a comment -

          w.r.t. the compilation error

A 'mvn -Dcrypto -Dhadoop.profile=2.0 -DskipTests clean install' to get everything in place in the local Maven cache, followed by a 'mvn -Dcrypto -Dhadoop.profile=2.0 test -Dtest=TestHFileEncryption', works for me here.

Almost forgot: I also have to modify the root pom.xml to define hadoop-two.version as 2.0.4-SNAPSHOT. For some reason defining '-Dhadoop-two.version=2.0.4-SNAPSHOT' on the Maven command line doesn't do what I expect; Maven will still select 2.0.2-alpha as specified in the POM when building hbase-server. I raised this issue on dev@ but IIRC Nick said it worked for him, so I don't know what to make of that.

          Andrew Purtell added a comment -

          w.r.t. the compilation error

          A 'mvn -Dcrypto -Dhadoop.profile=2.0 -DskipTests clean install' to get everything in place in the local Maven cache followed by a 'mvn -Dcrypto -Dhadoop.profile=2.0 test -Dtest=TestHFileEncryption' works for me here.

          (I am using a locally patched version of Hadoop branch-2 with crypto support. You can get it at https://github.com/intel-hadoop/hadoop-common-rhino/tree/branch-2. Be sure to compile Hadoop with -Pnative and include the directory holding the newly built libhadoop.so in LD_LIBRARY_PATH or TestHFileEncryption won't pass.)

          In hbase-common/pom.xml

Yes, that should be -Dcrypto. The 'crypto' or 'nocrypto' profiles are activated depending on whether that property is defined. If you have some thoughts on how this could be done better with Maven, that would be great. Unfortunately a separate module isn't feasible because of the changes in hbase-server. Originally I didn't do any of this Maven hacking, I just used reflection in Encryption.java, but I don't want to do it that way because reflection is brittle and slow. I also need to use a different constructor for the WALReader in SequenceFileLogReader. (This is because HBase uses the deprecated Hadoop 1 style constructors for SequenceFile, and the crypto support for SequenceFile, when using those constructors, requires a crypto context at instantiation.) So at a minimum I had to separate out a crypto-enabled SequenceFileLogReader from a stock SequenceFileLogReader. This difference would be an excellent candidate for a hadoop-compat module as soon as there's a Hadoop version suitable for targeting.

          Ted Yu added a comment -

w.r.t. the compilation error, it was due to the last parameter missing from the ctor:

              public Reader(FileSystem fs, Path path, CacheConfig cacheConf,
                  DataBlockEncoding preferredEncodingInCache,
                  Encryption.Context cryptoContext) throws IOException {
          

          I am a little confused by the new crypto profile. In hbase-common/pom.xml:

          +      Profile for building against a crypto enabled Hadoop. Activate using:
          +       mvn -Pcrypto
          +    -->
          +    <profile>
          +      <id>crypto</id>
          
          Andrew Purtell added a comment -

          I guess you meant '-Pcrypto' above.

          No I meant -Dcrypto.

          The patch applies cleanly on trunk, however: [Compilation failure because artifacts with API changes are not in the local Maven cache]

          Looks like artifacts with the API changes in the patch are not in the local Maven cache. Can't help you there.

          If patch is ready for review

          I'll be maintaining this out of tree until org.apache.hadoop.io.crypto is available somewhere downstream.

          Does it make sense to extract classes under hbase-common/crypto into their own module ?

          No, I don't think a separate module makes sense. There are a lot of changes in hbase-server in order to make use of crypto in block encoding contexts. The Maven profile is a transitional approach to avoiding compilation problems when building against a Hadoop without crypto support. I think once there is a Hadoop with org.apache.hadoop.io.crypto available – perhaps this will be 3.0 aka trunk – then it probably makes sense to move o.a.h.h.io.crypto.Encryption into a new hadoop-compat module, and to use a new factory in that compat module to instantiate SequenceFile readers and writers for HLog.

          Ted Yu added a comment -

          In the patch, I saw the following new files:

          hbase-common/crypto/main/with/java/org/apache/hadoop/hbase/io/crypto/Encryption.java
          hbase-common/crypto/main/without/java/org/apache/hadoop/hbase/io/crypto/Encryption.java

          Looking at the javadoc for the above classes, I don't see much difference. I guess the second file is for when org.apache.hadoop.io.crypto is not available.
          Does it make sense to extract classes under hbase-common/crypto into their own module ?

          Ted Yu added a comment -

          if a new 'crypto' profile is selected via -Dcrypto

          I guess you meant '-Pcrypto' above.

          The patch applies cleanly on trunk, however:

          [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile (default-testCompile) on project hbase-server: Compilation failure
          [ERROR] /Users/tyu/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java:[602,30] cannot find symbol
          [ERROR] symbol  : constructor Reader(org.apache.hadoop.fs.FileSystem,org.apache.hadoop.fs.Path,org.apache.hadoop.hbase.io.hfile.CacheConfig,org.apache.hadoop.hbase.io.encoding.DataBlockEncoding)
          

          If patch is ready for review, maybe put it on review board ?

          Thanks

          Andrew Purtell added a comment -

          Updated patch and document.

          Instead of using reflection in the Encryption facade to avoid compilation failures if org.apache.hadoop.io.crypto is not available, now sources that reference that package are conditionally included in the generate-sources phase if a new 'crypto' profile is selected via -Dcrypto. Reflection is no longer required so is removed. Use of the build-helper plugin in this way would be transitional. Also updated to use AES-128 for WAL encryption instead of AES-256.
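
          The conditional inclusion described above is the kind of thing the build-helper plugin's add-source goal does; a hedged sketch of what the profile-specific wiring might look like (the source paths are taken from the patch, the plugin configuration itself is illustrative):

          ```xml
          <!-- Sketch: inside the 'crypto' profile, attach the crypto-enabled
               sources during generate-sources. The 'nocrypto' profile would
               add the .../without/java tree instead. Illustrative only. -->
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>build-helper-maven-plugin</artifactId>
            <executions>
              <execution>
                <id>add-crypto-source</id>
                <phase>generate-sources</phase>
                <goals>
                  <goal>add-source</goal>
                </goals>
                <configuration>
                  <sources>
                    <source>${basedir}/crypto/main/with/java</source>
                  </sources>
                </configuration>
              </execution>
            </executions>
          </plugin>
          ```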

          Andrew Purtell added a comment - - edited

          Feedback from the Feb 28 HUG: Row key data may leak into encoded region names in META and in ZooKeeper znodes. We have not addressed this yet, mainly because of the challenge of dealing with META. It should be straightforward to encrypt znode data on write and decrypt on read. For META we cannot change the region name encoding without disrupting sort order. The solution for obscuring on disk META data is for the admin to enable encryption on the META table (and for HBase to support META schema configuration changes).

          We may simply want to clearly document that constructing row keys with sensitive data should be avoided, as it may leak among users of the system.

          Transparent encryption does not address protecting the data of one user from another; that is outside the scope of this JIRA. To address that other use case, we might propose HTable support for compression codecs applied to mutation data. Aside from being useful for transparent compression, encryption codecs can stand in for compression codecs, so the user can at their option encrypt keys and data. (It's an application concern, so HTable support for this would be convenient but not essential.) Encrypting keys will have obvious consequences that should still be documented.
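
          Since that use case is an application concern, a client could already encrypt cell values itself before handing them to HBase. A minimal sketch using plain javax.crypto — not HBase API, and AES/ECB is used here only for brevity; a real application should use an authenticated mode and proper key management:

          ```java
          // Sketch: application-side encryption of a cell value before it is
          // written to HBase. Illustrative only; not part of the patch.
          import javax.crypto.Cipher;
          import javax.crypto.KeyGenerator;
          import javax.crypto.SecretKey;
          import java.util.Arrays;

          public class ClientSideCrypto {
              public static void main(String[] args) throws Exception {
                  KeyGenerator kg = KeyGenerator.getInstance("AES");
                  kg.init(128); // AES-128, matching the WAL choice above
                  SecretKey key = kg.generateKey();

                  byte[] value = "sensitive cell value".getBytes("UTF-8");

                  Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding");
                  enc.init(Cipher.ENCRYPT_MODE, key);
                  byte[] ciphertext = enc.doFinal(value); // what the Put would carry

                  Cipher dec = Cipher.getInstance("AES/ECB/PKCS5Padding");
                  dec.init(Cipher.DECRYPT_MODE, key);
                  byte[] roundTrip = dec.doFinal(ciphertext);

                  System.out.println(Arrays.equals(value, roundTrip)); // true
              }
          }
          ```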

          Andrew Purtell added a comment -

          Attached 7554.patch. This depends on changes to Hadoop Common not yet in tree and is intended to be informational only at this time.

          Andrew Purtell added a comment -

          I have a WIP patch that might be in good enough shape to drop soon. However, I would like to solicit opinion on something in advance:

          This work builds on a crypto codec framework to be submitted to the Hadoop Common and MapReduce projects. It will be maintained out of tree as a patch or on a feature branch until those APIs show up downstream (on the assumption that will happen eventually). Even so, there will be a period of time where some versions of Hadoop will have new APIs and some won't. There will probably be a request to backport from trunk to Hadoop 2.0, but I won't speculate on outcome. I can put code which refers to the new APIs in what would become part of a new hadoop-compat module (for Hadoop 3.0), or handle all of the instantiations with reflection to account for changes which may not have such a clear version boundary. I lean toward the latter as a realist, though I think of reflection as the least bad option. Thoughts?
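
          The reflection option amounts to looking the new APIs up at runtime and degrading gracefully when they are absent. A hedged sketch — the codec class name below is hypothetical; only the fallback pattern is the point:

          ```java
          // Sketch: load a crypto codec reflectively so the same build runs
          // against Hadoop versions with or without org.apache.hadoop.io.crypto.
          // The class name is hypothetical; illustrative only.
          public class CryptoCodecLoader {
              static Object loadCodecOrNull(String className) {
                  try {
                      return Class.forName(className)
                                  .getDeclaredConstructor()
                                  .newInstance();
                  } catch (ReflectiveOperationException e) {
                      return null; // crypto APIs not on the classpath: no-op fallback
                  }
              }

              public static void main(String[] args) {
                  Object codec = loadCodecOrNull("org.apache.hadoop.io.crypto.AESCodec");
                  System.out.println(codec == null
                      ? "crypto unavailable, using null codec"
                      : "crypto codec loaded: " + codec.getClass().getName());
              }
          }
          ```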

          Andrew Purtell added a comment -

          Within HBase, per-table and per-CF keys are created on demand.

          I should be a little clearer about this. The admin can turn on encryption one CF at a time, or can at any time do it for the whole table. In the latter case, every CF not configured for encryption would be set up accordingly.

          Andrew Purtell added a comment -

          Should we do compression at the HDFS layer?

          IMO yes, probably

          As a longer term project, I wouldn't mind looking at pushing both compression and encryption down into HDFS somehow. I haven't really thought it through. It seems higher risk because of the externalities.

          Andrew Purtell added a comment -

          the 'hbase' user would have to have access to all the keys, and that user is the only one who would have access to the on-disk files

          The aim is to protect sensitive data against accidental leakage and to facilitate auditable compliance with the regulations under which several industries operate. We assume under normal circumstances that the 'hbase' user is the only one with access to on-disk files. However, that does not guarantee leakage is impossible if the HDFS configuration is incorrect – HDFS and HBase might be independently managed – or if a server is decommissioned from the cluster and mishandled. This is the usual rationale for this type of feature.

          Schema design considerations are similar to those of HFile compression. Some tables might only have one sensitive column encrypted, to minimize performance impacts. We might also not want to encrypt every type of block in the HFile (nor compress them).

          There would be a master key supplied to HBase processes, managed by the cluster administrator, protected by the Java Keystore, perhaps residing on a hardware security module. Within HBase, per-table and per-CF keys are created on demand. There are a couple of reasons why the 2-tier key architecture is good (reduction of scope of compromise, facilitating lazy key rotation, etc.) The administrator would need to run HBCK on a system with access to the master key material in order to take recovery actions.

          I will attach a design doc and patch for consideration, once I have the go ahead.
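
          The two-tier scheme above can be sketched with plain javax.crypto key wrapping: the master key encrypts ("wraps") the small per-CF keys, so rotating the master key only means re-wrapping key material, not re-encrypting data. Class and variable names here are illustrative, not from the patch:

          ```java
          // Sketch: a master key (in practice from the Java KeyStore or an HSM)
          // wraps a per-CF data key for storage on disk. Illustrative only.
          import javax.crypto.Cipher;
          import javax.crypto.KeyGenerator;
          import javax.crypto.SecretKey;
          import java.security.Key;
          import java.util.Arrays;

          public class TwoTierKeys {
              public static void main(String[] args) throws Exception {
                  KeyGenerator kg = KeyGenerator.getInstance("AES");
                  kg.init(128);
                  SecretKey masterKey = kg.generateKey(); // tier 1: external security module
                  SecretKey cfKey = kg.generateKey();     // tier 2: per-CF key, created on demand

                  // Wrap (encrypt) the CF key under the master key.
                  Cipher wrap = Cipher.getInstance("AESWrap");
                  wrap.init(Cipher.WRAP_MODE, masterKey);
                  byte[] wrappedCfKey = wrap.wrap(cfKey); // safe to persist alongside the data

                  // Later: unwrap with the master key to recover the CF key.
                  Cipher unwrap = Cipher.getInstance("AESWrap");
                  unwrap.init(Cipher.UNWRAP_MODE, masterKey);
                  Key recovered = unwrap.unwrap(wrappedCfKey, "AES", Cipher.SECRET_KEY);

                  System.out.println(
                      Arrays.equals(recovered.getEncoded(), cfKey.getEncoded())); // true
              }
          }
          ```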

          Todd Lipcon added a comment -

          Should we do compression at the HDFS layer?

          IMO yes, probably

          Can you be more specific with what you have in mind? Say we have per CF keys and want to set up readers and writers to use them, what kind of provision would/could HDFS have for that?

          I'll admit I missed the bit above about per-CF keys. That's a little odd, though, because the 'hbase' user would have to have access to all the keys, and that user is the only one who would have access to the on-disk files. What's the threat model here?

          Andrew Purtell added a comment -

          Note: I moved the below out of the description of this issue:

          I have an experimental patch that introduces encryption at the HFile level, with all necessary changes propagated up to the HStore level. For the most part, the changes are straightforward and mechanical. After HBASE-7414, we can introduce specification of an optional encryption codec in the file trailer. The work is not ready to go yet because key management and the HBCK pieces are TBD.

          I think this is what Todd was commenting about and I agree so far as it's an implementation option not a description of the objective per se.

          Andrew Purtell added a comment -

          Also, I'm struggling to see how to encrypt WALEdits on a per CF basis with HDFS level tricks, but sure this could be a separate case.

          Andrew Purtell added a comment -

          > I'm a little skeptical: why not do this at the HDFS layer?

          This design simply structures encryption exactly the same as we do compression.

          Should we do compression at the HDFS layer?

          Can you be more specific with what you have in mind? Say we have per CF keys and want to set up readers and writers to use them, what kind of provision would/could HDFS have for that?

          Todd Lipcon added a comment -

          I'm a little skeptical: why not do this at the HDFS layer?

          Andrew Purtell added a comment -

          The design also covers encrypting WALedits for sensitive CFs but I'm debating if that should be a separate JIRA. More shortly.


            People

            • Assignee:
              Andrew Purtell
              Reporter:
              Andrew Purtell
            • Votes:
              0
              Watchers:
              34
