Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.0.0
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Implementation of a new compacting memstore with non-optimized immutable segment representation
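
      At a high level, a compacting memstore keeps a small mutable active segment plus a pipeline of immutable segments that a background compactor can merge in memory before anything is flushed to disk; "non-optimized" presumably means the immutable segments keep the same skip-list representation as the active segment rather than a flatter, more compact one. The sketch below is only a conceptual illustration of that idea under those assumptions; the class and method names in it (SketchCompactingMemStore, ImmutableSegment, pushActiveToPipeline, compactPipeline) are invented for illustration and are not the code in the attached patches.

      // Conceptual sketch only (not the attached patch): a memstore that, instead of
      // flushing its active segment to disk, freezes it into an in-memory pipeline of
      // immutable segments that a compactor can later merge in memory.
      import java.util.ArrayList;
      import java.util.List;
      import java.util.concurrent.ConcurrentNavigableMap;
      import java.util.concurrent.ConcurrentSkipListMap;

      class SketchCompactingMemStore {

        /** A frozen, "non-optimized" immutable segment: just a read-only skip list. */
        static final class ImmutableSegment {
          final ConcurrentNavigableMap<String, String> cells;
          ImmutableSegment(ConcurrentNavigableMap<String, String> cells) { this.cells = cells; }
        }

        // Mutable segment currently receiving writes.
        private ConcurrentSkipListMap<String, String> active = new ConcurrentSkipListMap<>();

        // Immutable segments waiting for in-memory compaction or an eventual disk flush.
        private final List<ImmutableSegment> pipeline = new ArrayList<>();

        synchronized void put(String key, String value) {
          active.put(key, value);
        }

        /** In-memory flush: freeze the active segment into the pipeline. */
        synchronized void pushActiveToPipeline() {
          pipeline.add(new ImmutableSegment(active));
          active = new ConcurrentSkipListMap<>();
        }

        /** In-memory compaction: merge all pipeline segments into a single one. */
        synchronized void compactPipeline() {
          ConcurrentSkipListMap<String, String> merged = new ConcurrentSkipListMap<>();
          for (ImmutableSegment segment : pipeline) {
            merged.putAll(segment.cells); // later (newer) segments overwrite older duplicates
          }
          pipeline.clear();
          pipeline.add(new ImmutableSegment(merged));
        }
      }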

    Attachments

        1. HBASE-14920-V01.patch (161 kB, Eshcar Hillel)
        2. HBASE-14920-V02.patch (168 kB, Eshcar Hillel)
        3. HBASE-14920-V03.patch (176 kB, Eshcar Hillel)
        4. HBASE-14920-V04.patch (179 kB, Eshcar Hillel)
        5. HBASE-14920-V05.patch (179 kB, Eshcar Hillel)
        6. HBASE-14920-V06.patch (179 kB, Eshcar Hillel)
        7. HBASE-14920-V07.patch (178 kB, Eshcar Hillel)
        8. HBASE-14920-V08.patch (184 kB, Eshcar Hillel)
        9. HBASE-14920-V09.patch (188 kB, Eshcar Hillel)
        10. HBASE-14920-V10.patch (187 kB, Eshcar Hillel)
        11. move.to.junit4.patch (11 kB, Michael Stack)

    Issue Links

    Activity

            Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            0 rubocop 0m 0s rubocop was not available.
            0 ruby-lint 0m 0s Ruby-lint was not available.
            +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
            +1 @author 0m 0s The patch does not contain any @author tags.
            +1 test4tests 0m 0s The patch appears to include 7 new or modified test files.
            0 mvndep 0m 41s Maven dependency ordering for branch
            +1 mvninstall 2m 59s master passed
            +1 compile 3m 58s master passed with JDK v1.8.0
            +1 compile 2m 32s master passed with JDK v1.7.0_79
            +1 checkstyle 8m 24s master passed
            +1 mvneclipse 1m 35s master passed
            +1 findbugs 7m 5s master passed
            +1 javadoc 3m 47s master passed with JDK v1.8.0
            +1 javadoc 2m 34s master passed with JDK v1.7.0_79
            0 mvndep 0m 8s Maven dependency ordering for patch
            +1 mvninstall 3m 1s the patch passed
            +1 compile 4m 22s the patch passed with JDK v1.8.0
            +1 javac 4m 22s the patch passed
            +1 compile 2m 34s the patch passed with JDK v1.7.0_79
            -1 javac 10m 6s hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 4 new + 8 unchanged - 4 fixed = 12 total (was 12)
            +1 javac 2m 34s the patch passed
            -1 checkstyle 1m 12s hbase-common: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 0m 59s hbase-common: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 2s hbase-client: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 0s hbase-client: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 2s hbase-server: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 3s hbase-server: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 3s hbase-shell: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 2s hbase-shell: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            +1 mvneclipse 1m 34s the patch passed
            +1 whitespace 0m 0s Patch has no whitespace issues.
            +1 hadoopcheck 25m 20s Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1.
            -1 findbugs 2m 15s hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
            -1 findbugs 2m 14s hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
            +1 javadoc 3m 51s the patch passed with JDK v1.8.0
            +1 javadoc 2m 35s the patch passed with JDK v1.7.0_79
            +1 unit 2m 6s hbase-common in the patch passed.
            +1 unit 2m 5s hbase-common in the patch passed.
            +1 unit 1m 4s hbase-client in the patch passed.
            +1 unit 1m 3s hbase-client in the patch passed.
            -1 unit 61m 12s hbase-server in the patch failed.
            -1 unit 61m 15s hbase-server in the patch failed.
            +1 unit 6m 31s hbase-shell in the patch passed.
            +1 unit 6m 27s hbase-shell in the patch passed.
            +1 asflicense 2m 23s Patch does not generate ASF License warnings.
            Total 240m 35s



            Reason Tests
            FindBugs module:hbase-server
              Invocation of toString on CompactingMemStore.getFamilyName() in org.apache.hadoop.hbase.regionserver.CompactingMemStore$MemStoreCompactor.doCompact() At CompactingMemStore.java:in org.apache.hadoop.hbase.regionserver.CompactingMemStore$MemStoreCompactor.doCompact() At CompactingMemStore.java:[line 463]
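
            For context on the FindBugs entry above: the pattern "invocation of toString on an array" fires when toString() is called directly on a byte[], which prints the array's type and identity hash (e.g. "[B@1a2b3c4d") instead of its contents. Assuming CompactingMemStore.getFamilyName() returns the column family name as a byte[] (which is what this warning implies), the usual fix is to decode it with org.apache.hadoop.hbase.util.Bytes. The snippet below illustrates that pattern and is not the patch code; the class and method names in it are invented.

            // Illustration of the flagged pattern and the usual fix; not the patch code.
            import org.apache.hadoop.hbase.util.Bytes;

            class FamilyNameLogging {

              private final byte[] familyName = Bytes.toBytes("cf");

              byte[] getFamilyName() {
                return familyName;
              }

              String badMessage() {
                // FindBugs DMI_INVOKING_TOSTRING_ON_ARRAY: yields "[B@..." (type plus
                // identity hash), not the family name itself.
                return "compacting " + getFamilyName().toString();
              }

              String goodMessage() {
                // Decode the bytes explicitly instead.
                return "compacting " + Bytes.toString(getFamilyName());
              }
            }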
            Failed junit tests hadoop.hbase.regionserver.TestPerColumnFamilyFlush
            Timed out junit tests org.apache.hadoop.hbase.client.TestReplicasClient
              org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface
              org.apache.hadoop.hbase.mapreduce.TestWALPlayer
              org.apache.hadoop.hbase.coprocessor.TestHTableWrapper
              org.apache.hadoop.hbase.client.TestSnapshotCloneIndependence
              org.apache.hadoop.hbase.mapreduce.TestTableInputFormat
              org.apache.hadoop.hbase.client.TestFromClientSide
              org.apache.hadoop.hbase.client.TestMultipleTimestamps
              org.apache.hadoop.hbase.snapshot.TestSnapshotClientRetries
              org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer2
              org.apache.hadoop.hbase.TestFullLogReconstruction
              org.apache.hadoop.hbase.util.TestHBaseFsckOneRS
              org.apache.hadoop.hbase.mapred.TestTableInputFormat
              org.apache.hadoop.hbase.namespace.TestNamespaceAuditor
              org.apache.hadoop.hbase.TestMultiVersions
              org.apache.hadoop.hbase.security.visibility.TestVisibilityLablesWithGroups
              org.apache.hadoop.hbase.security.access.TestAccessController3
              org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint
              org.apache.hadoop.hbase.wal.TestWALFiltering
              org.apache.hadoop.hbase.coprocessor.TestRegionServerObserver
              org.apache.hadoop.hbase.security.access.TestAccessController2
              org.apache.hadoop.hbase.wal.TestDefaultWALProviderWithHLogKey
              org.apache.hadoop.hbase.mapred.TestTableMapReduceUtil
              org.apache.hadoop.hbase.mapreduce.TestCellCounter
              org.apache.hadoop.hbase.TestZooKeeper
              org.apache.hadoop.hbase.mapreduce.TestRowCounter
              org.apache.hadoop.hbase.wal.TestWALSplitCompressed
              org.apache.hadoop.hbase.constraint.TestConstraint
              org.apache.hadoop.hbase.filter.TestFilterWithScanLimits
              org.apache.hadoop.hbase.client.TestClientScannerRPCTimeout
              org.apache.hadoop.hbase.client.TestMultiRespectsLimits
              org.apache.hadoop.hbase.mob.compactions.TestPartitionedMobCompactor
              org.apache.hadoop.hbase.coprocessor.TestMasterObserver
              org.apache.hadoop.hbase.util.hbck.TestOfflineMetaRebuildHole
              org.apache.hadoop.hbase.client.TestMetaWithReplicas
              org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDefaultVisLabelService
              org.apache.hadoop.hbase.util.TestHBaseFsckEncryption
              org.apache.hadoop.hbase.util.TestHBaseFsckReplicas
              org.apache.hadoop.hbase.coprocessor.TestWALObserver
              org.apache.hadoop.hbase.mapreduce.TestImportTSVWithOperationAttributes
              org.apache.hadoop.hbase.client.TestFromClientSide3
              org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithCustomVisLabService
              org.apache.hadoop.hbase.TestRegionServerMetrics
              org.apache.hadoop.hbase.mob.mapreduce.TestMobSweepReducer
              org.apache.hadoop.hbase.master.TestSplitLogManager
              org.apache.hadoop.hbase.TestClusterBootOrder
              org.apache.hadoop.hbase.client.TestScannerTimeout
              org.apache.hadoop.hbase.util.hbck.TestOfflineMetaRebuildOverlap
              org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer
              org.apache.hadoop.hbase.util.TestMiniClusterLoadEncoded
              org.apache.hadoop.hbase.util.TestCoprocessorScanPolicy
              org.apache.hadoop.hbase.io.encoding.TestDataBlockEncoders
              org.apache.hadoop.hbase.mob.compactions.TestMobCompactor
              org.apache.hadoop.hbase.client.TestSplitOrMergeStatus
              org.apache.hadoop.hbase.util.TestMiniClusterLoadSequential
              org.apache.hadoop.hbase.TestJMXListener
              org.apache.hadoop.hbase.coprocessor.TestClassLoading
              org.apache.hadoop.hbase.util.TestFanOutOneBlockAsyncDFSOutput
              org.apache.hadoop.hbase.client.TestIncrementFromClientSideWithCoprocessor
              org.apache.hadoop.hbase.client.TestAdmin2
              org.apache.hadoop.hbase.replication.regionserver.TestReplicationSink
              org.apache.hadoop.hbase.coprocessor.TestCoprocessorTableEndpoint
              org.apache.hadoop.hbase.client.TestAdmin1
              org.apache.hadoop.hbase.io.hfile.TestScannerSelectionUsingTTL
              org.apache.hadoop.hbase.coprocessor.TestRowProcessorEndpoint
              org.apache.hadoop.hbase.security.access.TestNamespaceCommands
              org.apache.hadoop.hbase.mob.TestExpiredMobFileCleaner
              org.apache.hadoop.hbase.coprocessor.TestCoprocessorEndpoint
              org.apache.hadoop.hbase.filter.TestFuzzyRowAndColumnRangeFilter
              org.apache.hadoop.hbase.filter.TestFilterListOrOperatorWithBlkCnt
              org.apache.hadoop.hbase.TestAcidGuarantees
              org.apache.hadoop.hbase.client.TestSnapshotMetadata
              org.apache.hadoop.hbase.backup.TestHFileArchiving
              org.apache.hadoop.hbase.TestNamespace
              org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithACL
              org.apache.hadoop.hbase.io.hfile.TestSeekBeforeWithInlineBlocks
              org.apache.hadoop.hbase.io.encoding.TestChangingEncoding
              org.apache.hadoop.hbase.zookeeper.lock.TestZKInterProcessReadWriteLock
              org.apache.hadoop.hbase.client.TestClientPushback
              org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor
              org.apache.hadoop.hbase.io.hfile.TestForceCacheImportantBlocks
              org.apache.hadoop.hbase.io.hfile.TestHFileBlockIndex
              org.apache.hadoop.hbase.coprocessor.TestOpenTableInCoprocessor
              org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDeletes
              org.apache.hadoop.hbase.TestRegionRebalancing
              org.apache.hadoop.hbase.mapreduce.TestSyncTable
              org.apache.hadoop.hbase.mapreduce.TestCopyTable
              org.apache.hadoop.hbase.mapreduce.TestHashTable
              org.apache.hadoop.hbase.mob.TestMobDataBlockEncoding
              org.apache.hadoop.hbase.client.TestReplicaWithCluster
              org.apache.hadoop.hbase.util.TestRegionSplitter
              org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelReplicationWithExpAsString
              org.apache.hadoop.hbase.security.access.TestCellACLWithMultipleVersions
              org.apache.hadoop.hbase.client.TestHTableMultiplexerFlushCache
              org.apache.hadoop.hbase.client.TestClientOperationInterrupt
              org.apache.hadoop.hbase.filter.TestMultiRowRangeFilter
              org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsReplication
              org.apache.hadoop.hbase.util.TestMergeTable
              org.apache.hadoop.hbase.util.hbck.TestOfflineMetaRebuildBase
              org.apache.hadoop.hbase.util.TestRegionMover
              org.apache.hadoop.hbase.util.TestMergeTool
              org.apache.hadoop.hbase.security.visibility.TestVisibilityWithCheckAuths
              org.apache.hadoop.hbase.security.access.TestCellACLs
              org.apache.hadoop.hbase.snapshot.TestMobRestoreFlushSnapshotFromClient
              org.apache.hadoop.hbase.client.TestLeaseRenewal
              org.apache.hadoop.hbase.filter.TestColumnRangeFilter
              org.apache.hadoop.hbase.security.visibility.TestWithDisabledAuthorization
              org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpointNoMaster
              org.apache.hadoop.hbase.client.TestTimestampsFilter
              org.apache.hadoop.hbase.TestIOFencing
              org.apache.hadoop.hbase.util.TestMiniClusterLoadParallel
              org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles
              org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesUseSecurityEndPoint
              org.apache.hadoop.hbase.snapshot.TestRestoreFlushSnapshotFromClient
              org.apache.hadoop.hbase.client.TestHCM
              org.apache.hadoop.hbase.mapreduce.TestTimeRangeMapRed
              org.apache.hadoop.hbase.mapreduce.TestImportExport
              org.apache.hadoop.hbase.security.access.TestWithDisabledAuthorization
              org.apache.hadoop.hbase.io.hfile.TestCacheOnWrite
              org.apache.hadoop.hbase.io.encoding.TestLoadAndSwitchEncodeOnDisk
              org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithRemove
              org.apache.hadoop.hbase.security.access.TestAccessController
              org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient
              org.apache.hadoop.hbase.client.TestMobRestoreSnapshotFromClient
              org.apache.hadoop.hbase.util.TestHBaseFsckMOB
              org.apache.hadoop.hbase.coprocessor.TestRegionObserverBypass
              org.apache.hadoop.hbase.coprocessor.TestAggregateProtocol
              org.apache.hadoop.hbase.TestHColumnDescriptorDefaultVersions
              org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClientWithRegionReplicas
              org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles
              org.apache.hadoop.hbase.security.access.TestTablePermissions
              org.apache.hadoop.hbase.quotas.TestQuotaThrottle
              org.apache.hadoop.hbase.coprocessor.TestBigDecimalColumnInterpreter
              org.apache.hadoop.hbase.client.TestTableSnapshotScanner
              org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient
              org.apache.hadoop.hbase.security.access.TestScanEarlyTermination
              org.apache.hadoop.hbase.replication.regionserver.TestReplicationWALReaderManager
              org.apache.hadoop.hbase.fs.TestBlockReorder
              org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
              org.apache.hadoop.hbase.client.TestRestoreSnapshotFromClient
              org.apache.hadoop.hbase.client.TestCloneSnapshotFromClientWithRegionReplicas
              org.apache.hadoop.hbase.mapreduce.TestImportTSVWithVisibilityLabels
              org.apache.hadoop.hbase.wal.TestDefaultWALProvider
              org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort
              org.apache.hadoop.hbase.client.TestBlockEvictionFromClient
              org.apache.hadoop.hbase.client.TestCloneSnapshotFromClient
              org.apache.hadoop.hbase.util.TestConnectionCache
              org.apache.hadoop.hbase.mapreduce.TestImportTSVWithTTLs
              org.apache.hadoop.hbase.replication.TestMultiSlaveReplication
              org.apache.hadoop.hbase.io.hfile.TestHFileBlock
              org.apache.hadoop.hbase.coprocessor.TestDoubleColumnInterpreter
              org.apache.hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFilesSplitRecovery
              org.apache.hadoop.hbase.wal.TestBoundedRegionGroupingStrategy
              org.apache.hadoop.hbase.wal.TestWALFactory
              org.apache.hadoop.hbase.client.TestIncrementsFromClientSide
              org.apache.hadoop.hbase.client.TestHTableMultiplexer
              org.apache.hadoop.hbase.client.TestMobSnapshotFromClient
              org.apache.hadoop.hbase.coprocessor.TestCoprocessorStop
              org.apache.hadoop.hbase.client.TestSnapshotFromClientWithRegionReplicas
              org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd
              org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat
              org.apache.hadoop.hbase.client.TestScannersFromClientSide
              org.apache.hadoop.hbase.client.TestMetaCache
              org.apache.hadoop.hbase.replication.TestMasterReplication
              org.apache.hadoop.hbase.TestPartialResultsFromClientSide
              org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithAbort
              org.apache.hadoop.hbase.client.TestSnapshotFromClient
              org.apache.hadoop.hbase.mapred.TestTableSnapshotInputFormat
              org.apache.hadoop.hbase.util.TestFSUtils
              org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery
              org.apache.hadoop.hbase.client.TestMobSnapshotCloneIndependence
              org.apache.hadoop.hbase.util.TestHBaseFsckTwoRS
              org.apache.hadoop.hbase.mapreduce.TestImportTsv
              org.apache.hadoop.hbase.io.encoding.TestEncodedSeekers
              org.apache.hadoop.hbase.wal.TestWALSplit



            Subsystem Report/Notes
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12794406/HBASE-14920-V01.patch
            JIRA Issue HBASE-14920
            Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile rubocop ruby_lint
            uname Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
            git revision master / dc6cd0b
            Default Java 1.7.0_79
            Multi-JDK versions /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79
            findbugs v3.0.0
            javac hbase-server-jdk1.7.0_79: https://builds.apache.org/job/PreCommit-HBASE-Build/1092/artifact/patchprocess/diff-compile-javac-hbase-server-jdk1.7.0_79.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1092/artifact/patchprocess/diff-checkstyle-hbase-common.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1092/artifact/patchprocess/diff-checkstyle-hbase-client.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1092/artifact/patchprocess/diff-checkstyle-hbase-server.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1092/artifact/patchprocess/diff-checkstyle-hbase-shell.txt
            findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/1092/artifact/patchprocess/new-findbugs-hbase-server.html
            unit https://builds.apache.org/job/PreCommit-HBASE-Build/1092/artifact/patchprocess/patch-unit-hbase-server.txt
            unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/1092/artifact/patchprocess/patch-unit-hbase-server.txt
            Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/1092/testReport/
            modules C: hbase-common hbase-common hbase-client hbase-client hbase-server hbase-server hbase-shell hbase-shell U: .
            Console output https://builds.apache.org/job/PreCommit-HBASE-Build/1092/console
            Powered by Apache Yetus 0.2.0 http://yetus.apache.org

            This message was automatically generated.

            hadoopqa Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            0 rubocop 0m 0s rubocop was not available.
            0 ruby-lint 0m 0s Ruby-lint was not available.
            +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
            +1 @author 0m 0s The patch does not contain any @author tags.
            +1 test4tests 0m 0s The patch appears to include 9 new or modified test files.
            0 mvndep 0m 16s Maven dependency ordering for branch
            +1 mvninstall 3m 16s master passed
            +1 compile 5m 28s master passed with JDK v1.8.0
            +1 compile 3m 52s master passed with JDK v1.7.0_79
            +1 checkstyle 13m 18s master passed
            +1 mvneclipse 2m 7s master passed
            +1 findbugs 7m 11s master passed
            +1 javadoc 4m 14s master passed with JDK v1.8.0
            +1 javadoc 2m 58s master passed with JDK v1.7.0_79
            0 mvndep 0m 10s Maven dependency ordering for patch
            +1 mvninstall 3m 46s the patch passed
            +1 compile 4m 3s the patch passed with JDK v1.8.0
            +1 javac 4m 3s the patch passed
            +1 compile 3m 20s the patch passed with JDK v1.7.0_79
            -1 javac 11m 19s hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 4 new + 8 unchanged - 4 fixed = 12 total (was 12)
            -1 javac 11m 19s hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 4 new + 8 unchanged - 4 fixed = 12 total (was 12)
            +1 javac 3m 20s the patch passed
            -1 checkstyle 1m 10s hbase-common: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 2s hbase-common: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 5s hbase-client: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 2s hbase-client: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 1s hbase-server: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 6s hbase-server: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 2s hbase-shell: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 6s hbase-shell: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 0s hbase-it: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            -1 checkstyle 1m 5s hbase-it: patch generated 3 new + 0 unchanged - 3 fixed = 3 total (was 3)
            +1 mvneclipse 2m 6s the patch passed
            +1 whitespace 0m 0s Patch has no whitespace issues.
            +1 hadoopcheck 25m 8s Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1.
            +1 findbugs 8m 45s the patch passed
            +1 javadoc 3m 42s the patch passed with JDK v1.8.0
            +1 javadoc 2m 46s the patch passed with JDK v1.7.0_79
            +1 unit 1m 46s hbase-common in the patch passed.
            +1 unit 1m 43s hbase-common in the patch passed.
            +1 unit 0m 53s hbase-client in the patch passed.
            +1 unit 0m 53s hbase-client in the patch passed.
            +1 unit 87m 23s hbase-server in the patch passed.
            +1 unit 87m 15s hbase-server in the patch passed.
            +1 unit 6m 41s hbase-shell in the patch passed.
            +1 unit 6m 30s hbase-shell in the patch passed.
            +1 unit 0m 21s hbase-it in the patch passed.
            +1 unit 0m 22s hbase-it in the patch passed.
            +1 asflicense 2m 8s Patch does not generate ASF License warnings.
            305m 25s



            Subsystem Report/Notes
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12794486/HBASE-14920-V02.patch
            JIRA Issue HBASE-14920
            Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile rubocop ruby_lint
            uname Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
            git revision master / f48c92d
            Default Java 1.7.0_79
            Multi-JDK versions /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79
            findbugs v3.0.0
            javac hbase-server-jdk1.7.0_79: https://builds.apache.org/job/PreCommit-HBASE-Build/1104/artifact/patchprocess/diff-compile-javac-hbase-server-jdk1.7.0_79.txt
            javac hbase-server-jdk1.7.0_79: https://builds.apache.org/job/PreCommit-HBASE-Build/1104/artifact/patchprocess/diff-compile-javac-hbase-server-jdk1.7.0_79.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1104/artifact/patchprocess/diff-checkstyle-hbase-common.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1104/artifact/patchprocess/diff-checkstyle-hbase-common.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1104/artifact/patchprocess/diff-checkstyle-hbase-client.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1104/artifact/patchprocess/diff-checkstyle-hbase-client.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1104/artifact/patchprocess/diff-checkstyle-hbase-server.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1104/artifact/patchprocess/diff-checkstyle-hbase-server.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1104/artifact/patchprocess/diff-checkstyle-hbase-shell.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1104/artifact/patchprocess/diff-checkstyle-hbase-shell.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1104/artifact/patchprocess/diff-checkstyle-hbase-it.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1104/artifact/patchprocess/diff-checkstyle-hbase-it.txt
            Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/1104/testReport/
            modules C: hbase-common hbase-common hbase-client hbase-client hbase-server hbase-server hbase-shell hbase-shell hbase-it hbase-it U: .
            Console output https://builds.apache.org/job/PreCommit-HBASE-Build/1104/console
            Powered by Apache Yetus 0.2.0 http://yetus.apache.org

            This message was automatically generated.

            stack Michael Stack added a comment -

            A few notes on the patch

            Is the comment right when it says we are going to compact memory in the cache?

            698 * @return True if we prefer to keep the in-memory data compacted
            699 * for this column family in the HRegionServer cache

            The attribute is InMemoryCompaction so the method should be setInMemoryCompaction, not setCompacted?

            Should this be in our public API? getMemStoreClassName seems a little exotic to expose. We only have one use for it at the moment... wait till there is more than one before exposing it?

            In HConstants I see this unused import:

            import static org.apache.hadoop.hbase.io.hfile.BlockType.MAGIC_LENGTH;

            Your editor is redoing imports, which bloats your patch, eshcar. See IntegrationTestBigLinkedList: a one-line change is bloated to 30 lines by the reshuffling of imports, and ExecutorService is touched by nothing but import moves.

            I'll be back

            stack Michael Stack added a comment -

            I like the notion of a 'CompactionMemStore' instead of a plain old MemStore.

            Reading the nice description made me think of how we need to work on recovery. There'll be more to recover if we can carry more in memory. Another issue.

            This needs a comment on it: IN_MEMORY_FLUSH_THRESHOLD_FACTOR

            Would be good if we could use the Interface Store rather than the implementation HStore in the below

            66 private HStore store;

            For these...

            // A flag for tests only

            ... we normally just flag with @VisibleForTesting annotation. FYI.

            I've asked this before but forgot the answer: does this have to be here: isCompactingMemStore?

            Is this a race in the below?

            184 @Override public long getFlushableSize() {
            185   long snapshotSize = getSnapshot().getSize();
            186   if (snapshotSize == 0) {
            187     // if snapshot is empty the tail of the pipeline is flushed
            188     snapshotSize = pipeline.getTailSize();
            189   }
            190   return snapshotSize > 0 ? snapshotSize : keySize();
            191 }

            If no snapshot, use the tail of the pipeline? The pipeline tail has not yet been moved to be the new snapshot?

            If sequenceid is MAX, don't we want to update it?

            if(minSequenceId != Long.MAX_VALUE) {

            I asked this before too...so, to do compaction, we need to have exclusive lock or exclusive update lock just while you swap segments (the compacted for the inputs?)... it looks like the next method, flushInMemory, answers my question so ignore..

            245 /* The thread is dispatched to flush-in-memory. This cannot be done
            246 * on the same thread, because for flush-in-memory we require updatesLock
            247 * in exclusive mode while this method (checkActiveSize) is invoked holding updatesLock
            248 * in the shared mode. */

            Any guard against queuing same old compactions over and over again?

            250 if (pool != null) { // the pool can be null in some tests scenarios
            251   InMemoryFlushWorker worker = new InMemoryFlushWorker();
            252   LOG.info("Dispatching the MemStore in-memory flush for store "
            253       + store.getColumnFamilyName());
            254   pool.submit(worker);
            255   inMemoryFlushInProgress.set(true);
            256 }

            ... say there is a bug or we are slow to run already-queued compactions?

            s/stopCompact/stopCompacting/g

            The below would seem to say we compact with the updates lock held. Fix if wrong... if it is true, then let's discuss:

            351 * It takes the updatesLock exclusively, pushes active into the pipeline,
            352 * and compacts the pipeline.

            Got as far as MemstoreCompactor... will be back.

            stack Michael Stack added a comment -

            I see/remember why the isCompactingMemStore method. Makes sense. Make it more generic? isSloppy or isRunsToFat or isNeedsHeadRoom. Minor. Can ignore. We can change when a new type shows up that is not compacting but that wants to also run close-to-the-edge...

            stack Michael Stack added a comment -

            s/startCompact/startCompaction/g
            s/doCompact/doCompaction/g

            No need of these things...

            445 /*----------------------------------------------------------------------

            no one else does this...

            Oh... so we could let go of an edit because it was beyond max versions or something and then you need to update the lowest outstanding sequenceid .... this thing?

            465 updateLowestUnflushedSequenceIdInWal(true); // only if greater

            I'd never have thought to do that.

            We are changing the flush policy on the user here?

            944 if (hasCompactingStore) {
            945   double alpha = FLUSH_SIZE_WEIGHT;
            946   this.memstoreFlushSize =
            947       (long) (alpha * memstoreFlushSize + (1 - alpha) * blockingMemStoreSize);
            948   LOG.info("Increasing memstore flush size to " + memstoreFlushSize + " for the region="
            949       + this);
            950   htableDescriptor.setFlushPolicyClassName(FlushNonCompactingStoresFirstPolicy.class
            951       .getName());
            952 }
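
            For intuition, a rough numeric reading of that weighting (the actual value of FLUSH_SIZE_WEIGHT is not shown in this thread, so 0.5 below is only an assumed example): with memstoreFlushSize = 128 MB and blockingMemStoreSize = 512 MB, the adjusted flush size would be 0.5 * 128 MB + 0.5 * 512 MB = 320 MB, i.e. the compacting store is allowed to grow well past the default flush trigger before being flushed to disk.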

            That's not bad in itself (just Release Note it, as well as the fact that we are going to run w/ more memory), but we are doing it by setting the policy on the HTableDescriptor; every region is going to do this? Can we do it on the region itself? Or can we get away w/o changing the HTD? This is unusual practice.

            Do you have to start your own? Can you not piggyback off the RS's existing Executor?

            1748 this.service.startExecutorService(ExecutorType.RS_IN_MEMORY_FLUSH_AND_COMPACTION,
            1749 conf.getInt("hbase.regionserver.executor.inmemoryflush.threads", 3));

            Do you need to add this one to RegionServicesFor...

            56 public HTableDescriptor getTableDesc() {
            57   return region.getTableDesc();
            58 }

            Would be good if you could get away w/ not passing it down...

            And this one...

            68 public RegionServerServices getRegionServerServices() {
            69   return region.getRegionServerServices();
            70 }

            This onlyIfGreater was there before you?

            168 void updateStore(byte[] encodedRegionName, byte[] familyName, Long sequenceId,
            169 boolean onlyIfGreater) {

            We should purge it? It is of little use?

            TestCompactingMemStore is junit3 rather than junit4 because its parent is. I'll attach a patch here that moves TestDefaultMemStore to junit4 so this test can be junit4 too (we've been trying to purge the old tests...)

            I skimmed the tests. They look great.

            Looks committable to me.

            stack Michael Stack added a comment -

            Patch to integrate which changes TestDefaultMemStore to be junit4... if it helps.

            hadoopqa Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
            +1 @author 0m 0s The patch does not contain any @author tags.
            +1 test4tests 0m 0s The patch appears to include 1 new or modified test files.
            +1 mvninstall 4m 44s master passed
            +1 compile 1m 40s master passed with JDK v1.8.0
            +1 compile 0m 51s master passed with JDK v1.7.0_79
            +1 checkstyle 6m 10s master passed
            +1 mvneclipse 0m 24s master passed
            +1 findbugs 3m 8s master passed
            +1 javadoc 1m 6s master passed with JDK v1.8.0
            +1 javadoc 1m 1s master passed with JDK v1.7.0_79
            +1 mvninstall 1m 4s the patch passed
            +1 compile 1m 31s the patch passed with JDK v1.8.0
            +1 javac 1m 31s the patch passed
            +1 compile 0m 50s the patch passed with JDK v1.7.0_79
            +1 javac 0m 50s the patch passed
            +1 checkstyle 5m 40s the patch passed
            +1 mvneclipse 0m 24s the patch passed
            +1 whitespace 0m 0s Patch has no whitespace issues.
            +1 hadoopcheck 41m 6s Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1.
            +1 findbugs 3m 44s the patch passed
            +1 javadoc 1m 17s the patch passed with JDK v1.8.0
            +1 javadoc 1m 1s the patch passed with JDK v1.7.0_79
            -1 unit 212m 40s hbase-server in the patch failed.
            +1 asflicense 3m 5s Patch does not generate ASF License warnings.
            292m 22s



            Reason Tests
            Failed junit tests hadoop.hbase.TestPartialResultsFromClientSide
              hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort
              hadoop.hbase.master.balancer.TestStochasticLoadBalancer2
            Timed out junit tests org.apache.hadoop.hbase.snapshot.TestMobSecureExportSnapshot
              org.apache.hadoop.hbase.snapshot.TestSecureExportSnapshot
              org.apache.hadoop.hbase.security.access.TestNamespaceCommands
              org.apache.hadoop.hbase.snapshot.TestMobExportSnapshot
              org.apache.hadoop.hbase.snapshot.TestExportSnapshot
              org.apache.hadoop.hbase.snapshot.TestMobFlushSnapshotFromClient
              org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
              org.apache.hadoop.hbase.client.TestBlockEvictionFromClient



            Subsystem Report/Notes
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12795943/move.to.junit4.patch
            JIRA Issue HBASE-14920
            Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
            uname Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
            git revision master / afdfd1b
            Default Java 1.7.0_79
            Multi-JDK versions /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79
            findbugs v3.0.0
            unit https://builds.apache.org/job/PreCommit-HBASE-Build/1214/artifact/patchprocess/patch-unit-hbase-server.txt
            unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/1214/artifact/patchprocess/patch-unit-hbase-server.txt
            Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/1214/testReport/
            modules C: hbase-server U: hbase-server
            Console output https://builds.apache.org/job/PreCommit-HBASE-Build/1214/console
            Powered by Apache Yetus 0.2.0 http://yetus.apache.org

            This message was automatically generated.

            eshcar Eshcar Hillel added a comment -

            Hi,

            stack I will get back with answers to your questions/comments.
            However, first I encountered an issue which I would like to consult about.

            During in-memory compaction we set aside the minimum sequence id of the segment that is the result of the compaction.
            We then use this sequence id to update the WAL sequence id accounting so that old WAL files can be discarded.
            What I discovered is that there are cells that are not regular key-value cells, which reside in a segment and have a very small sequence id (e.g., 8). I guess these cells are being added automatically by the store, and not as a result of a client operation.

            These cells fall through the compaction filter, so they always land in the resulting segment, which means the minimum sequence id of the segment is always small; as a consequence, WAL files are not discarded (and at some point may exceed the limit on the number of WAL files - currently 33).

            A possible solution would be to ignore these cells while doing the bookkeeping of the minimal sequence id.
            Namely, only consider cells of type Put or Delete when setting the minimum seq id of the segment.

            Does this seem a reasonable solution?

            Here is a full list of possible types: Minimum, Put, Delete, DeleteFamilyVersion, DeleteColumn, DeleteFamily, Maximum.
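
            To make the proposal concrete, here is a minimal sketch of the bookkeeping change, assuming a helper on the segment (the class, field and method names below are illustrative only, not the patch's actual code):

            import org.apache.hadoop.hbase.Cell;
            import org.apache.hadoop.hbase.KeyValue;

            // Hypothetical helper showing the proposed bookkeeping; in the patch this
            // logic would live in the segment/compaction code, not a separate class.
            class SegmentSeqIdTrackerSketch {
              private long minSequenceId = Long.MAX_VALUE;

              void update(Cell cell) {
                byte type = cell.getTypeByte();
                // Only cells of type Put or Delete (as proposed above) move the minimum
                // sequence id used for WAL accounting; other marker cells are ignored so
                // they cannot pin old WAL files forever.
                if (type == KeyValue.Type.Put.getCode() || type == KeyValue.Type.Delete.getCode()) {
                  minSequenceId = Math.min(minSequenceId, cell.getSequenceId());
                }
              }

              long getMinSequenceId() {
                return minSequenceId;
              }
            }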

            stack Michael Stack added a comment -

            ....which reside in a segment and have very small sequence id (e.g., 8).

            You have an example, eshcar? Seems totally wrong. Would agree w/ you, unlikely to come from a client. Must be made by 'internals'. anoop.hbase and ram_krish, any ideas? Read-side we might have stuff like first-on-row, but at write-time?


            ram_krish ramkrishna.s.vasudevan added a comment -

            Printing the cell would be helpful to identify it from the type of the cell. Not very sure how it is getting added, as the client won't be doing this in the write flow.

            anoop.hbase Anoop Sam John added a comment -

            The in-memory compaction happens by scanning the cells out of the segments, so there is a scan in place.. Still am not sure how the fake cells come out of memstore scanners. Ya, print that cell to see what it is?

            anoop.hbase Anoop Sam John added a comment -

            BTW, when in-memory flush or compaction happens, nothing goes to disk, so why the seqId accounting to the WAL? U say some of the WALs can be ignored. Ideally that is after the flush to disk. Can u explain the need? Pls add fat comments in code so that people won't ask again.

            anoop.hbase Anoop Sam John added a comment -
                // The scanner is doing all the elimination logic
            516 // now we just copy it to the new segment
            517 KeyValue kv = KeyValueUtil.ensureKeyValue(c);
            518 Cell newKV = result.maybeCloneWithAllocator(kv);

            No more calls to ensureKV pls. We have removed all these with many sub tasks.
            Is the patch up in RB?

            anoop.hbase Anoop Sam John added a comment -

            So in-memory flush will move the current active segment (CSLM) to the pipeline, and a new active segment (and so a new CSLM) is created. So in the pipeline it is still CSLM only.. The in-memory compaction picks up segments from the pipeline and makes a new compacted result. Here the cells' bytes (key and value) might be copied to new MSLAB chunks and there won't be a CSLM any more. There we will have a Cell array with refs to cells (in MSLAB chunks).

            eshcar Eshcar Hillel added a comment -

            I recently updated YCSB to support also delete operations.
            The delete operations in the benchmark I ran ended up as cells of type DeleteFamily. I would expect tombstones to be of type Delete.
            This could be an issue with my YCSB client, so we can ignore this for the moment.
            Anyone else had the same problem before?
            What exactly is the effect of a cell of type DeleteFamily in a normal disk compaction? Does it remove all the entries in the column family?

            anoop.hbase The patch is in RB (there's a link to it in the Jira).

            An in-memory compaction removes entries from memory, much like a flush to disk would do.
            The only reason to keep records in the WAL is when data is not yet persistent on disk.
            If we remove data from memory (during in-memory compaction), it will never arrive on disk (since a more recent version already exists), so there is no point in keeping its records in the WAL, and they can be removed from it.

            To summarize this point, in the case of a compacting memstore tombstones are not removed during in-memory compaction (this is the equivalent of minor compaction) and need to wait till they hit the disk to be removed in a major compaction.
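
            A hedged sketch of that WAL-accounting step, reusing the updateStore(encodedRegionName, familyName, sequenceId, onlyIfGreater) signature quoted earlier in this review (the surrounding variable names are assumptions for illustration, not the patch itself):

            // After the compacted segment replaces the pipeline segments, report its minimum
            // sequence id so that WAL files older than it become eligible for archiving.
            long minSeqId = compactedSegment.getMinSequenceId();   // hypothetical accessor on the new segment
            // onlyIfGreater = true: only move the watermark forward, never backwards.
            walAccounting.updateStore(encodedRegionName, familyName, minSeqId, true);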

            anoop.hbase Anoop Sam John added a comment -

            Ya, it is clear now why we need the seqId update after in-memory compaction also.. It helps when many of the cells go away as a result of this (because of delete or version expiry etc.).. Ya, makes sense.. Pls add all this in code comments. Ya, was expecting this as the reason.. Just asked to confirm.

            There is no problem in ur client code. When we delete a row or a family fully, the type will be DeleteFamily. Ya, the scanner deals with it by removing all cells of this row:cf with TS less than the TS of this delete-family cell. So u see this typed cell with a low seqId?

            eshcar Eshcar Hillel added a comment -

            OK, thanks anoop.hbase for clarifying this.

            I will work on a new patch to incorporate the suggestion above.
            I will apply what is possible and explain the changes that are not applicable.
            stack just to be clear, you suggested changing the class name (and some methods) to CompactionMemStore (instead of CompactingMemStore)?

            stack Michael Stack added a comment -

            eshcar I suggested changing stopCompact to stopCompacting, etc. Don't recall suggesting classname changes.

            eshcar Eshcar Hillel added a comment -

            Working on the patch.
            Following some recent benchmarks we came to understand that a compacting store (now under the more general name of sloppy stores) has a benefit even without getting more memstore space.
            The benefit is in delaying the flush to disk, reducing the write-amplification effect by writing bigger files and writing them less frequently, thereby reducing the overall IO in the underlying HDFS.
            This means that the size of the memstore is not increased.
            We decided to keep the new flush policy, which favors sloppy stores over regular stores, namely it selects regular (default) stores to be flushed first and only flushes sloppy stores when it has no other options.
            We believe this is good enough to be able to benefit from having a compacting store.

            stack you suggested removing the executor service from HRegionServer.
            Here is an alternative - do the same as CompactSplitThread.
            We can have a ThreadPoolExecutor as an attribute of RegionServicesForStores; then in-memory compaction is executed by submitting an in-memory-flush-and-compaction runnable to this pool.
            How do you like this solution? Better than the current?

            anoop.hbase Anoop Sam John added a comment -

            Excellent. Ya, specially when the key and/or value size of cells is not much, the heap size overhead because of the Cell and CSLM node is considerably high. This will be ~100 bytes per cell. So when the data size is like 100-200 bytes we will have flushes (128 MB memstore size reached) where the actual data getting flushed is only half of that size. And in-memory flush/compaction, which results in a kind of flat bytes representation (with an overhead of 8 bytes per Cell for metadata), looks much better. This will help us in our off-heap write path and memstore impl also. Waiting for this patch to land.
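
            As a rough back-of-the-envelope illustration of that overhead (the numbers are only indicative, not measured): with ~100-byte key+value cells and ~100 bytes of Cell/CSLM overhead each, a 128 MB memstore fills up after roughly 128 MB / 200 B, i.e. about 670 K cells, of which only about 64 MB is actual data; flattened at ~8 bytes of metadata per cell, the same cells need about 64 MB + 670 K * 8 B, i.e. roughly 69 MB.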

            How do you like this solution? Better than the current?

            This means one pool per region, right?

            eshcar Eshcar Hillel added a comment -

            Well, now that you ask - one pool per region seems a waste, since a region is not likely to have more than one flush in parallel.
            I can make it a static attribute so we'll have one pool per region server; this seems more appropriate.
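
            A minimal sketch of what such a static, per-region-server pool inside RegionServicesForStores could look like (pool size, thread-name format and method name below are assumptions for illustration, not the committed code):

            import java.util.concurrent.LinkedBlockingQueue;
            import java.util.concurrent.ThreadPoolExecutor;
            import java.util.concurrent.TimeUnit;

            import com.google.common.util.concurrent.ThreadFactoryBuilder;

            public class RegionServicesForStores {
              // One pool shared by all regions on this region server (hence static).
              private static final ThreadPoolExecutor IN_MEMORY_FLUSH_POOL =
                  new ThreadPoolExecutor(3, 3, 60, TimeUnit.SECONDS,
                      new LinkedBlockingQueue<Runnable>(),
                      new ThreadFactoryBuilder()
                          .setNameFormat("in-memory-flush-and-compaction-%d")
                          .setDaemon(true)
                          .build());

              // The compacting memstore hands its flush-and-compaction work here.
              public void addToCompactionPool(Runnable inMemoryFlushAndCompaction) {
                IN_MEMORY_FLUSH_POOL.submit(inMemoryFlushAndCompaction);
              }
            }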

            eshcar Eshcar Hillel added a comment -

            new patch

            hadoopqa Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            0 rubocop 0m 0s rubocop was not available.
            0 ruby-lint 0m 0s Ruby-lint was not available.
            +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
            +1 @author 0m 0s The patch does not contain any @author tags.
            +1 test4tests 0m 0s The patch appears to include 9 new or modified test files.
            0 mvndep 0m 11s Maven dependency ordering for branch
            +1 mvninstall 3m 0s master passed
            +1 compile 2m 24s master passed with JDK v1.8.0
            +1 compile 1m 36s master passed with JDK v1.7.0_79
            +1 checkstyle 5m 27s master passed
            +1 mvneclipse 1m 5s master passed
            +1 findbugs 4m 3s master passed
            +1 javadoc 2m 2s master passed with JDK v1.8.0
            +1 javadoc 1m 36s master passed with JDK v1.7.0_79
            0 mvndep 0m 12s Maven dependency ordering for patch
            +1 mvninstall 2m 8s the patch passed
            +1 compile 2m 25s the patch passed with JDK v1.8.0
            +1 javac 2m 25s the patch passed
            +1 compile 1m 52s the patch passed with JDK v1.7.0_79
            +1 javac 1m 52s the patch passed
            +1 checkstyle 1m 17s hbase-common: patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
            +1 checkstyle 1m 5s hbase-client: patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
            +1 checkstyle 1m 14s hbase-server: patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
            +1 checkstyle 1m 20s hbase-shell: patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
            +1 checkstyle 1m 4s hbase-it: patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
            +1 mvneclipse 1m 5s the patch passed
            +1 whitespace 0m 0s Patch has no whitespace issues.
            +1 hadoopcheck 30m 30s Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1.
            +1 findbugs 5m 18s the patch passed
            -1 javadoc 4m 59s hbase-server-jdk1.8.0 with JDK v1.8.0 generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1)
            +1 javadoc 2m 20s the patch passed with JDK v1.8.0
            -1 javadoc 6m 42s hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1)
            +1 javadoc 1m 43s the patch passed with JDK v1.7.0_79
            +1 unit 1m 51s hbase-common in the patch passed.
            +1 unit 0m 59s hbase-client in the patch passed.
            -1 unit 100m 13s hbase-server in the patch failed.
            -1 unit 1m 9s hbase-shell in the patch failed.
            +1 unit 0m 25s hbase-it in the patch passed.
            +1 asflicense 1m 6s Patch does not generate ASF License warnings.
            182m 6s



            Reason Tests
            Failed junit tests hadoop.hbase.regionserver.TestWalAndCompactingMemStoreFlush
              hadoop.hbase.security.visibility.TestVisibilityLabelsWithDeletes
              hadoop.hbase.client.rsgroup.TestShellRSGroups
              hadoop.hbase.client.TestShell
            Timed out junit tests org.apache.hadoop.hbase.security.access.TestNamespaceCommands



            Subsystem Report/Notes
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799508/HBASE-14920-V03.patch
            JIRA Issue HBASE-14920
            Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile rubocop ruby_lint
            uname Linux pietas.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
            git revision master / bd3b975
            Default Java 1.7.0_79
            Multi-JDK versions /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79
            findbugs v3.0.0
            javadoc hbase-server-jdk1.8.0: https://builds.apache.org/job/PreCommit-HBASE-Build/1485/artifact/patchprocess/diff-javadoc-javadoc-hbase-server-jdk1.8.0.txt
            javadoc hbase-server-jdk1.7.0_79: https://builds.apache.org/job/PreCommit-HBASE-Build/1485/artifact/patchprocess/diff-javadoc-javadoc-hbase-server-jdk1.7.0_79.txt
            unit https://builds.apache.org/job/PreCommit-HBASE-Build/1485/artifact/patchprocess/patch-unit-hbase-server.txt
            unit https://builds.apache.org/job/PreCommit-HBASE-Build/1485/artifact/patchprocess/patch-unit-hbase-shell.txt
            unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/1485/artifact/patchprocess/patch-unit-hbase-server.txt https://builds.apache.org/job/PreCommit-HBASE-Build/1485/artifact/patchprocess/patch-unit-hbase-shell.txt
            Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/1485/testReport/
            modules C: hbase-common hbase-client hbase-server hbase-shell hbase-it U: .
            Console output https://builds.apache.org/job/PreCommit-HBASE-Build/1485/console
            Powered by Apache Yetus 0.2.1 http://yetus.apache.org

            This message was automatically generated.

            eshcar Eshcar Hillel added a comment -

            new patch - updating review board

            hadoopqa Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            0 rubocop 0m 0s rubocop was not available.
            0 ruby-lint 0m 0s Ruby-lint was not available.
            +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
            +1 @author 0m 0s The patch does not contain any @author tags.
            +1 test4tests 0m 0s The patch appears to include 10 new or modified test files.
            0 mvndep 0m 47s Maven dependency ordering for branch
            +1 mvninstall 2m 50s master passed
            +1 compile 2m 19s master passed with JDK v1.8.0
            +1 compile 1m 36s master passed with JDK v1.7.0_79
            +1 checkstyle 5m 21s master passed
            +1 mvneclipse 1m 2s master passed
            +1 findbugs 3m 28s master passed
            +1 javadoc 1m 50s master passed with JDK v1.8.0
            +1 javadoc 1m 27s master passed with JDK v1.7.0_79
            0 mvndep 0m 10s Maven dependency ordering for patch
            +1 mvninstall 1m 52s the patch passed
            +1 compile 2m 16s the patch passed with JDK v1.8.0
            +1 javac 2m 16s the patch passed
            +1 compile 1m 39s the patch passed with JDK v1.7.0_79
            +1 javac 1m 39s the patch passed
            +1 checkstyle 1m 15s hbase-common: patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
            +1 checkstyle 1m 6s hbase-client: patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
            +1 checkstyle 1m 0s hbase-server: patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
            +1 checkstyle 1m 3s hbase-shell: patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
            +1 checkstyle 0m 59s hbase-it: patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)
            +1 mvneclipse 1m 3s the patch passed
            +1 whitespace 0m 0s Patch has no whitespace issues.
            +1 hadoopcheck 25m 4s Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1.
            +1 findbugs 4m 23s the patch passed
            -1 javadoc 4m 5s hbase-server-jdk1.8.0 with JDK v1.8.0 generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1)
            +1 javadoc 1m 52s the patch passed with JDK v1.8.0
            -1 javadoc 5m 29s hbase-server-jdk1.7.0_79 with JDK v1.7.0_79 generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1)
            +1 javadoc 1m 24s the patch passed with JDK v1.7.0_79
            +1 unit 1m 43s hbase-common in the patch passed.
            +1 unit 0m 54s hbase-client in the patch passed.
            +1 unit 91m 54s hbase-server in the patch passed.
            -1 unit 1m 6s hbase-shell in the patch failed.
            +1 unit 0m 23s hbase-it in the patch passed.
            +1 asflicense 1m 8s Patch does not generate ASF License warnings.
            164m 10s



            Reason Tests
            Failed junit tests hadoop.hbase.client.TestShell
              hadoop.hbase.client.rsgroup.TestShellRSGroups



            Subsystem Report/Notes
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12799733/HBASE-14920-V04.patch
            JIRA Issue HBASE-14920
            Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile rubocop ruby_lint
            uname Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
            git revision master / 954a417
            Default Java 1.7.0_79
            Multi-JDK versions /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79
            findbugs v3.0.0
            javadoc hbase-server-jdk1.8.0: https://builds.apache.org/job/PreCommit-HBASE-Build/1514/artifact/patchprocess/diff-javadoc-javadoc-hbase-server-jdk1.8.0.txt
            javadoc hbase-server-jdk1.7.0_79: https://builds.apache.org/job/PreCommit-HBASE-Build/1514/artifact/patchprocess/diff-javadoc-javadoc-hbase-server-jdk1.7.0_79.txt
            unit https://builds.apache.org/job/PreCommit-HBASE-Build/1514/artifact/patchprocess/patch-unit-hbase-shell.txt
            unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/1514/artifact/patchprocess/patch-unit-hbase-shell.txt
            Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/1514/testReport/
            modules C: hbase-common hbase-client hbase-server hbase-shell hbase-it U: .
            Console output https://builds.apache.org/job/PreCommit-HBASE-Build/1514/console
            Powered by Apache Yetus 0.2.1 http://yetus.apache.org

            This message was automatically generated.

            stack Michael Stack added a comment -

            I was going to say the same: is it an executor per region?

            Static is usually more pain than it is worth.

            Just leave it as an HRegionServer executor then, if it is causing this much grief. It can be passed in with RegionServices...

            Let me look at the patch.
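
            For illustration of the alternative suggested here (handing the pool in through the region services rather than keeping a static field), a minimal hypothetical sketch; the interface and method names below are assumptions made for the example, not the patch's actual API.

            import java.util.concurrent.ExecutorService;

            // Hypothetical handle through which the region server exposes its shared services;
            // both the interface and the method name are illustrative, not from the patch.
            interface RegionServicesSketch {
              ExecutorService getInMemoryCompactionPool();
            }

            // Sketch of a memstore that borrows the region server's executor instead of
            // holding a static pool of its own.
            class CompactingMemStoreSketch {
              private final ExecutorService compactionPool;

              CompactingMemStoreSketch(RegionServicesSketch regionServices) {
                // The pool is owned by the region server; the memstore only references it.
                this.compactionPool = regionServices.getInMemoryCompactionPool();
              }

              void startInMemoryCompaction(Runnable compactionTask) {
                compactionPool.execute(compactionTask); // run the compaction off the write path
              }
            }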


            ebortnik Edward Bortnikov added a comment -

            Just a remark - finishing this JIRA will expedite HBASE-14921 (flat storage). Any new feedback, folks?

            anoop.hbase Anoop Sam John added a comment -

            public final static double IN_MEMORY_FLUSH_THRESHOLD_FACTOR = 0.9;

            So we check after every cell addition to the active segment whether it is worth doing an in-memory flush now. For that size calculation, why do we consider FlushLargeStoresPolicy.DEFAULT_HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND_MIN and then multiply it by this factor of 90%?
            FlushLargeStoresPolicy#configureForRegion sets a bound for each memstore by:

            protected void configureForRegion(HRegion region) {
                super.configureForRegion(region);
                int familyNumber = region.getTableDesc().getFamilies().size();
                if (familyNumber <= 1) {
                  // No need to parse and set flush size lower bound if only one family
                  // Family number might also be zero in some of our unit test case
                  return;
                }
                // For multiple families, lower bound is the "average flush size" by default
                // unless setting in configuration is larger.
                long flushSizeLowerBound = region.getMemstoreFlushSize() / familyNumber;
                long minimumLowerBound =
                    getConf().getLong(HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND_MIN,
                      DEFAULT_HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND_MIN);
                if (minimumLowerBound > flushSizeLowerBound) {
                  flushSizeLowerBound = minimumLowerBound;
                }
            

            Can we simplify our calculation: take the average size per memstore at a normal flush (i.e. the region memstore flush size, default 128 MB, divided by the number of stores) and multiply that by a factor for deciding the in-memory flush? Say a table has 2 stores, so the average maximum size for each memstore is 64 MB, and we keep a factor of, say, 25%; then when the memstore size reaches 16 MB we do an in-memory flush.
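
            For illustration, a minimal sketch of that proposed calculation; the 25% factor and all names here are example assumptions, not constants from the patch.

            public class InMemoryFlushThresholdSketch {
              public static void main(String[] args) {
                long regionFlushSize = 128L * 1024 * 1024; // hbase.hregion.memstore.flush.size default, 128 MB
                int storeCount = 2;                        // column families in the table
                double inMemoryFlushFactor = 0.25;         // proposed tunable factor

                long perStoreBound = regionFlushSize / storeCount;             // 64 MB average per memstore
                long threshold = (long) (perStoreBound * inMemoryFlushFactor); // 16 MB triggers an in-memory flush
                System.out.println("In-memory flush threshold: " + threshold + " bytes");
              }
            }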

            Another concern: when a flush request comes (it can be because the global memstore size is above the high or low watermark, or because the region memstore size reaches its limit, default 128 MB, or because of an explicit flush call from the user via the API), why do we flush to disk only some part - only the tail of the pipeline? IMHO, when a flush-to-disk request comes, we must flush the whole memstore.
            In the case of a flush because the lower/higher watermark was crossed, we pick regions for flushing in increasing order of region memstore size. This size includes all segments' sizes, and we may end up flushing a much smaller amount!

            Another general point is that we now account the memstore size in many places: at the RS level and at the Region level as state variables, and the memstore itself also keeps a size. With in-memory flushes the size now changes after every in-memory flush. I see we have a call via RegionServicesForStores, but doesn't all of this make us more error prone? Do we need some sort of cleanup in this size-accounting area? cc saint.ack@gmail.com

            anoop.hbase Anoop Sam John added a comment -

            After an in-memory flush, we will reduce some heap overhead and subtract that delta from the memstore size. I can see a call to reduce the variable we keep at the RS level. We have another at every HRegion level; do we update that also? Because of this I was wondering whether we should simplify this and avoid keeping the size in multiple places.

            yuzhihong@gmail.com Ted Yu added a comment -

            Can the review board be updated?

            When I clicked Download, HBASE-14920-V01.patch was downloaded, which is out of date.

            Thanks

            stack Michael Stack added a comment -

            Unused import: import static org.apache.hadoop.hbase.io.hfile.BlockType.MAGIC_LENGTH;

            Was IntegrationTestBigLinkedList failing w/o your changing the flush policy to ALL?

            Misedit here... * @g MemStore#snapshot()

            No need of a region lock in this case:

            47	 * Periodically, a compaction is applied in the background to all pipeline components resulting
            48	 * in a single read-only component. The ``old'' components are discarded when no scanner is reading
            49	 * them.
            

            We will print this out for every Store in the table?

            92 LOG.info("Setting in-memory flush size threshold for table "
            93 + getRegionServices().getRegionInfo().getTable()
            94 + "to " + flushSizeLowerBound);

            Should it not be a message particular to this Store rather than a remark on the table?

            These look like internal utility methods but are public:

            public static long getSegmentSize(Segment segment) {
            public static long getSegmentsSize(List<? extends Segment> list) {

            More to follow...

            eshcar Eshcar Hillel added a comment -

            New patch is available. Also available in RB https://reviews.apache.org/r/45080/.
            I changed the setting of the in-memory flush threshold according to what anoop.hbase suggested.

            Anoop also raised a concern that flushing the tail of the compaction pipeline is not enough.
            As I see it, a call to flush data to disk aims to reduce the memory held by the region, and this goal is achieved. Furthermore, in most cases the largest portion of the data resides in the tail segment of the pipeline, so almost all the data will be flushed to disk. Finally, this wouldn't be the first case where you need to flush more than once in order to completely empty the memory - see HRegion#doClose().

            After an in-memory flush, we will reduce some heap overhead and subtract that delta from the memstore size. I can see a call to reduce the variable we keep at the RS level. We have another at every HRegion level; do we update that also?

            After an in-memory compaction completes, the memstore invokes RegionServicesForStores#addAndGetGlobalMemstoreSize(size), which then invokes HRegion#addAndGetGlobalMemstoreSize(size); this updates the region counter and takes care of updating the RegionServer counter.

            public long addAndGetGlobalMemstoreSize(long memStoreSize) {
             if (this.rsAccounting != null) {
               rsAccounting.addAndGetGlobalMemstoreSize(memStoreSize);
             }
             return this.memstoreSize.addAndGet(memStoreSize);
            }
            

            None of the counters (RS, region, segment) are new, all of them existed before this patch, so I fail to see the problem.
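
            For illustration, a minimal sketch (with hypothetical class and field names) of that delta-reporting pattern: the in-memory compaction reports only the heap it released, through a single call that keeps the region and region-server counters in step.

            import java.util.concurrent.atomic.AtomicLong;

            // Hypothetical stand-ins for the region and region-server accounting described above.
            class SizeAccountingSketch {
              private final AtomicLong regionMemstoreSize = new AtomicLong();
              private final AtomicLong rsMemstoreSize = new AtomicLong();

              // Mirrors the pattern of HRegion#addAndGetGlobalMemstoreSize quoted above:
              // one entry point updates both counters, so callers report a delta exactly once.
              long addAndGetGlobalMemstoreSize(long delta) {
                rsMemstoreSize.addAndGet(delta);
                return regionMemstoreSize.addAndGet(delta);
              }

              // After an in-memory compaction, report only the heap that was released.
              void onInMemoryCompaction(long segmentSizeBefore, long segmentSizeAfter) {
                long delta = segmentSizeAfter - segmentSizeBefore; // negative when heap shrinks
                addAndGetGlobalMemstoreSize(delta);
              }
            }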

            No need of a region lock in this case

            The region lock is only held while flushing the active segment into the pipeline, not during compaction:

            void flushInMemory() throws IOException {
             // Phase I: Update the pipeline
             getRegionServices().blockUpdates();
             try {
               MutableSegment active = getActive();
               LOG.info("IN-MEMORY FLUSH: Pushing active segment into compaction pipeline, " +
                   "and initiating compaction.");
               pushActiveToPipeline(active);
             } finally {
               getRegionServices().unblockUpdates();
             }
            ...
            
            hadoopqa Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            0 rubocop 0m 1s rubocop was not available.
            0 ruby-lint 0m 1s Ruby-lint was not available.
            +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
            +1 @author 0m 0s The patch does not contain any @author tags.
            +1 test4tests 0m 0s The patch appears to include 10 new or modified test files.
            0 mvndep 0m 30s Maven dependency ordering for branch
            +1 mvninstall 3m 8s master passed
            +1 compile 2m 19s master passed with JDK v1.8.0
            +1 compile 1m 42s master passed with JDK v1.7.0_79
            +1 checkstyle 5m 20s master passed
            +1 mvneclipse 1m 6s master passed
            -1 findbugs 0m 17s branch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml)
            +1 javadoc 1m 53s master passed with JDK v1.8.0
            +1 javadoc 1m 24s master passed with JDK v1.7.0_79
            0 mvndep 0m 10s Maven dependency ordering for patch
            +1 mvninstall 1m 54s the patch passed
            +1 compile 2m 12s the patch passed with JDK v1.8.0
            +1 javac 2m 12s the patch passed
            +1 compile 1m 37s the patch passed with JDK v1.7.0_79
            +1 javac 1m 37s the patch passed
            -1 checkstyle 1m 16s hbase-common: patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
            -1 checkstyle 1m 6s hbase-client: patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
            -1 checkstyle 1m 0s hbase-server: patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
            -1 checkstyle 1m 5s hbase-shell: patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
            -1 checkstyle 1m 3s hbase-it: patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
            +1 mvneclipse 1m 4s the patch passed
            +1 whitespace 0m 0s Patch has no whitespace issues.
            +1 hadoopcheck 8m 42s Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0.
            -1 findbugs 2m 15s hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
            -1 findbugs 0m 18s patch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml)
            +1 javadoc 1m 54s the patch passed with JDK v1.8.0
            +1 javadoc 1m 24s the patch passed with JDK v1.7.0_79
            +1 unit 1m 43s hbase-common in the patch passed.
            +1 unit 0m 53s hbase-client in the patch passed.
            -1 unit 112m 6s hbase-server in the patch failed.
            +1 unit 0m 21s hbase-shell in the patch passed.
            +1 unit 0m 27s hbase-it in the patch passed.
            +1 asflicense 1m 2s Patch does not generate ASF License warnings.
            168m 25s



            Reason Tests
            FindBugs module:hbase-server
              org.apache.hadoop.hbase.regionserver.CompactingMemStore.IN_MEMORY_FLUSH_THRESHOLD_FACTOR isn't final but should be At CompactingMemStore.java:be At CompactingMemStore.java:[line 58]
            Timed out junit tests org.apache.hadoop.hbase.security.access.TestNamespaceCommands



            Subsystem Report/Notes
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12801824/HBASE-14920-V05.patch
            JIRA Issue HBASE-14920
            Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile rubocop ruby_lint
            uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh
            git revision master / 58c4c3d
            Default Java 1.7.0_79
            Multi-JDK versions /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79
            findbugs v3.0.0
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1712/artifact/patchprocess/diff-checkstyle-hbase-common.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1712/artifact/patchprocess/diff-checkstyle-hbase-client.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1712/artifact/patchprocess/diff-checkstyle-hbase-server.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1712/artifact/patchprocess/diff-checkstyle-hbase-shell.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1712/artifact/patchprocess/diff-checkstyle-hbase-it.txt
            findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/1712/artifact/patchprocess/new-findbugs-hbase-server.html
            unit https://builds.apache.org/job/PreCommit-HBASE-Build/1712/artifact/patchprocess/patch-unit-hbase-server.txt
            unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/1712/artifact/patchprocess/patch-unit-hbase-server.txt
            Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/1712/testReport/
            modules C: hbase-common hbase-client hbase-server hbase-shell hbase-it U: .
            Console output https://builds.apache.org/job/PreCommit-HBASE-Build/1712/console
            Powered by Apache Yetus 0.2.1 http://yetus.apache.org

            This message was automatically generated.

            anoop.hbase Anoop Sam John added a comment -

            IMHO, when a user calls flush on a region, all the cells must get flushed to disk; it is an explicit request from the user.
            Also, yes, I agree that the flush tries to reduce the heap usage of the memstore and flushing only part of it does that. But the selection of the region for flushing is done with the assumption that the flush call will release the entire memstore heap size (whatever it is at that time). It can happen that when flush is called the pipeline has entries and the active segment is also almost at the in-memory flush size. The heap size occupied by the active segment is considerably high (especially with small cells) because of the per-cell overhead.

            anoop.hbase Anoop Sam John added a comment -

            None of the counters (RS, region, segment) are new, all of them existed before this patch, so I fail to see the problem.

            I agree. I am not saying this patch added any. What I am saying is that the size math happens in 3 places, and with the additional calls this patch adds to change the size, it is very complex and more error prone. So I am just asking whether we should revisit this area and somehow simplify it. Not directed at this patch.

            hadoopqa Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            0 rubocop 0m 1s rubocop was not available.
            0 ruby-lint 0m 1s Ruby-lint was not available.
            +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
            +1 @author 0m 0s The patch does not contain any @author tags.
            +1 test4tests 0m 0s The patch appears to include 10 new or modified test files.
            0 mvndep 0m 9s Maven dependency ordering for branch
            +1 mvninstall 2m 46s master passed
            +1 compile 1m 49s master passed with JDK v1.8.0
            +1 compile 1m 20s master passed with JDK v1.7.0_79
            +1 checkstyle 6m 48s master passed
            +1 mvneclipse 0m 51s master passed
            -1 findbugs 0m 13s branch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml)
            +1 javadoc 1m 25s master passed with JDK v1.8.0
            +1 javadoc 1m 11s master passed with JDK v1.7.0_79
            0 mvndep 0m 9s Maven dependency ordering for patch
            +1 mvninstall 1m 36s the patch passed
            +1 compile 1m 49s the patch passed with JDK v1.8.0
            +1 javac 1m 49s the patch passed
            +1 compile 1m 16s the patch passed with JDK v1.7.0_79
            +1 javac 1m 16s the patch passed
            -1 checkstyle 1m 57s hbase-client: patch generated 2 new + 49 unchanged - 1 fixed = 51 total (was 50)
            -1 checkstyle 1m 32s hbase-server: patch generated 2 new + 49 unchanged - 1 fixed = 51 total (was 50)
            -1 checkstyle 1m 32s hbase-shell: patch generated 2 new + 49 unchanged - 1 fixed = 51 total (was 50)
            -1 checkstyle 1m 40s hbase-it: patch generated 2 new + 49 unchanged - 1 fixed = 51 total (was 50)
            +1 mvneclipse 0m 55s the patch passed
            +1 whitespace 0m 0s Patch has no whitespace issues.
            +1 hadoopcheck 8m 19s Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0.
            -1 findbugs 0m 11s patch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml)
            +1 javadoc 1m 27s the patch passed with JDK v1.8.0
            +1 javadoc 1m 6s the patch passed with JDK v1.7.0_79
            +1 unit 0m 54s hbase-client in the patch passed.
            -1 unit 109m 22s hbase-server in the patch failed.
            +1 unit 0m 22s hbase-shell in the patch passed.
            +1 unit 0m 20s hbase-it in the patch passed.
            +1 asflicense 0m 44s Patch does not generate ASF License warnings.
            158m 49s



            Reason Tests
            Timed out junit tests org.apache.hadoop.hbase.security.access.TestNamespaceCommands



            Subsystem Report/Notes
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12801952/HBASE-14920-V06.patch
            JIRA Issue HBASE-14920
            Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile rubocop ruby_lint
            uname Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh
            git revision master / 082d193
            Default Java 1.7.0_79
            Multi-JDK versions /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79
            findbugs v3.0.0
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1733/artifact/patchprocess/diff-checkstyle-hbase-client.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1733/artifact/patchprocess/diff-checkstyle-hbase-server.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1733/artifact/patchprocess/diff-checkstyle-hbase-shell.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1733/artifact/patchprocess/diff-checkstyle-hbase-it.txt
            unit https://builds.apache.org/job/PreCommit-HBASE-Build/1733/artifact/patchprocess/patch-unit-hbase-server.txt
            unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/1733/artifact/patchprocess/patch-unit-hbase-server.txt
            Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/1733/testReport/
            modules C: hbase-client hbase-server hbase-shell hbase-it U: .
            Console output https://builds.apache.org/job/PreCommit-HBASE-Build/1733/console
            Powered by Apache Yetus 0.2.1 http://yetus.apache.org

            This message was automatically generated.

            eshcar Eshcar Hillel added a comment -

            A core property of the new in-memory flush and compaction is retaining data in memory for a longer time. In the majority of cases a flush to disk is invoked to reduce memory usage; in the less frequent use cases the user wishes to clear the memory.
            If we only consider the second case and flush the entire content to disk with every flush request, we significantly weaken the feature.
            Alternatively, we can document this feature well, so that users understand that if they wish to clear the memory they may need to invoke flush several times.
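
            For illustration only, a hedged client-side sketch of that documented behavior: an application that really needs the memstore emptied can simply issue the flush request more than once (the table name and the pass count below are assumptions, not part of the patch).

            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.hbase.HBaseConfiguration;
            import org.apache.hadoop.hbase.TableName;
            import org.apache.hadoop.hbase.client.Admin;
            import org.apache.hadoop.hbase.client.Connection;
            import org.apache.hadoop.hbase.client.ConnectionFactory;

            public class RepeatedFlushSketch {
              public static void main(String[] args) throws Exception {
                Configuration conf = HBaseConfiguration.create();
                try (Connection connection = ConnectionFactory.createConnection(conf);
                     Admin admin = connection.getAdmin()) {
                  TableName table = TableName.valueOf("usertable"); // assumed table name
                  // With a compacting memstore a single flush may persist only the tail of the
                  // compaction pipeline, so repeat the request; two passes are an assumption,
                  // not a guarantee that the memstore ends up empty.
                  for (int i = 0; i < 2; i++) {
                    admin.flush(table);
                  }
                }
              }
            }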

            Actually the same can happen with the default memstore.
            Consider the case where, while a flush to disk is in progress, the user invokes a flush to clear memory. When the user's flush is executed it finds that the snapshot segment still has entries – it only logs a warning and ignores the user's flush request. So in this case as well a user request is not satisfied, and the user should be able to handle this.

            hadoopqa Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            0 rubocop 0m 0s rubocop was not available.
            0 ruby-lint 0m 0s Ruby-lint was not available.
            +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
            +1 @author 0m 0s The patch does not contain any @author tags.
            +1 test4tests 0m 0s The patch appears to include 10 new or modified test files.
            0 mvndep 0m 13s Maven dependency ordering for branch
            +1 mvninstall 3m 26s master passed
            +1 compile 1m 55s master passed with JDK v1.8.0
            +1 compile 1m 28s master passed with JDK v1.7.0_79
            +1 checkstyle 7m 25s master passed
            +1 mvneclipse 1m 0s master passed
            -1 findbugs 0m 18s branch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml)
            +1 javadoc 1m 32s master passed with JDK v1.8.0
            +1 javadoc 1m 11s master passed with JDK v1.7.0_79
            0 mvndep 0m 10s Maven dependency ordering for patch
            +1 mvninstall 1m 44s the patch passed
            +1 compile 1m 46s the patch passed with JDK v1.8.0
            +1 javac 1m 46s the patch passed
            +1 compile 1m 23s the patch passed with JDK v1.7.0_79
            +1 javac 1m 23s the patch passed
            -1 checkstyle 2m 6s hbase-client: patch generated 1 new + 49 unchanged - 1 fixed = 50 total (was 50)
            -1 checkstyle 1m 43s hbase-server: patch generated 1 new + 49 unchanged - 1 fixed = 50 total (was 50)
            -1 checkstyle 1m 39s hbase-shell: patch generated 1 new + 49 unchanged - 1 fixed = 50 total (was 50)
            -1 checkstyle 1m 44s hbase-it: patch generated 1 new + 49 unchanged - 1 fixed = 50 total (was 50)
            +1 mvneclipse 0m 57s the patch passed
            +1 whitespace 0m 0s Patch has no whitespace issues.
            +1 hadoopcheck 9m 22s Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0.
            -1 findbugs 0m 18s patch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml)
            +1 javadoc 1m 27s the patch passed with JDK v1.8.0
            +1 javadoc 1m 15s the patch passed with JDK v1.7.0_79
            +1 unit 0m 57s hbase-client in the patch passed.
            +1 unit 103m 30s hbase-server in the patch passed.
            +1 unit 0m 23s hbase-shell in the patch passed.
            +1 unit 0m 23s hbase-it in the patch passed.
            +1 asflicense 0m 48s Patch does not generate ASF License warnings.
            157m 43s



            Subsystem Report/Notes
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12802282/HBASE-14920-V07.patch
            JIRA Issue HBASE-14920
            Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile rubocop ruby_lint
            uname Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh
            git revision master / d23d600
            Default Java 1.7.0_79
            Multi-JDK versions /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79
            findbugs v3.0.0
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1765/artifact/patchprocess/diff-checkstyle-hbase-client.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1765/artifact/patchprocess/diff-checkstyle-hbase-server.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1765/artifact/patchprocess/diff-checkstyle-hbase-shell.txt
            checkstyle https://builds.apache.org/job/PreCommit-HBASE-Build/1765/artifact/patchprocess/diff-checkstyle-hbase-it.txt
            Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/1765/testReport/
            modules C: hbase-client hbase-server hbase-shell hbase-it U: .
            Console output https://builds.apache.org/job/PreCommit-HBASE-Build/1765/console
            Powered by Apache Yetus 0.2.1 http://yetus.apache.org

            This message was automatically generated.

            stack Michael Stack added a comment -

            If we only consider the second case and flush the entire content to disk with every flush request, we significantly weaken the feature strength.

            I'd think this would happen extremely rarely, and when it does, I'd think it an emergency or dev testing.

            Alternatively, we can have this feature well documented, so that users understand that if they wish to clear the memory they may need to invoke flush several times.

            We could do this too, yes.

            So in this case as well a user request is not satisfied, and the user should be able to handle this.

            Nod. Sounds like documenting that it may take a few attempts to clear memory is the way to go.

            If a user is persistent, will all memory be flushed from pipeline (each user click moves a segment? So if N segments, it will take N clicks to clear all memory?)

            That'd be fine I'd say.

            stack Michael Stack added a comment -

            Good point. Any input here eshcar, since you're deep in here? Could be a follow-on.

            stack Michael Stack added a comment -

            I went over page #1 of the patch and skimmed page #2. Left some comments. Thanks eshcar

            anoop.hbase Anoop Sam John added a comment -

            So how will it behave when the system calls region flush before a graceful stop (region close)? This will happen before a split, after a replay, etc. Then also we won't do the full flush?
            I agree with the point that in your use case, which is mostly increment mutations, delaying the flush of cells as much as possible helps you. Considering the general usage where most of the cells will survive a flush, don't you think this partial flush will make a larger number of small-sized files and so more compactions? The normal decision of when to flush depends on the memstore size. This feature helps to reduce the overall memstore heap size. Say we do an in-memory flush when 25% of the memstore size is reached. This will reduce the heap size as it avoids the Cell object and CSLM entry overhead. It will also help allow a bigger memstore (provided we have a larger -Xmx value). In the normal memstore, the bigger the memstore, the more cells it holds, and as that grows CSLM performance becomes poor. But with this feature we clear off the CSLM in between, so we are better off. So in these ways it already helps a lot to hold cells in memory for a longer time. I am still not convinced about the part where an explicit need/call to flush still does not flush the whole memstore.


            ebortnik Edward Bortnikov added a comment -

            anoop.hbase, would it help if we added a "complete" flag to the flush API, to be used exactly in these non-negotiable cases? The default would be false, so that the system can optimize the mainstream performance.
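            For illustration, a minimal sketch of what such a flag could look like at the API boundary. The interface and method names below are hypothetical stand-ins, not the actual HBase flush API; the point is only the extra boolean and its default.

            // Hypothetical sketch only, not HBase code.
            interface RegionFlushApi {
              /**
               * @param complete if true, force the entire memstore (active segment plus the
               *                 whole compaction pipeline) out to disk; if false (the proposed
               *                 default), the memstore may retain compacted in-memory segments.
               */
              void requestFlush(String regionName, boolean complete);
            }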

            stack Michael Stack added a comment -

            Here is an attempt at addressing anoop.hbase's concern that in-memory flushes could have us flushing small files; correct me if I have it wrong eshcar.

            As I understand it, we flush a single Segment at a time to disk. In the current patch where there is no pipeline – an in-memory flush makes a Segment which becomes the Snapshot – flush sizes should be effectively as they are now.

            Later, when a pipeline is instituted, the flush-to-disk size will be whatever our Segment size is, or even more (Segments get added to the pipeline; if compaction runs, Segments can be combined). Segments can also run denser than CSLS. So in the general case, a Segment should be as fat as our current file flushes?

            eshcar Eshcar Hillel added a comment -

            There are two distinct issues here.

            don't you think this partial flush will make a larger number of small-sized files and so more compactions?

            On the contrary, the evaluation experiments we ran show that with a compacting memstore flushed files are bigger and created less frequently, and thus incur less compaction. Let me explain.
            With the default memstore, consider the case where the memory fills up and all the data (128MB) is flushed to disk. If there is some duplication in the data then it is removed by a compaction while flushing the data to a file; this reduces the size of the data by the factor of the duplication. In addition, the data is compressed on disk and thus takes even less space. For example, in our experiments the sizes of the files after being flushed were 76MB, 60MB, and 24MB for the uniform, zipfian and hotspot key distributions, respectively. That is, the more the data is skewed, the smaller the files are.
            On the other hand, with the compacting memstore – the segment at the tail of the pipeline having been compacted at least once – not only are the files bigger, they are flushed to disk much later. When the optimizations of HBASE-14921 are in, memory utilization will be even better.

            So how will it behave when the system calls region flush before a graceful stop (region close)? This will happen before a split, after a replay, etc. Then also we won't do the full flush?

            The way HRegion currently handles a graceful stop is by issuing a loop of flush requests, in between checking whether the size of the memstore has become zero.

            // Don't flush the cache if we are aborting
            if (!abort && canFlush) {
             int flushCount = 0;
             while (this.memstoreSize.get() > 0) {
               try {
                 if (flushCount++ > 0) {
                   int actualFlushes = flushCount - 1;
                   if (actualFlushes > 5) {
                     // If we tried 5 times and are unable to clear memory, abort
                     // so we do not lose data
                     throw new DroppedSnapshotException("Failed clearing memory after " +
                       actualFlushes + " attempts on region: " +
                         Bytes.toStringBinary(getRegionInfo().getRegionName()));
                   }
                   LOG.info("Running extra flush, " + actualFlushes +
                     " (carrying snapshot?) " + this);
                 }
                 internalFlushcache(status);
               } catch (IOException ioe) {
                 status.setStatus("Failed flush " + this + ", putting online again");
                 synchronized (writestate) {
                   writestate.writesEnabled = true;
                 }
                 // Have to throw to upper layers.  I can't abort server from here.
                 throw ioe;
               }
             }
            }
            

            Here is a suggestion of how to tweak this code so that it also handles the case of a compacting memstore: count the number of "failed" flush requests, that is, flush requests that did not reduce the size of the memstore, and limit the number of failed attempts. This is equivalent to the original intent.

            // Don't flush the cache if we are aborting
            if (!abort && canFlush) {
              int failedFlushCount = 0;
              int flushCount = 0;
              long remainingSize = this.memstoreSize.get();
              while (remainingSize > 0) {
                try {
                  internalFlushcache(status);
                  if (flushCount > 0) {
                    LOG.info("Running extra flush, " + flushCount +
                      " (carrying snapshot?) " + this);
                  }
                  flushCount++;
                  long tmp = this.memstoreSize.get();
                  if (tmp >= remainingSize) {
                    failedFlushCount++;
                  }
                  remainingSize = tmp;
                  if (failedFlushCount > 5) {
                    // If we failed 5 times and are unable to clear memory, abort
                    // so we do not lose data
                    throw new DroppedSnapshotException("Failed clearing memory after " +
                      flushCount + " attempts on region: " +
                        Bytes.toStringBinary(getRegionInfo().getRegionName()));
                  }
                } catch (IOException ioe) {
                  status.setStatus("Failed flush " + this + ", putting online again");
                  synchronized (writestate) {
                    writestate.writesEnabled = true;
                  }
                  // Have to throw to upper layers.  I can't abort server from here.
                  throw ioe;
                }
              }
            }
            
            eshcar Eshcar Hillel added a comment -

            New patch including code review comments and solution for graceful region closing

            anoop.hbase Anoop Sam John added a comment -

            Did not go through the latest patch..

            So it will work this way: the active segment is moved to the pipeline at a threshold and it is in-memory flushed, and this new immutable segment is now in the pipeline. Later another active segment is moved to the pipeline and these two segments together get compacted into one immutable segment, and this continues. Correct? I was thinking that a segment, once in-memory flushed, would not get further compacted. That was the biggest worry behind saying that we will flush smaller-sized HFiles. Yes, when we do an in-memory flush we will now make bigger-sized HFiles (compared to the current memstore), as in the current memstore a considerable part of the heap size is overhead from Cell objects and CSLM nodes. What I was saying is that, compared to flushing the whole memstore (not just the tail of the pipeline) in the new implementation as well, we will make smaller-sized HFiles.
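            For reference, a toy sketch of the flow described above (active segment pushed into the pipeline at a threshold, pipeline segments merged by in-memory compaction, duplicate keys collapsing to the newest value). All class and method names are illustrative only; this is not the patch's API.

            import java.util.ArrayDeque;
            import java.util.Deque;
            import java.util.TreeMap;

            // Toy model only: a real segment holds Cells, not strings, and the real
            // in-memory flush/compaction runs asynchronously.
            final class CompactingMemstoreSketch {
              private TreeMap<String, String> active = new TreeMap<>();        // mutable, CSLM-like
              private final Deque<TreeMap<String, String>> pipeline = new ArrayDeque<>();
              private final int inMemoryFlushThreshold;

              CompactingMemstoreSketch(int inMemoryFlushThreshold) {
                this.inMemoryFlushThreshold = inMemoryFlushThreshold;
              }

              void put(String key, String value) {
                active.put(key, value);
                if (active.size() >= inMemoryFlushThreshold) {
                  inMemoryFlush();
                }
              }

              // Move the active segment into the pipeline and merge all pipeline segments
              // into one immutable segment; duplicate keys keep only the newest value.
              private void inMemoryFlush() {
                pipeline.addFirst(active);
                active = new TreeMap<>();
                TreeMap<String, String> merged = new TreeMap<>();
                for (TreeMap<String, String> segment : pipeline) {
                  segment.forEach(merged::putIfAbsent); // iterated newest-first, so newest wins
                }
                pipeline.clear();
                pipeline.addFirst(merged);
              }

              // A flush-to-disk would take the pipeline tail as the snapshot (not modeled here).
              int pipelineDepth() {
                return pipeline.size();
              }
            }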

            Will see how we handle the close situation.

            On compacting the already-compacted segment multiple times: in the case where many cells can go away because of version expiry/deletes etc., that is good; we reduce heap space with every compaction. But in a normal case we might not reduce much, as the cells might not be going away. So how do we balance here? Just asking, because the next compaction has to create a scanner over the immutable segment, read it and create Cells. That makes a lot of garbage.

            Another concern with not flushing the whole memstore was that when the system is at the bounds of the global memstore size, the selection of the region to flush is decided by the whole heap size of its memstore, so we will select the region with the highest heap size. But flushing it might not actually release that much memory. If another memstore (normal old style) was there and that was selected, we would have freed much more heap.

            hadoopqa Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            0 rubocop 0m 2s rubocop was not available.
            0 ruby-lint 0m 2s Ruby-lint was not available.
            +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
            +1 @author 0m 0s The patch does not contain any @author tags.
            +1 test4tests 0m 0s The patch appears to include 10 new or modified test files.
            0 mvndep 0m 31s Maven dependency ordering for branch
            +1 mvninstall 4m 8s master passed
            +1 compile 2m 50s master passed with JDK v1.8.0
            +1 compile 2m 18s master passed with JDK v1.7.0_79
            +1 checkstyle 2m 28s master passed
            +1 mvneclipse 1m 22s master passed
            -1 findbugs 1m 24s hbase-client in master has 1 extant Findbugs warnings.
            -1 findbugs 0m 23s branch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml)
            +1 javadoc 2m 6s master passed with JDK v1.8.0
            +1 javadoc 1m 37s master passed with JDK v1.7.0_79
            0 mvndep 0m 14s Maven dependency ordering for patch
            -1 mvninstall 0m 34s hbase-server in the patch failed.
            -1 mvninstall 0m 21s hbase-it in the patch failed.
            -1 compile 0m 44s hbase-server in the patch failed with JDK v1.8.0.
            -1 compile 0m 33s hbase-it in the patch failed with JDK v1.8.0.
            -1 javac 0m 44s hbase-server in the patch failed with JDK v1.8.0.
            -1 javac 0m 33s hbase-it in the patch failed with JDK v1.8.0.
            -1 compile 0m 32s hbase-server in the patch failed with JDK v1.7.0_79.
            -1 compile 0m 24s hbase-it in the patch failed with JDK v1.7.0_79.
            -1 javac 0m 32s hbase-server in the patch failed with JDK v1.7.0_79.
            -1 javac 0m 24s hbase-it in the patch failed with JDK v1.7.0_79.
            +1 checkstyle 2m 9s the patch passed
            +1 mvneclipse 1m 10s the patch passed
            +1 whitespace 0m 0s Patch has no whitespace issues.
            -1 hadoopcheck 1m 2s Patch causes 16 errors with Hadoop v2.4.1.
            -1 hadoopcheck 2m 7s Patch causes 16 errors with Hadoop v2.5.2.
            -1 hadoopcheck 3m 17s Patch causes 16 errors with Hadoop v2.6.0.
            -1 findbugs 0m 28s hbase-server in the patch failed.
            -1 findbugs 0m 29s hbase-it in the patch failed.
            +1 javadoc 2m 2s the patch passed with JDK v1.8.0
            +1 javadoc 1m 41s the patch passed with JDK v1.7.0_79
            +1 unit 1m 35s hbase-client in the patch passed.
            -1 unit 0m 43s hbase-server in the patch failed.
            +1 unit 0m 33s hbase-shell in the patch passed.
            -1 unit 0m 27s hbase-it in the patch failed.
            +1 asflicense 0m 45s Patch does not generate ASF License warnings.
            45m 58s



            Subsystem Report/Notes
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12803180/HBASE-14920-V08.patch
            JIRA Issue HBASE-14920
            Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile rubocop ruby_lint
            uname Linux pomona.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh
            git revision master / 3b74b6f
            Default Java 1.7.0_79
            Multi-JDK versions /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79
            findbugs v3.0.0
            findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/branch-findbugs-hbase-client-warnings.html
            mvninstall https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-mvninstall-hbase-server.txt
            mvninstall https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-mvninstall-hbase-it.txt
            compile https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-compile-hbase-server-jdk1.8.0.txt
            compile https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-compile-hbase-it-jdk1.8.0.txt
            javac https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-compile-hbase-server-jdk1.8.0.txt
            javac https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-compile-hbase-it-jdk1.8.0.txt
            compile https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-compile-hbase-server-jdk1.7.0_79.txt
            compile https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-compile-hbase-it-jdk1.7.0_79.txt
            javac https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-compile-hbase-server-jdk1.7.0_79.txt
            javac https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-compile-hbase-it-jdk1.7.0_79.txt
            findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-findbugs-hbase-server.txt
            findbugs https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-findbugs-hbase-it.txt
            unit https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-unit-hbase-server.txt
            unit https://builds.apache.org/job/PreCommit-HBASE-Build/1852/artifact/patchprocess/patch-unit-hbase-it.txt
            Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/1852/testReport/
            modules C: hbase-client hbase-server hbase-shell hbase-it U: .
            Console output https://builds.apache.org/job/PreCommit-HBASE-Build/1852/console
            Powered by Apache Yetus 0.2.1 http://yetus.apache.org

            This message was automatically generated.

            stack Michael Stack added a comment -

            On the latter anoop.hbase, there is a new flush policy which will pick non-compacting memstores over compacting ones... when trying to figure out what to flush first.


            ebortnik Edward Bortnikov added a comment -

            [anoop.hbase], you have a point. The whole space of when/how much to flush, in memory/to disk, is yet to be explored and optimized. It seems though that some aspects of it are being addressed in the HBASE-14921 discussion. For example, we already accepted your suggestion to decide selectively, upon flush, whether to compact multiple in-memory segments or just flatten the newest segment. Other optimizations can be addressed as part of the same conversation.

            Unless there is some critical flaw with the baseline functionality in this jira, I'd suggest we finalize it and defer the performance discussions to the next jira. It's already a lot of code. A commit would expedite the ensuing work a lot. Not avoiding the discussion, just trying to be realistic.. Does this make sense?

            anoop.hbase Anoop Sam John added a comment -

            Yes, I too want it to be in ASAP. We need this feature very much for the off-heap write path and memstore. Fine, we can do more as part of the remaining jiras.

            eshcar Eshcar Hillel added a comment -

            patch after rebase

            hadoopqa Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            0 rubocop 0m 1s rubocop was not available.
            0 ruby-lint 0m 1s Ruby-lint was not available.
            +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
            +1 @author 0m 0s The patch does not contain any @author tags.
            +1 test4tests 0m 0s The patch appears to include 10 new or modified test files.
            0 mvndep 2m 38s Maven dependency ordering for branch
            +1 mvninstall 3m 7s master passed
            +1 compile 1m 44s master passed with JDK v1.8.0
            +1 compile 1m 18s master passed with JDK v1.7.0_79
            +1 checkstyle 1m 39s master passed
            +1 mvneclipse 0m 54s master passed
            -1 findbugs 0m 14s branch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml)
            +1 javadoc 1m 26s master passed with JDK v1.8.0
            +1 javadoc 1m 7s master passed with JDK v1.7.0_79
            0 mvndep 0m 9s Maven dependency ordering for patch
            +1 mvninstall 1m 37s the patch passed
            +1 compile 1m 42s the patch passed with JDK v1.8.0
            +1 javac 1m 42s the patch passed
            +1 compile 1m 20s the patch passed with JDK v1.7.0_79
            +1 javac 1m 20s the patch passed
            +1 checkstyle 1m 38s the patch passed
            +1 mvneclipse 0m 55s the patch passed
            +1 whitespace 0m 1s Patch has no whitespace issues.
            +1 hadoopcheck 8m 46s Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0.
            -1 findbugs 0m 14s patch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml)
            +1 javadoc 1m 26s the patch passed with JDK v1.8.0
            +1 javadoc 1m 9s the patch passed with JDK v1.7.0_79
            +1 unit 0m 56s hbase-client in the patch passed.
            -1 unit 99m 33s hbase-server in the patch failed.
            +1 unit 0m 25s hbase-shell in the patch passed.
            +1 unit 0m 24s hbase-it in the patch passed.
            +1 asflicense 0m 51s Patch does not generate ASF License warnings.
            142m 27s



            Reason Tests
            Failed junit tests hadoop.hbase.regionserver.TestRegionServerMetrics



            Subsystem Report/Notes
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12804127/HBASE-14920-V09.patch
            JIRA Issue HBASE-14920
            Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile rubocop ruby_lint
            uname Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh
            git revision master / e0aff10
            Default Java 1.7.0_79
            Multi-JDK versions /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79
            findbugs v3.0.0
            unit https://builds.apache.org/job/PreCommit-HBASE-Build/1905/artifact/patchprocess/patch-unit-hbase-server.txt
            unit test logs https://builds.apache.org/job/PreCommit-HBASE-Build/1905/artifact/patchprocess/patch-unit-hbase-server.txt
            Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/1905/testReport/
            modules C: hbase-client hbase-server hbase-shell hbase-it U: .
            Console output https://builds.apache.org/job/PreCommit-HBASE-Build/1905/console
            Powered by Apache Yetus 0.2.1 http://yetus.apache.org

            This message was automatically generated.

            anoop.hbase Anoop Sam John added a comment -

            Mainly gone through the changed files this time. On the new memstore impl class I raised some comments already, and as discussed we can address them in follow-up issues.

            if (hasSloppyStores) {
              htableDescriptor.setFlushPolicyClassName(FlushNonSloppyStoresFirstPolicy.class
                  .getName());
              LOG.info("Setting FlushNonSloppyStoresFirstPolicy for the region=" + this);
            }
            

            How does this work? Why do we need it? Will it be persisted to the HTD?

            Still some code style issues like
            public ThreadPoolExecutor getInMemoryCompactionPool() { return INMEMORY_COMPACTION_POOL; }

            This pool is in RegionServicesForStores, which is one per Region. Better to avoid this and have a single pool at the RS level. Here the #threads depends on the #Regions and will be an issue when we have more regions per RS. This was discussed already and the comment accepted, I believe.

            assertMinSequenceId: where is it used? Why?

            Regarding flush to disk in the case of system-generated flush events, it was discussed to handle that by flushing the whole memstore at once. Is that done already, or will we do it in following issues?

            anoop.hbase Anoop Sam John added a comment - - edited

            I agree.. Let us get this in first. Big patch. It will be better for review and internal testing then. We can address comments/issues in follow-up issues. Ping saint.ack@gmail.com

            eshcar Eshcar Hillel added a comment -

            The ThreadPoolExecutor is static so it is not one per region but one per region server.
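            As a small illustration of that point (the class name and pool sizing below are made up; this is not the real RegionServicesForStores code): because the field is static there is a single pool per region server JVM, no matter how many per-region instances hand it out.

            import java.util.concurrent.Executors;
            import java.util.concurrent.ThreadPoolExecutor;

            final class RegionServicesSketch {
              // One pool per JVM (i.e. per region server), shared by all per-region instances.
              private static final ThreadPoolExecutor INMEMORY_COMPACTION_POOL =
                  (ThreadPoolExecutor) Executors.newFixedThreadPool(10);

              ThreadPoolExecutor getInMemoryCompactionPool() {
                return INMEMORY_COMPACTION_POOL;
              }
            }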

            Removed the method assertMinSequenceId which was used for debugging.

            anoop.hbase Anoop Sam John added a comment -

            I see, my bad eyes.. I saw it is in RegionServicesForStores and asked.

            hadoopqa Hadoop QA added a comment -
            -1 overall



            Vote Subsystem Runtime Comment
            0 rubocop 0m 3s rubocop was not available.
            0 ruby-lint 0m 3s Ruby-lint was not available.
            +1 hbaseanti 0m 0s Patch does not have any anti-patterns.
            +1 @author 0m 0s The patch does not contain any @author tags.
            +1 test4tests 0m 0s The patch appears to include 10 new or modified test files.
            0 mvndep 0m 18s Maven dependency ordering for branch
            +1 mvninstall 3m 35s master passed
            +1 compile 2m 3s master passed with JDK v1.8.0
            +1 compile 1m 29s master passed with JDK v1.7.0_79
            +1 checkstyle 1m 42s master passed
            +1 mvneclipse 1m 0s master passed
            -1 findbugs 0m 17s branch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml)
            +1 javadoc 1m 34s master passed with JDK v1.8.0
            +1 javadoc 1m 16s master passed with JDK v1.7.0_79
            0 mvndep 0m 10s Maven dependency ordering for patch
            +1 mvninstall 1m 47s the patch passed
            +1 compile 2m 6s the patch passed with JDK v1.8.0
            +1 javac 2m 6s the patch passed
            +1 compile 1m 27s the patch passed with JDK v1.7.0_79
            +1 javac 1m 27s the patch passed
            +1 checkstyle 1m 41s the patch passed
            +1 mvneclipse 1m 0s the patch passed
            +1 whitespace 0m 0s Patch has no whitespace issues.
            +1 hadoopcheck 9m 26s Patch does not cause any errors with Hadoop 2.4.1 2.5.2 2.6.0.
            -1 findbugs 0m 14s patch/hbase-it no findbugs output file (hbase-it/target/findbugsXml.xml)
            +1 javadoc 1m 34s the patch passed with JDK v1.8.0
            +1 javadoc 1m 18s the patch passed with JDK v1.7.0_79
            +1 unit 1m 2s hbase-client in the patch passed.
            +1 unit 100m 5s hbase-server in the patch passed.
            +1 unit 0m 21s hbase-shell in the patch passed.
            +1 unit 0m 22s hbase-it in the patch passed.
            +1 asflicense 0m 50s Patch does not generate ASF License warnings.
            144m 40s



            Subsystem Report/Notes
            JIRA Patch URL https://issues.apache.org/jira/secure/attachment/12804170/HBASE-14920-V10.patch
            JIRA Issue HBASE-14920
            Optional Tests asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile rubocop ruby_lint
            uname Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
            Build tool maven
            Personality /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/test_framework/yetus-0.2.1/lib/precommit/personality/hbase.sh
            git revision master / e0aff10
            Default Java 1.7.0_79
            Multi-JDK versions /home/jenkins/tools/java/jdk1.8.0:1.8.0 /usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79
            findbugs v3.0.0
            Test Results https://builds.apache.org/job/PreCommit-HBASE-Build/1906/testReport/
            modules C: hbase-client hbase-server hbase-shell hbase-it U: .
            Console output https://builds.apache.org/job/PreCommit-HBASE-Build/1906/console
            Powered by Apache Yetus 0.2.1 http://yetus.apache.org

            This message was automatically generated.

            eshcar Eshcar Hillel added a comment -

            anoop.hbase the FlushNonSloppyStoresFirstPolicy first looks for large non-sloppy stores to flush; if there are no large stores there, it looks for large sloppy stores (by sloppy we mean a store backed by a compacting memstore); and if it fails to find any then it flushes all the stores.
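            To make that selection order concrete, here is a small sketch over a toy store model. It only mirrors the order described above (large non-sloppy stores first, then large sloppy stores, else everything) and is not the actual FlushNonSloppyStoresFirstPolicy code.

            import java.util.ArrayList;
            import java.util.Collection;
            import java.util.List;

            final class FlushSelectionSketch {
              // Toy model: "sloppy" == backed by a compacting memstore.
              static final class Store {
                final long memstoreSize;
                final boolean sloppy;
                Store(long memstoreSize, boolean sloppy) {
                  this.memstoreSize = memstoreSize;
                  this.sloppy = sloppy;
                }
              }

              static Collection<Store> selectStoresToFlush(List<Store> stores, long flushSizeLowerBound) {
                List<Store> selected = new ArrayList<>();
                // 1. Prefer large non-sloppy (default memstore) stores.
                for (Store s : stores) {
                  if (!s.sloppy && s.memstoreSize > flushSizeLowerBound) {
                    selected.add(s);
                  }
                }
                if (!selected.isEmpty()) {
                  return selected;
                }
                // 2. Otherwise take large sloppy (compacting memstore) stores.
                for (Store s : stores) {
                  if (s.sloppy && s.memstoreSize > flushSizeLowerBound) {
                    selected.add(s);
                  }
                }
                if (!selected.isEmpty()) {
                  return selected;
                }
                // 3. If neither is found, flush all the stores.
                return stores;
              }
            }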

            When closing a region, all segments in the pipeline are flushed one after the other. This is okay for now since most of the time there are one or two segments in the pipeline.
            In the context of the memory optimization done in HBASE-14921 the pipeline may consist of more than two segments; there we can revise this solution and apply a compaction to generate a single segment to be flushed to disk.

            Looks to me this is ready for commit

            stack Michael Stack added a comment -

            +1 from me (Skimmed the new patch. Good cleanup). anoop.hbase I'll commit in the morning unless there is an objection.


            ram_krish ramkrishna.s.vasudevan added a comment -

            saint.ack@gmail.com
            If I don't review by today or tomorrow, you can commit this patch.

            ram_krish ramkrishna.s.vasudevan added a comment - saint.ack@gmail.com By today or tomorrow if I don't review you can commit this patch.

ebortnik Edward Bortnikov added a comment -

+1 from anastas and myself - reviewed internally.

ram_krish ramkrishna.s.vasudevan added a comment -

Thanks for the great work. I have just a few questions and some small comments. I won't in any way block the commit, and we can always do some follow-up work after testing. We also plan to test this patch along with HBASE-14921.

Minor comments from the code:

            String className = conf.get(MEMSTORE_CLASS_NAME, DefaultMemStore.class.getName());
            

Just don't do this if family.isInMemoryCompaction() is true; declare className and move the conf.get() call into the else branch (see the sketch below).
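A rough sketch of the suggested restructuring, not the committed code; it reuses the identifiers from the quoted snippet (conf, family, MEMSTORE_CLASS_NAME) and assumes a family.isInMemoryCompaction() accessor as discussed in this thread:

    // Sketch only: consult the configuration only when in-memory compaction
    // is NOT requested for this column family.
    String className;
    if (family.isInMemoryCompaction()) {
      className = CompactingMemStore.class.getName();
    } else {
      className = conf.get(MEMSTORE_CLASS_NAME, DefaultMemStore.class.getName());
    }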
'DEEP_OVERHEAD_PER_PIPELINE_ITEM' always includes the TIMERANGE as well. Only ImmutableSegment has it, right? Is it still needed?

            LinkedList<SegmentScanner> list = new LinkedList<SegmentScanner>();
                list.add(getActive().getSegmentScanner(readPt, order+1));
                for (Segment item : pipelineList) {
                  list.add(item.getSegmentScanner(readPt, order));
                  order--;
                }
                list.add(getSnapshot().getSegmentScanner(readPt, order));
            

So the active segment gets added, then the segments in the pipeline, and then the snapshot. Is that order+1 needed?

Also, since we add the segments from the pipeline, is the tail element already added? Will getSnapshot() then add the tail element once again? Please check; I may be wrong here.
            I read through the other comments in this JIRA.
How does region close() handle the complete flushing? I saw that there was some suggestion to pass a boolean indicating to flush everything - is that being done now? I am not sure how Region.close() gets handled. Is it that the last in-memory compaction completes, by which time there is only one segment left, and that segment gets flushed as part of the existing code? If so, then fine.
Coming to a normal case using CompactingMemstore where we don't have many duplicates as in your use case, how effective is WAL.updateStore() going to be? Can it be avoided when not much compaction is happening (i.e., no duplicates)?
I agree that documenting what this CompactingMemstore does is very important, and when HBASE-14921 is also combined we may really get the full benefit by avoiding the memory overhead.
            Thanks for the great work.


ebortnik Edward Bortnikov added a comment -

ram_krish absolutely, documentation is super important since this is a user-facing feature. We'll start working on a blog post/release note ASAP. Hopefully the presentations at HBaseCon will promote both CompactingMemstore and the off-heap write path.

            stack Michael Stack added a comment -

anastas or eshcar, do you have any feedback on ram_krish's notes above? Thanks. Let me commit what is here before it rots.

            stack Michael Stack added a comment -

            I pushed the patch. Thanks for the mighty work eshcar (and anastas)

Leaving open for now while ram_krish's questions remain open.

            hudson Hudson added a comment -

            FAILURE: Integrated in HBase-Trunk_matrix #935 (See https://builds.apache.org/job/HBase-Trunk_matrix/935/)
            HBASE-14920: Compacting memstore (stack: rev a27504c70181ec3585033eaee2523184c40a144f)

            • hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
            • hbase-server/src/test/java/org/apache/hadoop/hbase/TestIOFencing.java
            • hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWalAndCompactingMemStoreFlush.java
            • hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionWithInMemoryFlush.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactingMemStore.java
            • hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPerColumnFamilyFlush.java
            • hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SegmentFactory.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/VersionedSegmentsList.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushNonSloppyStoresFirstPolicy.java
            • hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushLargeStoresPolicy.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushAllLargeStoresPolicy.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HMobStore.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushPolicyFactory.java
            • hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultMemStore.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionPipeline.java
            • hbase-shell/src/main/ruby/hbase/admin.rb
            • hbase-shell/src/main/ruby/hbase.rb
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServicesForStores.java
            • hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/AbstractMemStore.java
            • hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactingMemStore.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java
            • hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Store.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SegmentScanner.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractFSWAL.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DisabledWALProvider.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ImmutableSegment.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreCompactor.java
            • hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceIdAccounting.java
            eshcar Eshcar Hillel added a comment -

Thank you ram_krish, anoop.hbase and stack for your feedback; the feature would not be in such great shape without your helpful comments.

            Answering your questions:

            Just don't do this if family.isInMemoryCompaction() is true. Just declare className and move the conf.get to the else part.

Agree. Already asked Anastasia to fix this as part of HBASE-14921.

            'DEEP_OVERHEAD_PER_PIPELINE_ITEM' - Always has the TIMERANGE also. Only ImmutableSegment has it right?

            Right. Pipeline items are always immutable segments so this is ok.

So active gets added, then the segments in the pipeline and then the snapshot. That order+1 is needed?

            ‘order’ is initially the size of the pipeline. For example, if the pipeline has 2 segments, active segment sets order=3, pipeline recent segment sets order=2, pipeline old segment sets order=1, and snapshot sets order=0.
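For illustration only, the snippet below simply unrolls the scanner-creation loop quoted earlier in the thread for a pipeline with exactly two segments; the variable names (recentPipelineSegment, oldestPipelineSegment) are hypothetical.

    // Illustrative only: scanner ordering for a two-segment pipeline.
    int order = pipelineList.size();                                    // order == 2
    list.add(getActive().getSegmentScanner(readPt, order + 1));         // active segment  -> order 3
    list.add(recentPipelineSegment.getSegmentScanner(readPt, order--)); // newest in pipe  -> order 2
    list.add(oldestPipelineSegment.getSegmentScanner(readPt, order--)); // oldest in pipe  -> order 1
    list.add(getSnapshot().getSegmentScanner(readPt, order));           // snapshot        -> order 0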

            Once again getsnapshot() will add the tail element? Pls check.

getSnapshot() simply retrieves the reference to the snapshot segment.

            How does close() region handle the complete flushing?

The method HRegion::doClose() was updated to continue flushing as long as the size of the region decreases (plus some fixed number of extra attempts to allow for potential flush failures). This is a reasonable solution for now as the pipeline only has 1 or 2 segments. If this changes in future JIRAs, this solution can be revised.
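For illustration only, a minimal sketch of that close-time loop under stated assumptions: this is not the actual HRegion::doClose() code, and the hook names and the extra-attempt budget are assumptions made for the example.

    // Sketch of the close-time flushing idea described above, NOT the real
    // HRegion#doClose() implementation.
    abstract class CloseFlushSketch {
      // Assumed hooks; the exact signatures here are illustrative.
      abstract long getMemStoreSize();
      abstract void internalFlushcache() throws java.io.IOException;

      void flushEverythingOnClose() throws java.io.IOException {
        int extraAttempts = 5;               // assumed budget for flushes that make no progress
        long sizeBefore = getMemStoreSize();
        while (sizeBefore > 0) {
          internalFlushcache();              // flush the current snapshot
          long sizeAfter = getMemStoreSize();
          if (sizeAfter >= sizeBefore && --extraAttempts <= 0) {
            break;                           // give up after a bounded number of non-shrinking flushes
          }
          sizeBefore = sizeAfter;
        }
      }
    }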

            Coming to a normal case using CompactingMemstore where we don't have much duplicates as in your use case, how effective is that WAL.updateStore() is going to be? Can that be avoided for a case where there are not much compaction happening (i mean no duplicates)?

One option is not to use a compacting memstore in this case. A second option will be considered as part of HBASE-14921: applying compaction only when there is a potential gain. Finally, updating the store's WAL sequence number is not that much to pay every once in a while.

            stack Michael Stack added a comment -

            Resolving. Thanks eshcar (and anastas and ebortnik).

Reopen, ram_krish, if you want more of an answer to your questions.

            ddas Devaraj Das added a comment -

I am looking at running some tests with the in-memory stuff. ebortnik, you mention user-facing docs/blogs above. Are those done? Where do I go to look for how to configure/enable this stuff?


ebortnik Edward Bortnikov added a comment -

Good catch devaraj. We've started working on a user-facing blog; see the write-up in https://docs.google.com/document/d/10k4hqi4mCCpVrPodp1Q4rV0XsZ4TZtULOa-FOoe-_hw/edit?usp=sharing. It would be great to hear what everyone thinks about the content.

The reason we have not published the blog post yet is that we are doing extensive performance testing of the feature these very days. The preliminary results demonstrated at HBaseCon were very good, but now we're working with 100X the data, so the benchmarks take a while. One more thing we'd like to finalize is the in-memory compaction policy, which we'd like to see automated as much as possible (right now there's just one configuration parameter, but maybe it's possible to avoid that too).

Pushing to get it all done before the end of the year, both to enable the optimization standalone and to remove barriers to off-heaping the write path.

            Thanks devaraj.

            stack Michael Stack added a comment -

            You might want to open the doc up for comments ebortnik

            I love the name.

The write-up is great so far. Yeah, perf diagrams and the set of configs to enable and tune (once settled) would round it out. Thanks.


ebortnik Edward Bortnikov added a comment -

Opened the doc for commenting, thanks for suggesting stack.

