diff --git CHANGES.txt CHANGES.txt
index f7403a5..52d2120 100644
--- CHANGES.txt
+++ CHANGES.txt
@@ -1,1462 +1,4455 @@
 HBase Change Log
+Release 0.93.0 - Unreleased
+ *DO NOT ADD ISSUES HERE ON COMMIT ANY MORE. WE'LL GENERATE THE LIST
+ FROM JIRA INSTEAD WHEN WE MAKE A RELEASE*
-Release Notes - HBase - Version 0.99.2 12/07/2014
-
-** Sub-task
- * [HBASE-10671] - Add missing InterfaceAudience annotations for classes in hbase-common and hbase-client modules
- * [HBASE-11164] - Document and test rolling updates from 0.98 -> 1.0
- * [HBASE-11915] - Document and test 0.94 -> 1.0.0 update
- * [HBASE-11964] - Improve spreading replication load from failed regionservers
- * [HBASE-12075] - Preemptive Fast Fail
- * [HBASE-12128] - Cache configuration and RpcController selection for Table in Connection
- * [HBASE-12147] - Porting Online Config Change from 89-fb
- * [HBASE-12202] - Support DirectByteBuffer usage in HFileBlock
- * [HBASE-12214] - Visibility Controller in the peer cluster should be able to extract visibility tags from the replicated cells
- * [HBASE-12288] - Support DirectByteBuffer usage in DataBlock Encoding area
- * [HBASE-12297] - Support DBB usage in Bloom and HFileIndex area
- * [HBASE-12313] - Redo the hfile index length optimization so cell-based rather than serialized KV key
- * [HBASE-12353] - Turn down logging on some spewing unit tests
- * [HBASE-12354] - Update dependencies in time for 1.0 release
- * [HBASE-12355] - Update maven plugins
- * [HBASE-12363] - Improve how KEEP_DELETED_CELLS works with MIN_VERSIONS
- * [HBASE-12379] - Try surefire 2.18-SNAPSHOT
- * [HBASE-12400] - Fix refguide so it does connection#getTable rather than new HTable everywhere: first cut!
- * [HBASE-12404] - Task 5 from parent: Replace internal HTable constructor use with HConnection#getTable (0.98, 0.99)
- * [HBASE-12471] - Task 4. replace internal ConnectionManager#{delete,get}Connection use with #close, #createConnection (0.98, 0.99) under src/main/java
- * [HBASE-12517] - Several HConstant members are assignable
- * [HBASE-12518] - Task 4 polish. Remove CM#{get,delete}Connection
- * [HBASE-12519] - Remove tabs used as whitespace
- * [HBASE-12526] - Remove unused imports
- * [HBASE-12577] - Disable distributed log replay by default
-
-
-
-** Bug
- * [HBASE-7211] - Improve hbase ref guide for the testing part.
- * [HBASE-9003] - TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
- * [HBASE-9117] - Remove HTablePool and all HConnection pooling related APIs
- * [HBASE-9157] - ZKUtil.blockUntilAvailable loops forever with non-recoverable errors
- * [HBASE-9527] - Review all old api that takes a table name as a byte array and ensure none can pass ns + tablename
- * [HBASE-10536] - ImportTsv should fail fast if any of the column family passed to the job is not present in the table
- * [HBASE-10780] - HFilePrettyPrinter#processFile should return immediately if file does not exist
- * [HBASE-11099] - Two situations where we could open a region with smaller sequence number
- * [HBASE-11562] - CopyTable should provide an option to shuffle the mapper tasks
- * [HBASE-11835] - Wrong managenement of non expected calls in the client
- * [HBASE-12017] - Use Connection.createTable() instead of HTable constructors.
- * [HBASE-12029] - Use Table and RegionLocator in HTable.getRegionLocations()
- * [HBASE-12053] - SecurityBulkLoadEndPoint set 777 permission on input data files
- * [HBASE-12072] - Standardize retry handling for master operations
- * [HBASE-12083] - Deprecate new HBaseAdmin() in favor of Connection.getAdmin()
- * [HBASE-12142] - Truncate command does not preserve ACLs table
- * [HBASE-12194] - Make TestEncodedSeekers faster
- * [HBASE-12219] - Cache more efficiently getAll() and get() in FSTableDescriptors
- * [HBASE-12226] - TestAccessController#testPermissionList failing on master
- * [HBASE-12229] - NullPointerException in SnapshotTestingUtils
- * [HBASE-12234] - Make TestMultithreadedTableMapper a little more stable.
- * [HBASE-12237] - HBaseZeroCopyByteString#wrap() should not be called in hbase-client code
- * [HBASE-12238] - A few ugly exceptions on startup
- * [HBASE-12240] - hbase-daemon.sh should remove pid file if process not found running
- * [HBASE-12241] - The crash of regionServer when taking deadserver's replication queue breaks replication
- * [HBASE-12242] - Fix new javadoc warnings in Admin, etc.
- * [HBASE-12246] - Compilation with hadoop-2.3.x and 2.2.x is broken
- * [HBASE-12247] - Replace setHTable() with initializeTable() in TableInputFormat.
- * [HBASE-12248] - broken link in hbase shell help
- * [HBASE-12252] - IntegrationTestBulkLoad fails with illegal partition error
- * [HBASE-12257] - TestAssignmentManager unsynchronized access to regionPlans
- * [HBASE-12258] - Make TestHBaseFsck less flaky
- * [HBASE-12261] - Add checkstyle to HBase build process
- * [HBASE-12263] - RegionServer listens on localhost in distributed cluster when DNS is unavailable
- * [HBASE-12265] - HBase shell 'show_filters' points to internal Facebook URL
- * [HBASE-12274] - Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() may produce null pointer exception
- * [HBASE-12277] - Refactor bulkLoad methods in AccessController to its own interface
- * [HBASE-12278] - Race condition in TestSecureLoadIncrementalHFilesSplitRecovery
- * [HBASE-12279] - Generated thrift files were generated with the wrong parameters
- * [HBASE-12281] - ClonedPrefixTreeCell should implement HeapSize
- * [HBASE-12285] - Builds are failing, possibly because of SUREFIRE-1091
- * [HBASE-12294] - Can't build the docs after the hbase-checkstyle module was added
- * [HBASE-12301] - user_permission command does not show global permissions
- * [HBASE-12302] - VisibilityClient getAuths does not propagate remote service exception correctly
- * [HBASE-12304] - CellCounter will throw AIOBE when output directory is not specified
- * [HBASE-12306] - CellCounter output's wrong value for Total Families Across all Rows in output file
- * [HBASE-12308] - Fix typo in hbase-rest profile name
- * [HBASE-12312] - Another couple of createTable race conditions
- * [HBASE-12314] - Add chaos monkey policy to execute two actions concurrently
- * [HBASE-12315] - Fix 0.98 Tests after checkstyle got parented
- * [HBASE-12316] - test-patch.sh (Hadoop-QA) outputs the wrong release audit warnings URL
- * [HBASE-12318] - Add license header to checkstyle xml files
- * [HBASE-12319] - Inconsistencies during region recovery due to close/open of a region during recovery
- * [HBASE-12322] - Add clean up command to ITBLL
- * [HBASE-12327] - MetricsHBaseServerSourceFactory#createContextName has wrong conditions
- * [HBASE-12329] - Table create with duplicate column family names quietly succeeds
- * [HBASE-12334] - Handling of DeserializationException causes needless retry on failure
- * [HBASE-12336] - RegionServer failed to shutdown for NodeFailoverWorker thread
- * [HBASE-12337] - Import tool fails with NullPointerException if clusterIds is not initialized
- * [HBASE-12346] - Scan's default auths behavior under Visibility labels
- * [HBASE-12352] - Add hbase-annotation-tests to runtime classpath so can run hbase it tests.
- * [HBASE-12356] - Rpc with region replica does not propagate tracing spans
- * [HBASE-12359] - MulticastPublisher should specify IPv4/v6 protocol family when creating multicast channel
- * [HBASE-12366] - Add login code to HBase Canary tool.
- * [HBASE-12372] - [WINDOWS] Enable log4j configuration in hbase.cmd
- * [HBASE-12375] - LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
- * [HBASE-12377] - HBaseAdmin#deleteTable fails when META region is moved around the same timeframe
- * [HBASE-12384] - TestTags can hang on fast test hosts
- * [HBASE-12386] - Replication gets stuck following a transient zookeeper error to remote peer cluster
- * [HBASE-12398] - Region isn't assigned in an extreme race condition
- * [HBASE-12399] - Master startup race between metrics and RpcServer
- * [HBASE-12402] - ZKPermissionWatcher race condition in refreshing the cache leaving stale ACLs and causing AccessDenied
- * [HBASE-12407] - HConnectionKey doesn't contain CUSTOM_CONTROLLER_CONF_KEY in CONNECTION_PROPERTIES
- * [HBASE-12414] - Move HFileLink.exists() to base class
- * [HBASE-12417] - Scan copy constructor does not retain small attribute
- * [HBASE-12419] - "Partial cell read caused by EOF" ERRORs on replication source during replication
- * [HBASE-12420] - BucketCache logged startup message is egregiously large
- * [HBASE-12423] - Use a non-managed Table in TableOutputFormat
- * [HBASE-12428] - region_mover.rb script is broken if port is not specified
- * [HBASE-12440] - Region may remain offline on clean startup under certain race condition
- * [HBASE-12445] - hbase is removing all remaining cells immediately after the cell marked with marker = KeyValue.Type.DeleteColumn via PUT
- * [HBASE-12448] - Fix rate reporting in compaction progress DEBUG logging
- * [HBASE-12449] - Use the max timestamp of current or old cell's timestamp in HRegion.append()
- * [HBASE-12450] - Unbalance chaos monkey might kill all region servers without starting them back
- * [HBASE-12459] - Use a non-managed Table in mapred.TableOutputFormat
- * [HBASE-12460] - Moving Chore to hbase-common module.
- * [HBASE-12461] - FSVisitor logging is excessive
- * [HBASE-12464] - meta table region assignment stuck in the FAILED_OPEN state due to region server not fully ready to serve
- * [HBASE-12478] - HBASE-10141 and MIN_VERSIONS are not compatible
- * [HBASE-12479] - Backport HBASE-11689 (Track meta in transition) to 0.98 and branch-1
- * [HBASE-12490] - Replace uses of setAutoFlush(boolean, boolean)
- * [HBASE-12491] - TableMapReduceUtil.findContainingJar() NPE
- * [HBASE-12495] - Use interfaces in the shell scripts
- * [HBASE-12513] - Graceful stop script does not restore the balancer state
- * [HBASE-12514] - Cleanup HFileOutputFormat legacy code
- * [HBASE-12520] - Add protected getters on TableInputFormatBase
- * [HBASE-12533] - staging directories are not deleted after secure bulk load
- * [HBASE-12536] - Reduce the effective scope of GLOBAL CREATE and ADMIN permission
- * [HBASE-12537] - HBase should log the remote host on replication error
- * [HBASE-12539] - HFileLinkCleaner logs are uselessly noisy
- * [HBASE-12541] - Add misc debug logging to hanging tests in TestHCM and TestBaseLoadBalancer
- * [HBASE-12544] - ops_mgt.xml missing in branch-1
- * [HBASE-12550] - Check all storefiles are referenced before splitting
- * [HBASE-12560] - [WINDOWS] Append the classpath from Hadoop to HBase classpath in bin/hbase.cmd
- * [HBASE-12576] - Add metrics for rolling the HLog if there are too few DN's in the write pipeline
- * [HBASE-12580] - Zookeeper instantiated even though we might not need it in the shell
- * [HBASE-12581] - TestCellACLWithMultipleVersions failing since task 5 HBASE-12404 (HBASE-12404 addendum)
- * [HBASE-12584] - Fix branch-1 failing since task 5 HBASE-12404 (HBASE-12404 addendum)
- * [HBASE-12595] - Use Connection.getTable() in table.rb
- * [HBASE-12600] - Remove REPLAY tag dependency in Distributed Replay Mode
- * [HBASE-12610] - Close connection in TableInputFormatBase
- * [HBASE-12611] - Create autoCommit() method and remove clearBufferOnFail
- * [HBASE-12614] - Potentially unclosed StoreFile(s) in DefaultCompactor#compact()
- * [HBASE-12616] - We lost the IntegrationTestBigLinkedList COMMANDS in recent usage refactoring
-
-
-
-
-** Improvement
- * [HBASE-2609] - Harmonize the Get and Delete operations
- * [HBASE-4955] - Use the official versions of surefire & junit
- * [HBASE-8361] - Bulk load and other utilities should not create tables for user
- * [HBASE-8572] - Enhance delete_snapshot.rb to call snapshot deletion API with regex
- * [HBASE-10082] - Describe 'table' output is all on one line, could use better formatting
- * [HBASE-10483] - Provide API for retrieving info port when hbase.master.info.port is set to 0
- * [HBASE-11639] - [Visibility controller] Replicate the visibility of Cells as strings
- * [HBASE-11870] - Optimization : Avoid copy of key and value for tags addition in AC and VC
- * [HBASE-12161] - Add support for grant/revoke on namespaces in AccessControlClient
- * [HBASE-12243] - HBaseFsck should auto set ignorePreCheckPermission to true if no fix option is set.
- * [HBASE-12249] - Script to help you adhere to the patch-naming guidelines
- * [HBASE-12264] - ImportTsv should fail fast if output is not specified and table does not exist
- * [HBASE-12271] - Add counters for files skipped during snapshot export
- * [HBASE-12272] - Generate Thrift code through maven
- * [HBASE-12328] - Need to separate JvmMetrics for Master and RegionServer
- * [HBASE-12389] - Reduce the number of versions configured for the ACL table
- * [HBASE-12390] - Change revision style from svn to git
- * [HBASE-12411] - Optionally enable p-reads and private readers for compactions
- * [HBASE-12416] - RegionServerCallable should report what host it was communicating with
- * [HBASE-12424] - Finer grained logging and metrics for split transactions
- * [HBASE-12432] - RpcRetryingCaller should log after fixed number of retries like AsyncProcess
- * [HBASE-12434] - Add a command to compact all the regions in a regionserver
- * [HBASE-12447] - Add support for setTimeRange for RowCounter and CellCounter
- * [HBASE-12455] - Add 'description' to bean and attribute output when you do /jmx?description=true
- * [HBASE-12529] - Use ThreadLocalRandom for RandomQueueBalancer
- * [HBASE-12569] - Control MaxDirectMemorySize in the same manner as heap size
-
-** New Feature
- * [HBASE-8707] - Add LongComparator for filter
- * [HBASE-12286] - [shell] Add server/cluster online load of configuration changes
- * [HBASE-12361] - Show data locality of region in table page
- * [HBASE-12496] - A blockedRequestsCount metric
-
-
-
-
-
-
-
-
-** Task
- * [HBASE-10200] - Better error message when HttpServer fails to start due to java.net.BindException
- * [HBASE-10870] - Deprecate and replace HCD methods that have a 'should' prefix with a 'get' instead
- * [HBASE-12250] - Adding an endpoint for updating the regionserver config
- * [HBASE-12344] - Split up TestAdmin
- * [HBASE-12381] - Add maven enforcer rules for build assumptions
- * [HBASE-12388] - Document that WALObservers don't get empty edits.
- * [HBASE-12427] - Change branch-1 version from 0.99.2-SNAPSHOT to 0.99.3-SNAPSHOT
- * [HBASE-12442] - Bring KeyValue#createFirstOnRow() back to branch-1 as deprecated methods
- * [HBASE-12456] - Update surefire from 2.18-SNAPSHOT to 2.18
- * [HBASE-12516] - Clean up master so QA Bot is in known good state
- * [HBASE-12522] - Backport WAL refactoring to branch-1
-
-
-** Test
- * [HBASE-12317] - Run IntegrationTestRegionReplicaPerf w.o mapred
- * [HBASE-12335] - IntegrationTestRegionReplicaPerf is flaky
- * [HBASE-12367] - Integration tests should not restore the cluster if the CM is not destructive
- * [HBASE-12378] - Add a test to verify that the read-replica is able to read after a compaction
- * [HBASE-12401] - Add some timestamp signposts in IntegrationTestMTTR
- * [HBASE-12403] - IntegrationTestMTTR flaky due to aggressive RS restart timeout
- * [HBASE-12472] - Improve debuggability of IntegrationTestBulkLoad
- * [HBASE-12549] - Fix TestAssignmentManagerOnCluster#testAssignRacingWithSSH() flaky test
- * [HBASE-12554] - TestBaseLoadBalancer may timeout due to lengthy rack lookup
-** Umbrella
- * [HBASE-10602] - Cleanup HTable public interface
- * [HBASE-10856] - Prep for 1.0
-
-
-
-Release Notes - HBase - Version 0.99.1 10/15/2014
-
-** Sub-task
- * [HBASE-11160] - Undo append waiting on region edit/sequence id update
- * [HBASE-11178] - Remove deprecation annotations from mapred namespace
- * [HBASE-11738] - Document improvements to LoadTestTool and PerformanceEvaluation
- * [HBASE-11872] - Avoid usage of KeyValueUtil#ensureKeyValue from Compactor
- * [HBASE-11874] - Support Cell to be passed to StoreFile.Writer rather than KeyValue
- * [HBASE-11917] - Deprecate / Remove HTableUtil
- * [HBASE-11920] - Add CP hooks for ReplicationEndPoint
- * [HBASE-11930] - Document new permission check to roll WAL writer
- * [HBASE-11980] - Change sync to hsync, remove unused InfoServer, and reference our httpserver instead of hadoops
- * [HBASE-11997] - CopyTable with bulkload
- * [HBASE-12023] - HRegion.applyFamilyMapToMemstore creates too many iterator objects.
- * [HBASE-12046] - HTD/HCD setters should be builder-style
- * [HBASE-12047] - Avoid usage of KeyValueUtil#ensureKeyValue in simple cases
- * [HBASE-12050] - Avoid KeyValueUtil#ensureKeyValue from DefaultMemStore
- * [HBASE-12051] - Avoid KeyValueUtil#ensureKeyValue from DefaultMemStore
- * [HBASE-12059] - Create hbase-annotations module
- * [HBASE-12062] - Fix usage of Collections.toArray
- * [HBASE-12068] - [Branch-1] Avoid need to always do KeyValueUtil#ensureKeyValue for Filter transformCell
- * [HBASE-12069] - Finish making HFile.Writer Cell-centric; undo APIs that expect KV serializations.
- * [HBASE-12076] - Move InterfaceAudience imports to hbase-annotations
- * [HBASE-12077] - FilterLists create many ArrayList$Itr objects per row.
- * [HBASE-12079] - Deprecate KeyValueUtil#ensureKeyValue(s)
- * [HBASE-12082] - Find a way to set timestamp on Cells on the server
- * [HBASE-12086] - Fix bugs in HTableMultiplexer
- * [HBASE-12096] - In ZKSplitLog Coordination and AggregateImplementation replace enhaced for statements with basic for statement to avoid unnecessary object allocation
- * [HBASE-12104] - Some optimization and bugfix for HTableMultiplexer
- * [HBASE-12110] - Fix .arcconfig
- * [HBASE-12112] - Avoid KeyValueUtil#ensureKeyValue some more simple cases
- * [HBASE-12115] - Fix NumberFormat Exception in TableInputFormatBase.
- * [HBASE-12189] - Fix new issues found by coverity static analysis
- * [HBASE-12210] - Avoid KeyValue in Prefix Tree
-** Bug
- * [HBASE-6994] - minor doc update about DEFAULT_ACCEPTABLE_FACTOR
- * [HBASE-8808] - Use Jacoco to generate Unit Test coverage reports
- * [HBASE-8936] - Fixing TestSplitLogWorker while running Jacoco tests.
- * [HBASE-9005] - Improve documentation around KEEP_DELETED_CELLS, time range scans, and delete markers
- * [HBASE-9513] - Why is PE#RandomSeekScanTest way slower in 0.96 than in 0.94?
- * [HBASE-10314] - Add Chaos Monkey that doesn't touch the master
- * [HBASE-10748] - hbase-daemon.sh fails to execute with 'sh' command
- * [HBASE-10757] - Change HTable class doc so it sends people to HCM getting instances
- * [HBASE-11145] - UNEXPECTED!!! when HLog sync: Queue full
- * [HBASE-11266] - Remove shaded references to logger
- * [HBASE-11394] - Replication can have data loss if peer id contains hyphen "-"
- * [HBASE-11401] - Late-binding sequenceid presumes a particular KeyValue mvcc format hampering experiment
- * [HBASE-11405] - Multiple invocations of hbck in parallel disables balancer permanently
- * [HBASE-11804] - Raise default heap size if unspecified
- * [HBASE-11815] - Flush and compaction could just close the tmp writer if there is an exception
- * [HBASE-11890] - HBase REST Client is hard coded to http protocol
- * [HBASE-11906] - Meta data loss with distributed log replay
- * [HBASE-11967] - HMaster in standalone won't go down if it gets 'Unhandled exception'
- * [HBASE-11974] - When a disabled table is scanned, NotServingRegionException is thrown instead of TableNotEnabledException
- * [HBASE-11982] - Bootstraping hbase:meta table creates a WAL file in region dir
- * [HBASE-11988] - AC/VC system table create on postStartMaster fails too often in test
- * [HBASE-11991] - Region states may be out of sync
- * [HBASE-11994] - PutCombiner floods the M/R log with repeated log messages.
- * [HBASE-12007] - StochasticBalancer should avoid putting user regions on master
- * [HBASE-12019] - hbase-daemon.sh overwrite HBASE_ROOT_LOGGER and HBASE_SECURITY_LOGGER variables
- * [HBASE-12024] - Fix javadoc warning
- * [HBASE-12025] - TestHttpServerLifecycle.testStartedServerWithRequestLog hangs frequently
- * [HBASE-12034] - If I kill single RS in branch-1, all regions end up on Master!
- * [HBASE-12038] - Replace internal uses of signatures with byte[] and String tableNames to use the TableName equivalents.
- * [HBASE-12041] - AssertionError in HFilePerformanceEvaluation.UniformRandomReadBenchmark
- * [HBASE-12042] - Replace internal uses of HTable(Configuration, String) with HTable(Configuration, TableName)
- * [HBASE-12043] - REST server should respond with FORBIDDEN(403) code on AccessDeniedException
- * [HBASE-12044] - REST delete operation should not retry disableTable for DoNotRetryIOException
- * [HBASE-12045] - REST proxy users configuration in hbase-site.xml is ignored
- * [HBASE-12052] - BulkLoad Failed due to no write permission on input files
- * [HBASE-12054] - bad state after NamespaceUpgrade with reserved table names
- * [HBASE-12056] - RPC logging too much in DEBUG mode
- * [HBASE-12064] - hbase.master.balancer.stochastic.numRegionLoadsToRemember is not used
- * [HBASE-12065] - Import tool is not restoring multiple DeleteFamily markers of a row
- * [HBASE-12067] - Remove deprecated metrics classes.
- * [HBASE-12078] - Missing Data when scanning using PREFIX_TREE DATA-BLOCK-ENCODING
- * [HBASE-12095] - SecureWALCellCodec should handle the case where encryption is disabled
- * [HBASE-12098] - User granted namespace table create permissions can't create a table
- * [HBASE-12099] - TestScannerModel fails if using jackson 1.9.13
- * [HBASE-12106] - Move test annotations to test artifact
- * [HBASE-12109] - user_permission command for namespace does not return correct result
- * [HBASE-12119] - Master regionserver web UI NOT_FOUND
- * [HBASE-12120] - HBase shell doesn't allow deleting of a cell by user with W-only permissions to it
- * [HBASE-12122] - Try not to assign user regions to master all the time
- * [HBASE-12123] - Failed assertion in BucketCache after 11331
- * [HBASE-12124] - Closed region could stay closed if master stops at bad time
- * [HBASE-12126] - Region server coprocessor endpoint
- * [HBASE-12130] - HBASE-11980 calls hflush and hsync doing near double the syncing work
- * [HBASE-12134] - publish_website.sh script is too optimistic
- * [HBASE-12135] - Website is broken
- * [HBASE-12136] - Race condition between client adding tableCF replication znode and server triggering TableCFsTracker
- * [HBASE-12137] - Alter table add cf doesn't do compression test
- * [HBASE-12139] - StochasticLoadBalancer doesn't work on large lightly loaded clusters
- * [HBASE-12140] - Add ConnectionFactory.createConnection() to create using default HBaseConfiguration.
- * [HBASE-12145] - Fix javadoc and findbugs so new folks aren't freaked when they see them
- * [HBASE-12146] - RegionServerTracker should escape data in log messages
- * [HBASE-12149] - TestRegionPlacement is failing undeterministically
- * [HBASE-12151] - Make dev scripts executable
- * [HBASE-12153] - Fixing TestReplicaWithCluster
- * [HBASE-12156] - TableName cache isn't used for one of valueOf methods.
- * [HBASE-12158] - TestHttpServerLifecycle.testStartedServerWithRequestLog goes zombie on occasion
- * [HBASE-12160] - Make Surefire's argLine configurable in the command line
- * [HBASE-12164] - Check for presence of user Id in SecureBulkLoadEndpoint#secureBulkLoadHFiles() is inaccurate
- * [HBASE-12165] - TestEndToEndSplitTransaction.testFromClientSideWhileSplitting fails
- * [HBASE-12166] - TestDistributedLogSplitting.testMasterStartsUpWithLogReplayWork
- * [HBASE-12167] - NPE in AssignmentManager
- * [HBASE-12170] - TestReplicaWithCluster.testReplicaAndReplication timeouts
- * [HBASE-12181] - Some tests create a table and try to use it before regions get assigned
- * [HBASE-12183] - FuzzyRowFilter doesn't support reverse scans
- * [HBASE-12184] - ServerShutdownHandler throws NPE
- * [HBASE-12191] - Make TestCacheOnWrite faster.
- * [HBASE-12196] - SSH should retry in case failed to assign regions
- * [HBASE-12197] - Move REST
- * [HBASE-12198] - Fix the bug of not updating location cache
- * [HBASE-12199] - Make TestAtomicOperation and TestEncodedSeekers faster
- * [HBASE-12200] - When an RPC server handler thread dies, throw exception
- * [HBASE-12206] - NPE in RSRpcServices
- * [HBASE-12209] - NPE in HRegionServer#getLastSequenceId
- * [HBASE-12218] - Make HBaseCommonTestingUtil#deleteDir try harder
-** Improvement
- * [HBASE-10153] - improve VerifyReplication to compute BADROWS more accurately
- * [HBASE-10411] - [Book] Add a kerberos 'request is a replay (34)' issue at troubleshooting section
- * [HBASE-11796] - Add client support for atomic checkAndMutate
- * [HBASE-11879] - Change TableInputFormatBase to take interface arguments
- * [HBASE-11907] - Use the joni byte[] regex engine in place of j.u.regex in RegexStringComparator
- * [HBASE-11948] - graceful_stop.sh should use hbase-daemon.sh when executed on the decomissioned node
- * [HBASE-12010] - Use TableName.META_TABLE_NAME instead of indirectly from HTableDescriptor
- * [HBASE-12011] - Add namespace column during display of user tables
- * [HBASE-12013] - Make region_mover.rb support multiple regionservers per host
- * [HBASE-12021] - Hbase shell does not respect the HBASE_OPTS set by the user in console
- * [HBASE-12032] - Script to stop regionservers via RPC
- * [HBASE-12049] - Help for alter command is a bit confusing
- * [HBASE-12090] - Bytes: more Unsafe, more Faster
- * [HBASE-12118] - Explain how to grant permission to a namespace in grant command usage
- * [HBASE-12176] - WALCellCodec Encoders support for non-KeyValue Cells
- * [HBASE-12212] - HBaseTestingUtility#waitUntilAllRegionsAssigned should wait for RegionStates
- * [HBASE-12220] - Add hedgedReads and hedgedReadWins metrics
-** New Feature
- * [HBASE-11990] - Make setting the start and stop row for a specific prefix easier
- * [HBASE-11995] - Use Connection and ConnectionFactory where possible
- * [HBASE-12127] - Move the core Connection creation functionality into ConnectionFactory
- * [HBASE-12133] - Add FastLongHistogram for metric computation
- * [HBASE-12143] - Minor fix for Table code
-** Task
- * [HBASE-9004] - Fix Documentation around Minor compaction and ttl
- * [HBASE-11692] - Document how and why to do a manual region split
- * [HBASE-11730] - Document release managers for non-deprecated branches
- * [HBASE-11761] - Add a FAQ item for updating a maven-managed application from 0.94 -> 0.96+
- * [HBASE-11960] - Provide a sample to show how to use Thrift client authentication
- * [HBASE-11978] - Backport 'HBASE-7767 Get rid of ZKTable, and table enable/disable state in ZK' to 1.0
- * [HBASE-11981] - Document how to find the units of measure for a given HBase metric
-** Test
- * [HBASE-11798] - TestBucketWriterThread may hang due to WriterThread stopping prematurely
- * [HBASE-11838] - Enable PREFIX_TREE in integration tests
- * [HBASE-12008] - Remove IntegrationTestImportTsv#testRunFromOutputCommitter
- * [HBASE-12055] - TestBucketWriterThread hangs flakily based on timing
-
-
-Release Notes - HBase - Version 0.99.0 9/22/2014
-
-** Sub-task
- * [HBASE-2251] - PE defaults to 1k rows - uncommon use case, and easy to hit benchmarks
- * [HBASE-5175] - Add DoubleColumnInterpreter
- * [HBASE-6873] - Clean up Coprocessor loading failure handling
- * [HBASE-8541] - implement flush-into-stripes in stripe compactions
- * [HBASE-9149] - javadoc cleanup of to reflect .META. rename to hbase:meta
- * [HBASE-9261] - Add cp hooks after {start|close}RegionOperation
- * [HBASE-9489] - Add cp hooks in online merge before and after setting PONR
- * [HBASE-9846] - Integration test and LoadTestTool support for cell ACLs
- * [HBASE-9858] - Integration test and LoadTestTool support for cell Visibility
- * [HBASE-9889] - Make sure we clean up scannerReadPoints upon any exceptions
- * [HBASE-9941] - The context ClassLoader isn't set while calling into a coprocessor
- * [HBASE-9966] - Create IntegrationTest for Online Bloom Filter Change
- * [HBASE-9977] - Define C interface of HBase Client Asynchronous APIs
- * [HBASE-10043] - Fix Potential Resouce Leak in MultiTableInputFormatBase
- * [HBASE-10094] - Add batching to HLogPerformanceEvaluation
- * [HBASE-10110] - Fix Potential Resource Leak in StoreFlusher
- * [HBASE-10124] - Make Sub Classes Static When Possible
- * [HBASE-10143] - Clean up dead local stores in FSUtils
- * [HBASE-10150] - Write attachment Id of tested patch into JIRA comment
- * [HBASE-10156] - FSHLog Refactor (WAS -> Fix up the HBASE-8755 slowdown when low contention)
- * [HBASE-10158] - Add sync rate histogram to HLogPE
- * [HBASE-10169] - Batch coprocessor
- * [HBASE-10297] - LoadAndVerify Integration Test for cell visibility
- * [HBASE-10347] - HRegionInfo changes for adding replicaId and MetaEditor/MetaReader changes for region replicas
- * [HBASE-10348] - HTableDescriptor changes for region replicas
- * [HBASE-10350] - Master/AM/RegionStates changes to create and assign region replicas
- * [HBASE-10351] - LoadBalancer changes for supporting region replicas
- * [HBASE-10352] - Region and RegionServer changes for opening region replicas, and refreshing store files
- * [HBASE-10354] - Add an API for defining consistency per request
- * [HBASE-10355] - Failover RPC's from client using region replicas
- * [HBASE-10356] - Failover RPC's for multi-get
- * [HBASE-10357] - Failover RPC's for scans
- * [HBASE-10359] - Master/RS WebUI changes for region replicas
- * [HBASE-10361] - Enable/AlterTable support for region replicas
- * [HBASE-10362] - HBCK changes for supporting region replicas
- * [HBASE-10391] - Deprecate KeyValue#getBuffer
- * [HBASE-10420] - Replace KV.getBuffer with KV.get{Row|Family|Qualifier|Value|Tags}Array
- * [HBASE-10513] - Provide user documentation for region replicas
- * [HBASE-10517] - NPE in MetaCache.clearCache()
- * [HBASE-10519] - Add handling for swallowed InterruptedException thrown by Thread.sleep in rest related files
- * [HBASE-10520] - Add handling for swallowed InterruptedException thrown by Thread.sleep in MiniZooKeeperCluster
- * [HBASE-10521] - Add handling for swallowed InterruptedException thrown by Thread.sleep in RpcServer
- * [HBASE-10522] - Correct wrong handling and add proper handling for swallowed InterruptedException thrown by Thread.sleep in client
- * [HBASE-10523] - Correct wrong handling and add proper handling for swallowed InterruptedException thrown by Thread.sleep in util
- * [HBASE-10524] - Correct wrong handling and add proper handling for swallowed InterruptedException thrown by Thread.sleep in regionserver
- * [HBASE-10526] - Using Cell instead of KeyValue in HFileOutputFormat
- * [HBASE-10529] - Make Cell extend Cloneable
- * [HBASE-10530] - Add util methods in CellUtil
- * [HBASE-10531] - Revisit how the key byte[] is passed to HFileScanner.seekTo and reseekTo
- * [HBASE-10532] - Make KeyValueComparator in KeyValue to accept Cell instead of KeyValue.
- * [HBASE-10550] - Register HBase tokens with ServiceLoader
- * [HBASE-10561] - Forward port: HBASE-10212 New rpc metric: number of active handler
- * [HBASE-10572] - Create an IntegrationTest for region replicas
- * [HBASE-10573] - Use Netty 4
- * [HBASE-10616] - Integration test for multi-get calls
- * [HBASE-10620] - LoadBalancer.needsBalance() should check for co-located region replicas as well
- * [HBASE-10630] - NullPointerException in ConnectionManager$HConnectionImplementation.locateRegionInMeta() due to missing region info
- * [HBASE-10633] - StoreFileRefresherChore throws ConcurrentModificationException sometimes
- * [HBASE-10634] - Multiget doesn't fully work
- * [HBASE-10648] - Pluggable Memstore
- * [HBASE-10650] - Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in RegionServer
- * [HBASE-10651] - Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in Replication
- * [HBASE-10652] - Fix incorrect handling of IE that restores current thread's interrupt status within while/for loops in rpc
- * [HBASE-10661] - TestStochasticLoadBalancer.testRegionReplicationOnMidClusterWithRacks() is flaky
- * [HBASE-10672] - Table snapshot should handle tables whose REGION_REPLICATION is greater than one
- * [HBASE-10680] - Check if the block keys, index keys can be used as Cells instead of byte[]
- * [HBASE-10688] - Add a draining_node script to manage nodes in draining mode
- * [HBASE-10691] - test-patch.sh should continue even if compilation against hadoop 1.0 / 1.1 fails
- * [HBASE-10697] - Convert TestSimpleTotalOrderPartitioner to junit4 test
- * [HBASE-10701] - Cache invalidation improvements from client side
- * [HBASE-10704] - BaseLoadBalancer#roundRobinAssignment() may add same region to assignment plan multiple times
- * [HBASE-10717] - TestFSHDFSUtils#testIsSameHdfs fails with IllegalArgumentException running against hadoop 2.3
- * [HBASE-10723] - Convert TestExplicitColumnTracker to junit4 test
- * [HBASE-10729] - Enable table doesn't balance out replicas evenly if the replicas were unassigned earlier
- * [HBASE-10734] - Fix RegionStates.getRegionAssignments to not add duplicate regions
- * [HBASE-10741] - Deprecate HTablePool and HTableFactory
- * [HBASE-10743] - Replica map update is problematic in RegionStates
- * [HBASE-10750] - Pluggable MemStoreLAB
- * [HBASE-10778] - Unique keys accounting in MultiThreadedReader is incorrect
- * [HBASE-10779] - Doc hadoop1 deprecated in 0.98 and NOT supported in hbase 1.0
- * [HBASE-10781] - Remove hadoop-one-compat module and all references to hadoop1
- * [HBASE-10791] - Add integration test to demonstrate performance improvement
- * [HBASE-10794] - multi-get should handle replica location missing from cache
- * [HBASE-10796] - Set default log level as INFO
- * [HBASE-10801] - Ensure DBE interfaces can work with Cell
- * [HBASE-10810] - LoadTestTool should share the connection and connection pool
- * [HBASE-10815] - Master regionserver should be rolling-upgradable
- * [HBASE-10817] - Add some tests on a real cluster for replica: multi master, replication
- * [HBASE-10818] - Add integration test for bulkload with replicas
- * [HBASE-10822] - Thread local addendum to HBASE-10656 Counter
- * [HBASE-10841] - Scan,Get,Put,Delete,etc setters should consistently return this
- * [HBASE-10855] - Enable hfilev3 by default
- * [HBASE-10858] - TestRegionRebalancing is failing
- * [HBASE-10859] - Use HFileLink in opening region files from secondaries
- * [HBASE-10888] - Enable distributed log replay as default
- * [HBASE-10915] - Decouple region closing (HM and HRS) from ZK
- * [HBASE-10918] - [VisibilityController] System table backed ScanLabelGenerator
- * [HBASE-10929] - Change ScanQueryMatcher to use Cells instead of KeyValue.
- * [HBASE-10930] - Change Filters and GetClosestRowBeforeTracker to work with Cells
- * [HBASE-10957] - HBASE-10070: HMaster can abort with NPE in #rebuildUserRegions
- * [HBASE-10962] - Decouple region opening (HM and HRS) from ZK
- * [HBASE-10963] - Refactor cell ACL tests
- * [HBASE-10972] - OOBE in prefix key encoding
- * [HBASE-10985] - Decouple Split Transaction from Zookeeper
- * [HBASE-10993] - Deprioritize long-running scanners
- * [HBASE-11025] - Infrastructure for pluggable consensus service
- * [HBASE-11027] - Remove kv.isDeleteXX() and related methods and use CellUtil apis.
- * [HBASE-11053] - Change DeleteTracker APIs to work with Cell
- * [HBASE-11054] - Create new hook in StoreScanner to help user creating his own delete tracker
- * [HBASE-11059] - ZK-less region assignment
- * [HBASE-11069] - Decouple region merging from ZooKeeper
- * [HBASE-11072] - Abstract WAL splitting from ZK
- * [HBASE-11077] - [AccessController] Restore compatible early-out access denial
- * [HBASE-11088] - Support Visibility Expression Deletes in Shell
- * [HBASE-11092] - Server interface should have method getConsensusProvider()
- * [HBASE-11094] - Distributed log replay is incompatible for rolling restarts
- * [HBASE-11098] - Improve documentation around our blockcache options
- * [HBASE-11101] - Documentation review
- * [HBASE-11102] - Document JDK versions supported by each release
- * [HBASE-11108] - Split ZKTable into interface and implementation
- * [HBASE-11109] - flush region sequence id may not be larger than all edits flushed
- * [HBASE-11135] - Change region sequenceid generation so happens earlier in the append cycle rather than just before added to file
- * [HBASE-11140] - LocalHBaseCluster should create ConsensusProvider per each server
- * [HBASE-11161] - Provide example of POJO encoding with protobuf
- * [HBASE-11171] - More doc improvements on block cache options
- * [HBASE-11214] - Fixes for scans on a replicated table
- * [HBASE-11229] - Change block cache percentage metrics to be doubles rather than ints
- * [HBASE-11280] - Document distributed log replay and distributed log splitting
- * [HBASE-11307] - Deprecate SlabCache
- * [HBASE-11318] - Classes in security subpackages missing @InterfaceAudience annotations.
- * [HBASE-11332] - Fix for metas location cache from HBASE-10785
- * [HBASE-11367] - Pluggable replication endpoint
- * [HBASE-11372] - Remove SlabCache
- * [HBASE-11384] - [Visibility Controller]Check for users covering authorizations for every mutation
- * [HBASE-11395] - Add logging for HBase table operations
- * [HBASE-11471] - Move TableStateManager and ZkTableStateManager and Server to hbase-server
- * [HBASE-11483] - Check the rest of the book for new XML validity errors and fix
- * [HBASE-11508] - Document changes to IPC config parameters from HBASE-11492
- * [HBASE-11511] - Write flush events to WAL
- * [HBASE-11512] - Write region open/close events to WAL
- * [HBASE-11520] - Simplify offheap cache config by removing the confusing "hbase.bucketcache.percentage.in.combinedcache"
- * [HBASE-11559] - Add dumping of DATA block usage to the BlockCache JSON report.
- * [HBASE-11572] - Add support for doing get/scans against a particular replica_id
- * [HBASE-11573] - Report age on eviction
- * [HBASE-11610] - Enhance remote meta updates
- * [HBASE-11651] - Add conf which disables MetaMigrationConvertingToPB check (for experts only)
- * [HBASE-11722] - Document new shortcut commands introduced by HBASE-11649
- * [HBASE-11734] - Document changed behavior of hbase.hstore.time.to.purge.deletes
- * [HBASE-11736] - Document SKIP_FLUSH snapshot option
- * [HBASE-11737] - Document callQueue improvements from HBASE-11355 and HBASE-11724
- * [HBASE-11739] - Document blockCache contents report in the UI
- * [HBASE-11740] - RegionStates.getRegionAssignments() gets stuck on clone
- * [HBASE-11752] - Document blockcache prefetch option
- * [HBASE-11753] - Document HBASE_SHELL_OPTS environment variable
- * [HBASE-11781] - Document new TableMapReduceUtil scanning options
- * [HBASE-11784] - Document global configuration for maxVersion
- * [HBASE-11822] - Convert EnvironmentEdge#getCurrentTimeMillis to getCurrentTime
- * [HBASE-11919] - Remove the deprecated pre/postGet CP hook
- * [HBASE-11923] - Potential race condition in RecoverableZookeeper.checkZk()
- * [HBASE-11934] - Support KeyValueCodec to encode non KeyValue cells.
- * [HBASE-11941] - Rebuild site because of major structural changes to HTML
- * [HBASE-11963] - Synchronize peer cluster replication connection attempts
-** Brainstorming
- * [HBASE-9507] - Promote methods of WALActionsListener to WALObserver
- * [HBASE-11209] - Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4
-** Bug
- * [HBASE-3787] - Increment is non-idempotent but client retries RPC
- * [HBASE-4931] - CopyTable instructions could be improved.
- * [HBASE-5356] - region_mover.rb can hang if table region it belongs to is deleted.
- * [HBASE-6506] - Setting CACHE_BLOCKS to false in an hbase shell scan doesn't work
- * [HBASE-6642] - enable_all,disable_all,drop_all can call "list" command with regex directly.
- * [HBASE-6701] - Revisit thrust of paragraph on splitting
- * [HBASE-7226] - HRegion.checkAndMutate uses incorrect comparison result for <, <=, > and >=
- * [HBASE-7963] - HBase VerifyReplication not working when security enabled
- * [HBASE-8112] - Deprecate HTable#batch(final List)
- * [HBASE-8269] - Fix data locallity documentation.
- * [HBASE-8304] - Bulkload fails to remove files if fs.default.name / fs.defaultFS is configured without default port
- * [HBASE-8529] - checkOpen is missing from multi, mutate, get and multiGet etc.
- * [HBASE-8701] - distributedLogReplay need to apply wal edits in the receiving order of those edits
- * [HBASE-8713] - [hadoop1] Log spam each time the WAL is rolled
- * [HBASE-8803] - region_mover.rb should move multiple regions at a time
- * [HBASE-8817] - Enhance The Apache HBase Reference Guide
- * [HBASE-9151] - HBCK cannot fix when meta server znode deleted, this can happen if all region servers stopped and there are no logs to split.
- * [HBASE-9292] - Syncer fails but we won't go down
- * [HBASE-9294] - NPE in /rs-status during RS shutdown
- * [HBASE-9346] - HBCK should provide an option to check if regions boundaries are the same in META and in stores.
- * [HBASE-9445] - Snapshots should create column family dirs for empty regions
- * [HBASE-9473] - Change UI to list 'system tables' rather than 'catalog tables'.
- * [HBASE-9485] - TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart
- * [HBASE-9708] - Improve Snapshot Name Error Message
- * [HBASE-9721] - RegionServer should not accept regionOpen RPC intended for another(previous) server
- * [HBASE-9745] - Append HBASE_CLASSPATH to end of Java classpath and use another env var for prefix
- * [HBASE-9746] - RegionServer can't start when replication tries to replicate to an unknown host
- * [HBASE-9754] - Eliminate threadlocal from MVCC code
- * [HBASE-9778] - Add hint to ExplicitColumnTracker to avoid seeking
- * [HBASE-9990] - HTable uses the conf for each "newCaller"
- * [HBASE-10018] - Remove region location prefetching
- * [HBASE-10061] - TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in thrown NPE
- * [HBASE-10069] - Potential duplicate calls to log#appendNoSync() in HRegion#doMiniBatchMutation()
- * [HBASE-10073] - Revert HBASE-9718 (Add a test scope dependency on org.slf4j:slf4j-api to hbase-client)
- * [HBASE-10079] - Race in TableName cache
- * [HBASE-10080] - Unnecessary call to locateRegion when creating an HTable instance
- * [HBASE-10084] - [WINDOWS] bin\hbase.cmd should allow whitespaces in java.library.path and classpath
- * [HBASE-10087] - Store should be locked during a memstore snapshot
- * [HBASE-10090] - Master could hang in assigning meta
- * [HBASE-10097] - Remove a region name string creation in HRegion#nextInternal
- * [HBASE-10098] - [WINDOWS] pass in native library directory from hadoop for unit tests
- * [HBASE-10099] - javadoc warning introduced by LabelExpander 188: warning - @return tag has no arguments
- * [HBASE-10101] - testOfflineRegionReAssginedAfterMasterRestart times out sometimes.
- * [HBASE-10103] - TestNodeHealthCheckChore#testRSHealthChore: Stoppable must have been stopped
- * [HBASE-10107] - [JDK7] TestHBaseSaslRpcClient.testHBaseSaslRpcClientCreation failing on Jenkins
- * [HBASE-10108] - NullPointerException thrown while use Canary with '-regionserver' option
- * [HBASE-10112] - Hbase rest query params for maxVersions and maxValues are not parsed
- * [HBASE-10114] - _scan_internal() in table.rb should accept argument that specifies reverse scan
- * [HBASE-10117] - Avoid synchronization in HRegionScannerImpl.isFilterDone
- * [HBASE-10118] - Major compact keeps deletes with future timestamps
- * [HBASE-10120] - start-hbase.sh doesn't respect --config in non-distributed mode
- * [HBASE-10123] - Change default ports; move them out of linux ephemeral port range
- * [HBASE-10132] - sun.security.pkcs11.wrapper.PKCS11Exception: CKR_ARGUMENTS_BAD
- * [HBASE-10135] - Remove ? extends from HLogSplitter#getOutputCounts
- * [HBASE-10137] - GeneralBulkAssigner with retain assignment plan can be used in EnableTableHandler to bulk assign the regions
- * [HBASE-10138] - incorrect or confusing test value is used in block caches
- * [HBASE-10142] - TestLogRolling#testLogRollOnDatanodeDeath test failure
- * [HBASE-10146] - Bump HTrace version to 2.04
- * [HBASE-10148] - [VisibilityController] Tolerate regions in recovery
- * [HBASE-10149] - TestZKPermissionsWatcher.testPermissionsWatcher test failure
- * [HBASE-10155] - HRegion isRecovering state is wrongly coming in postOpen hook
- * [HBASE-10161] - [AccessController] Tolerate regions in recovery
- * [HBASE-10163] - Example Thrift DemoClient is flaky
- * [HBASE-10176] - Canary#sniff() should close the HTable instance
- * [HBASE-10178] - Potential null object dereference in TablePermission#equals()
- * [HBASE-10179] - HRegionServer underreports readRequestCounts by 1 under certain conditions
- * [HBASE-10182] - Potential null object deference in AssignmentManager#handleRegion()
- * [HBASE-10186] - region_mover.rb broken because ServerName constructor is changed to private
- * [HBASE-10187] - AccessController#preOpen - Include 'labels' table also into special tables list.
- * [HBASE-10193] - Cleanup HRegion if one of the store fails to open at region initialization
- * [HBASE-10194] - [Usability]: Instructions in CompactionTool no longer accurate because of namespaces
- * [HBASE-10196] - Enhance HBCK to understand the case after online region merge
- * [HBASE-10205] - ConcurrentModificationException in BucketAllocator
- * [HBASE-10207] - ZKVisibilityLabelWatcher : Populate the labels cache on startup
- * [HBASE-10210] - during master startup, RS can be you-are-dead-ed by master in error
- * [HBASE-10215] - TableNotFoundException should be thrown after removing stale znode in ETH
- * [HBASE-10219] - HTTPS support for HBase in RegionServerListTmpl.jamon
- * [HBASE-10220] - Put all test service principals into the superusers list
- * [HBASE-10221] - Region from coprocessor invocations can be null on failure
- * [HBASE-10223] - [VisibilityController] cellVisibility presence check on Delete mutation is wrong
- * [HBASE-10225] - Bug in calls to RegionObsever.postScannerFilterRow
- * [HBASE-10226] - [AccessController] Namespace grants not always checked
- * [HBASE-10231] - Potential NPE in HBaseFsck#checkMetaRegion()
- * [HBASE-10232] - Remove native profile from hbase-shell
- * [HBASE-10233] - VisibilityController is too chatty at DEBUG level
- * [HBASE-10249] - TestReplicationSyncUpTool fails because failover takes too long
- * [HBASE-10251] - Restore API Compat for PerformanceEvaluation.generateValue()
- * [HBASE-10260] - Canary Doesn't pick up Configuration properly.
- * [HBASE-10264] - [MapReduce]: CompactionTool in mapred mode is missing classes in its classpath
- * [HBASE-10267] - TestNamespaceCommands occasionally fails
- * [HBASE-10268] - TestSplitLogWorker occasionally fails
- * [HBASE-10271] - [regression] Cannot use the wildcard address since HBASE-9593
- * [HBASE-10272] - Cluster becomes nonoperational if the node hosting the active Master AND ROOT/META table goes offline
- * [HBASE-10274] - MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers
- * [HBASE-10284] - Build broken with svn 1.8
- * [HBASE-10292] - TestRegionServerCoprocessorExceptionWithAbort fails occasionally
- * [HBASE-10298] - TestIOFencing occasionally fails
- * [HBASE-10302] - Fix rat check issues in hbase-native-client.
- * [HBASE-10304] - Running an hbase job jar: IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString
- * [HBASE-10307] - IntegrationTestIngestWithEncryption assumes localhost cluster
- * [HBASE-10310] - ZNodeCleaner session expired for /hbase/master
- * [HBASE-10312] - Flooding the cluster with administrative actions leads to collapse
- * [HBASE-10313] - Duplicate servlet-api jars in hbase 0.96.0
- * [HBASE-10315] - Canary shouldn't exit with 3 if there is no master running.
- * [HBASE-10316] - Canary#RegionServerMonitor#monitorRegionServers() should close the scanner returned by table.getScanner()
- * [HBASE-10317] - getClientPort method of MiniZooKeeperCluster does not always return the correct value
- * [HBASE-10318] - generate-hadoopX-poms.sh expects the version to have one extra '-'
- * [HBASE-10320] - Avoid ArrayList.iterator() ExplicitColumnTracker
- * [HBASE-10321] - CellCodec has broken the 96 client to 98 server compatibility
- * [HBASE-10322] - Strip tags from KV while sending back to client on reads
- * [HBASE-10326] - Super user should be able scan all the cells irrespective of the visibility labels
- * [HBASE-10327] - Remove remove(K, V) from type PoolMap
- * [HBASE-10329] - Fail the writes rather than proceeding silently to prevent data loss when AsyncSyncer encounters null writer and its writes aren't synced by other Asyncer
- * [HBASE-10330] - TableInputFormat/TableRecordReaderImpl leaks HTable
- * [HBASE-10332] - Missing .regioninfo file during daughter open processing
- * [HBASE-10333] - Assignments are not retained on a cluster start
- * [HBASE-10334] - RegionServer links in table.jsp is broken
- * [HBASE-10335] - AuthFailedException in zookeeper may block replication forever
- * [HBASE-10336] - Remove deprecated usage of Hadoop HttpServer in InfoServer
- * [HBASE-10337] - HTable.get() uninteruptible
- * [HBASE-10338] - Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost
- * [HBASE-10339] - Mutation::getFamilyMap method was lost in 98
- * [HBASE-10349] - Table became unusable when master balanced its region after table was dropped
- * [HBASE-10365] - HBaseFsck should clean up connection properly when repair is completed
- * [HBASE-10370] - Compaction in out-of-date Store causes region split failure
- * [HBASE-10371] - Compaction creates empty hfile, then selects this file for compaction and creates empty hfile and over again
- * [HBASE-10375] - hbase-default.xml hbase.status.multicast.address.port does not match code
- * [HBASE-10384] - Failed to increment serveral columns in one Increment
- * [HBASE-10392] - Correct references to hbase.regionserver.global.memstore.upperLimit
- * [HBASE-10397] - Fix findbugs introduced from HBASE-9426
- * [HBASE-10400] - [hbck] Continue if region dir missing on region merge attempt
- * [HBASE-10401] - [hbck] perform overlap group merges in parallel
- * [HBASE-10407] - String Format Exception
- * [HBASE-10412] - Distributed log replay : Cell tags getting missed
- * [HBASE-10413] - Tablesplit.getLength returns 0
- * [HBASE-10417] - index is not incremented in PutSortReducer#reduce()
- * [HBASE-10422] - ZeroCopyLiteralByteString.zeroCopyGetBytes has an unusable prototype and conflicts with AsyncHBase
- * [HBASE-10426] - user_permission in security.rb calls non-existent UserPermission#getTable method
- * [HBASE-10428] - Test jars should have scope test
- * [HBASE-10429] - Make Visibility Controller to throw a better msg if it is of type RegionServerCoprocessor
- * [HBASE-10431] - Rename com.google.protobuf.ZeroCopyLiteralByteString
- * [HBASE-10432] - Rpc retries non-recoverable error
- * [HBASE-10433] - SecureProtobufWALReader does not handle unencrypted WALs if configured to encrypt
- * [HBASE-10434] - Store Tag compression information for a WAL in its header
- * [HBASE-10435] - Lower the log level of Canary region server match
- * [HBASE-10436] - restore regionserver lists removed from hbase 0.96+ jmx
- * [HBASE-10438] - NPE from LRUDictionary when size reaches the max init value
- * [HBASE-10441] - [docs] nit default max versions is now 1 instead of 3 after HBASE-8450
- * [HBASE-10442] - prepareDelete() isn't called before doPreMutationHook for a row deletion case
- * [HBASE-10443] - IndexOutOfBoundExceptions when processing compressed tags in HFile
- * [HBASE-10446] - Backup master gives Error 500 for debug dump
- * [HBASE-10447] - Memstore flusher scans storefiles also when the scanner heap gets reset
- * [HBASE-10448] - ZKUtil create and watch methods don't set watch in some cases
- * [HBASE-10449] - Wrong execution pool configuration in HConnectionManager
- * [HBASE-10451] - Enable back Tag compression on HFiles
- * [HBASE-10452] - Fix potential bugs in exception handlers
- * [HBASE-10454] - Tags presence file info can be wrong in HFiles when PrefixTree encoding is used
- * [HBASE-10455] - cleanup InterruptedException management
- * [HBASE-10456] - Balancer should not run if it's just turned off.
- * [HBASE-10458] - Typo in book chapter 9 architecture.html
- * [HBASE-10459] - Broken link F.1. HBase Videos
- * [HBASE-10460] - Return value of Scan#setSmall() should be void
- * [HBASE-10461] - table.close() in TableEventHandler#reOpenAllRegions() should be enclosed in finally block
- * [HBASE-10469] - Hbase book client.html has a broken link
- * [HBASE-10470] - Import generates huge log file while importing large amounts of data
- * [HBASE-10472] - Manage the interruption in ZKUtil#getData
- * [HBASE-10476] - HBase Master log grows very fast after stopped hadoop (due to connection exception)
- * [HBASE-10477] - Regression from HBASE-10337
- * [HBASE-10478] - Hbase book presentations page has broken link
- * [HBASE-10481] - API Compatibility JDiff script does not properly handle arguments in reverse order
- * [HBASE-10482] - ReplicationSyncUp doesn't clean up its ZK, needed for tests
- * [HBASE-10485] - PrefixFilter#filterKeyValue() should perform filtering on row key
- * [HBASE-10486] - ProtobufUtil Append & Increment deserialization lost cell level timestamp
- * [HBASE-10488] - 'mvn site' is broken due to org.apache.jasper.JspC not found
- * [HBASE-10490] - Simplify RpcClient code
- * [HBASE-10493] - InclusiveStopFilter#filterKeyValue() should perform filtering on row key
- * [HBASE-10495] - upgrade script is printing usage two times with help option.
- * [HBASE-10500] - Some tools OOM when BucketCache is enabled
- * [HBASE-10501] - Improve IncreasingToUpperBoundRegionSplitPolicy to avoid too many regions
- * [HBASE-10506] - Fail-fast if client connection is lost before the real call be executed in RPC layer
- * [HBASE-10510] - HTable is not closed in LoadTestTool#loadTable()
- * [HBASE-10514] - Forward port HBASE-10466, possible data loss when failed flushes
- * [HBASE-10516] - Refactor code where Threads.sleep is called within a while/for loop
- * [HBASE-10525] - Allow the client to use a different thread for writing to ease interrupt
- * [HBASE-10533] - commands.rb is giving wrong error messages on exceptions
- * [HBASE-10534] - Rowkey in TsvImporterTextMapper initializing with wrong length
- * [HBASE-10537] - Let the ExportSnapshot mapper fail and retry on error
- * [HBASE-10539] - HRegion.addAndGetGlobalMemstoreSize returns previous size
- * [HBASE-10545] - RS Hangs waiting on region to close on shutdown; has to timeout before can go down
- * [HBASE-10546] - Two scanner objects are open for each hbase map task but only one scanner object is closed
- * [HBASE-10547] - TestFixedLengthWrapper#testReadWrite occasionally fails with the IBM JDK
- * [HBASE-10548] - Correct commons-math dependency version
- * [HBASE-10549] - When there is a hole, LoadIncrementalHFiles will hang in an infinite loop.
- * [HBASE-10552] - HFilePerformanceEvaluation.GaussianRandomReadBenchmark fails sometimes.
- * [HBASE-10556] - Possible data loss due to non-handled DroppedSnapshotException for user-triggered flush from client/shell
- * [HBASE-10563] - Set name for FlushHandler thread
- * [HBASE-10564] - HRegionServer.nextLong should be removed since it's not used anywhere, or should be used somewhere it meant to
- * [HBASE-10565] - FavoredNodesPlan accidentally uses an internal Netty type
- * [HBASE-10566] - cleanup rpcTimeout in the client
- * [HBASE-10567] - Add overwrite manifest option to ExportSnapshot
- * [HBASE-10575] - ReplicationSource thread can't be terminated if it runs into the loop to contact peer's zk ensemble and fails continuously
- * [HBASE-10579] - [Documentation]: ExportSnapshot tool package incorrectly documented
- * [HBASE-10580] - IntegrationTestingUtility#restoreCluster leak resource when running in a mini cluster mode
- * [HBASE-10581] - ACL znode are left without PBed during upgrading hbase0.94* to hbase0.96+
- * [HBASE-10582] - 0.94->0.96 Upgrade: ACL can't be repopulated when ACL table contains row for table '-ROOT' or '.META.'
- * [HBASE-10585] - Avoid early creation of Node objects in LRUDictionary.BidirectionalLRUMap
- * [HBASE-10586] - hadoop2-compat IPC metric registred twice
- * [HBASE-10587] - Master metrics clusterRequests is wrong
- * [HBASE-10593] - FileInputStream in JenkinsHash#main() is never closed
- * [HBASE-10594] - Speed up TestRestoreSnapshotFromClient
- * [HBASE-10598] - Written data can not be read out because MemStore#timeRangeTracker might be updated concurrently
- * [HBASE-10600] - HTable#batch() should perform validation on empty Put
- * [HBASE-10604] - Fix parseArgs javadoc
- * [HBASE-10606] - Bad timeout in RpcRetryingCaller#callWithRetries w/o parameters
- * [HBASE-10608] - Acquire the FS Delegation Token for Secure ExportSnapshot
- * [HBASE-10611] - Description for hbase:acl table is wrong on master-status#catalogTables
- * [HBASE-10614] - Master could not be stopped
- * [HBASE-10618] - User should not be allowed to disable/drop visibility labels table
- * [HBASE-10621] - Unable to grant user permission to namespace
- * [HBASE-10622] - Improve log and Exceptions in Export Snapshot
- * [HBASE-10624] - Fix 2 new findbugs warnings introduced by HBASE-10598
- * [HBASE-10627] - A logic mistake in HRegionServer isHealthy
- * [HBASE-10631] - Avoid extra seek on FileLink open
- * [HBASE-10632] - Region lost in limbo after ArrayIndexOutOfBoundsException during assignment
- * [HBASE-10637] - rpcClient: Setup the iostreams when writing
- * [HBASE-10639] - Unload script displays wrong counts (off by one) when unloading regions
- * [HBASE-10644] - TestSecureExportSnapshot#testExportFileSystemState fails on hadoop-1
- * [HBASE-10656] - high-scale-lib's Counter depends on Oracle (Sun) JRE, and also has some bug
- * [HBASE-10660] - MR over snapshots can OOM when alternative blockcache is enabled
- * [HBASE-10662] - RegionScanner is never closed if the region has been moved-out or re-opened when performing scan request
- * [HBASE-10665] - TestCompaction and TestCompactionWithCoprocessor run too long
- * [HBASE-10666] - TestMasterCoprocessorExceptionWithAbort hangs at shutdown
- * [HBASE-10668] - TestExportSnapshot runs too long
- * [HBASE-10669] - [hbck tool] Usage is wrong for hbck tool for -sidelineCorruptHfiles option
- * [HBASE-10675] - IntegrationTestIngestWithACL should allow User to be passed as Parameter
- * [HBASE-10677] - boundaries check in hbck throwing IllegalArgumentException
- * [HBASE-10679] - Both clients get wrong scan results if the first scanner expires and the second scanner is created with the same scannerId on the same region
- * [HBASE-10682] - region_mover.rb throws "can't convert nil into String" for regions moved
- * [HBASE-10685] - [WINDOWS] TestKeyStoreKeyProvider fails on windows
- * [HBASE-10686] - [WINDOWS] TestStripeStoreFileManager fails on windows
- * [HBASE-10687] - Fix description about HBaseLocalFileSpanReceiver in reference manual
- * [HBASE-10692] - The Multi TableMap job don't support the security HBase cluster
- * [HBASE-10694] - TableSkewCostFunction#cost() casts integral division result to double
- * [HBASE-10705] - CompactionRequest#toString() may throw NullPointerException
- * [HBASE-10706] - Disable writeToWal in tests where possible
- * [HBASE-10714] - SyncFuture hangs when sequence is 0
- * [HBASE-10715] - TimeRange has a poorly formatted error message
- * [HBASE-10716] - [Configuration]: hbase.regionserver.region.split.policy should be part of hbase-default.xml
- * [HBASE-10718] - TestHLogSplit fails when it sets a KV size to be negative
- * [HBASE-10720] - rpcClient: Wrong log level when closing the connection
- * [HBASE-10726] - Fix java.lang.ArrayIndexOutOfBoundsException in StochasticLoadBalancer$LocalityBasedCandidateGenerator
- * [HBASE-10731] - Fix environment variables typos in scripts
- * [HBASE-10735] - [WINDOWS] Set -XX:MaxPermSize for unit tests
- * [HBASE-10736] - Fix Javadoc warnings introduced in HBASE-10169
- * [HBASE-10737] - HConnectionImplementation should stop RpcClient on close
- * [HBASE-10738] - AssignmentManager should shut down executors on stop
- * [HBASE-10739] - RS web UI NPE if master shuts down sooner
- * [HBASE-10745] - Access ShutdownHook#fsShutdownHooks should be synchronized
- * [HBASE-10749] - CellComparator.compareStatic() compares type wrongly
- * [HBASE-10751] - TestHRegion testWritesWhileScanning occasional fail since HBASE-10514 went in
- * [HBASE-10755] - MetricsRegionSourceImpl creates metrics that start with a lower case
- * [HBASE-10760] - Wrong methods' names in ClusterLoadState class
- * [HBASE-10762] - clone_snapshot doesn't check for missing namespace
- * [HBASE-10766] - SnapshotCleaner allows to delete referenced files
- * [HBASE-10770] - Don't exit from the Canary daemon mode if no regions are present
- * [HBASE-10792] - RingBufferTruck does not release its payload
- * [HBASE-10793] - AuthFailed as a valid zookeeper state
- * [HBASE-10799] - [WINDOWS] TestImportTSVWithVisibilityLabels.testBulkOutputWithTsvImporterTextMapper fails on windows
- * [HBASE-10802] - CellComparator.compareStaticIgnoreMvccVersion compares type wrongly
- * [HBASE-10804] - Add a validations step to ExportSnapshot
- * [HBASE-10805] - Speed up KeyValueHeap.next() a bit
- * [HBASE-10806] - Two protos missing in hbase-protocol/pom.xml
- * [HBASE-10809] - HBaseAdmin#deleteTable fails when META region happen to move around same time
- * [HBASE-10814] - RpcClient: some calls can get stuck when connection is closing
- * [HBASE-10825] - Add copy-from option to ExportSnapshot
- * [HBASE-10829] - Flush is skipped after log replay if the last recovered edits file is skipped
- * [HBASE-10830] - Integration test MR jobs attempt to load htrace jars from the wrong location
- * [HBASE-10831] - IntegrationTestIngestWithACL is not setting up LoadTestTool correctly
- * [HBASE-10833] - Region assignment may fail during cluster start up
- * [HBASE-10838] - Insufficient AccessController covering permission check
- * [HBASE-10839] - NullPointerException in construction of RegionServer in Security Cluster
- * [HBASE-10840] - Fix findbug warn induced by HBASE-10569
- * [HBASE-10845] - Memstore snapshot size isn't updated in DefaultMemStore#rollback()
- * [HBASE-10846] - Links between active and backup masters are broken
- * [HBASE-10848] - Filter SingleColumnValueFilter combined with NullComparator does not work
- * [HBASE-10849] - Fix increased javadoc warns
- * [HBASE-10850] - essential column family optimization is broken
- * [HBASE-10851] - Wait for regionservers to join the cluster
- * [HBASE-10853] - NPE in RSRpcServices.get on trunk
- * [HBASE-10854] - [VisibilityController] Apply MAX_VERSIONS from schema or request when scanning
- * [HBASE-10860] - Insufficient AccessController covering permission check
- * [HBASE-10862] - Update config field names in hbase-default.xml description for hbase.hregion.memstore.block.multiplier
- * [HBASE-10863] - Scan doesn't return rows for user who has authorization by visibility label in secure deployment
- * [HBASE-10864] - Spelling nit
- * [HBASE-10890] - ExportSnapshot needs to add acquired token to job
- * [HBASE-10895] - unassign a region fails due to the hosting region server is in FailedServerList
- * [HBASE-10897] - On master start, deadlock if refresh UI
- * [HBASE-10899] - [AccessController] Apply MAX_VERSIONS from schema or request when scanning
- * [HBASE-10903] - HBASE-10740 regression; cannot pass commands for zk to run
- * [HBASE-10917] - Fix hbase book "Tests" page
- * [HBASE-10922] - Log splitting status should always be closed
- * [HBASE-10931] - Enhance logs
- * [HBASE-10941] - default for max version isn't updated in doc after change on 0.96
- * [HBASE-10948] - Fix hbase table file 'x' mode
- * [HBASE-10949] - Reversed scan could hang
- * [HBASE-10954] - Fix TestCloseRegionHandler.testFailedFlushAborts
- * [HBASE-10955] - HBCK leaves the region in masters in-memory RegionStates if region hdfs dir is lost
- * [HBASE-10958] - [dataloss] Bulk loading with seqids can prevent some log entries from being replayed
- * [HBASE-10964] - Delete mutation is not consistent with Put wrt timestamp
- * [HBASE-10966] - RowCounter misinterprets column names that have colons in their qualifier
- * [HBASE-10967] - CatalogTracker.waitForMeta should not wait indefinitely silently
- * [HBASE-10968] - Null check in TableSnapshotInputFormat#TableSnapshotRegionRecordReader#initialize() is redundant
- * [HBASE-10970] - [AccessController] Issues with covering cell permission checks
- * [HBASE-10976] - Start CatalogTracker after cluster ID is available
- * [HBASE-10979] - Fix AnnotationReadingPriorityFunction "scan" handling
- * [HBASE-10995] - Fix resource leak related to unclosed HBaseAdmin
- * [HBASE-11005] - Remove dead code in HalfStoreFileReader#getScanner#seekBefore()
- * [HBASE-11009] - We sync every hbase:meta table write twice
- * [HBASE-11011] - Avoid extra getFileStatus() calls on Region startup
- * [HBASE-11012] - InputStream is not properly closed in two methods of JarFinder
- * [HBASE-11018] - ZKUtil.getChildDataAndWatchForNewChildren() will not return null as indicated
- * [HBASE-11028] - FSLog: Avoid an extra sync if the current transaction is already sync'd
- * [HBASE-11030] - HBaseTestingUtility.getMiniHBaseCluster should be able to return null
- * [HBASE-11036] - Online schema change with region merge may cause data loss
- * [HBASE-11038] - Filtered scans can bypass metrics collection
- * [HBASE-11049] - HBase WALPlayer needs to add credentials to job to play to table
- * [HBASE-11052] - Sending random data crashes thrift service
- * [HBASE-11055] - Extends the sampling size
- * [HBASE-11064] - Odd behaviors of TableName for empty namespace
- * [HBASE-11081] - Trunk Master won't start; looking for Constructor that takes conf only
- * [HBASE-11082] - Potential unclosed TraceScope in FSHLog#replaceWriter()
- * [HBASE-11096] - stop method of Master and RegionServer coprocessor is not invoked
- * [HBASE-11112] - PerformanceEvaluation should document --multiGet option on its printUsage.
- * [HBASE-11117] - [AccessController] checkAndPut/Delete hook should check only Read permission - * [HBASE-11118] - non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString" - * [HBASE-11120] - Update documentation about major compaction algorithm - * [HBASE-11133] - Add an option to skip snapshot verification after Export - * [HBASE-11139] - BoundedPriorityBlockingQueue#poll() should check the return value from awaitNanos() - * [HBASE-11143] - Improve replication metrics - * [HBASE-11149] - Wire encryption is broken - * [HBASE-11150] - Images in website are broken - * [HBASE-11155] - Fix Validation Errors in Ref Guide - * [HBASE-11162] - RegionServer webui uses the default master info port irrespective of the user configuration. - * [HBASE-11168] - [docs] Remove references to RowLocks in post 0.96 docs. - * [HBASE-11169] - nit: fix incorrect javadoc in OrderedBytes about BlobVar and BlobCopy - * [HBASE-11176] - Make /src/main/xslt/configuration_to_docbook_section.xsl produce better Docbook - * [HBASE-11177] - 'hbase.rest.filter.classes' exists in hbase-default.xml twice - * [HBASE-11185] - Parallelize Snapshot operations - * [HBASE-11186] - Improve TestExportSnapshot verifications - * [HBASE-11189] - Subprocedure should be marked as complete upon failure - * [HBASE-11190] - Fix easy typos in documentation - * [HBASE-11194] - [AccessController] issue with covering permission check in case of concurrent op on same row - * [HBASE-11196] - Update description of -ROOT- in ref guide - * [HBASE-11202] - Cleanup on HRegion class - * [HBASE-11212] - Fix increment index in KeyValueSortReducer - * [HBASE-11215] - Deprecate void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) - * [HBASE-11217] - Race between SplitLogManager task creation + TimeoutMonitor - * [HBASE-11218] - Data loss in HBase standalone mode - * [HBASE-11226] - Document and increase the default value for hbase.hstore.flusher.count - * [HBASE-11234] - FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result - * [HBASE-11237] - Bulk load initiated by user other than hbase fails - * [HBASE-11238] - Add info about SlabCache and BucketCache to Ref Guide - * [HBASE-11239] - Forgot to svn add test that was part of HBASE-11171, TestCacheConfig - * [HBASE-11248] - KeyOnlyKeyValue#toString() passes wrong offset to keyToString() - * [HBASE-11251] - LoadTestTool should grant READ permission for the users that are given READ access for specific cells - * [HBASE-11252] - Fixing new javadoc warnings in master branch - * [HBASE-11253] - IntegrationTestWithCellVisibilityLoadAndVerify failing since HBASE-10326 - * [HBASE-11255] - Negative request num in region load - * [HBASE-11260] - hbase-default.xml refers to hbase.regionserver.global.memstore.upperLimit which is deprecated - * [HBASE-11267] - Dynamic metrics2 metrics may consume large amount of heap memory - * [HBASE-11268] - HTablePool is now a deprecated class, should update docs to reflect this - * [HBASE-11273] - Fix jersey and slf4j deps - * [HBASE-11275] - [AccessController] postCreateTable hook fails when another CP creates table on their startup - * [HBASE-11277] - RPCServer threads can wedge under high load - * [HBASE-11279] - Block cache could be disabled by mistake - * [HBASE-11285] - Expand coprocs info in Ref Guide - * [HBASE-11297] - Remove some synchros in the rpcServer responder - * [HBASE-11298] - Simplification in RpcServer code - * 
[HBASE-11302] - ReplicationSourceManager#sources is not thread safe - * [HBASE-11310] - Delete's copy constructor should copy the attributes also - * [HBASE-11311] - Secure Bulk Load does not execute chmod 777 on the files - * [HBASE-11312] - Minor refactoring of TestVisibilityLabels class - * [HBASE-11320] - Reenable bucket cache logging - * [HBASE-11324] - Update 2.5.2.8. Managed Compactions - * [HBASE-11327] - ExportSnapshot hit stackoverflow error when target snapshotDir doesn't contain uri - * [HBASE-11329] - Minor fixup of new blockcache tab number formatting - * [HBASE-11335] - Fix the TABLE_DIR param in TableSnapshotInputFormat - * [HBASE-11337] - Document how to create, modify, delete a table using Java - * [HBASE-11338] - Expand documentation on bloom filters - * [HBASE-11340] - Remove references to xcievers in documentation - * [HBASE-11341] - ZKProcedureCoordinatorRpcs should respond only to members - * [HBASE-11342] - The method isChildReadLock in class ZKInterProcessLockBase is wrong - * [HBASE-11347] - For some errors, the client can retry infinitely - * [HBASE-11353] - Wrong Write Request Count - * [HBASE-11363] - Access checks in preCompact and preCompactSelection are out of sync - * [HBASE-11371] - Typo in Thrift2 docs - * [HBASE-11373] - hbase-protocol compile failed for name conflict of RegionTransition - * [HBASE-11374] - RpcRetryingCaller#callWithoutRetries has a timeout of zero - * [HBASE-11378] - TableMapReduceUtil overwrites user supplied options for multiple tables/scaners job - * [HBASE-11380] - HRegion lock object is not being released properly, leading to snapshot failure - * [HBASE-11382] - Adding unit test for HBASE-10964 (Delete mutation is not consistent with Put wrt timestamp) - * [HBASE-11387] - metrics: wrong totalRequestCount - * [HBASE-11391] - Thrift table creation will fail with default TTL with sanity checks - * [HBASE-11396] - Invalid meta entries can lead to unstartable master - * [HBASE-11397] - When merging expired stripes, we need to create an empty file to preserve metadata. 
- * [HBASE-11399] - Improve Quickstart chapter and move Pseudo-distributed and distributed to it - * [HBASE-11403] - Fix race conditions around Object#notify - * [HBASE-11413] - [findbugs] RV: Negating the result of compareTo()/compare() - * [HBASE-11418] - build target "site" doesn't respect hadoop-two.version property - * [HBASE-11422] - Specification of scope is missing for certain Hadoop dependencies - * [HBASE-11423] - Visibility label and per cell ACL feature not working with HTable#mutateRow() and MultiRowMutationEndpoint - * [HBASE-11424] - Avoid usage of CellUtil#getTagArray(Cell cell) within server - * [HBASE-11430] - lastFlushSeqId has been updated wrongly during region open - * [HBASE-11432] - [AccessController] Remove cell first strategy - * [HBASE-11433] - LruBlockCache does not respect its configurable parameters - * [HBASE-11435] - Visibility labelled cells fail to getting replicated - * [HBASE-11439] - StripeCompaction may not obey the OffPeak rule to compaction - * [HBASE-11442] - ReplicationSourceManager doesn't cleanup the queues for recovered sources - * [HBASE-11445] - TestZKProcedure#testMultiCohortWithMemberTimeoutDuringPrepare is flaky - * [HBASE-11448] - Fix javadoc warnings - * [HBASE-11449] - IntegrationTestIngestWithACL fails to use different users after HBASE-10810 - * [HBASE-11457] - Increment HFile block encoding IVs accounting for ciper's internal use - * [HBASE-11458] - NPEs if RegionServer cannot initialize - * [HBASE-11460] - Deadlock in HMaster on masterAndZKLock in HConnectionManager - * [HBASE-11463] - (findbugs) HE: Class defines equals() and uses Object.hashCode() - * [HBASE-11465] - [VisibilityController] Reserved tags check not happening for Append/Increment - * [HBASE-11475] - Distributed log replay should also replay compaction events - * [HBASE-11476] - Expand 'Conceptual View' section of Data Model chapter - * [HBASE-11477] - book.xml has Docbook validity issues (again) - * [HBASE-11481] - TableSnapshotInputFormat javadoc wrongly claims HBase "enforces security" - * [HBASE-11487] - ScanResponse carries non-zero cellblock for CloseScanRequest (ScanRequest with close_scanner = true) - * [HBASE-11488] - cancelTasks in SubprocedurePool can hang during task error - * [HBASE-11489] - ClassNotFoundException while running IT tests in trunk using 'mvn verify' - * [HBASE-11492] - Hadoop configuration overrides some ipc parameters including tcpNoDelay - * [HBASE-11493] - Autorestart option is not working because of stale znode "shutdown" - * [HBASE-11496] - HBASE-9745 broke cygwin CLASSPATH translation - * [HBASE-11502] - Track down broken images in Ref Guide - * [HBASE-11505] - 'snapshot' shell command shadows 'snapshot' shell when 'help' is invoked - * [HBASE-11506] - IntegrationTestWithCellVisibilityLoadAndVerify allow User to be passed as arg - * [HBASE-11509] - Forward port HBASE-11039 to trunk and branch-1 after HBASE-11489 - * [HBASE-11510] - Visibility serialization format tag gets duplicated in Append/Increment'ed cells - * [HBASE-11514] - Fix findbugs warnings in blockcache - * [HBASE-11517] - TestReplicaWithCluster turns zombie - * [HBASE-11518] - doc update for how to create non-shared HConnection - * [HBASE-11523] - JSON serialization of PE Options is broke - * [HBASE-11525] - Region server holding in region states is out of sync with meta - * [HBASE-11527] - Cluster free memory limit check should consider L2 block cache size also when L2 cache is onheap. 
- * [HBASE-11530] - RegionStates adds regions to wrong servers - * [HBASE-11531] - RegionStates for regions under region-in-transition znode are not updated on startup - * [HBASE-11534] - Remove broken JAVA_HOME autodetection in hbase-config.sh - * [HBASE-11535] - ReplicationPeer map is not thread safe - * [HBASE-11536] - Puts of region location to Meta may be out of order which causes inconsistent of region location - * [HBASE-11537] - Avoid synchronization on instances of ConcurrentMap - * [HBASE-11540] - Document HBASE-11474 - * [HBASE-11541] - Wrong result when scaning meta with startRow - * [HBASE-11545] - mapred.TableSnapshotInputFormat is missing InterfaceAudience annotation - * [HBASE-11550] - Custom value for BUCKET_CACHE_BUCKETS_KEY should be sorted - * [HBASE-11551] - BucketCache$WriterThread.run() doesn't handle exceptions correctly - * [HBASE-11554] - Remove Reusable poolmap Rpc client type. - * [HBASE-11555] - TableSnapshotRegionSplit should be public - * [HBASE-11558] - Caching set on Scan object gets lost when using TableMapReduceUtil in 0.95+ - * [HBASE-11561] - deprecate ImmutableBytesWritable.getSize and replace with getLength - * [HBASE-11564] - Improve cancellation management in the rpc layer - * [HBASE-11565] - Stale connection could stay for a while - * [HBASE-11575] - Pseudo distributed mode does not work as documented - * [HBASE-11579] - CopyTable should check endtime value only if != 0 - * [HBASE-11582] - Fix Javadoc warning in DataInputInputStream and CacheConfig - * [HBASE-11586] - HFile's HDFS op latency sampling code is not used - * [HBASE-11588] - RegionServerMetricsWrapperRunnable misused the 'period' parameter - * [HBASE-11589] - AccessControlException should be a not retriable exception - * [HBASE-11591] - Scanner fails to retrieve KV from bulk loaded file with highest sequence id than the cell's mvcc in a non-bulk loaded file - * [HBASE-11593] - TestCacheConfig failing consistently in precommit builds - * [HBASE-11594] - Unhandled NoNodeException in distributed log replay mode - * [HBASE-11603] - Apply version of HADOOP-8027 to our JMXJsonServlet - * [HBASE-11609] - LoadIncrementalHFiles fails if the namespace is specified - * [HBASE-11617] - incorrect AgeOfLastAppliedOp and AgeOfLastShippedOp in replication Metrics when no new replication OP - * [HBASE-11620] - Record the class name of Writer in WAL header so that only proper Reader can open the WAL file - * [HBASE-11627] - RegionSplitter's rollingSplit terminated with "/ by zero", and the _balancedSplit file was not deleted properly - * [HBASE-11632] - Region split needs to clear force split flag at the end of SplitRequest run - * [HBASE-11659] - Region state RPC call is not idempotent - * [HBASE-11662] - Launching shell with long-form --debug fails - * [HBASE-11668] - Re-add HBASE_LIBRARY_PATH to bin/hbase - * [HBASE-11671] - TestEndToEndSplitTransaction fails on master - * [HBASE-11678] - BucketCache ramCache fills heap after running a few hours - * [HBASE-11687] - No need to abort on postOpenDeployTasks exception if region opening is cancelled - * [HBASE-11703] - Meta region state could be corrupted - * [HBASE-11705] - callQueueSize should be decremented in a fail-fast scenario - * [HBASE-11708] - RegionSplitter incorrectly calculates splitcount - * [HBASE-11709] - TestMasterShutdown can fail sometime - * [HBASE-11716] - LoadTestDataGeneratorWithVisibilityLabels should handle Delete mutations - * [HBASE-11717] - Remove unused config 'hbase.offheapcache.percentage' from hbase-default.xml and book - * 
[HBASE-11718] - Remove some logs in RpcClient.java - * [HBASE-11719] - Remove some unused paths in AsyncClient - * [HBASE-11725] - Backport failover checking change to 1.0 - * [HBASE-11726] - Master should fail-safe if starting with a pre 0.96 layout - * [HBASE-11727] - Assignment wait time error in case of ServerNotRunningYetException - * [HBASE-11728] - Data loss while scanning using PREFIX_TREE DATA-BLOCK-ENCODING - * [HBASE-11733] - Avoid copy-paste in Master/Region CoprocessorHost - * [HBASE-11744] - RpcServer code should not use a collection from netty internal - * [HBASE-11745] - FilterAllFilter should return ReturnCode.SKIP - * [HBASE-11755] - VisibilityController returns the wrong value for preBalanceSwitch() - * [HBASE-11766] - Backdoor CoprocessorHConnection is no longer being used for local writes - * [HBASE-11770] - TestBlockCacheReporting.testBucketCache is not stable - * [HBASE-11772] - Bulk load mvcc and seqId issues with native hfiles - * [HBASE-11773] - Wrong field used for protobuf construction in RegionStates. - * [HBASE-11782] - Document that hbase.MetaMigrationConvertingToPB needs to be set to true for migrations pre 0.96 - * [HBASE-11787] - TestRegionLocations is not categorized - * [HBASE-11788] - hbase is not deleting the cell when a Put with a KeyValue, KeyValue.Type.Delete is submitted - * [HBASE-11789] - LoadIncrementalHFiles is not picking up the -D option - * [HBASE-11794] - StripeStoreFlusher causes NullPointerException - * [HBASE-11797] - Create Table interface to replace HTableInterface - * [HBASE-11802] - Scan copy constructor doesn't copy reversed member variable - * [HBASE-11813] - CellScanner#advance may overflow stack - * [HBASE-11814] - TestAssignmentManager.testCloseFailed() and testOpenCloseRacing() is flaky - * [HBASE-11816] - Initializing custom Metrics implementation failed in Mapper or Reducer - * [HBASE-11820] - ReplicationSource : Set replication codec class as RPC codec class on a clonned Configuration - * [HBASE-11823] - Cleanup javadoc warnings. - * [HBASE-11832] - maven release plugin overrides command line arguments - * [HBASE-11836] - IntegrationTestTimeBoundedMultiGetRequestsWithRegionReplicas tests simple get by default - * [HBASE-11839] - TestRegionRebalance is flakey - * [HBASE-11844] - region_mover.rb load enters an infinite loop if region already present on target server - * [HBASE-11851] - RpcClient can try to close a connection not ready to close - * [HBASE-11856] - hbase-common needs a log4j.properties resource for handling unit test logging output - * [HBASE-11857] - Restore ReaderBase.initAfterCompression() and WALCellCodec.create(Configuration, CompressionContext) - * [HBASE-11859] - 'hadoop jar' references in documentation should mention hbase-server.jar, not hbase.jar - * [HBASE-11863] - WAL files are not archived and stays in the WAL directory after splitting - * [HBASE-11876] - RegionScanner.nextRaw(...) 
should not update metrics - * [HBASE-11878] - TestVisibilityLabelsWithDistributedLogReplay#testAddVisibilityLabelsOnRSRestart sometimes fails due to VisibilityController not yet initialized - * [HBASE-11880] - NPE in MasterStatusServlet - * [HBASE-11882] - Row level consistency may not be maintained with bulk load and compaction - * [HBASE-11886] - The creator of the table should have all permissions on the table - * [HBASE-11887] - Memory retention in branch-1; millions of instances of LiteralByteString for column qualifier and value - * [HBASE-11892] - configs contain stale entries - * [HBASE-11893] - RowTooBigException should be in hbase-client module - * [HBASE-11896] - LoadIncrementalHFiles fails in secure mode if the namespace is specified - * [HBASE-11898] - CoprocessorHost.Environment should cache class loader instance - * [HBASE-11905] - Add orca to server UIs and update logo. - * [HBASE-11921] - Minor fixups that come of testing branch-1 - * [HBASE-11932] - Stop the html-single from building a html-single of every chapter and cluttering the docbkx directory - * [HBASE-11936] - IsolationLevel must be attribute of a Query not a Scan - * [HBASE-11946] - Get xref and API docs to build properly again - * [HBASE-11947] - NoSuchElementException in balancer for master regions - * [HBASE-11949] - Setting hfile.block.cache.size=0 doesn't actually disable blockcache - * [HBASE-11959] - TestAssignmentManagerOnCluster is flaky - * [HBASE-11972] - The "doAs user" used in the update to hbase:acl table RPC is incorrect - * [HBASE-11976] - Server startcode is not checked for bulk region assignment - * [HBASE-11984] - TestClassFinder failing on occasion - * [HBASE-11989] - IntegrationTestLoadAndVerify cannot be configured anymore on distributed mode - -** Improvement - * [HBASE-2217] - VM OPTS for shell only - * [HBASE-3270] - When we create the .version file, we should create it in a tmp location and then move it into place - * [HBASE-4163] - Create Split Strategy for YCSB Benchmark - * [HBASE-4495] - CatalogTracker has an identity crisis; needs to be cut-back in scope - * [HBASE-5349] - Automagically tweak global memstore and block cache sizes based on workload - * [HBASE-5923] - Cleanup checkAndXXX logic - * [HBASE-6626] - Add a chapter on HDFS in the troubleshooting section of the HBase reference guide. - * [HBASE-6990] - Pretty print TTL - * [HBASE-7088] - Duplicate code in RowCounter - * [HBASE-7849] - Provide administrative limits around bulkloads of files into a single region - * [HBASE-7910] - Dont use reflection for security - * [HBASE-7987] - Snapshot Manifest file instead of multiple empty files - * [HBASE-8076] - add better doc for HBaseAdmin#offline API. 
- * [HBASE-8298] - In shell, provide alias of 'desc' for 'describe' - * [HBASE-8315] - Documentation should have more information of LRU Stats - * [HBASE-8332] - Add truncate as HMaster method - * [HBASE-8495] - Change ownership of the directory to bulk load - * [HBASE-8604] - improve reporting of incorrect peer address in replication - * [HBASE-8755] - A new write thread model for HLog to improve the overall HBase write throughput - * [HBASE-8763] - Combine MVCC and SeqId - * [HBASE-8807] - HBase MapReduce Job-Launch Documentation Misplaced - * [HBASE-8970] - [book] Filter language documentation is hidden - * [HBASE-9343] - Implement stateless scanner for Stargate - * [HBASE-9345] - Add support for specifying filters in scan - * [HBASE-9426] - Make custom distributed barrier procedure pluggable - * [HBASE-9501] - Provide throttling for replication - * [HBASE-9524] - Multi row get does not return any results even if any one of the rows specified in the query is missing and improve exception handling - * [HBASE-9542] - Have Get and MultiGet do cellblocks, currently they are pb all the time - * [HBASE-9829] - make the compaction logging less confusing - * [HBASE-9857] - Blockcache prefetch option - * [HBASE-9866] - Support the mode where REST server authorizes proxy users - * [HBASE-9892] - Add info port to ServerName to support multi instances in a node - * [HBASE-9999] - Add support for small reverse scan - * [HBASE-10010] - eliminate the put latency spike on the new log file beginning - * [HBASE-10048] - Add hlog number metric in regionserver - * [HBASE-10074] - consolidate and improve capacity/sizing documentation - * [HBASE-10086] - [book] document the HBase canary tool usage in the HBase Book - * [HBASE-10116] - SlabCache metrics improvements - * [HBASE-10128] - Improve the copy table doc to include information about versions - * [HBASE-10141] - instead of putting expired store files thru compaction, just archive them - * [HBASE-10157] - Provide CP hook post log replay - * [HBASE-10164] - Allow heapsize of different units to be passed as HBASE_HEAPSIZE - * [HBASE-10173] - Need HFile version check in security coprocessors - * [HBASE-10175] - 2-thread ChaosMonkey steps on its own toes - * [HBASE-10202] - Documentation is lacking information about rolling-restart.sh script. - * [HBASE-10211] - Improve AccessControl documentation in hbase book - * [HBASE-10213] - Add read log size per second metrics for replication source - * [HBASE-10228] - Support setCellVisibility and setAuthorizations in Shell - * [HBASE-10229] - Support OperationAttributes in Increment and Append in Shell - * [HBASE-10239] - Improve determinism and debugability of TestAccessController - * [HBASE-10252] - Don't write back to WAL/memstore when Increment amount is zero (mostly for query rather than update intention) - * [HBASE-10263] - make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block - * [HBASE-10265] - Upgrade to commons-logging 1.1.3 - * [HBASE-10277] - refactor AsyncProcess - * [HBASE-10289] - Avoid random port usage by default JMX Server. 
Create Custome JMX server - * [HBASE-10323] - Auto detect data block encoding in HFileOutputFormat - * [HBASE-10324] - refactor deferred-log-flush/Durability related interface/code/naming to align with changed semantic of the new write thread model - * [HBASE-10331] - Insure security tests use SecureTestUtil methods for grants - * [HBASE-10344] - Improve write performance by ignoring sync to hdfs when an asyncer's writes have been synced by other asyncer - * [HBASE-10346] - Add Documentation for stateless scanner - * [HBASE-10368] - Add Mutation.setWriteToWAL() back to 0.98 - * [HBASE-10373] - Add more details info for ACL group in HBase book - * [HBASE-10389] - Add namespace help info in table related shell commands - * [HBASE-10395] - endTime won't be set in VerifyReplication if startTime is not set - * [HBASE-10419] - Add multiget support to PerformanceEvaluation - * [HBASE-10423] - Report back the message of split or rollback failure to the master - * [HBASE-10427] - clean up HRegionLocation/ServerName usage - * [HBASE-10430] - Support compressTags in shell for enabling tag encoding - * [HBASE-10471] - Remove HTD.isAsyncLogFlush() from trunk - * [HBASE-10479] - HConnection interface is public but is used internally, and contains a bunch of methods - * [HBASE-10487] - Avoid allocating new KeyValue and according bytes-copying for appended kvs which don't have existing values - * [HBASE-10498] - Add new APIs to load balancer interface - * [HBASE-10511] - Add latency percentiles on PerformanceEvaluation - * [HBASE-10518] - DirectMemoryUtils.getDirectMemoryUsage spams when none is configured - * [HBASE-10569] - Co-locate meta and master - * [HBASE-10570] - Allow overrides of Surefire secondPartForkMode and testFailureIgnore - * [HBASE-10589] - Reduce unnecessary TestRowProcessorEndpoint resource usage - * [HBASE-10590] - Update contents about tracing in the Reference Guide - * [HBASE-10591] - Sanity check table configuration in createTable - * [HBASE-10592] - Refactor PerformanceEvaluation tool - * [HBASE-10597] - IOEngine#read() should return the number of bytes transferred - * [HBASE-10599] - Replace System.currentMillis() with EnvironmentEdge.currentTimeMillis in memstore flusher and related places - * [HBASE-10603] - Deprecate RegionSplitter CLI tool - * [HBASE-10615] - Make LoadIncrementalHFiles skip reference files - * [HBASE-10638] - Improve error message when there is no region server available for move - * [HBASE-10641] - Configurable Bucket Sizes in bucketCache - * [HBASE-10663] - Some code cleanup of class Leases and ScannerListener.leaseExpired - * [HBASE-10678] - Make verifyrep tool implement toolrunner - * [HBASE-10690] - Drop Hadoop-1 support - * [HBASE-10693] - Correct declarations of Atomic* fields from 'volatile' to 'final' - * [HBASE-10744] - AM#CloseRegion no need to retry on FailedServerException - * [HBASE-10746] - Bump the version of HTrace to 3.0 - * [HBASE-10752] - Port HBASE-10270 'Remove DataBlockEncoding from BlockCacheKey' to trunk - * [HBASE-10769] - hbase/bin/hbase-cleanup.sh has wrong usage string - * [HBASE-10771] - Primitive type put/get APIs in ByteRange - * [HBASE-10785] - Metas own location should be cached - * [HBASE-10788] - Add 99th percentile of latency in PE - * [HBASE-10797] - Add support for -h and --help to rolling_restart.sh and fix the usage string output - * [HBASE-10813] - Possible over-catch of exceptions - * [HBASE-10823] - Resolve LATEST_TIMESTAMP to current server time before scanning for ACLs - * [HBASE-10835] - DBE encode path 
improvements - * [HBASE-10842] - Some loggers not declared static final - * [HBASE-10861] - Extend ByteRange to create Mutable and Immutable ByteRange - * [HBASE-10871] - Indefinite OPEN/CLOSE wait on busy RegionServers - * [HBASE-10873] - Control number of regions assigned to backup masters - * [HBASE-10883] - Restrict the universe of labels and authorizations - * [HBASE-10884] - [REST] Do not disable block caching when scanning - * [HBASE-10885] - Support visibility expressions on Deletes - * [HBASE-10887] - tidy ThriftUtilities format - * [HBASE-10892] - [Shell] Add support for globs in user_permission - * [HBASE-10902] - Make Secure Bulk Load work across remote secure clusters - * [HBASE-10911] - ServerShutdownHandler#toString shows meaningless message - * [HBASE-10916] - [VisibilityController] Stackable ScanLabelGenerators - * [HBASE-10923] - Control where to put meta region - * [HBASE-10925] - Do not OOME, throw RowTooBigException instead - * [HBASE-10926] - Use global procedure to flush table memstore cache - * [HBASE-10934] - Provide Admin interface to abstract HBaseAdmin - * [HBASE-10950] - Add a configuration point for MaxVersion of Column Family - * [HBASE-10951] - Use PBKDF2 to generate test encryption keys in the shell - * [HBASE-10952] - [REST] Let the user turn off block caching if desired - * [HBASE-10960] - Enhance HBase Thrift 1 to include "append" and "checkAndPut" operations - * [HBASE-10984] - Add description about setting up htrace-zipkin to documentation - * [HBASE-11000] - Add autoflush option to PerformanceEvaluation - * [HBASE-11001] - Shell support for granting cell permissions for testing - * [HBASE-11002] - Shell support for changing cell visibility for testing - * [HBASE-11004] - Extend traces through FSHLog#sync - * [HBASE-11007] - BLOCKCACHE in schema descriptor seems not aptly named - * [HBASE-11008] - Align bulk load, flush, and compact to require Action.CREATE - * [HBASE-11026] - Provide option to filter out all rows in PerformanceEvaluation tool - * [HBASE-11044] - [Shell] Show groups for user in 'whoami' output - * [HBASE-11047] - Remove TimeoutMontior - * [HBASE-11048] - Support setting custom priority per client RPC - * [HBASE-11068] - Update code to use Admin factory method instead of constructor - * [HBASE-11074] - Have PE emit histogram stats as it runs rather than dump once at end of test - * [HBASE-11083] - ExportSnapshot should provide capability to limit bandwidth consumption - * [HBASE-11086] - Add htrace support for PerfEval - * [HBASE-11119] - Update ExportSnapShot to optionally not use a tmp file on external file system - * [HBASE-11123] - Upgrade instructions from 0.94 to 0.98 - * [HBASE-11126] - Add RegionObserver pre hooks that operate under row lock - * [HBASE-11128] - Add -target option to ExportSnapshot to export with a different name - * [HBASE-11134] - Add a -list-snapshots option to SnapshotInfo - * [HBASE-11136] - Add permission check to roll WAL writer - * [HBASE-11137] - Add mapred.TableSnapshotInputFormat - * [HBASE-11151] - move tracing modules from hbase-server to hbase-common - * [HBASE-11167] - Avoid usage of java.rmi package Exception in MemStore - * [HBASE-11201] - Enable global procedure members to return values to procedure master - * [HBASE-11211] - LoadTestTool option for specifying number of regions per server - * [HBASE-11219] - HRegionServer#createRegionLoad() should reuse RegionLoad.Builder instance when called in a loop - * [HBASE-11220] - Add listeners to ServerManager and AssignmentManager - * [HBASE-11240] - 
Print hdfs pipeline when hlog's sync is slow - * [HBASE-11259] - Compression.java different compressions load system classpath differently causing errors - * [HBASE-11304] - Enable HBaseAdmin.execProcedure to return data from procedure execution - * [HBASE-11305] - Remove bunch of unused imports in HConnectionManager - * [HBASE-11315] - Keeping MVCC for configurable longer time - * [HBASE-11319] - No need to use favored node mapping initialization to find all regions - * [HBASE-11326] - Use an InputFormat for ExportSnapshot - * [HBASE-11331] - [blockcache] lazy block decompression - * [HBASE-11344] - Hide row keys and such from the web UIs - * [HBASE-11348] - Make frequency and sleep times of chaos monkeys configurable - * [HBASE-11349] - [Thrift] support authentication/impersonation - * [HBASE-11350] - [PE] Allow random value size - * [HBASE-11355] - a couple of callQueue related improvements - * [HBASE-11362] - Minor improvements to LoadTestTool and PerformanceEvaluation - * [HBASE-11370] - SSH doesn't need to scan meta if not using ZK for assignment - * [HBASE-11376] - Presplit table in IntegrationTestBigLinkedList's Generator tool - * [HBASE-11390] - PerformanceEvaluation: add an option to use a single connection - * [HBASE-11398] - Print the stripes' state with file size info - * [HBASE-11407] - hbase-client should not require Jackson for pure HBase queries be executed - * [HBASE-11415] - [PE] Dump config before running test - * [HBASE-11421] - HTableInterface javadoc correction - * [HBASE-11434] - [AccessController] Disallow inbound cells with reserved tags - * [HBASE-11436] - Support start Row and stop Row in HBase Export - * [HBASE-11437] - Modify cell tag handling code to treat the length as unsigned - * [HBASE-11438] - [Visibility Controller] Support UTF8 character as Visibility Labels - * [HBASE-11440] - Make KeyValueCodecWithTags as the default codec for replication in trunk - * [HBASE-11444] - Remove use of reflection for User#getShortName - * [HBASE-11446] - Reduce the frequency of RNG calls in SecureWALCellCodec#EncryptedKvEncoder - * [HBASE-11450] - Improve file size info in SnapshotInfo tool - * [HBASE-11452] - add getUserPermission feature in AccessControlClient as client API - * [HBASE-11473] - Add BaseWALObserver class - * [HBASE-11474] - [Thrift2] support authentication/impersonation - * [HBASE-11491] - Add an option to sleep randomly during the tests with the PE tool - * [HBASE-11497] - Expose RpcScheduling implementations as LimitedPrivate interfaces - * [HBASE-11513] - Combine SingleMultiple Queue RpcExecutor into a single class - * [HBASE-11516] - Track time spent in executing coprocessors in each region. - * [HBASE-11553] - Abstract visibility label related services into an interface - * [HBASE-11566] - make ExportSnapshot extendable by removing 'final' - * [HBASE-11583] - Refactoring out the configuration changes for enabling VisibilityLabels in the unit tests. - * [HBASE-11623] - mutateRowsWithLocks might require updatesLock.readLock with waitTime=0 - * [HBASE-11630] - Refactor TestAdmin to use Admin interface instead of HBaseAdmin - * [HBASE-11631] - Wait a little till server is online in assigning meta - * [HBASE-11649] - Add shortcut commands to bin/hbase for test tools - * [HBASE-11650] - Write hbase.id to a temporary location and move into place - * [HBASE-11657] - Put HTable region methods in an interface - * [HBASE-11664] - Build broken - TestVisibilityWithCheckAuths - * [HBASE-11667] - Comment ClientScanner logic for NSREs. 
- * [HBASE-11674] - LoadIncrementalHFiles should be more verbose after unrecoverable error - * [HBASE-11679] - Replace "HTable" with "HTableInterface" where backwards-compatible - * [HBASE-11696] - Make CombinedBlockCache resizable. - * [HBASE-11697] - Improve the 'Too many blocks' message on UI blockcache status page - * [HBASE-11701] - Start and end of memstore flush log should be on the same level - * [HBASE-11702] - Better introspection of long running compactions - * [HBASE-11706] - Set versions for VerifyReplication - * [HBASE-11731] - Add option to only run a subset of the shell tests - * [HBASE-11748] - Cleanup and add pool usage tracing to Compression - * [HBASE-11749] - Better error logging when coprocessors loading has failed. - * [HBASE-11754] - [Shell] Record table property SPLITS_FILE in descriptor - * [HBASE-11757] - Provide a common base abstract class for both RegionObserver and MasterObserver - * [HBASE-11774] - Avoid allocating unnecessary tag iterators - * [HBASE-11777] - Find a way to set sequenceId on Cells on the server - * [HBASE-11790] - Bulk load should use HFileOutputFormat2 in all cases - * [HBASE-11805] - KeyValue to Cell Convert in WALEdit APIs - * [HBASE-11810] - Access SSL Passwords through Credential Provider API - * [HBASE-11821] - [ImportTSV] Abstract labels tags creation into pluggable Interface - * [HBASE-11825] - Create Connection and ConnectionManager - * [HBASE-11826] - Split each tableOrRegionName admin methods into two targetted methods - * [HBASE-11828] - callers of SeverName.valueOf should use equals and not == - * [HBASE-11845] - HFile tool should implement Tool, disable blockcache by default - * [HBASE-11846] - HStore#assertBulkLoadHFileOk should log if a full HFile verification will be performed during a bulkload - * [HBASE-11847] - HFile tool should be able to print block headers - * [HBASE-11865] - Result implements CellScannable; rather it should BE a CellScanner - * [HBASE-11873] - Hbase Version CLI enhancement - * [HBASE-11877] - Make TableSplit more readable - * [HBASE-11891] - Introduce HBaseInterfaceAudience level to denote class names that appear in configs. - * [HBASE-11897] - Add append and remove peer table-cfs cmds for replication - -** New Feature - * [HBASE-4089] - blockCache contents report - * [HBASE-6104] - Require EXEC permission to call coprocessor endpoints - * [HBASE-7667] - Support stripe compaction - * [HBASE-7840] - Enhance the java it framework to start & stop a distributed hbase & hadoop cluster - * [HBASE-8751] - Enable peer cluster to choose/change the ColumnFamilies/Tables it really want to replicate from a source cluster - * [HBASE-9047] - Tool to handle finishing replication when the cluster is offline - * [HBASE-10119] - Allow HBase coprocessors to clean up when they fail - * [HBASE-10151] - No-op HeapMemoryTuner - * [HBASE-10416] - Improvements to the import flow - * [HBASE-10881] - Support reverse scan in thrift2 - * [HBASE-10935] - support snapshot policy where flush memstore can be skipped to prevent production cluster freeze - * [HBASE-11724] - Add to RWQueueRpcExecutor the ability to split get and scan handlers - * [HBASE-11885] - Provide a Dockerfile to easily build and run HBase from source - * [HBASE-11909] - Region count listed by HMaster UI and hbck are different - -** Task - * [HBASE-4456] - [doc] Add a section about RS failover - * [HBASE-4920] - We need a mascot, a totem - * [HBASE-5697] - Audit HBase for usage of deprecated hadoop 0.20.x property names. 
- * [HBASE-6139] - Add troubleshooting section for CentOS 6.2 page allocation failure issue - * [HBASE-6192] - Document ACL matrix in the book - * [HBASE-7394] - Document security config requirements from HBASE-7357 - * [HBASE-8035] - Add site target check to precommit tests - * [HBASE-8844] - Document the removal of replication state AKA start/stop_replication - * [HBASE-9580] - Document the meaning of @InterfaceAudience in hbase ref guide - * [HBASE-9733] - Book should have individual Disqus comment per page - * [HBASE-9875] - NamespaceJanitor chore is not used - * [HBASE-10134] - Fix findbug warning in VisibilityController - * [HBASE-10159] - Replaced deprecated interface Closeable - * [HBASE-10206] - Explain tags in the hbase book - * [HBASE-10246] - Wrap long lines in recently added source files - * [HBASE-10364] - Allow configuration option for parent znode in LoadTestTool - * [HBASE-10388] - Add export control notice in README - * [HBASE-10439] - Document how to configure REST server impersonation - * [HBASE-10473] - Add utility for adorning http Context - * [HBASE-10601] - Upgrade hadoop dependency to 2.3.0 release - * [HBASE-10609] - Remove filterKeyValue(Cell ignored) from FilterBase - * [HBASE-10612] - Remove unnecessary dependency on org.eclipse.jdt:core - * [HBASE-10670] - HBaseFsck#connect() should use new connection - * [HBASE-10700] - IntegrationTestWithCellVisibilityLoadAndVerify should allow current user to be the admin - * [HBASE-10740] - Upgrade zookeeper to 3.4.6 release - * [HBASE-10786] - If snapshot verification fails with 'Regions moved', the message should contain the name of region causing the failure - * [HBASE-10787] - TestHCM#testConnection* take too long - * [HBASE-10821] - Make ColumnInterpreter#getValue() abstract - * [HBASE-10824] - Enhance detection of protobuf generated code in line length check - * [HBASE-10889] - test-patch.sh should exclude thrift generated code from long line detection - * [HBASE-10906] - Change error log for NamingException in TableInputFormatBase to WARN level - * [HBASE-10912] - setUp / tearDown in TestSCVFWithMiniCluster should be done once per run - * [HBASE-10956] - Upgrade hadoop-2 dependency to 2.4.0 - * [HBASE-11016] - Remove Filter#filterRow(List) - * [HBASE-11032] - Replace deprecated methods in FileSystem with their replacements - * [HBASE-11050] - Replace empty catch block in TestHLog#testFailedToCreateHLogIfParentRenamed with @Test(expected=) - * [HBASE-11076] - Update refguide on getting 0.94.x to run on hadoop 2.2.0+ - * [HBASE-11090] - Backport HBASE-11083 ExportSnapshot should provide capability to limit bandwidth consumption - * [HBASE-11107] - Provide utility method equivalent to 0.92's Result.getBytes().getSize() - * [HBASE-11154] - Document how to use Reverse Scan API - * [HBASE-11199] - One-time effort to pretty-print the Docbook XML, to make further patch review easier - * [HBASE-11203] - Clean up javadoc and findbugs warnings in trunk - * [HBASE-11204] - Document bandwidth consumption limit feature for ExportSnapshot - * [HBASE-11227] - Mention 8- and 16-bit fixed-with encodings in OrderedBytes docstring - * [HBASE-11230] - Remove getRowOrBefore from HTableInterface and HTable - * [HBASE-11317] - Expand unit testing to cover Mockito and MRUnit and give more examples - * [HBASE-11364] - [BlockCache] Add a flag to cache data blocks in L1 if multi-tier cache - * [HBASE-11600] - DataInputputStream and DoubleOutputStream are no longer being used - * [HBASE-11604] - Disable co-locating meta/master by default - * 
[HBASE-11621] - Make MiniDFSCluster run faster - * [HBASE-11666] - Enforce JDK7 javac for builds on branch-1 and master - * [HBASE-11682] - Explain hotspotting - * [HBASE-11723] - Document all options of bin/hbase command - * [HBASE-11735] - Document Configurable Bucket Sizes in bucketCache - * [HBASE-11762] - Record the class name of Codec in WAL header - * [HBASE-11800] - Coprocessor service methods in HTableInterface should be annotated public - * [HBASE-11849] - Clean up orphaned private audience classes - * [HBASE-11858] - Audit regionserver classes that are missing InterfaceAudience - -** Test - * [HBASE-8889] - TestIOFencing#testFencingAroundCompaction occasionally fails - * [HBASE-9928] - TestHRegion should clean up test-data directory upon completion - * [HBASE-9953] - PerformanceEvaluation: Decouple data size from client concurrency - * [HBASE-10044] - test-patch.sh should accept documents by known file extensions - * [HBASE-10130] - TestSplitLogManager#testTaskResigned fails sometimes - * [HBASE-10180] - TestByteBufferIOEngine#testByteBufferIOEngine occasionally fails - * [HBASE-10189] - Intermittent TestReplicationSyncUpTool failure - * [HBASE-10301] - TestAssignmentManagerOnCluster#testOpenCloseRacing fails intermittently - * [HBASE-10377] - Add test for HBASE-10370 Compaction in out-of-date Store causes region split failure - * [HBASE-10394] - Test for Replication with tags - * [HBASE-10406] - Column family option is not effective in IntegrationTestSendTraceRequests - * [HBASE-10408] - Intermittent TestDistributedLogSplitting#testLogReplayForDisablingTable failure - * [HBASE-10440] - integration tests fail due to nonce collisions - * [HBASE-10465] - TestZKPermissionsWatcher.testPermissionsWatcher fails sometimes - * [HBASE-10475] - TestRegionServerCoprocessorExceptionWithAbort may timeout due to concurrent lease removal - * [HBASE-10480] - TestLogRollPeriod#testWithEdits may fail due to insufficient waiting - * [HBASE-10543] - Two rare test failures with TestLogsCleaner and TestSplitLogWorker - * [HBASE-10635] - thrift#TestThriftServer fails due to TTL validity check - * [HBASE-10649] - TestMasterMetrics fails occasionally - * [HBASE-10764] - TestLoadIncrementalHFilesSplitRecovery#testBulkLoadPhaseFailure takes too long - * [HBASE-10767] - Load balancer may interfere with tests in TestHBaseFsck - * [HBASE-10774] - Restore TestMultiTableInputFormat - * [HBASE-10782] - Hadoop2 MR tests fail occasionally because of mapreduce.jobhistory.address is no set in job conf - * [HBASE-10828] - TestRegionObserverInterface#testHBase3583 should wait for all regions to be assigned - * [HBASE-10852] - TestDistributedLogSplitting#testDisallowWritesInRecovering occasionally fails - * [HBASE-10867] - TestRegionPlacement#testRegionPlacement occasionally fails - * [HBASE-10868] - TestAtomicOperation should close HRegion instance after each subtest - * [HBASE-10988] - Properly wait for server in TestThriftServerCmdLine - * [HBASE-11010] - TestChangingEncoding is unnecessarily slow - * [HBASE-11019] - incCount() method should be properly stubbed in HConnectionTestingUtility#getMockedConnectionAndDecorate() - * [HBASE-11037] - Race condition in TestZKBasedOpenCloseRegion - * [HBASE-11051] - checkJavacWarnings in test-patch.sh should bail out early if there is compilation error - * [HBASE-11057] - Improve TestShell coverage of grant and revoke comamnds - * [HBASE-11104] - IntegrationTestImportTsv#testRunFromOutputCommitter misses credential initialization - * [HBASE-11152] - test-patch.sh should be 
able to handle the case where $TERM is not defined - * [HBASE-11166] - Categorize tests in hbase-prefix-tree module - * [HBASE-11328] - testMoveRegion could fail - * [HBASE-11345] - Add an option not to restore cluster after an IT test - * [HBASE-11375] - Validate compile-protobuf profile in test-patch.sh - * [HBASE-11404] - TestLogLevel should stop the server at the end - * [HBASE-11443] - TestIOFencing#testFencingAroundCompactionAfterWALSync times out - * [HBASE-11615] - TestZKLessAMOnCluster.testForceAssignWhileClosing failed on Jenkins - * [HBASE-11713] - Adding hbase shell unit test coverage for visibility labels. - * [HBASE-11918] - TestVisibilityLabelsWithDistributedLogReplay#testAddVisibilityLabelsOnRSRestart sometimes fails due to VisibilityController initialization not being recognized - * [HBASE-11942] - Fix TestHRegionBusyWait - * [HBASE-11966] - Minor error in TestHRegion.testCheckAndMutate_WithCorrectValue() - -** Umbrella - * [HBASE-7319] - Extend Cell usage through read path - * [HBASE-9945] - Coprocessor loading and execution improvements - * [HBASE-10909] - Abstract out ZooKeeper usage in HBase - phase 1 +Release 0.92.1 - Unreleased + BUG FIXES + HBASE-5176 AssignmentManager#getRegion: logging nit adds a redundant '+' (Karthik K) + HBASE-5237 Addendum for HBASE-5160 and HBASE-4397 (Ram) + HBASE-5235 HLogSplitter writer thread's streams not getting closed when any + of the writer threads has exceptions. (Ram) + HBASE-5243 LogSyncerThread not getting shutdown waiting for the interrupted flag (Ram) + HBASE-5255 Use singletons for OperationStatus to save memory (Benoit) + HBASE-5345 CheckAndPut doesn't work when value is empty byte[] (Evert Arckens) + HBASE-5466 Opening a table also opens the metatable and never closes it + (Ashley Taylor) + + TESTS + HBASE-5223 TestMetaReaderEditor is missing call to CatalogTracker.stop() + +Release 0.92.0 - 01/23/2012 + INCOMPATIBLE CHANGES + HBASE-2002 Coprocessors: Client side support; Support RPC interface + changes at runtime (Gary Helmling via Andrew Purtell) + HBASE-3677 Generate a globally unique cluster ID (changed + ClusterStatus serialization) + HBASE-3762 HTableFactory.releaseHTableInterface() should throw IOException + instead of wrapping in RuntimeException (Ted Yu via garyh) + HBASE-3629 Update our thrift to 0.6 (Moaz Reyad) + HBASE-1502 Remove need for heartbeats in HBase + HBASE-451 Remove HTableDescriptor from HRegionInfo (Subbu M Iyer) + HBASE-451 Remove HTableDescriptor from HRegionInfo + addendum that fixes TestTableMapReduce + HBASE-3534 Action should not store or serialize regionName (Ted Yu) + HBASE-4197 RegionServer expects all scanner to be subclasses of + HRegion.RegionScanner (Lars Hofhansl) + HBASE-4233 Update protobuf dependency to 2.4.0a (todd) + HBASE-4299 Update to Avro 1.5.3 and use Avro Maven plugin to generate + Avro classes. 
(Alejandro Abdelnur) + HBASE-4369 Deprecate HConnection#getZookeeperWatcher in prep for HBASE-1762 + HBASE-4247 Add isAborted method to the Abortable interface + (Akash Ashok) + HBASE-4503 Purge deprecated HBaseClusterTestCase + HBASE-4374 Up default regions size from 256M to 1G + HBASE-4648 Bytes.toBigDecimal() doesn't use offset (Bryan Keller via Lars H) + HBASE-4715 Remove stale broke .rb scripts from bin dir + HBASE-3433 Remove the KV copy of every KV in Scan; introduced by HBASE-3232 (Lars H) + HBASE-5017 Bump the default hfile.block.cache.size because of HFileV2 + + BUG FIXES + HBASE-3280 YouAreDeadException being swallowed in HRS getMaster + HBASE-3282 Need to retain DeadServers to ensure we don't allow + previously expired RS instances to rejoin cluster + HBASE-3283 NPE in AssignmentManager if processing shutdown of RS who + doesn't have any regions assigned to it + HBASE-3173 HBase 2984 breaks ability to specify BLOOMFILTER & + COMPRESSION via shell + HBASE-3310 Failing creating/altering table with compression agrument from + the HBase shell (Igor Ranitovic via Stack) + HBASE-3317 Javadoc and Throws Declaration for Bytes.incrementBytes() is + Wrong (Ed Kohlwey via Stack) + HBASE-1888 KeyValue methods throw NullPointerException instead of + IllegalArgumentException during parameter sanity check + HBASE-3337 Restore HBCK fix of unassignment and dupe assignment for new + master + HBASE-3332 Regions stuck in transition after RS failure + HBASE-3418 Increment operations can break when qualifiers are split + between memstore/snapshot and storefiles + HBASE-3403 Region orphaned after failure during split + HBASE-3492 NPE while splitting table with empty column family store + HBASE-3400 Coprocessor Support for Generic Interfaces + (Ed Kohlwey via Gary Helmling) + HBASE-3552 Coprocessors are unable to load if RegionServer is launched + using a different classloader than system default + HBASE-3578 TableInputFormat does not setup the configuration for HBase + mapreduce jobs correctly (Dan Harvey via Stack) + HBASE-3601 TestMasterFailover broken in TRUNK + HBASE-3605 Fix balancer log message + HBASE-3538 Column families allow to have slashes in name (Ian Knome via Stack) + HBASE-3313 Table name isn't checked in isTableEnabled/isTableDisabled + (Ted Yu via Stack) + HBASE-3514 Speedup HFile.Writer append (Matteo Bertozzi via Ryan) + HBASE-3665 tighten assertions for testBloomFilterSize + HBASE-3662 REST server does not respect client supplied max versions when + creating scanner + HBASE-3641 LruBlockCache.CacheStats.getHitCount() is not using the + correct variable + HBASE-3532 HRegion#equals is broken (Ted Yu via Stack) + HBASE-3697 Admin actions that use MetaReader to iterate regions need to + skip offline ones + HBASE-3583 Coprocessors: scannerNext and scannerClose hooks are called + when HRegionInterface#get is invoked (Mingjie Lai via + Andrew Purtell) + HBASE-3688 Setters of class HTableDescriptor do not work properly + HBASE-3702 Fix NPE in Exec method parameter serialization + HBASE-3709 HFile compression not sharing configuration + HBASE-3711 importtsv fails if rowkey length exceeds MAX_ROW_LENGTH + (Kazuki Ohta via todd) + HBASE-3716 Intermittent TestRegionRebalancing failure + (Ted Yu via Stack) + HBASE-3712 HTable.close() doesn't shutdown thread pool + (Ted Yu via Stack) + HBASE-3238 HBase needs to have the CREATE permission on the parent of its + ZooKeeper parent znode (Alex Newman via Stack) + HBASE-3728 NPE in HTablePool.closeTablePool (Ted Yu via Stack) + HBASE-3733 
MemStoreFlusher.flushOneForGlobalPressure() shouldn't + be using TreeSet for HRegion (Ted Yu via J-D) + HBASE-3739 HMaster.getProtocolVersion() should distinguish + HMasterInterface and HMasterRegionInterface versions + HBASE-3723 Major compact should be done when there is only one storefile + and some keyvalue is outdated (Zhou Shuaifeng via Stack) + HBASE-3624 Only one coprocessor of each priority can be loaded for a table + HBASE-3598 Broken formatting in LRU stats output (Erik Onnen) + HBASE-3758 Delete triggers pre/postScannerOpen upcalls of RegionObserver + (Mingjie Lai via garyh) + HBASE-3790 Fix NPE in ExecResult.write() with null return value + HBASE-3781 hbase shell cannot start "NoMethodError: undefined method + `close' for nil:NilClass" (Mikael Sitruk) + HBASE-3802 Redundant list creation in HRegion + HBASE-3788 Two error handlings in AssignmentManager.setOfflineInZooKeeper() + (Ted Yu) + HBASE-3800 HMaster is not able to start due to AlreadyCreatedException + HBASE-3806 distributed log splitting double escapes task names + (Prakash Khemani) + HBASE-3819 TestSplitLogWorker has too many SLWs running -- makes for + contention and occasional failures + HBASE-3210 HBASE-1921 for the new master + HBASE-3827 hbase-1502, removing heartbeats, broke master joining a running + cluster and was returning master hostname for rs to use + HBASE-3829 TestMasterFailover failures in jenkins + HBASE-3843 splitLogWorker starts too early (Prakash Khemani) + HBASE-3838 RegionCoprocesorHost.preWALRestore throws npe in case there is + no RegionObserver registered (Himanshu Vashishtha) + HBASE-3847 Turn off DEBUG logging of RPCs in WriteableRPCEngine on TRUNK + HBASE-3777 Redefine Identity Of HBase Configuration (Karthick Sankarachary) + HBASE-3849 Fix master ui; hbase-1502 broke requests/second + HBASE-3853 Fix TestInfoServers to pass after HBASE-3835 (todd) + HBASE-3862 Race conditions in aggregate calculation (John Heitmann) + HBASE-3865 Failing TestWALReplay + HBASE-3864 Rename of hfile.min.blocksize.size in HBASE-2899 reverted in + HBASE-1861 (Aaron T. 
Myers) + HBASE-3876 TestCoprocessorInterface.testCoprocessorInterface broke on + jenkins and local + HBASE-3897 Docs (notsoquick guide) suggest invalid XML (Philip Zeyliger) + HBASE-3898 TestSplitTransactionOnCluster broke in TRUNK + HBASE-3826 Minor compaction needs to check if still over + compactionThreshold after compacting (Nicolas Spiegelberg) + HBASE-3912 [Stargate] Columns not handle by Scan + HBASE-3903 A successful write to client write-buffer may be lost or not + visible (Doug Meil) + HBASE-3894 Thread contention over row locks set monitor (Dave Latham) + HBASE-3959 hadoop-snappy version in the pom.xml is incorrect + (Alejandro Abdelnur) + HBASE-3971 Compression.java uses ClassLoader.getSystemClassLoader() + to load codec (Alejandro Abdelnur) + HBASE-3979 Trivial fixes in code, document (Ming Ma) + HBASE-3794 Ability to Discard Bad HTable Puts + HBASE-3923 HBASE-1502 Broke Shell's status 'simple' and 'detailed' + HBASE-3978 Rowlock lease renew doesn't work when custom coprocessor + indicates to bypass default action (Ming Ma) + HBASE-3963 Schedule all log-spliiting at startup all at once (mingjian) + HBASE-3983 list command in shell seems broken + HBASE-3793 HBASE-3468 Broke checkAndPut with null value (Ming Ma) + HBASE-3889 NPE in Distributed Log Splitting (Anirudh Todi) + HBASE-4000 You can't specify split points when you create a table in + the shell (Joey Echeverria) + HBASE-4029 Inappropriate checking of Logging Mode in HRegionServer + (Akash Ashok via Ted Yu) + HBASE-4037 Add timeout annotations to preempt surefire killing + all tests + HBASE-4024 Major compaction may not be triggered, even though region + server log says it is triggered (Ted Yu) + HBASE-4016 HRegion.incrementColumnValue() doesn't have a consistent + behavior when the field that we are incrementing is less + than 8 bytes long (Li Pi) + HBASE-4012 Further optimize byte comparison methods (Ted Yu) + HBASE-4037 Add timeout annotations to preempt surefire killing + all tests - TestFullLogReconstruction + HBASE-4051 [Coprocessors] Table coprocessor loaded twice when region is + initialized + HBASE-4059 If a region is split during RS shutdown process, the daughter + regions are NOT made online by master + HBASE-3904 HBA.createTable(final HTableDescriptor desc, byte [][] splitKeys) + should be synchronous + HBASE-4053 Most of the regions were added into AssignmentManager#servers twice + HBASE-4061 getTableDirs is missing directories to skip + HBASE-3867 when cluster is stopped and server which hosted meta region is + removed from cluster, master breaks down after restarting cluster. 
+ HBASE-4074 When a RS has hostname with uppercase letter, there are two + RS entries in master (Weihua via Ted Yu) + HBASE-4077 Deadlock if WrongRegionException is thrown from getLock in + HRegion.delete (Adam Warrington via Ted Yu) + HBASE-3893 HRegion.internalObtainRowLock shouldn't wait forever + HBASE-4075 A bug in TestZKBasedOpenCloseRegion (Jieshan Bean via Ted Yu) + HBASE-4087 HBaseAdmin should perform validation of connection it holds + HBASE-4052 Enabling a table after master switch does not allow table scan, + throwing NotServingRegionException (ramkrishna via Ted Yu) + HBASE-4112 Creating table may throw NullPointerException (Jinchao via Ted Yu) + HBASE-4093 When verifyAndAssignRoot throws exception, the deadServers state + cannot be changed (fulin wang via Ted Yu) + HBASE-4118 method regionserver.MemStore#updateColumnValue: the check for + qualifier and family is missing (N Keywal via Ted Yu) + HBASE-4127 Don't modify table's name away in HBaseAdmin + HBASE-4105 Stargate does not support Content-Type: application/json and + Content-Encoding: gzip in parallel + HBASE-4116 [stargate] StringIndexOutOfBoundsException in row spec parse + (Allan Yan) + HBASE-3845 data loss because lastSeqWritten can miss memstore edits + (Prakash Khemani and ramkrishna.s.vasudevan) + HBASE-4083 If Enable table is not completed and is partial, then scanning of + the table is not working (ramkrishna.s.vasudevan) + HBASE-4138 If zookeeper.znode.parent is not specified explicitly in Client + code then HTable object loops continuously waiting for the root region + by using /hbase as the base node. (ramkrishna.s.vasudevan) + HBASE-4032 HBASE-451 improperly breaks public API HRegionInfo#getTableDesc + HBASE-4003 Cleanup Calls Conservatively On Timeout (Karthick) + HBASE-3857 Fix TestHFileBlock.testBlockHeapSize test failure (Mikhail) + HBASE-4150 Don't enforce pool size limit with ThreadLocalPool + (Karthick Sankarachary via garyh) + HBASE-4171 HBase shell broken in trunk (Lars Hofhansl) + HBASE-4162 Fix TestHRegionInfo.testGetSetOfHTD: delete /tmp/hbase- + if it already exists (Mikhail Bautin) + HBASE-4179 Failed to run RowCounter on top of Hadoop branch-0.22 + (Michael Weng) + HBASE-4181 HConnectionManager can't find cached HRegionInterface and makes clients + work very slow (Jia Liu) + HBASE-4156 ZKConfig defaults clientPort improperly (Michajlo Matijkiw) + HBASE-4184 CatalogJanitor doesn't work properly when "fs.default.name" isn't + set in config file (Ming Ma) + HBASE-4186 No region is added to regionsInTransitionInRS + HBASE-4194 RegionSplitter: Split on under-loaded region servers first + HBASE-2399 Forced splits only act on the first family in a table (Ming Ma) + HBASE-4211 Do init-sizing of the StringBuilder making a ServerName + (Benoît Sigoure) + HBASE-4175 Fix FSUtils.createTableDescriptor() (Ramkrishna) + HBASE-4008 Problem while stopping HBase (Akash Ashok) + HBASE-4065 TableOutputFormat ignores failure to create table instance + (Brock Noland) + HBASE-4167 Potential leak of HTable instances when using HTablePool with + PoolType.ThreadLocal (Karthick Sankarachary) + HBASE-4239 HBASE-4012 introduced duplicate variable Bytes.LONG_BYTES + HBASE-4225 NoSuchColumnFamilyException in multi doesn't say which family + is bad (Ramkrishna Vasudevan) + HBASE-4220 Lots of DNS queries from client + HBASE-4253 Intermittent test failure because of missing config parameter in new + HTable(tablename) (Ramkrishna) + HBASE-4217 HRS.closeRegion should be able to close regions with only + the encoded name
(ramkrishna.s.vasudevan) + HBASE-3229 Table creation, though using "async" call to master, + can actually run for a while and cause RPC timeout (Ming Ma) + HBASE-4252 TestLogRolling's low-probability failure (Jieshan Bean) + HBASE-4278 Race condition in Slab.java that occurs due to spinlock unlocking + early (Li Pi) + HBASE-4269 Add tests and restore semantics to TableInputFormat/TableRecordReader + (Jonathan Hsieh) + HBASE-4290 HLogSplitter doesn't mark its MonitoredTask as complete in + non-distributed case (todd) + HBASE-4303 HRegionInfo.toString has bad quoting (todd) + HBASE-4307 race condition in CacheTestUtils (Li Pi) + HBASE-4310 SlabCache metrics bugfix (Li Pi) + HBASE-4283 HBaseAdmin never recovers from restarted cluster (Lars Hofhansl) + HBASE-4315 RPC logging too verbose (todd) + HBASE-4273 java.lang.NullPointerException when a table is being disabled and + HMaster restarts (Ming Ma) + HBASE-4027 Off Heap Cache never creates Slabs (Li Pi) + HBASE-4265 zookeeper.KeeperException$NodeExistsException if HMaster restarts + while table is being disabled (Ming Ma) + HBASE-4338 Package build for rpm and deb are broken (Eric Yang) + HBASE-4309 slow query log metrics spewing warnings (Riley Patterson) + HBASE-4302 Only run Snappy compression tests if Snappy is available + (Alejandro Abdelnur via todd) + HBASE-4271 Clean up coprocessor handling of table operations + (Ming Ma via garyh) + HBASE-4341 HRS#closeAllRegions should take care of HRS#onlineRegions's + weak consistency (Jieshan Bean) + HBASE-4297 TableMapReduceUtil overwrites user supplied options + (Jan Lukavsky) + HBASE-4015 Refactor the TimeoutMonitor to make it less racy + (ramkrishna.s.vasudevan) + HBASE-4350 Fix a Bloom filter bug introduced by HFile v2 and + TestMultiColumnScanner that caught it (Mikhail Bautin) + HBASE-4007 distributed log splitting can get indefinitely stuck + (Prakash Khemani) + HBASE-4301 META migration from 0.90 to trunk fails (Subbu Iyer) + HBASE-4331 Bypassing default actions in prePut fails sometimes with + HTable client (Lars Hofhansl via garyh) + HBASE-4340 Hbase can't balance if ServerShutdownHandler encountered + exception (Jinchao Gao) + HBASE-4394 Add support for seeking hints to FilterList + HBASE-4406 TestOpenRegionHandler failing after HBASE-4287 (todd) + HBASE-4330 Fix races in slab cache (Li Pi & Todd) + HBASE-4383 SlabCache reports negative heap sizes (Li Pi) + HBASE-4351 If from Admin we try to unassign a region forcefully, + though a valid region name is given the master is not able + to identify the region to unassign (Ramkrishna) + HBASE-4363 [replication] ReplicationSource won't close if failing + to contact the sink (JD and Lars Hofhansl) + HBASE-4390 [replication] ReplicationSource's UncaughtExceptionHandler + shouldn't join + HBASE-4395 EnableTableHandler races with itself + HBASE-4414 Region splits by size not being triggered + HBASE-4322 [hbck] Update checkIntegrity/checkRegionChain + to present more accurate region split problem + (Jon Hsieh) + HBASE-4417 HBaseAdmin.checkHBaseAvailable() doesn't close ZooKeeper connections + (Stefan Seelmann) + HBASE-4195 Possible inconsistency in a memstore read after a reseek, + possible performance improvement (nkeywal) + HBASE-4420 MasterObserver preMove() and postMove() should throw + IOException instead of UnknownRegionException + HBASE-4419 Resolve build warning messages (Praveen Patibandia) + HBASE-4428 Two methods in CacheTestUtils don't call setDaemon() on the threads + HBASE-4400 .META.
getting stuck if RS hosting it is dead and znode state is in + RS_ZK_REGION_OPENED (Ramkrishna) + HBASE-3421 Very wide rows -- 30M plus -- cause us OOME (Nate Putnam) + HBASE-4153 Handle RegionAlreadyInTransitionException in AssignmentManager + (Ramkrishna) + HBASE-4452 Possibility of RS opening a region though tickleOpening fails due to + znode version mismatch (Ramkrishna) + HBASE-4446 Rolling restart RSs scenario, regions could stay in OPENING state + (Ming Ma) + HBASE-4468 Wrong resource name in an error message: webapps instead of + hbase-webapps (nkeywal) + HBASE-4472 MiniHBaseCluster.shutdown() doesn't work if no active master + HBASE-4455 Rolling restart RSs scenario, -ROOT-, .META. regions are lost in + AssignmentManager (Ming Ma) + HBASE-4513 NOTICES.txt refers to Facebook for Thrift + HBASE-3130 [replication] ReplicationSource can't recover from session + expired on remote clusters (Chris Trezzo via JD) + HBASE-4212 TestMasterFailover fails occasionally (Gao Jinchao) + HBASE-4412 No need to retry scan operation on the same server in case of + RegionServerStoppedException (Ming Ma) + HBASE-4476 Compactions must fail if column tracker gets columns out of order + (Mikhail Bautin) + HBASE-4209 The HBase hbase-daemon.sh SIGKILLs master when stopping it + (Roman Shaposhnik) + HBASE-4496 HFile V2 does not honor setCacheBlocks when scanning (Lars and Mikhail) + HBASE-4531 hbase-4454 failsafe broke mvn site; back it out or fix + (Akash Ashok) + HBASE-4334 HRegion.get never validates row (Lars Hofhansl) + HBASE-4494 AvroServer:: get fails with NPE on a non-existent row + (Kay Kay) + HBASE-4481 TestMergeTool failed in 0.92 build 20 + HBASE-4386 Fix a potential NPE in TaskMonitor (todd) + HBASE-4402 Retaining locality after restart broken + HBASE-4482 Race Condition Concerning Eviction in SlabCache (Li Pi) + HBASE-4547 TestAdmin failing in 0.92 because .tableinfo not found + HBASE-4540 OpenedRegionHandler is not enforcing atomicity of the operation + it is performing (Ram) + HBASE-4335 Splits can create temporary holes in .META. that confuse clients + and regionservers (Lars H) + HBASE-4555 TestShell seems passed, but actually errors seen in test output + file (Mingjie Lai) + HBASE-4582 Store.java cleanup (failing TestHeapSize and has warnings) + HBASE-4556 Fix all incorrect uses of InternalScanner.next(...) (Lars H) + HBASE-4078 Validate store files after flush/compaction + HBASE-3417 CacheOnWrite is using the temporary output path for block + names, need to use a more consistent block naming scheme (jgray) + HBASE-4551 Fix pom and some test cases to compile and run against + Hadoop 0.23 (todd) + HBASE-3446 ProcessServerShutdown fails if META moves, orphaning lots of + regions + HBASE-4589 CacheOnWrite broken in some cases because it can conflict + with evictOnClose (jgray) + HBASE-4579 CST.requestCompaction semantics changed, logs are now + spammed when too many store files + HBASE-4620 I broke the build when I submitted HBASE-3581 (Send length + of the rpc response) + HBASE-4621 TestAvroServer fails quite often intermittently (Akash Ashok) + HBASE-4378 [hbck] Does not complain about regions with startkey==endkey.
+ (Jonathan Hsieh) + HBASE-4459 HbaseObjectWritable code is a byte, we will eventually run out of codes + HBASE-4430 Disable TestSlabCache and TestSingleSizedCache temporarily to + see if these are cause of build box failure though all tests + pass (Li Pi) + HBASE-4510 Check and workaround usage of internal HDFS APIs in HBase + (Harsh) + HBASE-4595 HFilePrettyPrinter Scanned kv count always 0 (Matteo Bertozzi) + HBASE-4580 Some invalid zk nodes were created when a clean cluster restarts + (Gaojinchao) + HBASE-4588 The floating point arithmetic to validate memory allocation + configurations need to be done as integers (dhruba) + HBASE-4647 RAT finds about 40 files missing licenses + HBASE-4642 Add Apache License Header + HBASE-4591 TTL for old HLogs should be calculated from last modification time. + HBASE-4578 NPE when altering a table that has moving regions (gaojinchao) + HBASE-4070 Improve region server metrics to report loaded coprocessors to + master (Eugene Koontz via apurtell) + HBASE-3512 Shell support for listing currently loaded coprocessors (Eugene + Koontz via apurtell) + HBASE-4670 Fix javadoc warnings + HBASE-4367 Deadlock in MemStore flusher due to JDK internally synchronizing + on current thread + HBASE-4645 Edits Log recovery losing data across column families + HBASE-4634 "test.build.data" property overused leading to write data at the + wrong place (nkeywal) + HBASE-4388 Second start after migration from 90 to trunk crashes + HBASE-4685 TestDistributedLogSplitting.testOrphanLogCreation failing because + of ArithmeticException: / by zero. + HBASE-4300 Start of new-version master fails if old master's znode is + hanging around + HBASE-4679 Thrift null mutation error + HBASE-4304 requestsPerSecond counter stuck at 0 (Li Pi) + HBASE-4692 HBASE-4300 broke the build + HBASE-4641 Block cache can be mistakenly instantiated on Master (jgray) + HBASE-4687 regionserver may miss zk-heartbeats to master when replaying + edits at region open (prakash via jgray) + HBASE-4701 TestMasterObserver fails up on jenkins + HBASE-4700 TestSplitTransactionOnCluster fails on occasion when it tries + to move a region + HBASE-4613 hbase.util.Threads#threadDumpingIsAlive sleeps 1 second, + slowing down the shutdown by 0.5s + HBASE-4552 multi-CF bulk load is not atomic across column families (Jonathan Hsieh) + HBASE-4710 UnknownProtocolException should abort client retries + HBASE-4695 WAL logs get deleted before region server can fully flush + (gaojinchao) + HBASE-4708 Revert safemode related pieces of hbase-4510 (Harsh J) + HBASE-3515 [replication] ReplicationSource can miss a log after RS comes out of GC + HBASE-4713 Raise debug level to warn on ExecutionException in + HConnectionManager$HConnectionImplementation (Lucian George Iordache) + HBASE-4716 Improve locking for single column family bulk load + HBASE-4609 ThriftServer.getRegionInfo() is expecting old ServerName format, need to + use new Addressing class instead (Jonathan Gray) + HBASE-4719 HBase script assumes pre-Hadoop 0.21 layout of jar files + (Roman Shposhnik) + HBASE-4553 The update of .tableinfo is not atomic; we remove then rename + HBASE-4725 NPE in AM#updateTimers + HBASE-4745 LRU statistics thread should be a daemon + HBASE-4749 TestMasterFailover#testMasterFailoverWithMockedRITOnDeadRS + occasionally fails + HBASE-4753 org.apache.hadoop.hbase.regionserver.TestHRegionInfo#testGetSetOfHTD + throws NPE on trunk (nkeywal) + HBASE-4754 FSTableDescriptors.getTableInfoPath() should handle FileNotFoundException + HBASE-4740 [bulk load] 
the HBASE-4552 API can't tell if errors on region server are recoverable + (Jonathan Hsieh) + HBASE-4741 Online schema change doesn't return errors + HBASE-4734 [bulk load] Warn if bulk load directory contained no files + HBASE-4723 Loads of NotAllMetaRegionsOnlineException traces when starting + the master + HBASE-4511 There is data loss when master failovers + HBASE-4577 Region server reports storefileSizeMB bigger than + storefileUncompressedSizeMB (gaojinchao) + HBASE-4478 Improve AssignmentManager.handleRegion so that it can process certain ZK state + in the case of RS offline + HBASE-4777 Write back to client 'incompatible' if we show up with wrong version + HBASE-4775 Remove -ea from all but tests; enable it if you need it testing + HBASE-4784 Void return types not handled correctly for CoprocessorProtocol + methods + HBASE-4792 SplitRegionHandler doesn't care if it deletes the znode or not, + leaves the parent region stuck offline + HBASE-4793 HBase shell still using deprecated methods removed in HBASE-4436 + HBASE-4801 alter_status shell prints sensible message at completion + HBASE-4796 Race between SplitRegionHandlers for the same region kills the master + HBASE-4816 Regionserver wouldn't go down because split happened exactly at same + time we issued bulk user region close call on our way out + HBASE-4815 Disable online altering by default, create a config for it + HBASE-4623 Remove @deprecated Scan methods in 0.90 from TRUNK and 0.92 + HBASE-4842 [hbck] Fix intermittent failures on TestHBaseFsck.testHBaseFsck + (Jon Hsieh) + HBASE-4308 Race between RegionOpenedHandler and AssignmentManager (Ram) + HBASE-4857 Recursive loop on KeeperException in + AuthenticationTokenSecretManager/ZKLeaderManager + HBASE-4739 Master dying while going to close a region can leave it in transition + forever (Gao Jinchao) + HBASE-4855 SplitLogManager hangs on cluster restart due to batch.installed doubly counted + HBASE-4877 TestHCM failing sporadically on jenkins and always for me on an + ubuntu machine + HBASE-4878 Master crash when splitting hlog may cause data loss (Chunhui Shen) + HBASE-4945 NPE in HRegion.bulkLoadHFiles (Andrew P and Lars H) + HBASE-4942 HMaster is unable to start if HFile V1 is used (Honghua Zhu) + HBASE-4610 Port HBASE-3380 (Master failover can split logs of live servers) to 92/trunk + HBASE-4946 HTable.coprocessorExec (and possibly coprocessorProxy) does not work with + dynamically loaded coprocessors (Andrei Dragomir) + HBASE-5026 Add coprocessor hook to HRegionServer.ScannerListener.leaseExpired() + HBASE-4935 hbase 0.92.0 doesn't work going against 0.20.205.0, its packaged hadoop + HBASE-5078 DistributedLogSplitter failing to split file because it has edits for + lots of regions + HBASE-5077 SplitLogWorker fails to let go of a task, kills the RS + HBASE-5096 Replication does not handle deletes correctly. (Lars H) + HBASE-5103 Fix improper master znode deserialization (Jonathan Hsieh) + HBASE-5099 ZK event thread waiting for root region assignment may block server + shutdown handler for the region server the root region was on (Jimmy) + HBASE-5100 Rollback of split could cause closed region to be opened again (Chunhui) + HBASE-4397 -ROOT-, .META. tables stay offline for too long in recovery phase after all RSs + are shutdown at the same time (Ming Ma) + HBASE-5094 The META can hold an entry for a region with a different server name from the one + actually in the AssignmentManager thus making the region inaccessible.
(Ram) + HBASE-5081 Distributed log splitting deleteNode races against splitLog retry (Prakash) + HBASE-4357 Region stayed in transition - in closing state (Ming Ma) + HBASE-5088 A concurrency issue on SoftValueSortedMap (Jieshan Bean and Lars H) + HBASE-5152 Region is on service before completing initialization when doing rollback of split, + it will affect read correctness (Chunhui) + HBASE-5137 MasterFileSystem.splitLog() should abort even if waitOnSafeMode() throws IOException(Ted) + HBASE-5121 MajorCompaction may affect scan's correctness (chunhui shen and Lars H) + HBASE-5143 Fix config typo in pluggable load balancer factory (Harsh J) + HBASE-5196 Failure in region split after PONR could cause region hole (Jimmy Xiang) + + TESTS + HBASE-4450 test for number of blocks read: to serve as baseline for expected + blocks read and for catching regressions (Kannan) + HBASE-4492 TestRollingRestart fails intermittently (Ted Yu and Ram) + HBASE-4512 JVMClusterUtil throwing wrong exception when master thread cannot be created (Ram) + HBASE-4479 TestMasterFailover failure in Hbase-0.92#17(Ram) + HBASE-4651 ConcurrentModificationException might be thrown in + TestHCM.testConnectionUniqueness (Jinchao) + HBASE-4518 TestServerCustomProtocol fails intermittently + HBASE-4790 Occasional TestDistributedLogSplitting failure (Jinchao) + HBASE-4864 TestMasterObserver#testRegionTransitionOperations occasionally + fails (Gao Jinchao) + HBASE-4868 TestOfflineMetaRebuildBase#testMetaRebuild occasionally fails + (Gao Jinchao) + HBASE-4874 Run tests with non-secure random, some tests hang otherwise (Lars H) + HBASE-5112 TestReplication#queueFailover flaky due to potentially + uninitialized Scan (Jimmy Xiang) + HBASE-5113 TestDrainingServer expects round robin region assignment but misses a + config parameter + HBASE-5105 TestImportTsv failed with hadoop 0.22 (Ming Ma) + + IMPROVEMENTS + HBASE-3290 Max Compaction Size (Nicolas Spiegelberg via Stack) + HBASE-3292 Expose block cache hit/miss/evict counts into region server + metrics + HBASE-2936 Differentiate between daemon & restart sleep periods + HBASE-3316 Add support for Java Serialization to HbaseObjectWritable + (Ed Kohlwey via Stack) + HBASE-1861 Multi-Family support for bulk upload tools + HBASE-3308 SplitTransaction.splitStoreFiles slows splits a lot + HBASE-3328 Added Admin API to specify explicit split points + HBASE-3377 Upgrade Jetty to 6.1.26 + HBASE-3393 Update Avro gateway to use Avro 1.4.1 and the new + server.join() method (Jeff Hammerbacher via Stack) + HBASE-3433 KeyValue API to explicitly distinguish between deep & shallow + copies + HBASE-3522 Unbundle our RPC versioning; rather than a global for all 4 + Interfaces -- region, master, region to master, and + coprocesssors -- instead version each individually + HBASE-3520 Update our bundled hadoop from branch-0.20-append to latest + (rpc version 43) + HBASE-3563 [site] Add one-page-only version of hbase doc + HBASE-3564 DemoClient.pl - a demo client in Perl + HBASE-3560 the hbase-default entry of "hbase.defaults.for.version" + causes tests not to run via not-maven + HBASE-3513 upgrade thrift to 0.5.0 and use mvn version + HBASE-3533 Allow HBASE_LIBRARY_PATH env var to specify extra locations + of native lib + HBASE-3631 CLONE - HBase 2984 breaks ability to specify BLOOMFILTER & + COMPRESSION via shell + HBASE-3630 DemoClient.Java is outdated (Moaz Reyed via Stack) + HBASE-3618 Add to HBase book, 'schema' chapter - pre-creating regions and + key types (Doug Meil via Stack) + HBASE-2495 Allow 
record filtering with selected row key values in HBase + Export (Subbu M Iyer via Stack) + HBASE-3440 Clean out load_table.rb and make sure all roads lead to + completebulkload tool (Vidhyashankar Venkataraman via Stack) + HBASE-3653 Parallelize Server Requests on HBase Client + HBASE-3657 reduce copying of HRegionInfo's (Ted Yu via Stack) + HBASE-3422 Balancer will try to rebalance thousands of regions in one go; + needs an upper bound added (Ted Yu via Stack) + HBASE-3676 Update region server load for AssignmentManager through + regionServerReport() (Ted Yu via Stack) + HBASE-3468 Enhance checkAndPut and checkAndDelete with comparators + HBASE-3683 NMapInputFormat should use a different config param for + number of maps + HBASE-3673 Reduce HTable Pool Contention Using Concurrent Collections + (Karthick Sankarachary via Stack) + HBASE-3474 HFileOutputFormat to use column family's compression algorithm + HBASE-3541 REST Multi Gets (Elliott Clark via Stack) + HBASE-3052 Add ability to have multiple ZK servers in a quorum in + MiniZooKeeperCluster for test writing (Liyin Tang via Stack) + HBASE-3693 isMajorCompaction() check triggers lots of listStatus DFS RPC + calls from HBase (Liyin Tang via Stack) + HBASE-3717 deprecate HTable isTableEnabled() methods in favor of + HBaseAdmin methods (David Butler via Stack) + HBASE-3720 Book.xml - porting conceptual-view / physical-view sections of + HBaseArchitecture wiki (Doug Meil via Stack) + HBASE-3705 Allow passing timestamp into importtsv (Andy Sautins via Stack) + HBASE-3715 Book.xml - adding architecture section on client, adding section + on spec-ex under mapreduce (Doug Meil via Stack) + HBASE-3684 Support column range filter (Jerry Chen via Stack) + HBASE-3647 Distinguish read and write request count in region + (Ted Yu via Stack) + HBASE-3704 Show per region request count in table.jsp + (Ted Yu via Stack) + HBASE-3694 high multiput latency due to checking global mem store size + in a synchronized function (Liyin Tang via Stack) + HBASE-3710 Book.xml - fill out descriptions of metrics + (Doug Meil via Stack) + HBASE-3738 Book.xml - expanding Architecture Client section + (Doug Meil via Stack) + HBASE-3587 Eliminate use of read-write lock to guard loaded + coprocessor collection + HBASE-3729 Get cells via shell with a time range predicate + (Ted Yu via Stack) + HBASE-3764 Book.xml - adding 2 FAQs (SQL and arch question) + HBASE-3770 Make FilterList accept var arg Filters in its constructor + as a convenience (Erik Onnen via Stack) + HBASE-3769 TableMapReduceUtil is inconsistent with other table-related + classes that accept byte[] as a table name (Erik Onnen via Stack) + HBASE-3768 Add best practice to book for loading row key only + (Erik Onnen via Stack) + HBASE-3765 metrics.xml - small format change and adding nav to hbase + book metrics section (Doug Meil) + HBASE-3759 Eliminate use of ThreadLocals for CoprocessorEnvironment + bypass() and complete() + HBASE-3701 revisit ArrayList creation (Ted Yu via Stack) + HBASE-3753 Book.xml - architecture, adding more Store info (Doug Meil) + HBASE-3784 book.xml - adding small subsection in architecture/client on + filters (Doug Meil) + HBASE-3785 book.xml - moving WAL into architecture section, plus adding + more description on what it does (Doug Meil) + HBASE-3699 Make RegionServerServices and MasterServices extend Server + (Erik Onnen) + HBASE-3757 Upgrade to ZK 3.3.3 + HBASE-3609 Improve the selection of regions to balance; part 2 (Ted Yu) + HBASE-2939 Allow Client-Side Connection Pooling 
(Karthik Sankarachary) + HBASE-3798 [REST] Allow representation to elide row key and column key + HBASE-3812 Tidy up naming consistency and documentation in coprocessor + framework (Mingjie Lai) + HBASE-1512 Support aggregate functions (Himanshu Vashishtha) + HBASE-3796 Per-Store Entries in Compaction Queue + HBASE-3670 Fix error handling in get(List gets) + (Harsh J Chouraria) + HBASE-3835 Switch master and region server pages to Jamon-based templates + HBASE-3721 Speedup LoadIncrementalHFiles (Ted Yu) + HBASE-3855 Performance degradation of memstore because reseek is linear + (dhruba borthakur) + HBASE-3797 StoreFile Level Compaction Locking + HBASE-1476 Multithreaded Compactions + HBASE-3877 Determine Proper Defaults for Compaction ThreadPools + HBASE-3880 Make mapper function in ImportTSV plug-able (Bill Graham) + HBASE-2938 Add Thread-Local Behavior To HTable Pool + (Karthick Sankarachary) + HBASE-3811 Allow adding attributes to Scan (Alex Baranau) + HBASE-3841 HTable and HTableInterface docs are inconsistent with + one another (Harsh J Chouraria) + HBASE-2937 Facilitate Timeouts In HBase Client (Karthick Sankarachary) + HBASE-3921 Allow adding arbitrary blobs to Put (dhruba borthakur) + HBASE-3931 Allow adding attributes to Get + HBASE-3942 The thrift scannerOpen functions should support row caching + (Adam Worthington) + HBASE-2556 Add convenience method to HBaseAdmin to get a collection of + HRegionInfo objects for each table (Ming Ma) + HBASE-3952 Guava snuck back in as a dependency via hbase-3777 + HBASE-3808 Implement Executor.toString for master handlers at least + (Brock Noland) + HBASE-3873 Mavenize Hadoop Snappy JAR/SOs project dependencies + (Alejandro Abdelnur) + HBASE-3941 "hbase version" command line should print version info + (Jolly Chen) + HBASE-3961 Add Delete.setWriteToWAL functionality (Bruno Dumon) + HBASE-3928 Some potential performance improvements to Bytes/KeyValue + HBASE-3982 Improvements to TestHFileSeek + HBASE-3940 HBase daemons should log version info at startup and possibly + periodically (Li Pi) + HBASE-3789 Cleanup the locking contention in the master + HBASE-3927 Display total uncompressed byte size of a region in web UI + HBASE-4011 New MasterObserver hook: post startup of active master + HBASE-3994 SplitTransaction has a window where clients can + get RegionOfflineException + HBASE-4010 HMaster.createTable could be heavily optimized + HBASE-3506 Ability to disable, drop and enable tables using regex expression + (Joey Echeverria via Ted Yu) + HBASE-3516 Coprocessors: add test cases for loading coprocessor jars + (Mingjie Lai via garyh) + HBASE-4036 Implementing a MultipleColumnPrefixFilter (Anirudh Todi) + HBASE-4048 [Coprocessors] Support configuration of coprocessor at load time + HBASE-3240 Improve documentation of importtsv and bulk loads. + (Aaron T.
Myers via todd) + HBASE-4054 Usability improvement to HTablePool (Daniel Iancu) + HBASE-4079 HTableUtil - helper class for loading data (Doug Meil via Ted Yu) + HBASE-3871 Speedup LoadIncrementalHFiles by parallelizing HFile splitting + HBASE-4081 Issues with HRegion.compactStores methods (Ming Ma) + HBASE-3465 Hbase should use a HADOOP_HOME environment variable if available + (Alejandro Abdelnur) + HBASE-3899 enhance HBase RPC to support free-ing up server handler threads + even if response is not ready (Vlad Dogaru) + HBASE-4142 Advise against large batches in javadoc for HTable#put(List) + HBASE-4139 [stargate] Update ScannerModel with support for filter package + additions + HBASE-1938 Make in-memory table scanning faster (nkeywal) + HBASE-4143 HTable.doPut(List) should check the writebuffer length every so often + (Doug Meil via Ted Yu) + HBASE-3065 Retry all 'retryable' zk operations; e.g. connection loss (Liyin Tang) + HBASE-3810 Registering a coprocessor in HTableDescriptor should be easier + (Mingjie Lai via garyh) + HBASE-4158 Upgrade pom.xml to surefire 2.9 (Aaron Kushner & Mikhail) + HBASE-3899 Add ability for delayed RPC calls to set return value + immediately at call return. (Vlad Dogaru via todd) + HBASE-4169 FSUtils LeaseRecovery for non HDFS FileSystems (Lohit Vijayarenu) + HBASE-3807 Fix units in RS UI metrics (subramanian raghunathan) + HBASE-4193 Enhance RPC debug logging to provide more details on + call contents + HBASE-4190 Coprocessors: pull up some cp constants from cp package to + o.a.h.h.HConstants (Mingjie Lai) + HBASE-4227 Modify the webUI so that default values of column families are + not shown (Nileema Shingte) + HBASE-4229 Replace Jettison JSON encoding with Jackson in HLogPrettyPrinter + (Riley Patterson) + HBASE-4230 Compaction threads need names + HBASE-4236 Don't lock the stream while serializing the response (Benoit Sigoure) + HBASE-4237 Directly remove the call being handled from the map of outstanding RPCs + (Benoit Sigoure) + HBASE-4199 blockCache summary - backend (Doug Meil) + HBASE-4240 Allow Loadbalancer to be pluggable + HBASE-4244 Refactor bin/hbase help + HBASE-4241 Optimize flushing of the Memstore (Lars Hofhansl) + HBASE-4248 Enhancements for Filter Language exposing HBase filters through + the Thrift API (Anirudh Todi) + HBASE-3900 Expose progress of a major compaction in UI and/or in shell + (Brad Anderson) + HBASE-4291 Improve display of regions in transition in UI to be more + readable (todd) + HBASE-4281 Add facility to dump current state of all executors (todd) + HBASE-4275 RS should communicate fatal "aborts" back to the master (todd) + HBASE-4263 New config property for user-table only RegionObservers + (Lars Hofhansl) + HBASE-4257 Limit the number of regions in transitions displayed on + master webpage. 
(todd) + HBASE-1730 Online Schema Changes + HBASE-4206 jenkins hash implementation uses longs unnecessarily + (Ron Yang) + HBASE-3842 Refactor Coprocessor Compaction API + HBASE-4312 Deploy new hbase logo + HBASE-4327 Compile HBase against hadoop 0.22 (Joep Rottinghuis) + HBASE-4339 Improve eclipse documentation and project file generation + (Eric Charles) + HBASE-4342 Update Thrift to 0.7.0 (Moaz Reyad) + HBASE-4260 Expose a command to manually trigger an HLog roll + (ramkrishna.s.vasudevan) + HBASE-4347 Remove duplicated code from Put, Delete, Get, Scan, MultiPut + (Lars Hofhansl) + HBASE-4359 Show dead RegionServer names in the HMaster info page + (Harsh J) + HBASE-4287 If region opening fails, change region in transition into + a FAILED_OPEN state so that it can be retried quickly. (todd) + HBASE-4381 Refactor split decisions into a split policy class. (todd) + HBASE-4373 HBaseAdmin.assign() does not use force flag (Ramkrishna) + HBASE-4425 Provide access to RpcServer instance from RegionServerServices + HBASE-4411 When copying tables/CFs, allow CF names to be changed + (David Revell) + HBASE-4424 Provide coprocessors access to createTable() via + MasterServices + HBASE-4432 Enable/Disable off heap cache with config (Li Pi) + HBASE-4434 seek optimization: don't do eager HFile Scanner + next() unless the next KV is needed + (Kannan Muthukkaruppan) + HBASE-4280 [replication] ReplicationSink can deadlock itself via handlers + HBASE-4014 Coprocessors: Flag the presence of coprocessors in logged + exceptions (Eugene Koontz) + HBASE-4449 LoadIncrementalHFiles should be able to handle CFs with blooms + (David Revell) + HBASE-4454 Add failsafe plugin to build and rename integration tests + (Jesse Yates) + HBASE-4499 [replication] Source shouldn't update ZK if it didn't progress + (Chris Trezzo via JD) + HBASE-2794 Utilize ROWCOL bloom filter if multiple columns within same family + are requested in a Get (Mikhail Bautin) + HBASE-4487 The increment operation can release the rowlock before sync-ing + the Hlog (dhruba borthakur) + HBASE-4526 special case for stopping master in hbase-daemon.sh is no longer + required (Roman Shaposhnik) + HBASE-4520 Better handling of Bloom filter type discrepancy between HFile + and CF config (Mikhail Bautin) + HBASE-4558 Refactor TestOpenedRegionHandler and TestOpenRegionHandler.(Ram) + HBASE-4558 Addendum for TestMasterFailover (Ram) - Breaks the build + HBASE-4568 Make zk dump jsp response faster + HBASE-4606 Remove spam in HCM and fix a list.size == 0 + HBASE-3581 hbase rpc should send size of response + HBASE-4585 Avoid seek operation when current kv is deleted(Liyin Tang) + HBASE-4486 Improve Javadoc for HTableDescriptor (Akash Ashok) + HBASE-4604 hbase.client.TestHTablePool could start a single + cluster instead of one per method (nkeywal) + HBASE-3929 Add option to HFile tool to produce basic stats (Matteo + Bertozzi and todd via todd) + HBASE-4694 Some cleanup of log messages in RS and M + HBASE-4603 Uneeded sleep time for tests in + hbase.master.ServerManager#waitForRegionServers (nkeywal) + HBASE-4703 Improvements in tests (nkeywal) + HBASE-4611 Add support for Phabricator/Differential as an alternative code review tool + HBASE-3939 Some crossports of Hadoop IPC fixes + HBASE-4756 Enable tab-completion in HBase shell (Ryan Thiessen) + HBASE-4759 Migrate from JUnit 4.8.2 to JUnit 4.10 (nkeywal) + HBASE-4554 Allow set/unset coprocessor table attributes from shell + (Mingjie Lai) + HBASE-4779 TestHTablePool, TestScanWithBloomError, 
TestRegionSplitCalculator are + not tagged and TestPoolMap should not use TestSuite (N Keywal) + HBASE-4805 Allow better control of resource consumption in HTable (Lars H) + HBASE-4903 Return a result from RegionObserver.preIncrement + (Daniel Gómez Ferro via Lars H) + HBASE-4683 Always cache index and bloom blocks + + TASKS + HBASE-3559 Move report of split to master OFF the heartbeat channel + HBASE-3573 Move shutdown messaging OFF heartbeat; prereq for fix of + hbase-1502 + HBASE-3071 Graceful decommissioning of a regionserver + HBASE-3970 Address HMaster crash/failure half way through meta migration + (Subbu M Iyer) + HBASE-4013 Make ZooKeeperListener Abstract (Akash Ashok via Ted Yu) + HBASE-4025 Server startup fails during startup due to failure in loading + all table descriptors. (Subbu Iyer via Ted Yu) + HBASE-4017 BlockCache interface should be truly modular (Li Pi) + HBASE-4152 Rename o.a.h.h.regionserver.wal.WALObserver to + o.a.h.h.regionserver.wal.WALActionsListener + HBASE-4039 Users should be able to choose custom TableInputFormats without + modifying TableMapReduceUtil.initTableMapperJob() (Brock Noland) + HBASE-4185 Add doc for new hfilev2 format + HBASE-4315 RS requestsPerSecond counter seems to be off (subramanian raghunathan) + HBASE-4289 Move spinlock to SingleSizeCache rather than the slab allocator + (Li Pi) + HBASE-4296 Deprecate HTable[Interface].getRowOrBefore(...) (Lars Hofhansl) + HBASE-2195 Support cyclic replication (Lars Hofhansl) + HBASE-2196 Support more than one slave cluster (Lars Hofhansl) + HBASE-4429 Provide synchronous balanceSwitch() + HBASE-4437 Update hadoop in 0.92 (0.20.205?) + HBASE-4656 Note how dfs.support.append has to be enabled in 0.20.205.0 + clusters + HBASE-4699 Cleanup the UIs + HBASE-4552 Remove trivial 0.90 deprecated code from 0.92 and trunk. + (Jonathan Hsieh) + HBASE-4714 Don't ship w/ icms enabled by default + HBASE-4747 Upgrade maven surefire plugin to 2.10 + HBASE-4288 "Server not running" exception during meta verification causes RS abort + HBASE-4856 Upgrade zookeeper to 3.4.0 release + HBASE-5111 Upgrade zookeeper to 3.4.2 release + HBASE-5125 Upgrade hadoop to 1.0.0 + + NEW FEATURES + HBASE-2001 Coprocessors: Colocate user code with regions (Mingjie Lai via + Andrew Purtell) + HBASE-3287 Add option to cache blocks on hfile write and evict blocks on + hfile close + HBASE-3335 Add BitComparator for filtering (Nathaniel Cook via Stack) + HBASE-3260 Coprocessors: Add explicit lifecycle management + HBASE-3256 Coprocessors: Coprocessor host and observer for HMaster + HBASE-3345 Coprocessors: Allow observers to completely override base + function + HBASE-2824 A filter that randomly includes rows based on a configured + chance (Ferdy via Andrew Purtell) + HBASE-3455 Add memstore-local allocation buffers to combat heap + fragmentation in the region server. Enabled by default as of + 0.91 + HBASE-3257 Coprocessors: Extend server side API to include HLog operations + (Mingjie Lai via Andrew Purtell) + HBASE-3606 Create a package integration project (Eric Yang via Ryan) + HBASE-3488 Add CellCounter to count multiple versions of rows + (Subbu M. Iyer via Stack) + HBASE-1364 [performance] Distributed splitting of regionserver commit logs + (Prakash Khemani) + HBASE-3836 Add facility to track currently progressing actions and + workflows.
(todd) + HBASE-3837 Show regions in transition on the master web page (todd) + HBASE-3839 Add monitoring of currently running tasks to the master and + RS web UIs + HBASE-3691 Add compressor support for 'snappy', google's compressor + (Nichole Treadway and Nicholas Telford) + HBASE-2233 Support both Hadoop 0.20 and 0.22 + HBASE-3857 Change the HFile Format (Mikhail & Liyin) + HBASE-4114 Metrics for HFile HDFS block locality (Ming Ma) + HBASE-4176 Exposing HBase Filters to the Thrift API (Anirudh Todi) + HBASE-4221 Changes necessary to build and run against Hadoop 0.23 + (todd) + HBASE-4071 Data GC: Remove all versions > TTL EXCEPT the last + written version (Lars Hofhansl) + HBASE-4242 Add documentation for HBASE-4071 (Lars Hofhansl) + HBASE-4027 Enable direct byte buffers LruBlockCache (Li Pi) + HBASE-4117 Slow Query Log and Client Operation Fingerprints + (Riley Patterson) + HBASE-4292 Add a debugging dump servlet to the master and regionserver + (todd) + HBASE-4057 Implement HBase version of "show processlist" (Riley Patterson) + HBASE-4219 Per Column Family Metrics + HBASE-4219 Addendum for failure of TestHFileBlock + HBASE-4377 [hbck] Offline rebuild .META. from fs data only + (Jonathan Hsieh) + HBASE-4298 Support to drain RS nodes through ZK (Aravind Gottipati) + HBASE-2742 Provide strong authentication with a secure RPC engine + HBASE-3025 Coprocessor based access control + +Release 0.90.7 - Unreleased + + BUG FIXES + HBASE-5271 Result.getValue and Result.getColumnLatest return the wrong column (Ghais Issa) + +Release 0.90.6 - Unreleased + + BUG FIXES + HBASE-4970 Add a parameter so that keepAliveTime of Htable thread pool can be changed (gaojinchao) + HBASE-5060 HBase client is blocked forever (Jinchao) + HBASE-5009 Failure of creating split dir if it already exists prevents splits from happening further + HBASE-5041 Major compaction on non existing table does not throw error (Shrijeet) + HBASE-5327 Print a message when an invalid hbase.rootdir is passed (Jimmy Xiang) + +Release 0.90.5 - Released + + BUG FIXES + HBASE-4160 HBase shell move and online may be unusable if region name + or server includes binary-encoded data (Jonathan Hsieh) + HBASE-4168 A client continues to try and connect to a powered down + regionserver (Anirudh Todi) + HBASE-4196 TableRecordReader may skip first row of region (Ming Ma) + HBASE-4170 createTable java doc needs to be improved (Mubarak Seyed) + HBASE-4144 RS does not abort if the initialization of RS fails + (ramkrishna.s.vasudevan) + HBASE-4148 HFileOutputFormat doesn't fill in TIMERANGE_KEY metadata + (Jonathan Hsieh) + HBASE-4159 HBaseServer - IPC Reader threads are not daemons (Douglas + Campbell) + HBASE-4095 Hlog may not be rolled in a long time if checkLowReplication's + request of LogRoll is blocked (Jieshan Bean) + HBASE-4253 TestScannerTimeOut.test3686a and TestHTablePool. 
+ testReturnDifferentTable() failure because of using new + HTable(tablename) (ramkrishna.s.vasudevan) + HBASE-4124 ZK restarted while a region is being assigned, new active HM + re-assigns it but the RS warns 'already online on this server' + (Gaojinchao) + HBASE-4294 HLogSplitter sleeps with 1-second granularity (todd) + HBASE-4270 IOE ignored during flush-on-close causes dataloss + HBASE-4180 HBase should check the isSecurityEnabled flag before login + HBASE-4325 Improve error message when using STARTROW for meta scans + (Jonathan Hsieh) + HBASE-4238 CatalogJanitor can clear a daughter that split before + processing its parent + HBASE-4445 Not passing --config when checking if distributed mode or not + HBASE-4453 TestReplication failing up on builds.a.o because already + running zk with new format root servername + HBASE-4387 Error while syncing: DFSOutputStream is closed + (Lars Hofhansl) + HBASE-4295 rowcounter does not return the correct number of rows in + certain circumstances (David Revell) + HBASE-4515 User.getCurrent() can fail to initialize the current user + HBASE-4473 NPE when executors are down but events are still coming in + HBASE-4537 TestUser imports breaking build against secure Hadoop + HBASE-4501 [replication] Shutting down a stream leaves recovered + sources running + HBASE-4563 When error occurs in this.parent.close(false) of split, + the split region cannot write or read (bluedavy via Lars H) + HBASE-4570. Fix a race condition that could cause inconsistent results + from scans during concurrent writes. (todd and Jonathan Jsieh + via todd) + HBASE-4562 When split doing offlineParentInMeta encounters error, it'll + cause data loss (bluedavy via Lars H) + HBASE-4800 Result.compareResults is incorrect (James Taylor and Lars H) + HBASE-4848 TestScanner failing because hostname can't be null + HBASE-4862 Splitting hlog and opening region concurrently may cause data loss + (Chunhui Shen) + HBASE-4773 HBaseAdmin may leak ZooKeeper connections (Xufeng) + + IMPROVEMENT + HBASE-4205 Enhance HTable javadoc (Eric Charles) + HBASE-4222 Make HLog more resilient to write pipeline failures + HBASE-4293 More verbose logging in ServerShutdownHandler for meta/root + cases (todd) + HBASE-4276 AssignmentManager debug logs should be at INFO level for + META/ROOT regions (todd) + HBASE-4323 Add debug logging when AssignmentManager can't make a plan + for a region (todd) + HBASE-4313 Refactor TestHBaseFsck to make adding individual hbck tests + easier (Jonathan Hsieh) + HBASE-4272. Add -metaonly flag to hbck feature to only inspect and try + to repair META and ROOT. (todd) + HBASE-4321. Add a more comprehensive region split calculator for future use + in hbck. 
(Jonathan Hsieh) + HBASE-4384 Hard to tell what causes failure in CloseRegionHandler#getCurrentVersion + (Harsh J) + HBASE-4375 [hbck] Add region coverage visualization to hbck + (Jonathan Hsieh) + HBASE-4506 [hbck] Allow HBaseFsck to be instantiated without connecting + (Jonathan Hsieh) + HBASE-4509 [hbck] Improve region map output + (Jonathan Hsieh) + HBASE-4806 Fix logging message in HbaseObjectWritable + (Jonathan Hsieh via todd) + +Release 0.90.4 - August 10, 2011 + + BUG FIXES + HBASE-3878 Hbase client throws NoSuchElementException (Ted Yu) + HBASE-3881 Add disable balancer in graceful_stop.sh script + HBASE-3895 Fix order of parameters after HBASE-1511 + HBASE-3874 ServerShutdownHandler fails on NPE if a plan has a random + region assignment + HBASE-3902 Add Bytes.toBigDecimal and Bytes.toBytes(BigDecimal) + (Vaibhav Puranik) + HBASE-3820 Splitlog() executed while the namenode was in safemode may + cause data-loss (Jieshan Bean) + HBASE-3905 HBaseAdmin.createTableAsync() should check for invalid split + keys (Ted Yu) + HBASE-3908 TableSplit not implementing "hashCode" problem (Daniel Iancu) + HBASE-3915 Binary row keys in hbck and other miscellaneous binary key + display issues + HBASE-3914 ROOT region appeared in two regionserver's onlineRegions at + the same time (Jieshan Bean) + HBASE-3934 MemStoreFlusher.getMemStoreLimit() doesn't honor defaultLimit + (Ted Yu) + HBASE-3946 The splitted region can be online again while the standby + hmaster becomes the active one (Jieshan Bean) + HBASE-3916 Fix the default bind address of ThriftServer to be wildcard + instead of localhost. (Li Pi) + HBASE-3985 Same Region could be picked out twice in LoadBalance + (Jieshan Bean) + HBASE-3987 Fix a NullPointerException on a failure to load Bloom filter data + (Mikhail Bautin) + HBASE-3948 Improve split/compact result page for RegionServer status page + (Li Pi) + HBASE-3988 Infinite loop for secondary master (Liyin Tang) + HBASE-3995 HBASE-3946 broke TestMasterFailover + HBASE-2077 NullPointerException with an open scanner that expired causing + an immediate region server shutdown -- part 2. + HBASE-4005 close_region bugs + HBASE-4028 Hmaster crashes caused by splitting log. + (gaojinchao via Ted Yu) + HBASE-4035 Fix local-master-backup.sh - parameter order wrong + (Lars George via Ted Yu) + HBASE-4020 "testWritesWhileGetting" unit test needs to be fixed. + (Vandana Ayyalasomayajula via Ted Yu) + HBASE-3984 CT.verifyRegionLocation isn't doing a very good check, + can delay cluster recovery + HBASE-4045 [replication] NPE in ReplicationSource when ZK is gone + HBASE-4034 HRegionServer should be stopped even if no META regions + are hosted by the HRegionServer (Akash Ashok) + HBASE-4033 The shutdown RegionServer could be added to + AssignmentManager.servers again (Jieshan Bean) + HBASE-4088 npes in server shutdown + HBASE-3872 Hole in split transaction rollback; edits to .META. 
need + to be rolled back even if it seems like they didn't make it + HBASE-4101 Regionserver Deadlock (ramkrishna.s.vasudevan) + HBASE-4115 HBase shell assign and unassign unusable if region name + includes binary-encoded data (Ryan Brush) + HBASE-4126 Make timeoutmonitor timeout after 30 minutes instead of 3 + HBASE-4129 HBASE-3872 added a warn message 'CatalogJanitor: Daughter regiondir + does not exist' that is triggered though its often legit that daughter + is not present + + IMPROVEMENT + HBASE-3882 hbase-config.sh needs to be updated so it can auto-detects the + sun jre provided by RHEL6 (Roman Shaposhnik) + HBASE-3920 HLog hbase.regionserver.flushlogentries no longer supported + (Dave Latham) + HBASE-3919 More places output binary data to text (Dave Latham) + HBASE-3873 HBase IRB shell: Don't pretty-print the output when stdout + isn't a TTY (Benoît Sigoure) + HBASE-3969 Outdated data can not be cleaned in time (Zhou Shuaifeng) + HBASE-3968 HLog Pretty Printer (Riley Patterson) + +Release 0.90.3 - May 19th, 2011 + + BUG FIXES + HBASE-3746 Clean up CompressionTest to not directly reference + DistributedFileSystem (todd) + HBASE-3734 HBaseAdmin creates new configurations in getCatalogTracker + HBASE-3756 Can't move META or ROOT from shell + HBASE-3740 hbck doesn't reset the number of errors when retrying + HBASE-3744 createTable blocks until all regions are out of transition + (Ted Yu via Stack) + HBASE-3750 HTablePool.putTable() should call releaseHTableInterface() + for discarded tables (Ted Yu via garyh) + HBASE-3755 Catch zk's ConnectionLossException and augment error + message with more help + HBASE-3722 A lot of data is lost when name node crashed (gaojinchao) + HBASE-3771 All jsp pages don't clean their HBA + HBASE-3685 when multiple columns are combined with TimestampFilter, only + one column is returned (Jerry Chen) + HBASE-3708 createAndFailSilent is not so silent; leaves lots of logging + in ensemble logs (Dmitriy Ryaboy) + HBASE-3783 hbase-0.90.2.jar exists in hbase root and in 'lib/' + HBASE-3539 Improve shell help to reflect all possible options + (Harsh J Chouraria) + HBASE-3817 HBase Shell has an issue accepting FILTER for the 'scan' command. 
+ (Harsh J Chouraria) + HBASE-3634 Fix JavaDoc for put(List puts) in HTableInterface + (Harsh J Chouraria) + HBASE-3749 Master can't exit when open port failed (gaojinchao) + HBASE-3794 TestRpcMetrics fails on machine where region server is running + (Alex Newman) + HBASE-3741 Make HRegionServer aware of the regions it's opening/closing + HBASE-3597 ageOfLastAppliedOp should update after cluster replication + failures + HBASE-3821 "NOT flushing memstore for region" keep on printing for half + an hour (zhoushuaifeng) + + IMPROVEMENTS + HBASE-3747 ReplicationSource should differentiate remote and local exceptions + HBASE-3652 Speed up tests by lowering some sleeps + HBASE-3767 Improve how HTable handles threads used for multi actions + HBASE-3795 Remove the "Cache hit for row" message + HBASE-3580 Remove RS from DeadServer when new instance checks in + HBASE-2470 Add Scan.setTimeRange() support in Shell (Harsh J Chouraria) + HBASE-3805 Log RegionState that are processed too late in the master + HBASE-3695 Some improvements to Hbck to test the entire region chain in + Meta and provide better error reporting (Marc Limotte) + HBASE-3813 Change RPC callQueue size from 'handlerCount * + MAX_QUEUE_SIZE_PER_HANDLER;' + HBASE-3860 HLog shouldn't create a new HBC when rolling + + TASKS + HBASE-3748 Add rolling of thrift/rest daemons to graceful_stop.sh script + HBASE-3846 Set RIT timeout higher + +Release 0.90.2 - 20110408 + + BUG FIXES + HBASE-3545 Possible liveness issue with MasterServerAddress in + HRegionServer getMaster (Greg Bowyer via Stack) + HBASE-3548 Fix type in documentation of pseudo distributed mode + HBASE-3553 HTable ThreadPoolExecutor does not properly initialize + for hbase.htable.threads.max threads + (Himanshu Vashishtha via garyh) + HBASE-3566 writeToWAL is not serialized for increment operation + HBASE-3576 MasterAddressTracker is registered to ZooKeeperWatcher twice + HBASE-3561 OPTS arguments are duplicated + HBASE-3572 memstore lab can leave half inited data structs (bad!) + HBASE-3589 test jar should not include mapred-queues.xml and + log4j.properties + HBASE-3593 DemoClient.cpp is outdated + HBASE-3591 completebulkload doesn't honor generic -D options + HBASE-3594 Rest server fails because of missing asm jar + HBASE-3582 Allow HMaster and HRegionServer to login from keytab + when on secure Hadoop + HBASE-3608 MemstoreFlusher error message doesn't include exception!
+ HBASE-1960 Master should wait for DFS to come up when creating + hbase.version; use alternate strategy for waiting for DNs + HBASE-3612 HBaseAdmin::isTableAvailable returns true when the table does + not exist + HBASE-3626 Update instructions in thrift demo files (Moaz Reyad via Stack) + HBASE-3633 ZKUtil::createSetData should only create a node when it + nonexists (Guanpeng Xu via Stack) + HBASE-3636 a bug about deciding whether this key is a new key for the ROWCOL + bloomfilter (Liyin Tang via Stack) + HBASE-3639 FSUtils.getRootDir should qualify path + HBASE-3648 [replication] failover is sloppy with znodes + HBASE-3613 NPE in MemStoreFlusher + HBASE-3650 HBA.delete can return too fast + HBASE-3659 Fix TestHLog to pass on newer versions of Hadoop + HBASE-3595 get_counter broken in shell + HBASE-3664 [replication] Adding a slave when there's none may kill the cluster + HBASE-3671 Split report before we finish parent region open; workaround + till 0.92; Race between split and OPENED processing + HBASE-3674 Treat ChecksumException as we would a ParseException splitting + logs; else we replay split on every restart + HBASE-3621 The timeout handler in AssignmentManager does an RPC while + holding lock on RIT; a big no-no (Ted Yu via Stack) + HBASE-3575 Update rename table script + HBASE-3687 Bulk assign on startup should handle a ServerNotRunningException + HBASE-3617 NoRouteToHostException during balancing will cause Master abort + (Ted Yu via Stack) + HBASE-3668 CatalogTracker.waitForMeta can wait forever and totally stall a RS + HBASE-3627 NPE in EventHandler when region already reassigned + HBASE-3660 HMaster will exit when starting with stale data in cached locations + such as -ROOT- or .META. + HBASE-3654 Weird blocking between getOnlineRegion and createRegionLoad + (Subbu M Iyer via Stack) + HBASE-3666 TestScannerTimeout fails occasionally + HBASE-3497 TableMapReduceUtil.initTableReducerJob broken due to setConf + method in TableOutputFormat + HBASE-3686 ClientScanner skips too many rows on recovery if using scanner + caching (Sean Sechrist via Stack) + + IMPROVEMENTS + HBASE-3542 MultiGet methods in Thrift + HBASE-3586 Improve the selection of regions to balance (Ted Yu via Andrew + Purtell) + HBASE-3603 Remove -XX:+HeapDumpOnOutOfMemoryError autodump of heap option + on OOME + HBASE-3285 Hlog recovery takes too much time + HBASE-3623 Allow non-XML representable separator characters in the ImportTSV tool + (Harsh J Chouraria via Stack) + HBASE-3620 Make HBCK utility faster + HBASE-3625 improve/fix support excluding Tests via Maven -D property + (Alejandro Abdelnur via todd) + HBASE-3437 Support Explicit Split Points from the Shell + HBASE-3448 RegionSplitter, utility class to manually split tables + HBASE-3610 Improve RegionSplitter performance + HBASE-3496 HFile CLI Improvements + HBASE-3596 [replication] Wait a few seconds before transferring queues + HBASE-3600 Update our jruby to 1.6.0 + HBASE-3640 [replication] Transferring queues shouldn't be done inline with RS startup + HBASE-3658 Alert when heap is over committed (Subbu M Iyer via Stack) + HBASE-3681 Check the sloppiness of the region load before balancing (Ted Yu via JD) + HBASE-3703 hbase-config.sh needs to be updated so it can auto-detect + the sun jdk provided by RHEL6 (Bruno Mahe via todd) + +Release 0.90.1 - February 9th, 2011 + + NEW FEATURES + HBASE-3455 Add memstore-local allocation buffers to combat heap + fragmentation in the region server.
Experimental / disabled + by default in 0.90.1 + + BUG FIXES + HBASE-3445 Master crashes on data that was moved from different host + HBASE-3449 Server shutdown handlers deadlocked waiting for META + HBASE-3456 Fix hardcoding of 20 second socket timeout down in HBaseClient + HBASE-3476 HFile -m option need not scan key values + (Prakash Khemani via Lars George) + HBASE-3481 max seq id in flushed file can be larger than its correct value + causing data loss during recovery + HBASE-3493 HMaster sometimes hangs during initialization due to missing + notify call (Bruno Dumon via Stack) + HBASE-3483 Memstore lower limit should trigger asynchronous flushes + HBASE-3494 checkAndPut implementation doesn't verify row param and writable + row are the same + HBASE-3416 For intra-row scanning, the update readers notification resets + the query matcher and can lead to incorrect behavior + HBASE-3495 Shell is failing on subsequent split calls + HBASE-3502 Can't open region because can't open .regioninfo because + AlreadyBeingCreatedException + HBASE-3501 Remove the deletion limit in LogCleaner + HBASE-3500 Documentation update for replication + HBASE-3419 If re-transition to OPENING during log replay fails, server + aborts. Instead, should just cancel region open. + HBASE-3524 NPE from CompactionChecker + HBASE-3531 When under global memstore pressure, don't try to flush + unflushable regions. + HBASE-3550 FilterList reports false positives (Bill Graham via Andrew + Purtell) + + IMPROVEMENTS + HBASE-3305 Allow round-robin distribution for table created with + multiple regions (ted yu via jgray) + HBASE-3508 LruBlockCache statistics thread should have a name + HBASE-3511 Allow rolling restart to apply to only RS or only masters + HBASE-3510 Add thread name for IPC reader threads + HBASE-3509 Add metric for flush queue length + HBASE-3517 Store build version in hbase-default.xml and verify at runtime + +Release 0.90.0 - January 19th, 2011 + INCOMPATIBLE CHANGES + HBASE-1822 Remove the deprecated APIs + HBASE-1848 Fixup shell for HBASE-1822 + HBASE-1854 Remove the Region Historian + HBASE-1930 Put.setTimeStamp misleading (doesn't change timestamp on + existing KeyValues, not copied in copy constructor) + (Dave Latham via Stack) + HBASE-1360 move up to Thrift 0.2.0 (Kay Kay and Lars Francke via Stack) + HBASE-2212 Refactor out lucene dependencies from HBase + (Kay Kay via Stack) + HBASE-2219 stop using code mapping for method names in the RPC + HBASE-1728 Column family scoping and cluster identification + HBASE-2099 Move build to Maven (Paul Smith via Stack) + HBASE-2260 Remove all traces of Ant and Ivy (Lars Francke via Stack) + HBASE-2255 take trunk back to hadoop 0.20 + HBASE-2378 Bulk insert with multiple reducers broken due to improper + ImmutableBytesWritable comparator (Todd Lipcon via Stack) + HBASE-2392 Upgrade to ZooKeeper 3.3.0 + HBASE-2294 Enumerate ACID properties of HBase in a well defined spec + (Todd Lipcon via Stack) + HBASE-2541 Remove transactional contrib (Clint Morgan via Stack) + HBASE-2542 Fold stargate contrib into core + HBASE-2565 Remove contrib module from hbase + HBASE-2397 Bytes.toStringBinary escapes printable chars + HBASE-2771 Update our hadoop jar to be latest from 0.20-append branch + HBASE-2803 Remove remaining Get code from Store.java,etc + HBASE-2553 Revisit IncrementColumnValue implementation in 0.22 + HBASE-2692 Master rewrite and cleanup for 0.90 + (Karthik Ranganathan, Jon Gray & Stack) + HBASE-2961 Close zookeeper when done with it (HCM, Master, and RS) + HBASE-2641
HBASE-2641 Refactor HLog splitLog, hbase-2437 continued; + break out split code as new classes + (James Kennedy via Stack) + + BUG FIXES + HBASE-1791 Timeout in IndexRecordWriter (Bradford Stephens via Andrew + Purtell) + HBASE-1737 Regions unbalanced when adding new node (recommit) + HBASE-1792 [Regression] Cannot save timestamp in the future + HBASE-1793 [Regression] HTable.get/getRow with a ts is broken + HBASE-1698 Review documentation for o.a.h.h.mapreduce + HBASE-1798 [Regression] Unable to delete a row in the future + HBASE-1790 filters are not working correctly (HBASE-1710 HBASE-1807 too) + HBASE-1779 ThriftServer logged error if getVer() result is empty + HBASE-1778 Improve PerformanceEvaluation (Schubert Zhang via Stack) + HBASE-1751 Fix KeyValue javadoc on getValue for client-side + HBASE-1795 log recovery doesnt reset the max sequence id, new logfiles can + get tossed as 'duplicates' + HBASE-1794 recovered log files are not inserted into the storefile map + HBASE-1824 [stargate] default timestamp should be LATEST_TIMESTAMP + HBASE-1740 ICV has a subtle race condition only visible under high load + HBASE-1808 [stargate] fix how columns are specified for scanners + HBASE-1828 CompareFilters are broken from client-side + HBASE-1836 test of indexed hbase broken + HBASE-1838 [javadoc] Add javadoc to Delete explaining behavior when no + timestamp provided + HBASE-1821 Filtering by SingleColumnValueFilter bug + HBASE-1840 RowLock fails when used with IndexTable + (Keith Thomas via Stack) + HBASE-818 HFile code review and refinement (Schubert Zhang via Stack) + HBASE-1830 HbaseObjectWritable methods should allow null HBCs + for when Writable is not Configurable (Stack via jgray) + HBASE-1847 Delete latest of a null qualifier when non-null qualifiers + exist throws a RuntimeException + HBASE-1850 src/examples/mapred do not compile after HBASE-1822 + HBASE-1853 Each time around the regionserver core loop, we clear the + messages to pass master, even if we failed to deliver them + HBASE-1815 HBaseClient can get stuck in an infinite loop while attempting + to contact a failed regionserver + HBASE-1856 HBASE-1765 broke MapReduce when using Result.list() + (Lars George via Stack) + HBASE-1857 WrongRegionException when setting region online after .META. + split (Cosmin Lehane via Stack) + HBASE-1809 NPE thrown in BoundedRangeFileInputStream + HBASE-1859 Misc shell fixes patch (Kyle Oba via Stack) + HBASE-1865 0.20.0 TableInputFormatBase NPE + HBASE-1866 Scan(Scan) copy constructor does not copy value of + cacheBlocks + HBASE-1869 IndexedTable delete fails when used in conjunction with + RowLock (Keith Thomas via Stack) + HBASE-1858 Master can't split logs created by THBase (Clint Morgan via + Andrew Purtell) + HBASE-1871 Wrong type used in TableMapReduceUtil.initTableReduceJob() + (Lars George via Stack) + HBASE-1883 HRegion passes the wrong minSequenceNumber to + doReconstructionLog (Clint Morgan via Stack) + HBASE-1878 BaseScanner results can't be trusted at all (Related to + hbase-1784) + HBASE-1831 Scanning API must be reworked to allow for fully functional + Filters client-side + HBASE-1890 hbase-1506 where assignment is done at regionserver doesn't + work + HBASE-1889 ClassNotFoundException on trunk for REST + HBASE-1905 Remove unused config. 
hbase.hstore.blockCache.blockSize + HBASE-1906 FilterList of prefix and columnvalue not working properly with + deletes and multiple values + HBASE-1896 WhileMatchFilter.reset should call encapsulated filter reset + HBASE-1912 When adding a secondary index to an existing table, it will + cause NPE during re-indexing (Mingjui Ray Liao via Andrew + Purtell) + HBASE-1916 FindBugs and javac warnings cleanup + HBASE-1908 ROOT not reassigned if only one regionserver left + HBASE-1915 HLog.sync is called way too often, needs to be only called one + time per RPC + HBASE-1777 column length is not checked before saved to memstore + HBASE-1925 IllegalAccessError: Has not been initialized (getMaxSequenceId) + HBASE-1929 If hbase-default.xml is not in CP, zk session timeout is 10 + seconds! + HBASE-1927 Scanners not closed properly in certain circumstances + HBASE-1934 NullPointerException in ClientScanner (Andrew Purtell via Stack) + HBASE-1946 Unhandled exception at regionserver (Dmitriy Lyfar via Stack) + HBASE-1682 IndexedRegion does not properly handle deletes + (Andrew McCall via Clint Morgan and Stack) + HBASE-1953 Overhaul of overview.html (html fixes, typos, consistency) - + no content changes (Lars Francke via Stack) + HBASE-1954 Transactional scans do not see newest put (Clint Morgan via + Stack) + HBASE-1919 code: HRS.delete seems to ignore exceptions it shouldnt + HBASE-1951 Stack overflow when calling HTable.checkAndPut() + when deleting a lot of values + HBASE-1781 Weird behavior of WildcardColumnTracker.checkColumn(), + looks like recursive loop + HBASE-1949 KeyValue expiration by Time-to-Live during major compaction is + broken (Gary Helmling via Stack) + HBASE-1957 Get-s can't set a Filter + HBASE-1928 ROOT and META tables stay in transition state (making the system + not usable) if the designated regionServer dies before the + assignment is complete (Yannis Pavlidis via Stack) + HBASE-1962 Bulk loading script makes regions incorrectly (loadtable.rb) + HBASE-1966 Apply the fix from site/ to remove the forrest dependency on + Java 5 + HBASE-1967 [Transactional] client.TestTransactions.testPutPutScan fails + sometimes -- Temporary fix + HBASE-1841 If multiple of same key in an hfile and they span blocks, may + miss the earlier keys on a lookup + (Schubert Zhang via Stack) + HBASE-1977 Add ts and allow setting VERSIONS when scanning in shell + HBASE-1979 MurmurHash does not yield the same results as the reference C++ + implementation when size % 4 >= 2 (Olivier Gillet via Andrew + Purtell) + HBASE-1999 When HTable goes away, close zk session in shutdown hook or + something... + HBASE-1997 zk tick time bounds maximum zk session time + HBASE-2003 [shell] deleteall ignores column if specified + HBASE-2018 Updates to .META. 
+              blocked under high MemStore load
+   HBASE-1994 Master will lose hlog entries while splitting if region has
+              empty oldlogfile.log (Lars George via Stack)
+   HBASE-2022 NPE in housekeeping kills RS
+   HBASE-2034 [Bulk load tools] loadtable.rb calls an undefined method
+              'descendingIterator' (Ching-Shen Chen via Stack)
+   HBASE-2033 Shell scan 'limit' is off by one
+   HBASE-2040 Fixes to group commit
+   HBASE-2047 Example command in the "Getting Started"
+              documentation doesn't work (Benoit Sigoure via JD)
+   HBASE-2048 Small inconsistency in the "Example API Usage"
+              (Benoit Sigoure via JD)
+   HBASE-2044 HBASE-1822 removed not-deprecated APIs
+   HBASE-1960 Master should wait for DFS to come up when creating
+              hbase.version
+   HBASE-2054 memstore size 0 is >= than blocking -2.0g size
+   HBASE-2064 Cannot disable a table if at the same time the Master is moving
+              its regions around
+   HBASE-2065 Cannot disable a table if any of its region is opening
+              at the same time
+   HBASE-2026 NPE in StoreScanner on compaction
+   HBASE-2072 fs.automatic.close isn't passed to FileSystem
+   HBASE-2075 Master requires HDFS superuser privileges due to waitOnSafeMode
+   HBASE-2077 NullPointerException with an open scanner that expired causing
+              an immediate region server shutdown (Sam Pullara via JD)
+   HBASE-2078 Add JMX settings as commented out lines to hbase-env.sh
+              (Lars George via JD)
+   HBASE-2082 TableInputFormat is ignoring input scan's stop row setting
+              (Scott Wang via Andrew Purtell)
+   HBASE-2068 MetricsRate is missing "registry" parameter
+              (Lars George and Gary Helmling via Stack)
+   HBASE-2093 [stargate] RowSpec parse bug
+   HBASE-2114 Can't start HBase in trunk (JD and Kay Kay via JD)
+   HBASE-2115 ./hbase shell would not launch due to missing jruby dependency
+              (Kay Kay via JD)
+   HBASE-2101 KeyValueSortReducer collapses all values to last passed
+   HBASE-2119 Fix top-level NOTICES.txt file. It's stale.
+ HBASE-2120 [stargate] Unable to delete column families (Greg Lu via Andrew + Purtell) + HBASE-2123 Remove 'master' command-line option from PE + HBASE-2024 [stargate] Deletes not working as expected (Greg Lu via Andrew + Purtell) + HBASE-2122 [stargate] Initializing scanner column families doesn't work + (Greg Lu via Andrew Purtell) + HBASE-2124 Useless exception in HMaster on startup + HBASE-2127 randomWrite mode of PerformanceEvaluation benchmark program + writes only to a small range of keys (Kannan Muthukkaruppan + via Stack) + HBASE-2126 Fix build break - ec2 (Kay Kay via JD) + HBASE-2134 Ivy nit regarding checking with latest snapshots (Kay Kay via + Andrew Purtell) + HBASE-2138 unknown metrics type (Stack via JD) + HBASE-2137 javadoc warnings from 'javadoc' target (Kay Kay via Stack) + HBASE-2135 ant javadoc complains about missing classe (Kay Kay via Stack) + HBASE-2130 bin/* scripts - not to include lib/test/**/*.jar + (Kay Kay via Stack) + HBASE-2140 findbugs issues - 2 performance warnings as suggested by + findbugs (Kay Kay via Stack) + HBASE-2139 findbugs task in build.xml (Kay Kay via Stack) + HBASE-2147 run zookeeper in the same jvm as master during non-distributed + mode + HBASE-65 Thrift Server should have an option to bind to ip address + (Lars Francke via Stack) + HBASE-2146 RPC related metrics are missing in 0.20.3 since recent changes + (Gary Helmling via Lars George) + HBASE-2150 Deprecated HBC(Configuration) constructor doesn't call this() + HBASE-2154 Fix Client#next(int) javadoc + HBASE-2152 Add default jmxremote.{access|password} files into conf + (Lars George and Gary Helmling via Stack) + HBASE-2156 HBASE-2037 broke Scan - only a test for trunk + HBASE-2057 Cluster won't stop (Gary Helmling and JD via JD) + HBASE-2160 Can't put with ts in shell + HBASE-2144 Now does \x20 for spaces + HBASE-2163 ZK dependencies - explicitly add them until ZK artifacts are + published to mvn repository (Kay Kay via Stack) + HBASE-2164 Ivy nit - clean up configs (Kay Kay via Stack) + HBASE-2184 Calling HTable.getTableDescriptor().* on a full cluster takes + a long time (Cristian Ivascu via Stack) + HBASE-2193 Better readability of - hbase.regionserver.lease.period + (Kay Kay via Stack) + HBASE-2199 hbase.client.tableindexed.IndexSpecification, lines 72-73 + should be reversed (Adrian Popescu via Stack) + HBASE-2224 Broken build: TestGetRowVersions.testGetRowMultipleVersions + HBASE-2129 ant tar build broken since switch to Ivy (Kay Kay via Stack) + HBASE-2226 HQuorumPeerTest doesnt run because it doesnt start with the + word Test + HBASE-2230 SingleColumnValueFilter has an ungaurded debug log message + HBASE-2258 The WhileMatchFilter doesn't delegate the call to filterRow() + HBASE-2259 StackOverflow in ExplicitColumnTracker when row has many columns + HBASE-2268 [stargate] Failed tests and DEBUG output is dumped to console + since move to Mavenized build + HBASE-2276 Hbase Shell hcd() method is broken by the replication scope + parameter (Alexey Kovyrin via Lars George) + HBASE-2244 META gets inconsistent in a number of crash scenarios + HBASE-2284 fsWriteLatency metric may be incorrectly reported + (Kannan Muthukkaruppan via Stack) + HBASE-2063 For hfileoutputformat, on timeout/failure/kill clean up + half-written hfile (Ruslan Salyakhov via Stack) + HBASE-2281 Hbase shell does not work when started from the build dir + (Alexey Kovyrin via Stack) + HBASE-2293 CME in RegionManager#isMetaServer + HBASE-2261 The javadoc in WhileMatchFilter and it's tests in TestFilter + are not 
accurate/wrong + HBASE-2299 [EC2] mapreduce fixups for PE + HBASE-2295 Row locks may deadlock with themselves + (dhruba borthakur via Stack) + HBASE-2308 Fix the bin/rename_table.rb script, make it work again + HBASE-2307 hbase-2295 changed hregion size, testheapsize broke... fix it + HBASE-2269 PerformanceEvaluation "--nomapred" may assign duplicate random + seed over multiple testing threads (Tatsuya Kawano via Stack) + HBASE-2287 TypeError in shell (Alexey Kovyrin via Stack) + HBASE-2023 Client sync block can cause 1 thread of a multi-threaded client + to block all others (Karthik Ranganathan via Stack) + HBASE-2305 Client port for ZK has no default (Suraj Varma via Stack) + HBASE-2323 filter.RegexStringComparator does not work with certain bytes + (Benoit Sigoure via Stack) + HBASE-2313 Nit-pick about hbase-2279 shell fixup, if you do get with + non-existant column family, throws lots of exceptions + (Alexey Kovyrin via Stack) + HBASE-2334 Slimming of Maven dependency tree - improves assembly build + speed (Paul Smith via Stack) + HBASE-2336 Fix build broken with HBASE-2334 (Lars Francke via Lars George) + HBASE-2283 row level atomicity (Kannan Muthukkaruppan via Stack) + HBASE-2355 Unsynchronized logWriters map is mutated from several threads in + HLog splitting (Todd Lipcon via Andrew Purtell) + HBASE-2358 Store doReconstructionLog will fail if oldlogfile.log is empty + and won't load region (Cosmin Lehene via Stack) + HBASE-2370 saveVersion.sh doesnt properly grab the git revision + HBASE-2373 Remove confusing log message of how "BaseScanner GET got + different address/startcode than SCAN" + HBASE-2361 WALEdit broke replication scope + HBASE-2365 Double-assignment around split + HBASE-2398 NPE in HLog.append when calling writer.getLength + (Kannan Muthukkaruppan via Stack) + HBASE-2410 spurious warnings from util.Sleeper + HBASE-2335 mapred package docs don't say zookeeper jar is a dependent + HBASE-2417 HCM.locateRootRegion fails hard on "Connection refused" + HBASE-2346 Usage of FilterList slows down scans + HBASE-2341 ZK settings for initLimit/syncLimit should not have been removed + from hbase-default.xml + HBASE-2439 HBase can get stuck if updates to META are blocked + (Kannan Muthukkaruppan via Stack) + HBASE-2451 .META. by-passes cache; BLOCKCACHE=>'false' + HBASE-2453 Revisit compaction policies after HBASE-2248 commit + (Jonathan Gray via Stack) + HBASE-2458 Client stuck in TreeMap,remove (Todd Lipcon via Stack) + HBASE-2460 add_table.rb deletes any tables for which the target table name + is a prefix (Todd Lipcon via Stack) + HBASE-2463 Various Bytes.* functions silently ignore invalid arguments + (Benoit Sigoure via Stack) + HBASE-2443 IPC client can throw NPE if socket creation fails + (Todd Lipcon via Stack) + HBASE-2447 LogSyncer.addToSyncQueue doesn't check if syncer is still + running before waiting (Todd Lipcon via Stack) + HBASE-2494 Does not apply new.name parameter to CopyTable + (Yoonsik Oh via Stack) + HBASE-2481 Client is not getting UnknownScannerExceptions; they are + being eaten (Jean-Daniel Cryans via Stack) + HBASE-2448 Scanner threads are interrupted without acquiring lock properly + (Todd Lipcon via Stack) + HBASE-2491 master.jsp uses absolute links to table.jsp. 
This broke when + master.jsp moved under webapps/master(Cristian Ivascu via Stack) + HBASE-2487 Uncaught exceptions in receiving IPC responses orphan clients + (Todd Lipcon via Stack) + HBASE-2497 ProcessServerShutdown throws NullPointerException for offline + regiond (Miklos Kurucz via Stack) + HBASE-2499 Race condition when disabling a table leaves regions in transition + HBASE-2489 Make the "Filesystem needs to be upgraded" error message more + useful (Benoit Sigoure via Stack) + HBASE-2482 regions in transition do not get reassigned by master when RS + crashes (Todd Lipcon via Stack) + HBASE-2513 hbase-2414 added bug where we'd tight-loop if no root available + HBASE-2503 PriorityQueue isn't thread safe, KeyValueHeap uses it that way + HBASE-2431 Master does not respect generation stamps, may result in meta + getting permanently offlined + HBASE-2515 ChangeTableState considers split&&offline regions as being served + HBASE-2544 Forward port branch 0.20 WAL to TRUNK + HBASE-2546 Specify default filesystem in both the new and old way (needed + if we are to run on 0.20 and 0.21 hadoop) + HBASE-1895 HConstants.MAX_ROW_LENGTH is incorrectly 64k, should be 32k + HBASE-1968 Give clients access to the write buffer + HBASE-2028 Add HTable.incrementColumnValue support to shell + (Lars George via Andrew Purtell) + HBASE-2138 unknown metrics type + HBASE-2551 Forward port fixes that are in branch but not in trunk (part of + the merge of old 0.20 into TRUNK task) -- part 1. + HBASE-2474 Bug in HBASE-2248 - mixed version reads (not allowed by spec) + HBASE-2509 NPEs in various places, HRegion.get, HRS.close + HBASE-2344 InfoServer and hence HBase Master doesn't fully start if you + have HADOOP-6151 patch (Kannan Muthukkaruppan via Stack) + HBASE-2382 Don't rely on fs.getDefaultReplication() to roll HLogs + (Nicolas Spiegelberg via Stack) + HBASE-2415 Disable META splitting in 0.20 (Todd Lipcon via Stack) + HBASE-2421 Put hangs for 10 retries on failed region servers + HBASE-2442 Log lease recovery catches IOException too widely + (Todd Lipcon via Stack) + HBASE-2457 RS gets stuck compacting region ad infinitum + HBASE-2562 bin/hbase doesn't work in-situ in maven + (Todd Lipcon via Stack) + HBASE-2449 Local HBase does not stop properly + HBASE-2539 Cannot start ZK before the rest in tests anymore + HBASE-2561 Scanning .META. while split in progress yields + IllegalArgumentException (Todd Lipcon via Stack) + HBASE-2572 hbase/bin/set_meta_block_caching.rb:72: can't convert + Java::JavaLang::String into String (TypeError) - little + issue with script + HBASE-2483 Some tests do not use ephemeral ports + HBASE-2573 client.HConnectionManager$TableServers logs non-printable + binary bytes (Benoît Sigoure via Stack) + HBASE-2576 TestHRegion.testDelete_mixed() failing on hudson + HBASE-2581 Bloom commit broke some tests... 
fix + HBASE-2582 TestTableSchemaModel not passing after commit of blooms + HBASE-2583 Make webapps work in distributed mode again and make webapps + deploy at / instead of at /webapps/master/master.jsp + HBASE-2590 Failed parse of branch element in saveVersion.sh + HBASE-2591 HBASE-2587 hardcoded the port that dfscluster runs on + HBASE-2519 StoreFileScanner.seek swallows IOEs (Todd Lipcon via Stack) + HBASE-2516 Ugly IOE when region is being closed; rather, should NSRE + (Daniel Ploeg via Stack) + HBASE-2589 TestHRegion.testWritesWhileScanning flaky on trunk + (Todd Lipcon via Stack) + HBASE-2590 Failed parse of branch element in saveVersion.sh + (Benoît Sigoure via Stack) + HBASE-2586 Move hbase webapps to a hbase-webapps dir (Todd Lipcon via + Andrew Purtell) + HBASE-2610 ValueFilter copy pasted javadoc from QualifierFilter + HBASE-2619 HBase shell 'alter' command cannot set table properties to False + (Christo Wilson via Stack) + HBASE-2621 Fix bad link to HFile documentation in javadoc + (Jeff Hammerbacher via Todd Lipcon) + HBASE-2371 Fix 'list' command in shell (Alexey Kovyrin via Todd Lipcon) + HBASE-2620 REST tests don't use ephemeral ports + HBASE-2635 ImmutableBytesWritable ignores offset in several cases + HBASE-2654 Add additional maven repository temporarily to fetch Guava + HBASE-2560 Fix IllegalArgumentException when manually splitting table + from web UI + HBASE-2657 TestTableResource is broken in trunk + HBASE-2662 TestScannerResource.testScannerResource broke in trunk + HBASE-2667 TestHLog.testSplit failing in trunk (Cosmin and Stack) + HBASE-2614 killing server in TestMasterTransitions causes NPEs and test deadlock + HBASE-2615 M/R on bulk imported tables + HBASE-2676 TestInfoServers should use ephemeral ports + HBASE-2616 TestHRegion.testWritesWhileGetting flaky on trunk + HBASE-2684 TestMasterWrongRS flaky in trunk + HBASE-2691 LeaseStillHeldException totally ignored by RS, wrongly named + HBASE-2703 ui not working in distributed context + HBASE-2710 Shell should use default terminal width when autodetection fails + (Kannan Muthukkaruppan via Todd Lipcon) + HBASE-2712 Cached region location that went stale won't recover if + asking for first row + HBASE-2732 TestZooKeeper was broken, HBASE-2691 showed it + HBASE-2670 Provide atomicity for readers even when new insert has + same timestamp as current row. + HBASE-2733 Replacement of LATEST_TIMESTAMP with real timestamp was broken + by HBASE-2353. + HBASE-2734 TestFSErrors should catch all types of exceptions, not just RTE + HBASE-2738 TestTimeRangeMapRed updated now that we keep multiple cells with + same timestamp in MemStore + HBASE-2725 Shutdown hook management is gone in trunk; restore + HBASE-2740 NPE in ReadWriteConsistencyControl + HBASE-2752 Don't retry forever when waiting on too many store files + HBASE-2737 CME in ZKW introduced in HBASE-2694 (Karthik Ranganathan via JD) + HBASE-2756 MetaScanner.metaScan doesn't take configurations + HBASE-2656 HMaster.getRegionTableClosest should not return null for closed + regions + HBASE-2760 Fix MetaScanner TableNotFoundException when scanning starting at + the first row in a table. 
+ HBASE-1025 Reconstruction log playback has no bounds on memory used + HBASE-2757 Fix flaky TestFromClientSide test by forcing region assignment + HBASE-2741 HBaseExecutorService needs to be multi-cluster friendly + (Karthik Ranganathan via JD) + HBASE-2769 Fix typo in warning message for HBaseConfiguration + HBASE-2768 Fix teardown order in TestFilter + HBASE-2763 Cross-port HADOOP-6833 IPC parameter leak bug + HBASE-2758 META region stuck in RS2ZK_REGION_OPENED state + (Karthik Ranganathan via jgray) + HBASE-2767 Fix reflection in tests that was made incompatible by HDFS-1209 + HBASE-2617 Load balancer falls into pathological state if one server under + average - slop; endless churn + HBASE-2729 Interrupted or failed memstore flushes should not corrupt the + region + HBASE-2772 Scan doesn't recover from region server failure + HBASE-2775 Update of hadoop jar in HBASE-2771 broke TestMultiClusters + HBASE-2774 Spin in ReadWriteConsistencyControl eating CPU (load > 40) and + no progress running YCSB on clean cluster startup + HBASE-2785 TestScannerTimeout.test2772 is flaky + HBASE-2787 PE is confused about flushCommits + HBASE-2707 Can't recover from a dead ROOT server if any exceptions happens + during log splitting + HBASE-2501 Refactor StoreFile Code + HBASE-2806 DNS hiccups cause uncaught NPE in HServerAddress#getBindAddress + (Benoit Sigoure via Stack) + HBASE-2806 (small compile fix via jgray) + HBASE-2797 Another NPE in ReadWriteConsistencyControl + HBASE-2831 Fix '$bin' path duplication in setup scripts + (Nicolas Spiegelberg via Stack) + HBASE-2781 ZKW.createUnassignedRegion doesn't make sure existing znode is + in the right state (Karthik Ranganathan via JD) + HBASE-2727 Splits writing one file only is untenable; need dir of recovered + edits ordered by sequenceid + HBASE-2843 Readd bloomfilter test over zealously removed by HBASE-2625 + HBASE-2846 Make rest server be same as thrift and avro servers + HBASE-1511 Pseudo distributed mode in LocalHBaseCluster + (Nicolas Spiegelberg via Stack) + HBASE-2851 Remove testDynamicBloom() unit test + (Nicolas Spiegelberg via Stack) + HBASE-2853 TestLoadIncrementalHFiles fails on TRUNK + HBASE-2854 broken tests on trunk + HBASE-2859 Cleanup deprecated stuff in TestHLog (Alex Newman via Stack) + HBASE-2858 TestReplication.queueFailover fails half the time + HBASE-2863 HBASE-2553 removed an important edge case + HBASE-2866 Region permanently offlined + HBASE-2849 HBase clients cannot recover when their ZooKeeper session + becomes invalid (Benôit Sigoure via Stack) + HBASE-2876 HBase hbck: false positive error reported for parent regions + that are in offline state in meta after a split + HBASE-2815 not able to run the test suite in background because TestShell + gets suspended on tty output (Alexey Kovyrin via Stack) + HBASE-2852 Bloom filter NPE (pranav via jgray) + HBASE-2820 hbck throws an error if HBase root dir isn't on the default FS + HBASE-2884 TestHFileOutputFormat flaky when map tasks generate identical + data + HBASE-2890 Initialize RPC JMX metrics on startup (Gary Helmling via Stack) + HBASE-2755 Duplicate assignment of a region after region server recovery + (Kannan Muthukkaruppan via Stack) + HBASE-2892 Replication metrics aren't updated + HBASE-2461 Split doesn't handle IOExceptions when creating new region + reference files + HBASE-2871 Make "start|stop" commands symmetric for Master & Cluster + (Nicolas Spiegelberg via Stack) + HBASE-2901 HBASE-2461 broke build + HBASE-2823 Entire Row Deletes not stored in Row+Col Bloom + 
(Alexander Georgiev via Stack) + HBASE-2897 RowResultGenerator should handle NoSuchColumnFamilyException + HBASE-2905 NPE when inserting mass data via REST interface (Sandy Yin via + Andrew Purtell) + HBASE-2908 Wrong order of null-check [in TIF] (Libor Dener via Stack) + HBASE-2909 SoftValueSortedMap is broken, can generate NPEs + HBASE-2919 initTableReducerJob: Unused method parameter + (Libor Dener via Stack) + HBASE-2923 Deadlock between HRegion.internalFlushCache and close + HBASE-2927 BaseScanner gets stale HRegionInfo in some race cases + HBASE-2928 Fault in logic in BinaryPrefixComparator leads to + ArrayIndexOutOfBoundsException (pranav via jgray) + HBASE-2924 TestLogRolling doesn't use the right HLog half the time + HBASE-2931 Do not throw RuntimeExceptions in RPC/HbaseObjectWritable + code, ensure we log and rethrow as IOE + (Karthik Ranganathan via Stack) + HBASE-2915 Deadlock between HRegion.ICV and HRegion.close + HBASE-2920 HTable.checkAndPut/Delete doesn't handle null values + HBASE-2944 cannot alter bloomfilter setting for a column family from + hbase shell (Kannan via jgray) + HBASE-2948 bin/hbase shell broken (after hbase-2692) + (Sebastian Bauer via Stack) + HBASE-2954 Fix broken build caused by hbase-2692 commit + HBASE-2918 SequenceFileLogWriter doesnt make it clear if there is no + append by config or by missing lib/feature + HBASE-2799 "Append not enabled" warning should not show if hbase + root dir isn't on DFS + HBASE-2943 major_compact (and other admin commands) broken for .META. + HBASE-2643 Figure how to deal with eof splitting logs + (Nicolas Spiegelberg via Stack) + HBASE-2925 LRU of HConnectionManager.HBASE_INSTANCES breaks if + HBaseConfiguration is changed + (Robert Mahfoud via Stack) + HBASE-2964 Deadlock when RS tries to RPC to itself inside SplitTransaction + HBASE-1485 Wrong or indeterminate behavior when there are duplicate + versions of a column (pranav via jgray) + HBASE-2967 Failed split: IOE 'File is Corrupt!' -- sync length not being + written out to SequenceFile + HBASE-2969 missing sync in HTablePool.getTable() + (Guilherme Mauro Germoglio Barbosa via Stack) + HBASE-2973 NPE in LogCleaner + HBASE-2974 LoadBalancer ArithmeticException: / by zero + HBASE-2975 DFSClient names in master and RS should be unique + HBASE-2978 LoadBalancer IndexOutOfBoundsException + HBASE-2983 TestHLog unit test is mis-comparing an assertion + (Alex Newman via Todd Lipcon) + HBASE-2986 multi writable can npe causing client hang + HBASE-2979 Fix failing TestMultParrallel in hudson build + HBASE-2899 hfile.min.blocksize.size ignored/documentation wrong + HBASE-3006 Reading compressed HFile blocks causes way too many DFS RPC + calls severly impacting performance + (Kannan Muthukkaruppan via Stack) + HBASE-3010 Can't start/stop/start... cluster using new master + HBASE-3015 recovered.edits files not deleted if it only contain edits that + have already been flushed; hurts perf for all future opens of + the region + HBASE-3018 Bulk assignment on startup runs serially through the cluster + servers assigning in bulk to one at a time + HBASE-3023 NPE processing server crash in MetaReader. 
+              getServerUserRegions
+   HBASE-3024 NPE processing server crash in MetaEditor.addDaughter
+   HBASE-3026 Fixup of "missing" daughters on split is too aggressive
+   HBASE-3003 ClassSize constants don't use 'final'
+   HBASE-3002 Fix zookeepers.sh to work properly with strange JVM options
+   HBASE-3028 No basescanner means no GC'ing of split, offlined parent regions
+   HBASE-2989 [replication] RSM won't cleanup after locking if 0 peers
+   HBASE-2992 [replication] MalformedObjectNameException in ReplicationMetrics
+   HBASE-3037 When new master joins running cluster does "Received report from
+              unknown server -- telling it to STOP_REGIONSERVER."
+   HBASE-3039 Stuck in regionsInTransition because rebalance came in at same
+              time as a split
+   HBASE-3042 Use LOG4J in SequenceFileLogReader
+              (Nicolas Spiegelberg via Stack)
+   HBASE-2995 Incorrect dependency on Log class from Jetty
+   HBASE-3038 WALReaderFSDataInputStream.getPos() fails if Filesize > MAX_INT
+              (Nicolas Spiegelberg via Stack)
+   HBASE-3047 If new master crashes, restart is messy
+   HBASE-3054 Remove TestEmptyMetaInfo; it doesn't make sense any more.
+   HBASE-3056 Fix ordering in ZKWatcher constructor to prevent weird race
+              condition
+   HBASE-3057 Race condition when closing regions that causes flakiness in
+              TestRestartCluster
+   HBASE-3058 Fix REST tests on trunk
+   HBASE-3068 IllegalStateException when new server comes online, is given
+              200 regions to open and 200th region gets timed out of regions
+              in transition
+   HBASE-3064 Long sleeping in HConnectionManager after thread is interrupted
+              (Bruno Dumon via Stack)
+   HBASE-2753 Remove sorted() methods from Result now that Gets are Scans
+   HBASE-3059 TestReadWriteConsistencyControl occasionally hangs (Hairong
+              via Ryan)
+   HBASE-2906 [rest/stargate] URI decoding in RowResource
+   HBASE-3008 Memstore.updateColumnValue passes wrong flag to heapSizeChange
+              (Causes memstore size to go negative)
+   HBASE-3089 REST tests are broken locally and up in hudson
+   HBASE-3062 ZooKeeper KeeperException$ConnectionLossException is a
+              "recoverable" exception; we should retry a while on server
+              startup at least.
+ HBASE-3074 Zookeeper test failing on hudson + HBASE-3089 REST tests are broken locally and up in hudson + HBASE-3085 TestSchemaResource broken on TRUNK up on HUDSON + HBASE-3080 TestAdmin hanging on hudson + HBASE-3063 TestThriftServer failing in TRUNK + HBASE-3094 Fixes for miscellaneous broken tests + HBASE-3060 [replication] Reenable replication on trunk with unit tests + HBASE-3041 [replication] ReplicationSink shouldn't kill the whole RS when + it fails to replicate + HBASE-3044 [replication] ReplicationSource won't cleanup logs if there's + nothing to replicate + HBASE-3113 Don't reassign regions if cluster is being shutdown + HBASE-2933 Skip EOF Errors during Log Recovery + (Nicolas Spiegelberg via Stack) + HBASE-3081 Log Splitting & Replay: Distinguish between Network IOE and + Parsing IOE (Nicolas Spiegelberg via Stack) + HBASE-3098 TestMetaReaderEditor is broken in TRUNK; hangs + HBASE-3110 TestReplicationSink failing in TRUNK up on Hudson + HBASE-3101 bin assembly doesn't include -tests or -source jars + HBASE-3121 [rest] Do not perform cache control when returning results + HBASE-2669 HCM.shutdownHook causes data loss with + hbase.client.write.buffer != 0 + HBASE-2985 HRegionServer.multi() no longer calls HRegion.put(List) when + possible + HBASE-3031 CopyTable MR job named "Copy Table" in Driver + HBASE-2658 REST (stargate) TableRegionModel Regions need to be updated to + work w/ new region naming convention from HBASE-2531 + HBASE-3140 Rest schema modification throw null pointer exception + (David Worms via Stack) + HBASE-2998 rolling-restart.sh shouldn't rely on zoo.cfg + HBASE-3145 importtsv fails when the line contains no data + (Kazuki Ohta via Todd Lipcon) + HBASE-2984 [shell] Altering a family shouldn't reset to default unchanged + attributes + HBASE-3143 Adding the tests' hbase-site.xml to the jar breaks some clients + HBASE-3139 Server shutdown processor stuck because meta not online + HBASE-3136 Stale reads from ZK can break the atomic CAS operations we + have in ZKAssign + HBASE-2753 Remove sorted() methods from Result now that Gets are Scans + HBASE-3147 Regions stuck in transition after rolling restart, perpetual + timeout handling but nothing happens + HBASE-3158 Bloom File Writes Broken if keySize is large + (Nicolas Spiegelberg via Stack) + HBASE-3155 HFile.appendMetaBlock() uses wrong comparator + (Nicolas Spiegelberg via Stack) + HBASE-3012 TOF doesn't take zk client port for remote clusters + HBASE-3159 Double play of OpenedRegionHandler for a single region + and assorted fixes around this + TestRollingRestart added + HBASE-3160 Use more intelligent priorities for PriorityCompactionQueue + (Nicolas Spiegelberg via Stack) + HBASE-3172 Reverse order of AssignmentManager and MetaNodeTracker in + ZooKeeperWatcher + HBASE-2406 Define semantics of cell timestamps/versions + HBASE-3175 Commit of HBASE-3160 broke TestPriorityCompactionQueue up on + hudson (nicolas via jgray) + HBASE-3163 If we timeout PENDING_CLOSE and send another closeRegion RPC, + need to handle NSRE from RS (comes as a RemoteException) + HBASE-3164 Handle case where we open META, ROOT has been closed but + znode location not deleted yet, and try to update META + location in ROOT + HBASE-2006 Documentation of hbase-site.xml parameters + HBASE-2672 README.txt should contain basic information like how to run + or build HBase + HBASE-3179 Enable ReplicationLogsCleaner only if replication is, + and fix its test + HBASE-3185 User-triggered compactions are triggering splits! 
+ HBASE-1932 Encourage use of 'lzo' compression... add the wiki page to + getting started + HBASE-3151 NPE when trying to read regioninfo from .META. + HBASE-3191 FilterList with MUST_PASS_ONE and SCVF isn't working + (Stefan Seelmann via Stack) + HBASE-2471 Splitting logs, we'll make an output file though the + region no longer exists + HBASE-3095 Client needs to reconnect if it expires its zk session + HBASE-2935 Refactor "Corrupt Data" Tests in TestHLogSplit + (Alex Newman via Stack) + HBASE-3202 Closing a region, if we get a ConnectException, handle + it rather than abort + HBASE-3198 Log rolling archives files prematurely + HBASE-3203 We can get an order to open a region while shutting down + and it'll hold up regionserver shutdown + HBASE-3204 Reenable deferred log flush + HBASE-3195 [rest] Fix TestTransform breakage on Hudson + HBASE-3205 TableRecordReaderImpl.restart NPEs when first next is restarted + HBASE-3208 HLog.findMemstoresWithEditsOlderThan needs to look for edits + that are equal to too + HBASE-3141 Master RPC server needs to be started before an RS can check in + HBASE-3112 Enable and disable of table needs a bit of loving in new master + HBASE-3207 If we get IOException when closing a region, we should still + remove it from online regions and complete the close in ZK + HBASE-3199 large response handling: some fixups and cleanups + HBASE-3212 More testing of enable/disable uncovered base condition not in + place; i.e. that only one enable/disable runs at a time + HBASE-2898 MultiPut makes proper error handling impossible and leads to + corrupted data + HBASE-3213 If do abort of backup master will get NPE instead of graceful + abort + HBASE-3214 TestMasterFailover.testMasterFailoverWithMockedRITOnDeadRS is + failing (Gary via jgray) + HBASE-3216 Move HBaseFsck from client to util + HBASE-3219 Split parents are reassigned on restart and on disable/enable + HBASE-3222 Regionserver region listing in UI is no longer ordered + HBASE-3221 Race between splitting and disabling + HBASE-3224 NPE in KeyValue$KVComparator.compare when compacting + HBASE-3233 Fix Long Running Stats + HBASE-3232 Fix KeyOnlyFilter + Add Value Length (Nicolas via Ryan) + HBASE-3235 Intermittent incrementColumnValue failure in TestHRegion + (Gary via Ryan) + HBASE-3241 check to see if we exceeded hbase.regionserver.maxlogs limit is + incorrect (Kannan Muthukkaruppan via JD) + HBASE-3239 Handle null regions to flush in HLog.cleanOldLogs (Kannan + Muthukkaruppan via JD) + HBASE-3237 Split request accepted -- BUT CURRENTLY A NOOP + HBASE-3252 TestZooKeeperNodeTracker sometimes fails due to a race condition + in test notification (Gary Helmling via Andrew Purtell) + HBASE-3253 Thrift's missing from all the repositories in pom.xml + HBASE-3258 EOF when version file is empty + HBASE-3259 Can't kill the region servers when they wait on the master or + the cluster state znode + HBASE-3249 Typing 'help shutdown' in the shell shouldn't shutdown the cluster + HBASE-3262 TestHMasterRPCException uses non-ephemeral port for master + HBASE-3272 Remove no longer used options + HBASE-3269 HBase table truncate semantics seems broken as "disable" table + is now async by default + HBASE-3275 [rest] No gzip/deflate content encoding support + HBASE-3261 NPE out of HRS.run at startup when clock is out of sync + HBASE-3277 HBase Shell zk_dump command broken + HBASE-3267 close_region shell command breaks region + HBASE-3265 Regionservers waiting for ROOT while Master waiting for RegionServers + HBASE-3263 Stack overflow in 
AssignmentManager + HBASE-3234 hdfs-724 "breaks" TestHBaseTestingUtility multiClusters + HBASE-3286 Master passes IP and not hostname back to region server + HBASE-3297 If rows in .META. with no HRegionInfo cell, then hbck fails read + of .META. + HBASE-3294 WARN org.apache.hadoop.hbase.regionserver.Store: Not in set + (double-remove?) org.apache.hadoop.hbase.regionserver.StoreScanner@76607d3d + HBASE-3299 If failed open, we don't output the IOE + HBASE-3291 If split happens while regionserver is going down, we can stick open. + HBASE-3295 Dropping a 1k+ regions table likely ends in a client socket timeout + and it's very confusing + HBASE-3301 Treat java.net.SocketTimeoutException same as ConnectException + assigning/unassigning regions + HBASE-3296 Newly created table ends up disabled instead of assigned + HBASE-3304 Get spurious master fails during bootup + HBASE-3298 Regionserver can close during a split causing double assignment + HBASE-3309 " Not running balancer because dead regionserver processing" is a lie + HBASE-3314 [shell] 'move' is broken + HBASE-3315 Add debug output for when balancer makes bad balance + HBASE-3278 AssertionError in LoadBalancer + HBASE-3318 Split rollback leaves parent with writesEnabled=false + HBASE-3334 Refresh our hadoop jar because of HDFS-1520 + HBASE-3347 Can't truncate/disable table that has rows in .META. that have empty + info:regioninfo column + HBASE-3321 Replication.join shouldn't clear the logs znode + HBASE-3352 enabling a non-existent table from shell prints no error + HBASE-3353 table.jsp doesn't handle entries in META without server info + HBASE-3351 ReplicationZookeeper goes to ZK every time a znode is modified + HBASE-3326 Replication state's znode should be created else it + defaults to false + HBASE-3355 Stopping a stopped cluster leaks an HMaster + HBASE-3356 Add more checks in replication if RS is stopped + HBASE-3358 Recovered replication queue wait on themselves when terminating + HBASE-3359 LogRoller not added as a WAL listener when replication is enabled + HBASE-3360 ReplicationLogCleaner is enabled by default in 0.90 -- causes NPE + HBASE-3363 ReplicationSink should batch delete + HBASE-3365 EOFE contacting crashed RS causes Master abort + HBASE-3362 If .META. offline between OPENING and OPENED, then wrong server + location in .META. 
is possible + HBASE-3368 Split message can come in before region opened message; results + in 'Region has been PENDING_CLOSE for too long' cycle + HBASE-3366 WALObservers should be notified before the lock + HBASE-3367 Failed log split not retried + HBASE-3370 ReplicationSource.openReader fails to locate HLogs when they + aren't split yet + HBASE-3371 Race in TestReplication can make it fail + HBASE-3323 OOME in master splitting logs + HBASE-3374 Our jruby jar has *GPL jars in it; fix + HBASE-3343 Server not shutting down after losing log lease + HBASE-3381 Interrupt of a region open comes across as a successful open + HBASE-3386 NPE in TableRecordReaderImpl.restart + HBASE-3388 NPE processRegionInTransition(AssignmentManager.java:264) + doing rolling-restart.sh + HBASE-3383 [0.90RC1] bin/hbase script displays "no such file" warning on + target/cached_classpath.txt + HBASE-3344 Master aborts after RPC to server that was shutting down + HBASE-3408 AssignmentManager NullPointerException + HBASE-3402 Web UI shows two META regions + HBASE-3409 Failed server shutdown processing when retrying hlog split + HBASE-3412 HLogSplitter should handle missing HLogs + HBASE-3420 Handling a big rebalance, we can queue multiple instances of + a Close event; messes up state + HBASE-3423 hbase-env.sh over-rides HBASE_OPTS incorrectly (Ted Dunning via + Andrew Purtell) + HBASE-3407 hbck should pause between fixing and re-checking state + HBASE-3401 Region IPC operations should be high priority + HBASE-3430 hbase-daemon.sh should clean up PID files on process stop + + + IMPROVEMENTS + HBASE-1760 Cleanup TODOs in HTable + HBASE-1759 Ability to specify scanner caching on a per-scan basis + (Ken Weiner via jgray) + HBASE-1763 Put writeToWAL methods do not have proper getter/setter names + (second commit to fix compile error in hregion) + HBASE-1770 HTable.setWriteBufferSize does not flush the writeBuffer when + its size is set to a value lower than its current size. 
+ (Mathias via jgray) + HBASE-1771 PE sequentialWrite is 7x slower because of + MemStoreFlusher#checkStoreFileCount + HBASE-1758 Extract interface out of HTable (Vaibhav Puranik via Andrew + Purtell) + HBASE-1776 Make rowcounter enum public + HBASE-1276 [testing] Upgrade to JUnit 4.x and use @BeforeClass + annotations to optimize tests + HBASE-1800 Too many ZK connections + HBASE-1819 Update to 0.20.1 hadoop and zk 3.2.1 + HBASE-1820 Update jruby from 1.2 to 1.3.1 + HBASE-1687 bin/hbase script doesn't allow for different memory settings + for each daemon type + HBASE-1823 Ability for Scanners to bypass the block cache + HBASE-1827 Add disabling block cache scanner flag to the shell + HBASE-1835 Add more delete tests + HBASE-1574 Client and server APIs to do batch deletes + HBASE-1833 hfile.main fixes + HBASE-1684 Backup (Export/Import) contrib tool for 0.20 + HBASE-1860 Change HTablePool#createHTable from private to protected + HBASE-48 Bulk load tools + HBASE-1855 HMaster web application doesn't show the region end key in the + table detail page (Andrei Dragomir via Stack) + HBASE-1870 Bytes.toFloat(byte[], int) is marked private + HBASE-1874 Client Scanner mechanism that is used for HbaseAdmin methods + (listTables, tableExists), is very slow if the client is far + away from the HBase cluster (Andrei Dragomir via Stack) + HBASE-1879 ReadOnly transactions generate WAL activity (Clint Morgan via + Stack) + HBASE-1875 Compression test utility + HBASE-1832 Faster enable/disable/delete + HBASE-1481 Add fast row key only scanning + HBASE-1506 [performance] Make splits faster + HBASE-1722 Add support for exporting HBase metrics via JMX + (Gary Helming via Stack) + HBASE-1899 Use scanner caching in shell count + HBASE-1887 Update hbase trunk to latests on hadoop 0.21 branch so we can + all test sync/append + HBASE-1902 Let PerformanceEvaluation support setting tableName and compress + algorithm (Schubert Zhang via Stack) + HBASE-1885 Simplify use of IndexedTable outside Java API + (Kevin Patterson via Stack) + HBASE-1903 Enable DEBUG by default + HBASE-1907 Version all client writables + HBASE-1914 hlog should be able to set replication level for the log + indendently from any other files + HBASE-1537 Intra-row scanning + HBASE-1918 Don't do DNS resolving in .META. 
scanner for each row + HBASE-1756 Refactor HLog (changing package first) + HBASE-1926 Remove unused xmlenc jar from trunk + HBASE-1936 HLog group commit + HBASE-1921 When the Master's session times out and there's only one, + cluster is wedged + HBASE-1942 Update hadoop jars in trunk; update to r831142 + HBASE-1943 Remove AgileJSON; unused + HBASE-1944 Add a "deferred log flush" attribute to HTD + HBASE-1945 Remove META and ROOT memcache size bandaid + HBASE-1947 If HBase starts/stops often in less than 24 hours, + you end up with lots of store files + HBASE-1829 Make use of start/stop row in TableInputFormat + (Lars George via Stack) + HBASE-1867 Tool to regenerate an hbase table from the data files + HBASE-1904 Add tutorial for installing HBase on Windows using Cygwin as + a test and development environment (Wim Van Leuven via Stack) + HBASE-1963 Output to multiple tables from Hadoop MR without use of HTable + (Kevin Peterson via Andrew Purtell) + HBASE-1975 SingleColumnValueFilter: Add ability to match the value of + previous versions of the specified column + (Jeremiah Jacquet via Stack) + HBASE-1971 Unit test the full WAL replay cycle + HBASE-1970 Export does one version only; make it configurable how many + it does + HBASE-1987 The Put object has no simple read methods for checking what + has already been added (Ryan Smith via Stack) + HBASE-1985 change HTable.delete(ArrayList) to HTable.delete(List) + HBASE-1958 Remove "# TODO: PUT BACK !!! "${HADOOP_HOME}"/bin/hadoop + dfsadmin -safemode wait" + HBASE-2011 Add zktop like output to HBase's master UI (Lars George via + Andrew Purtell) + HBASE-1995 Add configurable max value size check (Lars George via Andrew + Purtell) + HBASE-2017 Set configurable max value size check to 10MB + HBASE-2029 Reduce shell exception dump on console + (Lars George and J-D via Stack) + HBASE-2027 HConnectionManager.HBASE_INSTANCES leaks TableServers + (Dave Latham via Stack) + HBASE-2013 Add useful helpers to HBaseTestingUtility.java (Lars George + via J-D) + HBASE-2031 When starting HQuorumPeer, try to match on more than 1 address + HBASE-2043 Shell's scan broken + HBASE-2044 HBASE-1822 removed not-deprecated APIs + HBASE-2049 Cleanup HLog binary log output (Dave Latham via Stack) + HBASE-2052 Make hbase more 'live' when comes to noticing table creation, + splits, etc., for 0.20.3 + HBASE-2059 Break out WAL reader and writer impl from HLog + HBASE-2060 Missing closing tag in mapreduce package info (Lars George via + Andrew Purtell) + HBASE-2028 Add HTable.incrementColumnValue support to shell (Lars George + via Andrew Purtell) + HBASE-2062 Metrics documentation outdated (Lars George via JD) + HBASE-2045 Update trunk and branch zk to just-release 3.2.2. 
+ HBASE-2074 Improvements to the hadoop-config script (Bassam Tabbara via + Stack) + HBASE-2076 Many javadoc warnings + HBASE-2068 MetricsRate is missing "registry" parameter (Lars George via JD) + HBASE-2025 0.20.2 accessed from older client throws + UndeclaredThrowableException; frustrates rolling upgrade + HBASE-2081 Set the retries higher in shell since client pause is lower + HBASE-1956 Export HDFS read and write latency as a metric + HBASE-2036 Use Configuration instead of HBaseConfiguration (Enis Soztutar + via Stack) + HBASE-2085 StringBuffer -> StringBuilder - conversion of references as + necessary (Kay Kay via Stack) + HBASE-2052 Upper bound of outstanding WALs can be overrun + HBASE-2086 Job(configuration,String) deprecated (Kay Kay via Stack) + HBASE-1996 Configure scanner buffer in bytes instead of number of rows + (Erik Rozendaal and Dave Latham via Stack) + HBASE-2090 findbugs issues (Kay Kay via Stack) + HBASE-2089 HBaseConfiguration() ctor. deprecated (Kay Kay via Stack) + HBASE-2035 Binary values are formatted wrong in shell + HBASE-2095 TIF shuold support more confs for the scanner (Bassam Tabbara + via Andrew Purtell) + HBASE-2107 Upgrading Lucene 2.2 to Lucene 3.0.0 (Kay Kay via Stack) + HBASE-2111 Move to ivy broke our being able to run in-place; i.e. + ./bin/start-hbase.sh in a checkout + HBASE-2136 Forward-port the old mapred package + HBASE-2133 Increase default number of client handlers + HBASE-2109 status 'simple' should show total requests per second, also + the requests/sec is wrong as is + HBASE-2151 Remove onelab and include generated thrift classes in javadoc + (Lars Francke via Stack) + HBASE-2149 hbase.regionserver.global.memstore.lowerLimit is too low + HBASE-2157 LATEST_TIMESTAMP not replaced by current timestamp in KeyValue + (bulk loading) + HBASE-2153 Publish generated HTML documentation for Thrift on the website + (Lars Francke via Stack) + HBASE-1373 Update Thrift to use compact/framed protocol (Lars Francke via + Stack) + HBASE-2172 Add constructor to Put for row key and timestamp + (Lars Francke via Stack) + HBASE-2178 Hooks for replication + HBASE-2180 Bad random read performance from synchronizing + hfile.fddatainputstream + HBASE-2194 HTable - put(Put) , put(ListHRS problem, regions are + not reassigned + HBASE-1568 Client doesnt consult old row filter interface in + filterSaysStop() - could result in NPE or excessive scanning + HBASE-1564 in UI make host addresses all look the same -- not IP sometimes + and host at others + HBASE-1567 cant serialize new filters + HBASE-1585 More binary key/value log output cleanup + (Lars George via Stack) + HBASE-1563 incrementColumnValue does not write to WAL (Jon Gray via Stack) + HBASE-1569 rare race condition can take down a regionserver + HBASE-1450 Scripts passed to hbase shell do not have shell context set up + for them + HBASE-1566 using Scan(startRow,stopRow) will cause you to iterate the + entire table + HBASE-1560 TIF can't seem to find one region + HBASE-1580 Store scanner does not consult filter.filterRow at end of scan + (Clint Morgan via Stack) + HBASE-1437 broken links in hbase.org + HBASE-1582 Translate ColumnValueFilter and RowFilterSet to the new Filter + interface + HBASE-1594 Fix scan addcolumns after hbase-1385 commit (broke hudson build) + HBASE-1595 hadoop-default.xml and zoo.cfg in hbase jar + HBASE-1602 HRegionServer won't go down since we added in new LruBlockCache + HBASE-1608 TestCachedBlockQueue failing on some jvms (Jon Gray via Stack) + HBASE-1615 HBASE-1597 introduced a bug 
when compacting after a split + (Jon Gray via Stack) + HBASE-1616 Unit test of compacting referenced StoreFiles (Jon Gray via + Stack) + HBASE-1618 Investigate further into the MemStoreFlusher StoreFile limit + (Jon Gray via Stack) + HBASE-1625 Adding check to Put.add(KeyValue), to see that it has the same + row as when instantiated (Erik Holstad via Stack) + HBASE-1629 HRS unable to contact master + HBASE-1633 Can't delete in TRUNK shell; makes it hard doing admin repairs + HBASE-1641 Stargate build.xml causes error in Eclipse + HBASE-1627 TableInputFormatBase#nextKeyValue catches the wrong exception + (Doğacan Güney via Stack) + HBASE-1644 Result.row is cached in getRow; this breaks MapReduce + (Doğacan Güney via Stack) + HBASE-1639 clean checkout with empty hbase-site.xml, zk won't start + HBASE-1646 Scan-s can't set a Filter (Doğacan Güney via Stack) + HBASE-1649 ValueFilter may not reset its internal state + (Doğacan Güney via Stack) + HBASE-1651 client is broken, it requests ROOT region location from ZK too + much + HBASE-1650 HBASE-1551 broke the ability to manage non-regionserver + start-up/shut down. ie: you cant start/stop thrift on a cluster + anymore + HBASE-1658 Remove UI refresh -- its annoying + HBASE-1659 merge tool doesnt take binary regions with \x escape format + HBASE-1663 Request compaction only once instead of every time 500ms each + time we cycle the hstore.getStorefilesCount() > + this.blockingStoreFilesNumber loop + HBASE-1058 Disable 1058 on catalog tables + HBASE-1583 Start/Stop of large cluster untenable + HBASE-1668 hbase-1609 broke TestHRegion.testScanSplitOnRegion unit test + HBASE-1669 need dynamic extensibility of HBaseRPC code maps and interface + lists (Clint Morgan via Stack) + HBASE-1359 After a large truncating table HBase becomes unresponsive + HBASE-1215 0.19.0 -> 0.20.0 migration (hfile, HCD changes, HSK changes) + HBASE-1689 Fix javadoc warnings and add overview on client classes to + client package + HBASE-1680 FilterList writable only works for HBaseObjectWritable + defined types (Clint Morgan via Stack and Jon Gray) + HBASE-1607 transactions / indexing fixes: trx deletes not handeled, index + scan can't specify stopRow (Clint Morgan via Stack) + HBASE-1693 NPE close_region ".META." in shell + HBASE-1706 META row with missing HRI breaks UI + HBASE-1709 Thrift getRowWithColumns doesn't accept column-family only + (Mathias Lehmann via Stack) + HBASE-1692 Web UI is extremely slow / freezes up if you have many tables + HBASE-1686 major compaction can create empty store files, causing AIOOB + when trying to read + HBASE-1705 Thrift server: deletes in mutateRow/s don't delete + (Tim Sell and Ryan Rawson via Stack) + HBASE-1703 ICVs across /during a flush can cause multiple keys with the + same TS (bad) + HBASE-1671 HBASE-1609 broke scanners riding across splits + HBASE-1717 Put on client-side uses passed-in byte[]s rather than always + using copies + HBASE-1647 Filter#filterRow is called too often, filters rows it shouldn't + have (Doğacan Güney via Ryan Rawson and Stack) + HBASE-1718 Reuse of KeyValue during log replay could cause the wrong + data to be used + HBASE-1573 Holes in master state change; updated startcode and server + go into .META. 
but catalog scanner just got old values (redux) + HBASE-1534 Got ZooKeeper event, state: Disconnected on HRS and then NPE + on reinit + HBASE-1725 Old TableMap interface's definitions are not generic enough + (Doğacan Güney via Stack) + HBASE-1732 Flag to disable regionserver restart + HBASE-1727 HTD and HCD versions need update + HBASE-1604 HBaseClient.getConnection() may return a broken connection + without throwing an exception (Eugene Kirpichov via Stack) + HBASE-1737 Regions unbalanced when adding new node + HBASE-1739 hbase-1683 broke splitting; only split three logs no matter + what N was + HBASE-1745 [tools] Tool to kick region out of inTransistion + HBASE-1757 REST server runs out of fds + HBASE-1768 REST server has upper limit of 5k PUT + HBASE-1766 Add advanced features to HFile.main() to be able to analyze + storefile problems + HBASE-1761 getclosest doesn't understand delete family; manifests as + "HRegionInfo was null or empty in .META" A.K.A the BS problem + HBASE-1738 Scanner doesnt reset when a snapshot is created, could miss + new updates into the 'kvset' (active part) + HBASE-1767 test zookeeper broken in trunk and 0.20 branch; broken on + hudson too + HBASE-1780 HTable.flushCommits clears write buffer in finally clause + HBASE-1784 Missing rows after medium intensity insert + HBASE-1809 NPE thrown in BoundedRangeFileInputStream + HBASE-1810 ConcurrentModificationException in region assignment + (Mathias Herberts via Stack) + HBASE-1804 Puts are permitted (and stored) when including an appended colon + HBASE-1715 Compaction failure in ScanWildcardColumnTracker.checkColumn + HBASE-2352 Small values for hbase.client.retries.number and + ipc.client.connect.max.retries breaks long ops in hbase shell + (Alexey Kovyrin via Stack) + HBASE-2531 32-bit encoding of regionnames waaaaaaayyyyy too susceptible to + hash clashes (Kannan Muthukkaruppan via Stack) + + IMPROVEMENTS + HBASE-1089 Add count of regions on filesystem to master UI; add percentage + online as difference between whats open and whats on filesystem + (Samuel Guo via Stack) + HBASE-1130 PrefixRowFilter (Michael Gottesman via Stack) + HBASE-1139 Update Clover in build.xml + HBASE-876 There are a large number of Java warnings in HBase; part 1, + part 2, part 3, part 4, part 5, part 6, part 7 and part 8 + (Evgeny Ryabitskiy via Stack) + HBASE-896 Update jruby from 1.1.2 to 1.1.6 + HBASE-1031 Add the Zookeeper jar + HBASE-1142 Cleanup thrift server; remove Text and profuse DEBUG messaging + (Tim Sell via Stack) + HBASE-1064 HBase REST xml/json improvements (Brian Beggs working of + initial Michael Gottesman work via Stack) + HBASE-5121 Fix shell usage for format.width + HBASE-845 HCM.isTableEnabled doesn't really tell if it is, or not + HBASE-903 [shell] Can't set table descriptor attributes when I alter a + table + HBASE-1166 saveVersion.sh doesn't work with git (Nitay Joffe via Stack) + HBASE-1167 JSP doesn't work in a git checkout (Nitay Joffe via Andrew + Purtell) + HBASE-1178 Add shutdown command to shell + HBASE-1184 HColumnDescriptor is too restrictive with family names + (Toby White via Andrew Purtell) + HBASE-1180 Add missing import statements to SampleUploader and remove + unnecessary @Overrides (Ryan Smith via Andrew Purtell) + HBASE-1191 ZooKeeper ensureParentExists calls fail + on absolute path (Nitay Joffe via Jean-Daniel Cryans) + HBASE-1187 After disabling/enabling a table, the regions seems to + be assigned to only 1-2 region servers + HBASE-1210 Allow truncation of output for scan and get commands in 
+              shell
+              (Lars George via Stack)
+   HBASE-1221 When using ant -projecthelp to build HBase not all the important
+              options show up (Erik Holstad via Stack)
+   HBASE-1189 Changing the map type used internally for HbaseMapWritable
+              (Erik Holstad via Stack)
+   HBASE-1188 Memory size of Java Objects - Make cacheable objects implement
+              HeapSize (Erik Holstad via Stack)
+   HBASE-1230 Document installation of HBase on Windows
+   HBASE-1241 HBase additions to ZooKeeper part 1 (Nitay Joffe via JD)
+   HBASE-1231 Today, going from a RowResult to a BatchUpdate requires some
+              data processing even though they are pretty much the same thing
+              (Erik Holstad via Stack)
+   HBASE-1240 Would be nice if RowResult could be comparable
+              (Erik Holstad via Stack)
+   HBASE-803  Atomic increment operations (Ryan Rawson and Jon Gray via Stack)
+              Part 1 and part 2 -- fix for a crash.
+   HBASE-1252 Make atomic increment perform a binary increment
+              (Jonathan Gray via Stack)
+   HBASE-1258,1259 ganglia metrics for 'requests' is confusing
+              (Ryan Rawson via Stack)
+   HBASE-1265 HLogEdit static constants should be final (Nitay Joffe via
+              Stack)
+   HBASE-1244 ZooKeeperWrapper constants cleanup (Nitay Joffe via Stack)
+   HBASE-1262 Eclipse warnings, including performance related things like
+              synthetic accessors (Nitay Joffe via Stack)
+   HBASE-1273 ZooKeeper WARN spits out lots of useless messages
+              (Nitay Joffe via Stack)
+   HBASE-1285 Forcing compactions should be available via thrift
+              (Tim Sell via Stack)
+   HBASE-1186 Memory-aware Maps with LRU eviction for cell cache
+              (Jonathan Gray via Andrew Purtell)
+   HBASE-1205 RegionServers should find new master when a new master comes up
+              (Nitay Joffe via Andrew Purtell)
+   HBASE-1309 HFile rejects key in Memcache with empty value
+   HBASE-1331 Lower the default scanner caching value
+   HBASE-1235 Add table enabled status to shell and UI
+              (Lars George via Stack)
+   HBASE-1333 RowCounter updates
+   HBASE-1195 If HBase directory exists but version file is inexistent, still
+              proceed with bootstrapping (Evgeny Ryabitskiy via Stack)
+   HBASE-1301 HTable.getRow() returns null if the row does not exist
+              (Rong-en Fan via Stack)
+   HBASE-1176 Javadocs in HBA should be clear about which functions are
+              asynchronous and which are synchronous
+              (Evgeny Ryabitskiy via Stack)
+   HBASE-1260 Bytes utility class changes: remove usage of ByteBuffer and
+              provide additional ByteBuffer primitives (Jon Gray via Stack)
+   HBASE-1183 New MR splitting algorithm and other new features need a way to
+              split a key range in N chunks (Jon Gray via Stack)
+   HBASE-1350 New method in HTable.java to return start and end keys for
+              regions in a table (Vimal Mathew via Stack)
+   HBASE-1271 Allow multiple tests to run on one machine
+              (Evgeny Ryabitskiy via Stack)
+   HBASE-1112 we will lose data if the table name happens to be the logs' dir
+              name (Samuel Guo via Stack)
+   HBASE-889  The current Thrift API does not allow a new scanner to be
+              created without supplying a column list unlike the other APIs.
+ (Tim Sell via Stack) + HBASE-1341 HTable pooler + HBASE-1379 re-enable LZO using hadoop-gpl-compression library + (Ryan Rawson via Stack) + HBASE-1383 hbase shell needs to warn on deleting multi-region table + HBASE-1286 Thrift should support next(nbRow) like functionality + (Alex Newman via Stack) + HBASE-1392 change how we build/configure lzocodec (Ryan Rawson via Stack) + HBASE-1397 Better distribution in the PerformanceEvaluation MapReduce + when rows run to the Billions + HBASE-1393 Narrow synchronization in HLog + HBASE-1404 minor edit of regionserver logging messages + HBASE-1405 Threads.shutdown has unnecessary branch + HBASE-1407 Changing internal structure of ImmutableBytesWritable + contructor (Erik Holstad via Stack) + HBASE-1345 Remove distributed mode from MiniZooKeeper (Nitay Joffe via + Stack) + HBASE-1414 Add server status logging chore to ServerManager + HBASE-1379 Make KeyValue implement Writable + (Erik Holstad and Jon Gray via Stack) + HBASE-1380 Make KeyValue implement HeapSize + (Erik Holstad and Jon Gray via Stack) + HBASE-1413 Fall back to filesystem block size default if HLog blocksize is + not specified + HBASE-1417 Cleanup disorientating RPC message + HBASE-1424 have shell print regioninfo and location on first load if + DEBUG enabled + HBASE-1008 [performance] The replay of logs on server crash takes way too + long + HBASE-1394 Uploads sometimes fall to 0 requests/second (Binding up on + HLog#append?) + HBASE-1429 Allow passing of a configuration object to HTablePool + HBASE-1432 LuceneDocumentWrapper is not public + HBASE-1401 close HLog (and open new one) if there hasnt been edits in N + minutes/hours + HBASE-1420 add abliity to add and remove (table) indexes on existing + tables (Clint Morgan via Stack) + HBASE-1430 Read the logs in batches during log splitting to avoid OOME + HBASE-1017 Region balancing does not bring newly added node within + acceptable range (Evgeny Ryabitskiy via Stack) + HBASE-1454 HBaseAdmin.getClusterStatus + HBASE-1236 Improve readability of table descriptions in the UI + (Lars George and Alex Newman via Stack) + HBASE-1455 Update DemoClient.py for thrift 1.0 (Tim Sell via Stack) + HBASE-1464 Add hbase.regionserver.logroll.period to hbase-default + HBASE-1192 LRU-style map for the block cache (Jon Gray and Ryan Rawson + via Stack) + HBASE-1466 Binary keys are not first class citizens + (Ryan Rawson via Stack) + HBASE-1445 Add the ability to start a master from any machine + HBASE-1474 Add zk attributes to list of attributes + in master and regionserver UIs + HBASE-1448 Add a node in ZK to tell all masters to shutdown + HBASE-1478 Remove hbase master options from shell (Nitay Joffe via Stack) + HBASE-1462 hclient still seems to depend on master + HBASE-1143 region count erratic in master UI + HBASE-1490 Update ZooKeeper library + HBASE-1489 Basic git ignores for people who use git and eclipse + HBASE-1453 Add HADOOP-4681 to our bundled hadoop, add to 'gettting started' + recommendation that hbase users backport + HBASE-1507 iCMS as default JVM + HBASE-1509 Add explanation to shell "help" command on how to use binarykeys + (Lars George via Stack) + HBASE-1514 hfile inspection tool + HBASE-1329 Visibility into ZooKeeper + HBASE-867 If millions of columns in a column family, hbase scanner won't + come up (Jonathan Gray via Stack) + HBASE-1538 Up zookeeper timeout from 10 seconds to 30 seconds to cut down + on hbase-user traffic + HBASE-1539 prevent aborts due to missing zoo.cfg + HBASE-1488 Fix TestThriftServer and re-enable it + 
HBASE-1541 Scanning multiple column families in the presence of deleted
+ families results in bad scans
+ HBASE-1540 Client delete unit test, define behavior
+ (Jonathan Gray via Stack)
+ HBASE-1552 provide version running on cluster via getClusterStatus
+ HBASE-1550 hbase-daemon.sh stop should provide more information when stop
+ command fails
+ HBASE-1515 Address part of config option hbase.regionserver unnecessary
+ HBASE-1532 UI Visibility into ZooKeeper
+ HBASE-1572 Zookeeper log4j property set to ERROR on default, same output
+ when cluster working and not working (Jon Gray via Stack)
+ HBASE-1576 TIF needs to be able to set scanner caching size for smaller
+ row tables & performance
+ HBASE-1577 Move memcache to ConcurrentSkipListMap from
+ ConcurrentSkipListSet
+ HBASE-1578 Change the name of the in-memory updates from 'memcache' to
+ 'memtable' or....
+ HBASE-1562 How to handle the setting of 32 bit versus 64 bit machines
+ (Erik Holstad via Stack)
+ HBASE-1584 Put add methods should return this for ease of use (Be
+ consistent with Get) (Clint Morgan via Stack)
+ HBASE-1581 Run major compaction on .META. when table is dropped or
+ truncated
+ HBASE-1587 Update ganglia config and doc to account for ganglia 3.1 and
+ hadoop-4675
+ HBASE-1589 Up zk maxClientCnxns from default of 10 to 20 or 30 or so
+ HBASE-1385 Revamp TableInputFormat, needs updating to match hadoop 0.20.x
+ AND remove bit where we can make < maps than regions
+ (Lars George via Stack)
+ HBASE-1596 Remove WatcherWrapper and have all users of Zookeeper provide a
+ Watcher
+ HBASE-1597 Prevent unnecessary caching of blocks during compactions
+ (Jon Gray via Stack)
+ HBASE-1607 Redo MemStore heap sizing to be accurate, testable, and more
+ like new LruBlockCache (Jon Gray via Stack)
+ HBASE-1218 Implement in-memory column (Jon Gray via Stack)
+ HBASE-1606 Remove zoo.cfg, put config options into hbase-site.xml
+ HBASE-1575 HMaster does not handle ZK session expiration
+ HBASE-1620 Need to use special StoreScanner constructor for major
+ compactions (passed sf, no caching, etc) (Jon Gray via Stack)
+ HBASE-1624 Don't sort Puts if only one in list in HCM#processBatchOfRows
+ HBASE-1626 Allow emitting Deletes out of new TableReducer
+ (Lars George via Stack)
+ HBASE-1551 HBase should manage multiple node ZooKeeper quorum
+ HBASE-1637 Delete client class methods should return itself like Put, Get,
+ Scan (Jon Gray via Nitay)
+ HBASE-1640 Allow passing arguments to jruby script run when run by hbase
+ shell
+ HBASE-698 HLog recovery is not performed after master failure
+ HBASE-1643 ScanDeleteTracker takes a comparator but it is unused
+ HBASE-1603 MR failed "RetriesExhaustedException: Trying to contact region
+ server Some server for region TestTable..." -- debugging
+ HBASE-1470 hbase and HADOOP-4379, dhruba's flush/sync
+ HBASE-1632 Write documentation for configuring/managing ZooKeeper
+ HBASE-1662 Tool to run major compaction on catalog regions when hbase is
+ shutdown
+ HBASE-1665 expose more load information to the client side
+ HBASE-1609 We wait on leases to expire before regionserver goes down. 
+ Rather, just let client fail + HBASE-1655 Usability improvements to HTablePool (Ken Weiner via jgray) + HBASE-1688 Improve javadocs in Result and KeyValue + HBASE-1694 Add TOC to 'Getting Started', add references to THBase and + ITHBase + HBASE-1699 Remove hbrep example as it's too out of date + (Tim Sell via Stack) + HBASE-1683 OOME on master splitting logs; stuck, won't go down + HBASE-1704 Better zk error when failed connect + HBASE-1714 Thrift server: prefix scan API + HBASE-1719 hold a reference to the region in stores instead of only the + region info + HBASE-1743 [debug tool] Add regionsInTransition list to ClusterStatus + detailed output + HBASE-1772 Up the default ZK session timeout from 30seconds to 60seconds + HBASE-2625 Make testDynamicBloom()'s "randomness" deterministic + (Nicolas Spiegelberg via Stack) + + OPTIMIZATIONS + HBASE-1412 Change values for delete column and column family in KeyValue + HBASE-1535 Add client ability to perform mutations without the WAL + (Jon Gray via Stack) + HBASE-1460 Concurrent LRU Block Cache (Jon Gray via Stack) + HBASE-1635 PerformanceEvaluation should use scanner prefetching + +Release 0.19.0 - 01/21/2009 + INCOMPATIBLE CHANGES + HBASE-885 TableMap and TableReduce should be interfaces + (Doğacan Güney via Stack) + HBASE-905 Remove V5 migration classes from 0.19.0 (Jean-Daniel Cryans via + Jim Kellerman) + HBASE-852 Cannot scan all families in a row with a LIMIT, STARTROW, etc. + (Izaak Rubin via Stack) + HBASE-953 Enable BLOCKCACHE by default [WAS -> Reevaluate HBASE-288 block + caching work....?] -- Update your hbase-default.xml file! + HBASE-636 java6 as a requirement + HBASE-994 IPC interfaces with different versions can cause problems + HBASE-1028 If key does not exist, return null in getRow rather than an + empty RowResult + HBASE-1134 OOME in HMaster when HBaseRPC is older than 0.19 + + BUG FIXES + HBASE-891 HRS.validateValuesLength throws IOE, gets caught in the retries + HBASE-892 Cell iteration is broken (Doğacan Güney via Jim Kellerman) + HBASE-898 RowResult.containsKey(String) doesn't work + (Doğacan Güney via Jim Kellerman) + HBASE-906 [shell] Truncates output + HBASE-912 PE is broken when other tables exist + HBASE-853 [shell] Cannot describe meta tables (Izaak Rubin via Stack) + HBASE-844 Can't pass script to hbase shell + HBASE-837 Add unit tests for ThriftServer.HBaseHandler (Izaak Rubin via + Stack) + HBASE-913 Classes using log4j directly + HBASE-914 MSG_REPORT_CLOSE has a byte array for a message + HBASE-918 Region balancing during startup makes cluster unstable + HBASE-921 region close and open processed out of order; makes for + disagreement between master and regionserver on region state + HBASE-925 HRS NPE on way out if no master to connect to + HBASE-928 NPE throwing RetriesExhaustedException + HBASE-924 Update hadoop in lib on 0.18 hbase branch to 0.18.1 + HBASE-929 Clarify that ttl in HColumnDescriptor is seconds + HBASE-930 RegionServer stuck: HLog: Could not append. Requesting close of + log java.io.IOException: Could not get block locations + HBASE-926 If no master, regionservers should hang out rather than fail on + connection and shut themselves down + HBASE-919 Master and Region Server need to provide root region location if + they are using HTable + With J-D's one line patch, test cases now appear to work and + PerformanceEvaluation works as before. 
+ HBASE-939 NPE in HStoreKey + HBASE-945 Be consistent in use of qualified/unqualified mapfile paths + HBASE-946 Row with 55k deletes timesout scanner lease + HBASE-950 HTable.commit no longer works with existing RowLocks though it's + still in API + HBASE-952 Deadlock in HRegion.batchUpdate + HBASE-954 Don't reassign root region until ProcessServerShutdown has split + the former region server's log + HBASE-957 PerformanceEvaluation tests if table exists by comparing + descriptors + HBASE-728, HBASE-956, HBASE-955 Address thread naming, which threads are + Chores, vs Threads, make HLog manager the write ahead log and + not extend it to provided optional HLog sync operations. + HBASE-970 Update the copy/rename scripts to go against change API + HBASE-966 HBASE-748 misses some writes + HBASE-971 Fix the failing tests on Hudson + HBASE-973 [doc] In getting started, make it clear that hbase needs to + create its directory in hdfs + HBASE-963 Fix the retries in HTable.flushCommit + HBASE-969 Won't when storefile > 2G. + HBASE-976 HADOOP 0.19.0 RC0 is broke; replace with HEAD of branch-0.19 + HBASE-977 Arcane HStoreKey comparator bug + HBASE-979 REST web app is not started automatically + HBASE-980 Undo core of HBASE-975, caching of start and end row + HBASE-982 Deleting a column in MapReduce fails (Doğacan Güney via + Stack) + HBASE-984 Fix javadoc warnings + HBASE-985 Fix javadoc warnings + HBASE-951 Either shut down master or let it finish cleanup + HBASE-964 Startup stuck "waiting for root region" + HBASE-964, HBASE-678 provide for safe-mode without locking up HBase "waiting + for root region" + HBASE-990 NoSuchElementException in flushSomeRegions; took two attempts. + HBASE-602 HBase Crash when network card has a IPv6 address + HBASE-996 Migration script to up the versions in catalog tables + HBASE-991 Update the mapred package document examples so they work with + TRUNK/0.19.0. + HBASE-1003 If cell exceeds TTL but not VERSIONs, will not be removed during + major compaction + HBASE-1005 Regex and string comparison operators for ColumnValueFilter + HBASE-910 Scanner misses columns / rows when the scanner is obtained + during a memcache flush + HBASE-1009 Master stuck in loop wanting to assign but regions are closing + HBASE-1016 Fix example in javadoc overvie + HBASE-1021 hbase metrics FileContext not working + HBASE-1023 Check global flusher + HBASE-1036 HBASE-1028 broke Thrift + HBASE-1037 Some test cases failing on Windows/Cygwin but not UNIX/Linux + HBASE-1041 Migration throwing NPE + HBASE-1042 OOME but we don't abort; two part commit. + HBASE-927 We don't recover if HRS hosting -ROOT-/.META. goes down + HBASE-1029 REST wiki documentation incorrect + (Sishen Freecity via Stack) + HBASE-1043 Removing @Override attributes where they are no longer needed. + (Ryan Smith via Jim Kellerman) + HBASE-927 We don't recover if HRS hosting -ROOT-/.META. goes down - + (fix bug in createTable which caused tests to fail) + HBASE-1039 Compaction fails if bloomfilters are enabled + HBASE-1027 Make global flusher check work with percentages rather than + hard code memory sizes + HBASE-1000 Sleeper.sleep does not go back to sleep when interrupted + and no stop flag given. 
+ HBASE-900 Regionserver memory leak causing OOME during relatively + modest bulk importing; part 1 and part 2 + HBASE-1054 Index NPE on scanning (Clint Morgan via Andrew Purtell) + HBASE-1052 Stopping a HRegionServer with unflushed cache causes data loss + from org.apache.hadoop.hbase.DroppedSnapshotException + HBASE-1059 ConcurrentModificationException in notifyChangedReadersObservers + HBASE-1063 "File separator problem on Windows" (Max Lehn via Stack) + HBASE-1068 TestCompaction broken on hudson + HBASE-1067 TestRegionRebalancing broken by running of hdfs shutdown thread + HBASE-1070 Up default index interval in TRUNK and branch + HBASE-1045 Hangup by regionserver causes write to fail + HBASE-1079 Dumb NPE in ServerCallable hides the RetriesExhausted exception + HBASE-782 The DELETE key in the hbase shell deletes the wrong character + (Tim Sell via Stack) + HBASE-543, HBASE-1046, HBase-1051 A region's state is kept in several places + in the master opening the possibility for race conditions + HBASE-1087 DFS failures did not shutdown regionserver + HBASE-1072 Change Thread.join on exit to a timed Thread.join + HBASE-1098 IllegalStateException: Cannot set a region to be closed it it + was not already marked as closing + HBASE-1100 HBASE-1062 broke TestForceSplit + HBASE-1191 shell tools -> close_region does not work for regions that did + not deploy properly on startup + HBASE-1093 NPE in HStore#compact + HBASE-1097 SequenceFile.Reader keeps around buffer whose size is that of + largest item read -> results in lots of dead heap + HBASE-1107 NPE in HStoreScanner.updateReaders + HBASE-1083 Will keep scheduling major compactions if last time one ran, we + didn't. + HBASE-1101 NPE in HConnectionManager$TableServers.processBatchOfRows + HBASE-1099 Regions assigned while master is splitting logs of recently + crashed server; regionserver tries to execute incomplete log + HBASE-1104, HBASE-1098, HBASE-1096: Doubly-assigned regions redux, + IllegalStateException: Cannot set a region to be closed it it was + not already marked as closing, Does not recover if HRS carrying + -ROOT- goes down + HBASE-1114 Weird NPEs compacting + HBASE-1116 generated web.xml and svn don't play nice together + HBASE-1119 ArrayOutOfBoundsException in HStore.compact + HBASE-1121 Cluster confused about where -ROOT- is + HBASE-1125 IllegalStateException: Cannot set a region to be closed if it was + not already marked as pending close + HBASE-1124 Balancer kicks in way too early + HBASE-1127 OOME running randomRead PE + HBASE-1132 Can't append to HLog, can't roll log, infinite cycle (another + spin on HBASE-930) + + IMPROVEMENTS + HBASE-901 Add a limit to key length, check key and value length on client side + HBASE-890 Alter table operation and also related changes in REST interface + (Sishen Freecity via Stack) + HBASE-894 [shell] Should be able to copy-paste table description to create + new table (Sishen Freecity via Stack) + HBASE-886, HBASE-895 Sort the tables in the web UI, [shell] 'list' command + should emit a sorted list of tables (Krzysztof Szlapinski via Stack) + HBASE-884 Double and float converters for Bytes class + (Doğacan Güney via Stack) + HBASE-908 Add approximate counting to CountingBloomFilter + (Andrzej Bialecki via Stack) + HBASE-920 Make region balancing sloppier + HBASE-902 Add force compaction and force split operations to UI and Admin + HBASE-942 Add convenience methods to RowFilterSet + (Clint Morgan via Stack) + HBASE-943 to ColumnValueFilter: add filterIfColumnMissing property, add + 
SubString operator (Clint Morgan via Stack) + HBASE-937 Thrift getRow does not support specifying columns + (Doğacan Güney via Stack) + HBASE-959 Be able to get multiple RowResult at one time from client side + (Sishen Freecity via Stack) + HBASE-936 REST Interface: enable get number of rows from scanner interface + (Sishen Freecity via Stack) + HBASE-960 REST interface: more generic column family configure and also + get Rows using offset and limit (Sishen Freecity via Stack) + HBASE-817 Hbase/Shell Truncate + HBASE-949 Add an HBase Manual + HBASE-839 Update hadoop libs in hbase; move hbase TRUNK on to an hadoop + 0.19.0 RC + HBASE-785 Remove InfoServer, use HADOOP-3824 StatusHttpServer + instead (requires hadoop 0.19) + HBASE-81 When a scanner lease times out, throw a more "user friendly" exception + HBASE-978 Remove BloomFilterDescriptor. It is no longer used. + HBASE-975 Improve MapFile performance for start and end key + HBASE-961 Delete multiple columns by regular expression + (Samuel Guo via Stack) + HBASE-722 Shutdown and Compactions + HBASE-983 Declare Perl namespace in Hbase.thrift + HBASE-987 We need a Hbase Partitioner for TableMapReduceUtil.initTableReduceJob + MR Jobs (Billy Pearson via Stack) + HBASE-993 Turn off logging of every catalog table row entry on every scan + HBASE-992 Up the versions kept by catalog tables; currently 1. Make it 10? + HBASE-998 Narrow getClosestRowBefore by passing column family + HBASE-999 Up versions on historian and keep history of deleted regions for a + while rather than delete immediately + HBASE-938 Major compaction period is not checked periodically + HBASE-947 [Optimization] Major compaction should remove deletes as well as + the deleted cell + HBASE-675 Report correct server hosting a table split for assignment to + for MR Jobs + HBASE-927 We don't recover if HRS hosting -ROOT-/.META. goes down + HBASE-1013 Add debugging around commit log cleanup + HBASE-972 Update hbase trunk to use released hadoop 0.19.0 + HBASE-1022 Add storefile index size to hbase metrics + HBASE-1026 Tests in mapred are failing + HBASE-1020 Regionserver OOME handler should dump vital stats + HBASE-1018 Regionservers should report detailed health to master + HBASE-1034 Remove useless TestToString unit test + HBASE-1030 Bit of polish on HBASE-1018 + HBASE-847 new API: HTable.getRow with numVersion specified + (Doğacan Güney via Stack) + HBASE-1048 HLog: Found 0 logs to remove out of total 1450; oldest + outstanding seqnum is 162297053 fr om region -ROOT-,,0 + HBASE-1055 Better vm stats on startup + HBASE-1065 Minor logging improvements in the master + HBASE-1053 bring recent rpc changes down from hadoop + HBASE-1056 [migration] enable blockcaching on .META. table + HBASE-1069 Show whether HRegion major compacts or not in INFO level + HBASE-1066 Master should support close/open/reassignment/enable/disable + operations on individual regions + HBASE-1062 Compactions at (re)start on a large table can overwhelm DFS + HBASE-1102 boolean HTable.exists() + HBASE-1105 Remove duplicated code in HCM, add javadoc to RegionState, etc. 
+ HBASE-1106 Expose getClosestRowBefore in HTable + (Michael Gottesman via Stack) + HBASE-1082 Administrative functions for table/region maintenance + HBASE-1090 Atomic Check And Save in HTable (Michael Gottesman via Stack) + HBASE-1137 Add not on xceivers count to overview documentation + + NEW FEATURES + HBASE-875 Use MurmurHash instead of JenkinsHash [in bloomfilters] + (Andrzej Bialecki via Stack) + HBASE-625 Metrics support for cluster load history: emissions and graphs + HBASE-883 Secondary indexes (Clint Morgan via Andrew Purtell) + HBASE-728 Support for HLog appends + + OPTIMIZATIONS + HBASE-748 Add an efficient way to batch update many rows + HBASE-887 Fix a hotspot in scanners + HBASE-967 [Optimization] Cache cell maximum length (HCD.getMaxValueLength); + its used checking batch size + HBASE-940 Make the TableOutputFormat batching-aware + HBASE-576 Investigate IPC performance + +Release 0.18.0 - September 21st, 2008 + + INCOMPATIBLE CHANGES + HBASE-697 Thrift idl needs update/edit to match new 0.2 API (and to fix bugs) + (Tim Sell via Stack) + HBASE-822 Update thrift README and HBase.thrift to use thrift 20080411 + Updated all other languages examples (only python went in) + + BUG FIXES + HBASE-881 Fixed bug when Master tries to reassign split or offline regions + from a dead server + HBASE-860 Fixed Bug in IndexTableReduce where it concerns writing lucene + index fields. + HBASE-805 Remove unnecessary getRow overloads in HRS (Jonathan Gray via + Jim Kellerman) (Fix whitespace diffs in HRegionServer) + HBASE-811 HTD is not fully copyable (Andrew Purtell via Jim Kellerman) + HBASE-729 Client region/metadata cache should have a public method for + invalidating entries (Andrew Purtell via Stack) + HBASE-819 Remove DOS-style ^M carriage returns from all code where found + (Jonathan Gray via Jim Kellerman) + HBASE-818 Deadlock running 'flushSomeRegions' (Andrew Purtell via Stack) + HBASE-820 Need mainline to flush when 'Blocking updates' goes up. + (Jean-Daniel Cryans via Stack) + HBASE-821 UnknownScanner happens too often (Jean-Daniel Cryans via Stack) + HBASE-813 Add a row counter in the new shell (Jean-Daniel Cryans via Stack) + HBASE-824 Bug in Hlog we print array of byes for region name + (Billy Pearson via Stack) + HBASE-825 Master logs showing byte [] in place of string in logging + (Billy Pearson via Stack) + HBASE-808,809 MAX_VERSIONS not respected, and Deletall doesn't and inserts + after delete don't work as expected + (Jean-Daniel Cryans via Stack) + HBASE-831 committing BatchUpdate with no row should complain + (Andrew Purtell via Jim Kellerman) + HBASE-833 Doing an insert with an unknown family throws a NPE in HRS + HBASE-810 Prevent temporary deadlocks when, during a scan with write + operations, the region splits (Jean-Daniel Cryans via Jim + Kellerman) + HBASE-843 Deleting and recreating a table in a single process does not work + (Jonathan Gray via Jim Kellerman) + HBASE-849 Speed improvement in JenkinsHash (Andrzej Bialecki via Stack) + HBASE-552 Bloom filter bugs (Andrzej Bialecki via Jim Kellerman) + HBASE-762 deleteFamily takes timestamp, should only take row and family. + Javadoc describes both cases but only implements the timestamp + case. 
(Jean-Daniel Cryans via Jim Kellerman)
+ HBASE-768 This message 'java.io.IOException: Install 0.1.x of hbase and run
+ its migration first' is useless (Jean-Daniel Cryans via Jim
+ Kellerman)
+ HBASE-826 Delete table followed by recreation results in honked table
+ HBASE-834 'Major' compactions and upper bound on files we compact at any
+ one time (Billy Pearson via Stack)
+ HBASE-836 Update thrift examples to work with changed IDL (HBASE-697)
+ (Toby White via Stack)
+ HBASE-854 hbase-841 broke build on hudson? - makes sure that proxies are
+ closed. (Andrew Purtell via Jim Kellerman)
+ HBASE-855 compaction can return fewer versions than it should in some cases
+ (Billy Pearson via Stack)
+ HBASE-832 Problem with row keys beginning with characters < than ',' and
+ the region location cache
+ HBASE-864 Deadlock in regionserver
+ HBASE-865 Fix javadoc warnings (Rong-En Fan via Jim Kellerman)
+ HBASE-872 Getting exceptions in shell when creating/disabling tables
+ HBASE-868 Incrementing binary rows causes strange behavior once table
+ splits (Jonathan Gray via Stack)
+ HBASE-877 HCM is unable to find table with multiple regions which contains
+ binary (Jonathan Gray via Stack)
+
+ IMPROVEMENTS
+ HBASE-801 When a table hasn't been disabled, the shell should respond in a "user
+ friendly" way.
+ HBASE-816 TableMap should survive USE (Andrew Purtell via Stack)
+ HBASE-812 Compaction needs a little better skip algo (Daniel Leffel via Stack)
+ HBASE-806 Change HbaseMapWritable and RowResult to implement SortedMap
+ instead of Map (Jonathan Gray via Stack)
+ HBASE-795 More Table operation in TableHandler for REST interface: part 1
+ (Sishen Freecity via Stack)
+ HBASE-795 More Table operation in TableHandler for REST interface: part 2
+ (Sishen Freecity via Stack)
+ HBASE-830 Debugging HCM.locateRegionInMeta is painful
+ HBASE-784 Base hbase-0.3.0 on hadoop-0.18
+ HBASE-841 Consolidate multiple overloaded methods in HRegionInterface,
+ HRegionServer (Jean-Daniel Cryans via Jim Kellerman)
+ HBASE-840 More options on the row query in REST interface
+ (Sishen Freecity via Stack)
+ HBASE-874 deleting a table kills client rpc; no subsequent communication if
+ shell or thrift server, etc. (Jonathan Gray via Jim Kellerman)
+ HBASE-871 Major compaction periodicity should be specifiable at the column
+ family level, not cluster wide (Jonathan Gray via Stack)
+ HBASE-465 Fix javadoc for all public declarations
+ HBASE-882 The BatchUpdate class provides put(col, cell) and delete(col),
+ but no get() (Ryan Smith via Stack and Jim Kellerman)
+
+ NEW FEATURES
+ HBASE-787 Postgresql to HBase table replication example (Tim Sell via Stack)
+ HBASE-798 Provide Client API to explicitly lock and unlock rows (Jonathan
+ Gray via Jim Kellerman)
+ HBASE-798 Add missing classes: UnknownRowLockException and RowLock which
+ were present in previous versions of the patches for this issue,
+ but not in the version that was committed. Also fix a number of
+ compilation problems that were introduced by the patch.
+ HBASE-669 MultiRegion transactions with Optimistic Concurrency Control
+ (Clint Morgan via Stack)
+ HBASE-842 Remove methods that have Text as a parameter and were deprecated
+ in 0.2.1 (Jean-Daniel Cryans via Jim Kellerman)
+
+ OPTIMIZATIONS
+
+Release 0.2.0 - August 8, 2008. 
+ + INCOMPATIBLE CHANGES + HBASE-584 Names in the filter interface are confusing (Clint Morgan via + Jim Kellerman) (API change for filters) + HBASE-601 Just remove deprecated methods in HTable; 0.2 is not backward + compatible anyways + HBASE-82 Row keys should be array of bytes + HBASE-76 Purge servers of Text (Done as part of HBASE-82 commit). + HBASE-487 Replace hql w/ a hbase-friendly jirb or jython shell + Part 1: purge of hql and added raw jirb in its place. + HBASE-521 Improve client scanner interface + HBASE-288 Add in-memory caching of data. Required update of hadoop to + 0.17.0-dev.2008-02-07_12-01-58. (Tom White via Stack) + HBASE-696 Make bloomfilter true/false and self-sizing + HBASE-720 clean up inconsistencies around deletes (Izaak Rubin via Stack) + HBASE-796 Deprecates Text methods from HTable + (Michael Gottesman via Stack) + + BUG FIXES + HBASE-574 HBase does not load hadoop native libs (Rong-En Fan via Stack) + HBASE-598 Loggging, no .log file; all goes into .out + HBASE-622 Remove StaticTestEnvironment and put a log4j.properties in src/test + HBASE-624 Master will shut down if number of active region servers is zero + even if shutdown was not requested + HBASE-629 Split reports incorrect elapsed time + HBASE-623 Migration script for hbase-82 + HBASE-630 Default hbase.rootdir is garbage + HBASE-589 Remove references to deprecated methods in Hadoop once + hadoop-0.17.0 is released + HBASE-638 Purge \r from src + HBASE-644 DroppedSnapshotException but RegionServer doesn't restart + HBASE-641 Improve master split logging + HBASE-642 Splitting log in a hostile environment -- bad hdfs -- we drop + write-ahead-log edits + HBASE-646 EOFException opening HStoreFile info file (spin on HBASE-645and 550) + HBASE-648 If mapfile index is empty, run repair + HBASE-640 TestMigrate failing on hudson + HBASE-651 Table.commit should throw NoSuchColumnFamilyException if column + family doesn't exist + HBASE-649 API polluted with default and protected access data members and methods + HBASE-650 Add String versions of get, scanner, put in HTable + HBASE-656 Do not retry exceptions such as unknown scanner or illegal argument + HBASE-659 HLog#cacheFlushLock not cleared; hangs a region + HBASE-663 Incorrect sequence number for cache flush + HBASE-655 Need programmatic way to add column family: need programmatic way + to enable/disable table + HBASE-654 API HTable.getMetadata().addFamily shouldn't be exposed to user + HBASE-666 UnmodifyableHRegionInfo gives the wrong encoded name + HBASE-668 HBASE-533 broke build + HBASE-670 Historian deadlocks if regionserver is at global memory boundary + and is hosting .META. + HBASE-665 Server side scanner doesn't honor stop row + HBASE-662 UI in table.jsp gives META locations, not the table's regions + location (Jean-Daniel Cryans via Stack) + HBASE-676 Bytes.getInt returns a long (Clint Morgan via Stack) + HBASE-680 Config parameter hbase.io.index.interval should be + hbase.index.interval, according to HBaseMapFile.HbaseWriter + (LN via Stack) + HBASE-682 Unnecessary iteration in HMemcache.internalGet? 
got much better + reading performance after break it (LN via Stack) + HBASE-686 MemcacheScanner didn't return the first row(if it exists), + because HScannerInterface's output incorrect (LN via Jim Kellerman) + HBASE-691 get* and getScanner are different in how they treat column parameter + HBASE-694 HStore.rowAtOrBeforeFromMapFile() fails to locate the row if # of mapfiles >= 2 + (Rong-En Fan via Bryan) + HBASE-652 dropping table fails silently if table isn't disabled + HBASE-683 can not get svn revision # at build time if locale is not english + (Rong-En Fan via Stack) + HBASE-699 Fix TestMigrate up on Hudson + HBASE-615 Region balancer oscillates during cluster startup + HBASE-613 Timestamp-anchored scanning fails to find all records + HBASE-681 NPE in Memcache + HBASE-701 Showing bytes in log when should be String + HBASE-702 deleteall doesn't + HBASE-704 update new shell docs and commands on help menu + HBASE-709 Deadlock while rolling WAL-log while finishing flush + HBASE-710 If clocks are way off, then we can have daughter split come + before rather than after its parent in .META. + HBASE-714 Showing bytes in log when should be string (2) + HBASE-627 Disable table doesn't work reliably + HBASE-716 TestGet2.testGetClosestBefore fails with hadoop-0.17.1 + HBASE-715 Base HBase 0.2 on Hadoop 0.17.1 + HBASE-718 hbase shell help info + HBASE-717 alter table broke with new shell returns InvalidColumnNameException + HBASE-573 HBase does not read hadoop-*.xml for dfs configuration after + moving out hadoop/contrib + HBASE-11 Unexpected exits corrupt DFS + HBASE-12 When hbase regionserver restarts, it says "impossible state for + createLease()" + HBASE-575 master dies with stack overflow error if rootdir isn't qualified + HBASE-582 HBase 554 forgot to clear results on each iteration caused by a filter + (Clint Morgan via Stack) + HBASE-532 Odd interaction between HRegion.get, HRegion.deleteAll and compactions + HBASE-10 HRegionServer hangs upon exit due to DFSClient Exception + HBASE-595 RowFilterInterface.rowProcessed() is called *before* fhe final + filtering decision is made (Clint Morgan via Stack) + HBASE-586 HRegion runs HStore memcache snapshotting -- fix it so only HStore + knows about workings of memcache + HBASE-588 Still a 'hole' in scanners, even after HBASE-532 + HBASE-604 Don't allow CLASSPATH from environment pollute the hbase CLASSPATH + HBASE-608 HRegionServer::getThisIP() checks hadoop config var for dns interface name + (Jim R. 
Wilson via Stack) + HBASE-609 Master doesn't see regionserver edits because of clock skew + HBASE-607 MultiRegionTable.makeMultiRegionTable is not deterministic enough + for regression tests + HBASE-405 TIF and TOF use log4j directly rather than apache commons-logging + HBASE-618 We always compact if 2 files, regardless of the compaction threshold setting + HBASE-619 Fix 'logs' link in UI + HBASE-478 offlining of table does not run reliably + HBASE-453 undeclared throwable exception from HTable.get + HBASE-620 testmergetool failing in branch and trunk since hbase-618 went in + HBASE-550 EOF trying to read reconstruction log stops region deployment + HBASE-551 Master stuck splitting server logs in shutdown loop; on each + iteration, edits are aggregated up into the millions + HBASE-505 Region assignments should never time out so long as the region + server reports that it is processing the open request + HBASE-561 HBase package does not include LICENSE.txt nor build.xml + HBASE-563 TestRowFilterAfterWrite erroneously sets master address to + 0.0.0.0:60100 rather than relying on conf + HBASE-507 Use Callable pattern to sleep between retries + HBASE-564 Don't do a cache flush if there are zero entries in the cache. + HBASE-554 filters generate StackOverflowException + HBASE-567 Reused BatchUpdate instances accumulate BatchOperations + HBASE-577 NPE getting scanner + HBASE-19 CountingBloomFilter can overflow its storage + (Stu Hood and Bryan Duxbury via Stack) + HBASE-28 thrift put/mutateRow methods need to throw IllegalArgument + exceptions (Dave Simpson via Bryan Duxbury via Stack) + HBASE-2 hlog numbers should wrap around when they reach 999 + (Bryan Duxbury via Stack) + HBASE-421 TestRegionServerExit broken + HBASE-426 hbase can't find remote filesystem + HBASE-437 Clear Command should use system.out (Edward Yoon via Stack) + HBASE-434, HBASE-435 TestTableIndex and TestTableMapReduce failed in Hudson builds + HBASE-446 Fully qualified hbase.rootdir doesn't work + HBASE-438 XMLOutputter state should be initialized. (Edward Yoon via Stack) + HBASE-8 Delete table does not remove the table directory in the FS + HBASE-428 Under continuous upload of rows, WrongRegionExceptions are thrown + that reach the client even after retries + HBASE-460 TestMigrate broken when HBase moved to subproject + HBASE-462 Update migration tool + HBASE-473 When a table is deleted, master sends multiple close messages to + the region server + HBASE-490 Doubly-assigned .META.; master uses one and clients another + HBASE-492 hbase TRUNK does not build against hadoop TRUNK + HBASE-496 impossible state for createLease writes 400k lines in about 15mins + HBASE-472 Passing on edits, we dump all to log + HBASE-495 No server address listed in .META. + HBASE-433 HBASE-251 Region server should delete restore log after successful + restore, Stuck replaying the edits of crashed machine. + HBASE-27 hregioninfo cell empty in meta table + HBASE-501 Empty region server address in info:server entry and a + startcode of -1 in .META. 
+ HBASE-516 HStoreFile.finalKey does not update the final key if it is not + the top region of a split region + HBASE-525 HTable.getRow(Text) does not work (Clint Morgan via Bryan Duxbury) + HBASE-524 Problems with getFull + HBASE-528 table 'does not exist' when it does + HBASE-531 Merge tool won't merge two overlapping regions (port HBASE-483 to + trunk) + HBASE-537 Wait for hdfs to exit safe mode + HBASE-476 RegexpRowFilter behaves incorectly when there are multiple store + files (Clint Morgan via Jim Kellerman) + HBASE-527 RegexpRowFilter does not work when there are columns from + multiple families (Clint Morgan via Jim Kellerman) + HBASE-534 Double-assignment at SPLIT-time + HBASE-712 midKey found compacting is the first, not necessarily the optimal + HBASE-719 Find out why users have network problems in HBase and not in Hadoop + and HConnectionManager (Jean-Daniel Cryans via Stack) + HBASE-703 Invalid regions listed by regionserver.jsp (Izaak Rubin via Stack) + HBASE-674 Memcache size unreliable + HBASE-726 Unit tests won't run because of a typo (Sebastien Rainville via Stack) + HBASE-727 Client caught in an infinite loop when trying to connect to cached + server locations (Izaak Rubin via Stack) + HBASE-732 shell formatting error with the describe command + (Izaak Rubin via Stack) + HBASE-731 delete, deletefc in HBase shell do not work correctly + (Izaak Rubin via Stack) + HBASE-734 scan '.META.', {LIMIT => 10} crashes (Izaak Rubin via Stack) + HBASE-736 Should have HTable.deleteAll(String row) and HTable.deleteAll(Text row) + (Jean-Daniel Cryans via Stack) + HBASE-740 ThriftServer getting table names incorrectly (Tim Sell via Stack) + HBASE-742 Rename getMetainfo in HTable as getTableDescriptor + HBASE-739 HBaseAdmin.createTable() using old HTableDescription doesn't work + (Izaak Rubin via Stack) + HBASE-744 BloomFilter serialization/deserialization broken + HBASE-742 Column length limit is not enforced (Jean-Daniel Cryans via Stack) + HBASE-737 Scanner: every cell in a row has the same timestamp + HBASE-700 hbase.io.index.interval need be configuratable in column family + (Andrew Purtell via Stack) + HBASE-62 Allow user add arbitrary key/value pairs to table and column + descriptors (Andrew Purtell via Stack) + HBASE-34 Set memcache flush size per column (Andrew Purtell via Stack) + HBASE-42 Set region split size on table creation (Andrew Purtell via Stack) + HBASE-43 Add a read-only attribute to columns (Andrew Purtell via Stack) + HBASE-424 Should be able to enable/disable .META. 
table + HBASE-679 Regionserver addresses are still not right in the new tables page + HBASE-758 Throwing IOE read-only when should be throwing NSRE + HBASE-743 bin/hbase migrate upgrade fails when redo logs exists + HBASE-754 The JRuby shell documentation is wrong in "get" and "put" + (Jean-Daniel Cryans via Stack) + HBASE-756 In HBase shell, the put command doesn't process the timestamp + (Jean-Daniel Cryans via Stack) + HBASE-757 REST mangles table names (Sishen via Stack) + HBASE-706 On OOME, regionserver sticks around and doesn't go down with cluster + (Jean-Daniel Cryans via Stack) + HBASE-759 TestMetaUtils failing on hudson + HBASE-761 IOE: Stream closed exception all over logs + HBASE-763 ClassCastException from RowResult.get(String) + (Andrew Purtell via Stack) + HBASE-764 The name of column request has padding zero using REST interface + (Sishen Freecity via Stack) + HBASE-750 NPE caused by StoreFileScanner.updateReaders + HBASE-769 TestMasterAdmin fails throwing RegionOfflineException when we're + expecting IllegalStateException + HBASE-766 FileNotFoundException trying to load HStoreFile 'data' + HBASE-770 Update HBaseRPC to match hadoop 0.17 RPC + HBASE-780 Can't scan '.META.' from new shell + HBASE-424 Should be able to enable/disable .META. table + HBASE-771 Names legal in 0.1 are not in 0.2; breaks migration + HBASE-788 Div by zero in Master.jsp (Clint Morgan via Jim Kellerman) + HBASE-791 RowCount doesn't work (Jean-Daniel Cryans via Stack) + HBASE-751 dfs exception and regionserver stuck during heavy write load + HBASE-793 HTable.getStartKeys() ignores table names when matching columns + (Andrew Purtell and Dru Jensen via Stack) + HBASE-790 During import, single region blocks requests for >10 minutes, + thread dumps, throws out pending requests, and continues + (Jonathan Gray via Stack) + + IMPROVEMENTS + HBASE-559 MR example job to count table rows + HBASE-596 DemoClient.py (Ivan Begtin via Stack) + HBASE-581 Allow adding filters to TableInputFormat (At same time, ensure TIF + is subclassable) (David Alves via Stack) + HBASE-603 When an exception bubbles out of getRegionServerWithRetries, wrap + the exception with a RetriesExhaustedException + HBASE-600 Filters have excessive DEBUG logging + HBASE-611 regionserver should do basic health check before reporting + alls-well to the master + HBASE-614 Retiring regions is not used; exploit or remove + HBASE-538 Improve exceptions that come out on client-side + HBASE-569 DemoClient.php (Jim R. Wilson via Stack) + HBASE-522 Where new Text(string) might be used in client side method calls, + add an overload that takes String (Done as part of HBASE-82) + HBASE-570 Remove HQL unit test (Done as part of HBASE-82 commit). + HBASE-626 Use Visitor pattern in MetaRegion to reduce code clones in HTable + and HConnectionManager (Jean-Daniel Cryans via Stack) + HBASE-621 Make MAX_VERSIONS work like TTL: In scans and gets, check + MAX_VERSIONs setting and return that many only rather than wait on + compaction (Jean-Daniel Cryans via Stack) + HBASE-504 Allow HMsg's carry a payload: e.g. exception that happened over + on the remote side. 
+ HBASE-583 RangeRowFilter/ColumnValueFilter to allow choice of rows based on + a (lexicographic) comparison to column's values + (Clint Morgan via Stack) + HBASE-579 Add hadoop 0.17.x + HBASE-660 [Migration] addColumn/deleteColumn functionality in MetaUtils + HBASE-632 HTable.getMetadata is very inefficient + HBASE-671 New UI page displaying all regions in a table should be sorted + HBASE-672 Sort regions in the regionserver UI + HBASE-677 Make HTable, HRegion, HRegionServer, HStore, and HColumnDescriptor + subclassable (Clint Morgan via Stack) + HBASE-682 Regularize toString + HBASE-672 Sort regions in the regionserver UI + HBASE-469 Streamline HStore startup and compactions + HBASE-544 Purge startUpdate from internal code and test cases + HBASE-557 HTable.getRow() should receive RowResult objects + HBASE-452 "region offline" should throw IOException, not IllegalStateException + HBASE-541 Update hadoop jars. + HBASE-523 package-level javadoc should have example client + HBASE-415 Rewrite leases to use DelayedBlockingQueue instead of polling + HBASE-35 Make BatchUpdate public in the API + HBASE-409 Add build path to svn:ignore list (Edward Yoon via Stack) + HBASE-408 Add .classpath and .project to svn:ignore list + (Edward Yoon via Stack) + HBASE-410 Speed up the test suite (make test timeout 5 instead of 15 mins). + HBASE-281 Shell should allow deletions in .META. and -ROOT- tables + (Edward Yoon & Bryan Duxbury via Stack) + HBASE-56 Unnecessary HQLClient Object creation in a shell loop + (Edward Yoon via Stack) + HBASE-3 rest server: configure number of threads for jetty + (Bryan Duxbury via Stack) + HBASE-416 Add apache-style logging to REST server and add setting log + level, etc. + HBASE-406 Remove HTable and HConnection close methods + (Bryan Duxbury via Stack) + HBASE-418 Move HMaster and related classes into master package + (Bryan Duxbury via Stack) + HBASE-410 Speed up the test suite - Apparently test timeout was too + aggressive for Hudson. TestLogRolling timed out even though it + was operating properly. Change test timeout to 10 minutes. + HBASE-436 website: http://hadoop.apache.org/hbase + HBASE-417 Factor TableOperation and subclasses into separate files from + HMaster (Bryan Duxbury via Stack) + HBASE-440 Add optional log roll interval so that log files are garbage + collected + HBASE-407 Keep HRegionLocation information in LRU structure + HBASE-444 hbase is very slow at determining table is not present + HBASE-438 XMLOutputter state should be initialized. + HBASE-414 Move client classes into client package + HBASE-79 When HBase needs to be migrated, it should display a message on + stdout, not just in the logs + HBASE-461 Simplify leases. 
+ HBASE-419 Move RegionServer and related classes into regionserver package + HBASE-457 Factor Master into Master, RegionManager, and ServerManager + HBASE-464 HBASE-419 introduced javadoc errors + HBASE-468 Move HStoreKey back to o.a.h.h + HBASE-442 Move internal classes out of HRegionServer + HBASE-466 Move HMasterInterface, HRegionInterface, and + HMasterRegionInterface into o.a.h.h.ipc + HBASE-479 Speed up TestLogRolling + HBASE-480 Tool to manually merge two regions + HBASE-477 Add support for an HBASE_CLASSPATH + HBASE-443 Move internal classes out of HStore + HBASE-515 At least double default timeouts between regionserver and master + HBASE-529 RegionServer needs to recover if datanode goes down + HBASE-456 Clearly state which ports need to be opened in order to run HBase + HBASE-536 Remove MiniDFS startup from MiniHBaseCluster + HBASE-521 Improve client scanner interface + HBASE-562 Move Exceptions to subpackages (Jean-Daniel Cryans via Stack) + HBASE-631 HTable.getRow() for only a column family + (Jean-Daniel Cryans via Stack) + HBASE-731 Add a meta refresh tag to the Web ui for master and region server + (Jean-Daniel Cryans via Stack) + HBASE-735 hbase shell doesn't trap CTRL-C signal (Jean-Daniel Cryans via Stack) + HBASE-730 On startup, rinse STARTCODE and SERVER from .META. + (Jean-Daniel Cryans via Stack) + HBASE-738 overview.html in need of updating (Izaak Rubin via Stack) + HBASE-745 scaling of one regionserver, improving memory and cpu usage (partial) + (LN via Stack) + HBASE-746 Batching row mutations via thrift (Tim Sell via Stack) + HBASE-772 Up default lease period from 60 to 120 seconds + HBASE-779 Test changing hbase.hregion.memcache.block.multiplier to 2 + HBASE-783 For single row, single family retrieval, getRow() works half + as fast as getScanner().next() (Jean-Daniel Cryans via Stack) + HBASE-789 add clover coverage report targets (Rong-en Fan via Stack) + + NEW FEATURES + HBASE-47 Option to set TTL for columns in hbase + (Andrew Purtell via Bryan Duxbury and Stack) + HBASE-23 UI listing regions should be sorted by address and show additional + region state (Jean-Daniel Cryans via Stack) + HBASE-639 Add HBaseAdmin.getTableDescriptor function + HBASE-533 Region Historian + HBASE-487 Replace hql w/ a hbase-friendly jirb or jython shell + HBASE-548 Tool to online single region + HBASE-71 Master should rebalance region assignments periodically + HBASE-512 Add configuration for global aggregate memcache size + HBASE-40 Add a method of getting multiple (but not all) cells for a row + at once + HBASE-506 When an exception has to escape ServerCallable due to exhausted + retries, show all the exceptions that lead to this situation + HBASE-747 Add a simple way to do batch updates of many rows (Jean-Daniel + Cryans via JimK) + HBASE-733 Enhance Cell so that it can contain multiple values at multiple + timestamps + HBASE-511 Do exponential backoff in clients on NSRE, WRE, ISE, etc. 
+ (Andrew Purtell via Jim Kellerman) + + OPTIMIZATIONS + HBASE-430 Performance: Scanners and getRow return maps with duplicate data + +Release 0.1.3 - 07/25/2008 + + BUG FIXES + HBASE-644 DroppedSnapshotException but RegionServer doesn't restart + HBASE-645 EOFException opening region (HBASE-550 redux) + HBASE-641 Improve master split logging + HBASE-642 Splitting log in a hostile environment -- bad hdfs -- we drop + write-ahead-log edits + HBASE-646 EOFException opening HStoreFile info file (spin on HBASE-645 and 550) + HBASE-648 If mapfile index is empty, run repair + HBASE-659 HLog#cacheFlushLock not cleared; hangs a region + HBASE-663 Incorrect sequence number for cache flush + HBASE-652 Dropping table fails silently if table isn't disabled + HBASE-674 Memcache size unreliable + HBASE-665 server side scanner doesn't honor stop row + HBASE-681 NPE in Memcache (Clint Morgan via Jim Kellerman) + HBASE-680 config parameter hbase.io.index.interval should be + hbase.index.interval, accroding to HBaseMapFile.HbaseWriter + (LN via Stack) + HBASE-684 unnecessary iteration in HMemcache.internalGet? got much better + reading performance after break it (LN via Stack) + HBASE-686 MemcacheScanner didn't return the first row(if it exists), + because HScannerInterface's output incorrect (LN via Jim Kellerman) + HBASE-613 Timestamp-anchored scanning fails to find all records + HBASE-709 Deadlock while rolling WAL-log while finishing flush + HBASE-707 High-load import of data into single table/family never triggers split + HBASE-710 If clocks are way off, then we can have daughter split come + before rather than after its parent in .META. + +Release 0.1.2 - 05/13/2008 + + BUG FIXES + HBASE-577 NPE getting scanner + HBASE-574 HBase does not load hadoop native libs (Rong-En Fan via Stack). + HBASE-11 Unexpected exits corrupt DFS - best we can do until we have at + least a subset of HADOOP-1700 + HBASE-573 HBase does not read hadoop-*.xml for dfs configuration after + moving out hadoop/contrib + HBASE-12 when hbase regionserver restarts, it says "impossible state for + createLease()" + HBASE-575 master dies with stack overflow error if rootdir isn't qualified + HBASE-500 Regionserver stuck on exit + HBASE-582 HBase 554 forgot to clear results on each iteration caused by a filter + (Clint Morgan via Stack) + HBASE-532 Odd interaction between HRegion.get, HRegion.deleteAll and compactions + HBASE-590 HBase migration tool does not get correct FileSystem or root + directory if configuration is not correct + HBASE-595 RowFilterInterface.rowProcessed() is called *before* fhe final + filtering decision is made (Clint Morgan via Stack) + HBASE-586 HRegion runs HStore memcache snapshotting -- fix it so only HStore + knows about workings of memcache + HBASE-572 Backport HBASE-512 to 0.1 branch + HBASE-588 Still a 'hole' in scanners, even after HBASE-532 + HBASE-604 Don't allow CLASSPATH from environment pollute the hbase CLASSPATH + HBASE-608 HRegionServer::getThisIP() checks hadoop config var for dns interface name + (Jim R. 
Wilson via Stack) + HBASE-609 Master doesn't see regionserver edits because of clock skew + HBASE-607 MultiRegionTable.makeMultiRegionTable is not deterministic enough + for regression tests + HBASE-478 offlining of table does not run reliably + HBASE-618 We always compact if 2 files, regardless of the compaction threshold setting + HBASE-619 Fix 'logs' link in UI + HBASE-620 testmergetool failing in branch and trunk since hbase-618 went in + + IMPROVEMENTS + HBASE-559 MR example job to count table rows + HBASE-578 Upgrade branch to 0.16.3 hadoop. + HBASE-596 DemoClient.py (Ivan Begtin via Stack) + + +Release 0.1.1 - 04/11/2008 + + BUG FIXES + HBASE-550 EOF trying to read reconstruction log stops region deployment + HBASE-551 Master stuck splitting server logs in shutdown loop; on each + iteration, edits are aggregated up into the millions + HBASE-505 Region assignments should never time out so long as the region + server reports that it is processing the open request + HBASE-552 Fix bloom filter bugs (Andrzej Bialecki via Jim Kellerman) + HBASE-507 Add sleep between retries + HBASE-555 Only one Worker in HRS; on startup, if assigned tens of regions, + havoc of reassignments because open processing is done in series + HBASE-547 UI shows hadoop version, not hbase version + HBASE-561 HBase package does not include LICENSE.txt nor build.xml + HBASE-556 Add 0.16.2 to hbase branch -- if it works + HBASE-563 TestRowFilterAfterWrite erroneously sets master address to + 0.0.0.0:60100 rather than relying on conf + HBASE-554 filters generate StackOverflowException (Clint Morgan via + Jim Kellerman) + HBASE-567 Reused BatchUpdate instances accumulate BatchOperations + + NEW FEATURES + HBASE-548 Tool to online single region + +Release 0.1.0 + + INCOMPATIBLE CHANGES + HADOOP-2750 Deprecated methods startBatchUpdate, commitBatch, abortBatch, + and renewLease have been removed from HTable (Bryan Duxbury via + Jim Kellerman) + HADOOP-2786 Move hbase out of hadoop core + HBASE-403 Fix build after move of hbase in svn + HBASE-494 Up IPC version on 0.1 branch so we cannot mistakenly connect + with a hbase from 0.16.0 + + NEW FEATURES + HBASE-506 When an exception has to escape ServerCallable due to exhausted retries, + show all the exceptions that lead to this situation + + OPTIMIZATIONS + + BUG FIXES + HADOOP-2731 Under load, regions become extremely large and eventually cause + region servers to become unresponsive + HADOOP-2693 NPE in getClosestRowBefore (Bryan Duxbury & Stack) + HADOOP-2599 Some minor improvements to changes in HADOOP-2443 + (Bryan Duxbury & Stack) + HADOOP-2773 Master marks region offline when it is recovering from a region + server death + HBASE-425 Fix doc. so it accomodates new hbase untethered context + HBase-421 TestRegionServerExit broken + HBASE-426 hbase can't find remote filesystem + HBASE-446 Fully qualified hbase.rootdir doesn't work + HBASE-428 Under continuous upload of rows, WrongRegionExceptions are + thrown that reach the client even after retries + HBASE-490 Doubly-assigned .META.; master uses one and clients another + HBASE-496 impossible state for createLease writes 400k lines in about 15mins + HBASE-472 Passing on edits, we dump all to log + HBASE-79 When HBase needs to be migrated, it should display a message on + stdout, not just in the logs + HBASE-495 No server address listed in .META. + HBASE-433 HBASE-251 Region server should delete restore log after successful + restore, Stuck replaying the edits of crashed machine. 
+ HBASE-27 hregioninfo cell empty in meta table + HBASE-501 Empty region server address in info:server entry and a + startcode of -1 in .META. + HBASE-516 HStoreFile.finalKey does not update the final key if it is not + the top region of a split region + HBASE-524 Problems with getFull + HBASE-514 table 'does not exist' when it does + HBASE-537 Wait for hdfs to exit safe mode + HBASE-534 Double-assignment at SPLIT-time + + IMPROVEMENTS + HADOOP-2555 Refactor the HTable#get and HTable#getRow methods to avoid + repetition of retry-on-failure logic (thanks to Peter Dolan and + Bryan Duxbury) + HBASE-281 Shell should allow deletions in .META. and -ROOT- tables + HBASE-480 Tool to manually merge two regions + HBASE-477 Add support for an HBASE_CLASSPATH + HBASE-515 At least double default timeouts between regionserver and master + HBASE-482 package-level javadoc should have example client or at least + point at the FAQ + HBASE-497 RegionServer needs to recover if datanode goes down + HBASE-456 Clearly state which ports need to be opened in order to run HBase + HBASE-483 Merge tool won't merge two overlapping regions + HBASE-476 RegexpRowFilter behaves incorectly when there are multiple store + files (Clint Morgan via Jim Kellerman) + HBASE-527 RegexpRowFilter does not work when there are columns from + multiple families (Clint Morgan via Jim Kellerman) + +Release 0.16.0 + + 2008/02/04 HBase is now a subproject of Hadoop. The first HBase release as + a subproject will be release 0.1.0 which will be equivalent to + the version of HBase included in Hadoop 0.16.0. In order to + accomplish this, the HBase portion of HBASE-288 (formerly + HADOOP-1398) has been backed out. Once 0.1.0 is frozen (depending + mostly on changes to infrastructure due to becoming a sub project + instead of a contrib project), this patch will re-appear on HBase + trunk. + + INCOMPATIBLE CHANGES + HADOOP-2056 A table with row keys containing colon fails to split regions + HADOOP-2079 Fix generated HLog, HRegion names + HADOOP-2495 Minor performance improvements: Slim-down BatchOperation, etc. + HADOOP-2506 Remove the algebra package + HADOOP-2519 Performance improvements: Customized RPC serialization + HADOOP-2478 Restructure how HBase lays out files in the file system (phase 1) + (test input data) + HADOOP-2478 Restructure how HBase lays out files in the file system (phase 2) + Includes migration tool org.apache.hadoop.hbase.util.Migrate + HADOOP-2558 org.onelab.filter.BloomFilter class uses 8X the memory it should + be using + + NEW FEATURES + HADOOP-2061 Add new Base64 dialects + HADOOP-2084 Add a LocalHBaseCluster + HADOOP-2068 RESTful interface (Bryan Duxbury via Stack) + HADOOP-2316 Run REST servlet outside of master + (Bryan Duxbury & Stack) + HADOOP-1550 No means of deleting a'row' (Bryan Duxbuery via Stack) + HADOOP-2384 Delete all members of a column family on a specific row + (Bryan Duxbury via Stack) + HADOOP-2395 Implement "ALTER TABLE ... 
CHANGE column" operation + (Bryan Duxbury via Stack) + HADOOP-2240 Truncate for hbase (Edward Yoon via Stack) + HADOOP-2389 Provide multiple language bindings for HBase (Thrift) + (David Simpson via Stack) + + OPTIMIZATIONS + HADOOP-2479 Save on number of Text object creations + HADOOP-2485 Make mapfile index interval configurable (Set default to 32 + instead of 128) + HADOOP-2553 Don't make Long objects calculating hbase type hash codes + HADOOP-2377 Holding open MapFile.Readers is expensive, so use less of them + HADOOP-2407 Keeping MapFile.Reader open is expensive: Part 2 + HADOOP-2533 Performance: Scanning, just creating MapWritable in next + consumes >20% CPU + HADOOP-2443 Keep lazy cache of regions in client rather than an + 'authoritative' list (Bryan Duxbury via Stack) + HADOOP-2600 Performance: HStore.getRowKeyAtOrBefore should use + MapFile.Reader#getClosest (before) + (Bryan Duxbury via Stack) + + BUG FIXES + HADOOP-2059 In tests, exceptions in min dfs shutdown should not fail test + (e.g. nightly #272) + HADOOP-2064 TestSplit assertion and NPE failures (Patch build #952 and #953) + HADOOP-2124 Use of `hostname` does not work on Cygwin in some cases + HADOOP-2083 TestTableIndex failed in #970 and #956 + HADOOP-2109 Fixed race condition in processing server lease timeout. + HADOOP-2137 hql.jsp : The character 0x19 is not valid + HADOOP-2109 Fix another race condition in processing dead servers, + Fix error online meta regions: was using region name and not + startKey as key for map.put. Change TestRegionServerExit to + always kill the region server for the META region. This makes + the test more deterministic and getting META reassigned was + problematic. + HADOOP-2155 Method expecting HBaseConfiguration throws NPE when given Configuration + HADOOP-2156 BufferUnderflowException for un-named HTableDescriptors + HADOOP-2161 getRow() is orders of magnitudes slower than get(), even on rows + with one column (Clint Morgan and Stack) + HADOOP-2040 Hudson hangs AFTER test has finished + HADOOP-2274 Excess synchronization introduced by HADOOP-2139 negatively + impacts performance + HADOOP-2196 Fix how hbase sits in hadoop 'package' product + HADOOP-2276 Address regression caused by HADOOP-2274, fix HADOOP-2173 (When + the master times out a region servers lease, the region server + may not restart) + HADOOP-2253 getRow can return HBASE::DELETEVAL cells + (Bryan Duxbury via Stack) + HADOOP-2295 Fix assigning a region to multiple servers + HADOOP-2234 TableInputFormat erroneously aggregates map values + HADOOP-2308 null regioninfo breaks meta scanner + HADOOP-2304 Abbreviated symbol parsing error of dir path in jar command + (Edward Yoon via Stack) + HADOOP-2320 Committed TestGet2 is managled (breaks build). + HADOOP-2322 getRow(row, TS) client interface not properly connected + HADOOP-2309 ConcurrentModificationException doing get of all region start keys + HADOOP-2321 TestScanner2 does not release resources which sometimes cause the + test to time out + HADOOP-2315 REST servlet doesn't treat / characters in row key correctly + (Bryan Duxbury via Stack) + HADOOP-2332 Meta table data selection in Hbase Shell + (Edward Yoon via Stack) + HADOOP-2347 REST servlet not thread safe but run in a threaded manner + (Bryan Duxbury via Stack) + HADOOP-2365 Result of HashFunction.hash() contains all identical values + HADOOP-2362 Leaking hdfs file handle on region split + HADOOP-2338 Fix NullPointerException in master server. 
+ HADOOP-2380 REST servlet throws NPE when any value node has an empty string + (Bryan Duxbury via Stack) + HADOOP-2350 Scanner api returns null row names, or skips row names if + different column families do not have entries for some rows + HADOOP-2283 AlreadyBeingCreatedException (Was: Stuck replay of failed + regionserver edits) + HADOOP-2392 TestRegionServerExit has new failure mode since HADOOP-2338 + HADOOP-2324 Fix assertion failures in TestTableMapReduce + HADOOP-2396 NPE in HMaster.cancelLease + HADOOP-2397 The only time that a meta scanner should try to recover a log is + when the master is starting + HADOOP-2417 Fix critical shutdown problem introduced by HADOOP-2338 + HADOOP-2418 Fix assertion failures in TestTableMapReduce, TestTableIndex, + and TestTableJoinMapReduce + HADOOP-2414 Fix ArrayIndexOutOfBoundsException in bloom filters. + HADOOP-2430 Master will not shut down if there are no active region servers + HADOOP-2199 Add tools for going from hregion filename to region name in logs + HADOOP-2441 Fix build failures in TestHBaseCluster + HADOOP-2451 End key is incorrectly assigned in many region splits + HADOOP-2455 Error in Help-string of CREATE command (Edward Yoon via Stack) + HADOOP-2465 When split parent regions are cleaned up, not all the columns are + deleted + HADOOP-2468 TestRegionServerExit failed in Hadoop-Nightly #338 + HADOOP-2467 scanner truncates resultset when > 1 column families + HADOOP-2503 REST Insert / Select encoding issue (Bryan Duxbury via Stack) + HADOOP-2505 formatter classes missing apache license + HADOOP-2504 REST servlet method for deleting a scanner was not properly + mapped (Bryan Duxbury via Stack) + HADOOP-2507 REST servlet does not properly base64 row keys and column names + (Bryan Duxbury via Stack) + HADOOP-2530 Missing type in new hbase custom RPC serializer + HADOOP-2490 Failure in nightly #346 (Added debugging of hudson failures). + HADOOP-2558 fixes for build up on hudson (part 1, part 2, part 3, part 4) + HADOOP-2500 Unreadable region kills region servers + HADOOP-2579 Initializing a new HTable object against a nonexistent table + throws a NoServerForRegionException instead of a + TableNotFoundException when a different table has been created + previously (Bryan Duxbury via Stack) + HADOOP-2587 Splits blocked by compactions cause region to be offline for + duration of compaction. + HADOOP-2592 Scanning, a region can let out a row that its not supposed + to have + HADOOP-2493 hbase will split on row when the start and end row is the + same cause data loss (Bryan Duxbury via Stack) + HADOOP-2629 Shell digests garbage without complaint + HADOOP-2619 Compaction errors after a region splits + HADOOP-2621 Memcache flush flushing every 60 secs with out considering + the max memcache size + HADOOP-2584 Web UI displays an IOException instead of the Tables + HADOOP-2650 Remove Writables.clone and use WritableUtils.clone from + hadoop instead + HADOOP-2668 Documentation and improved logging so fact that hbase now + requires migration comes as less of a surprise + HADOOP-2686 Removed tables stick around in .META. 
+ HADOOP-2688 IllegalArgumentException processing a shutdown stops + server going down and results in millions of lines of output + HADOOP-2706 HBase Shell crash + HADOOP-2712 under load, regions won't split + HADOOP-2675 Options not passed to rest/thrift + HADOOP-2722 Prevent unintentional thread exit in region server and master + HADOOP-2718 Copy Constructor HBaseConfiguration(Configuration) will override + hbase configurations if argumant is not an instance of + HBaseConfiguration. + HADOOP-2753 Back out 2718; programmatic config works but hbase*xml conf + is overridden + HADOOP-2718 Copy Constructor HBaseConfiguration(Configuration) will override + hbase configurations if argumant is not an instance of + HBaseConfiguration (Put it back again). + HADOOP-2631 2443 breaks HTable.getStartKeys when there is more than one + table or table you are enumerating isn't the first table + Delete empty file: src/contrib/hbase/src/java/org/apache/hadoop/hbase/mapred/ + TableOutputCollector.java per Nigel Daley + + IMPROVEMENTS + HADOOP-2401 Add convenience put method that takes writable + (Johan Oskarsson via Stack) + HADOOP-2074 Simple switch to enable DEBUG level-logging in hbase + HADOOP-2088 Make hbase runnable in $HADOOP_HOME/build(/contrib/hbase) + HADOOP-2126 Use Bob Jenkins' hash for bloom filters + HADOOP-2157 Make Scanners implement Iterable + HADOOP-2176 Htable.deleteAll documentation is ambiguous + HADOOP-2139 (phase 1) Increase parallelism in region servers. + HADOOP-2267 [Hbase Shell] Change the prompt's title from 'hbase' to 'hql'. + (Edward Yoon via Stack) + HADOOP-2139 (phase 2) Make region server more event driven + HADOOP-2289 Useless efforts of looking for the non-existant table in select + command. + (Edward Yoon via Stack) + HADOOP-2257 Show a total of all requests and regions on the web ui + (Paul Saab via Stack) + HADOOP-2261 HTable.abort no longer throws exception if there is no active update. + HADOOP-2287 Make hbase unit tests take less time to complete. + HADOOP-2262 Retry n times instead of n**2 times. 
+ HADOOP-1608 Relational Algrebra Operators + (Edward Yoon via Stack) + HADOOP-2198 HTable should have method to return table metadata + HADOOP-2296 hbase shell: phantom columns show up from select command + HADOOP-2297 System.exit() Handling in hbase shell jar command + (Edward Yoon via Stack) + HADOOP-2224 Add HTable.getRow(ROW, ts) + (Bryan Duxbury via Stack) + HADOOP-2339 Delete command with no WHERE clause + (Edward Yoon via Stack) + HADOOP-2299 Support inclusive scans (Bryan Duxbury via Stack) + HADOOP-2333 Client side retries happen at the wrong level + HADOOP-2357 Compaction cleanup; less deleting + prevent possible file leaks + HADOOP-2392 TestRegionServerExit has new failure mode since HADOOP-2338 + HADOOP-2370 Allow column families with an unlimited number of versions + (Edward Yoon via Stack) + HADOOP-2047 Add an '--master=X' and '--html' command-line parameters to shell + (Edward Yoon via Stack) + HADOOP-2351 If select command returns no result, it doesn't need to show the + header information (Edward Yoon via Stack) + HADOOP-2285 Add being able to shutdown regionservers (Dennis Kubes via Stack) + HADOOP-2458 HStoreFile.writeSplitInfo should just call + HStoreFile.Reference.write + HADOOP-2471 Add reading/writing MapFile to PerformanceEvaluation suite + HADOOP-2522 Separate MapFile benchmark from PerformanceEvaluation + (Tom White via Stack) + HADOOP-2502 Insert/Select timestamp, Timestamp data type in HQL + (Edward Yoon via Stack) + HADOOP-2450 Show version (and svn revision) in hbase web ui + HADOOP-2472 Range selection using filter (Edward Yoon via Stack) + HADOOP-2548 Make TableMap and TableReduce generic + (Frederik Hedberg via Stack) + HADOOP-2557 Shell count function (Edward Yoon via Stack) + HADOOP-2589 Change an classes/package name from Shell to hql + (Edward Yoon via Stack) + HADOOP-2545 hbase rest server should be started with hbase-daemon.sh + HADOOP-2525 Same 2 lines repeated 11 million times in HMaster log upon + HMaster shutdown + HADOOP-2616 hbase not spliting when the total size of region reaches max + region size * 1.5 + HADOOP-2643 Make migration tool smarter. + +Release 0.15.1 +Branch 0.15 + + INCOMPATIBLE CHANGES + HADOOP-1931 Hbase scripts take --ARG=ARG_VALUE when should be like hadoop + and do ---ARG ARG_VALUE + + NEW FEATURES + HADOOP-1768 FS command using Hadoop FsShell operations + (Edward Yoon via Stack) + HADOOP-1784 Delete: Fix scanners and gets so they work properly in presence + of deletes. Added a deleteAll to remove all cells equal to or + older than passed timestamp. Fixed compaction so deleted cells + do not make it out into compacted output. Ensure also that + versions > column max are dropped compacting. + HADOOP-1720 Addition of HQL (Hbase Query Language) support in Hbase Shell. + The old shell syntax has been replaced by HQL, a small SQL-like + set of operators, for creating, altering, dropping, inserting, + deleting, and selecting, etc., data in hbase. 
+ (Inchul Song and Edward Yoon via Stack) + HADOOP-1913 Build a Lucene index on an HBase table + (Ning Li via Stack) + HADOOP-1957 Web UI with report on cluster state and basic browsing of tables + + OPTIMIZATIONS + + BUG FIXES + HADOOP-1527 Region server won't start because logdir exists + HADOOP-1723 If master asks region server to shut down, by-pass return of + shutdown message + HADOOP-1729 Recent renaming or META tables breaks hbase shell + HADOOP-1730 unexpected null value causes META scanner to exit (silently) + HADOOP-1747 On a cluster, on restart, regions multiply assigned + HADOOP-1776 Fix for sporadic compaction failures closing and moving + compaction result + HADOOP-1780 Regions are still being doubly assigned + HADOOP-1797 Fix NPEs in MetaScanner constructor + HADOOP-1799 Incorrect classpath in binary version of Hadoop + HADOOP-1805 Region server hang on exit + HADOOP-1785 TableInputFormat.TableRecordReader.next has a bug + (Ning Li via Stack) + HADOOP-1800 output should default utf8 encoding + HADOOP-1801 When hdfs is yanked out from under hbase, hbase should go down gracefully + HADOOP-1813 OOME makes zombie of region server + HADOOP-1814 TestCleanRegionServerExit fails too often on Hudson + HADOOP-1820 Regionserver creates hlogs without bound + (reverted 2007/09/25) (Fixed 2007/09/30) + HADOOP-1821 Replace all String.getBytes() with String.getBytes("UTF-8") + HADOOP-1832 listTables() returns duplicate tables + HADOOP-1834 Scanners ignore timestamp passed on creation + HADOOP-1847 Many HBase tests do not fail well. + HADOOP-1847 Many HBase tests do not fail well. (phase 2) + HADOOP-1870 Once file system failure has been detected, don't check it again + and get on with shutting down the hbase cluster. + HADOOP-1888 NullPointerException in HMemcacheScanner (reprise) + HADOOP-1903 Possible data loss if Exception happens between snapshot and + flush to disk. + HADOOP-1920 Wrapper scripts broken when hadoop in one location and hbase in + another + HADOOP-1923, HADOOP-1924 a) tests fail sporadically because set up and tear + down is inconsistent b) TestDFSAbort failed in nightly #242 + HADOOP-1929 Add hbase-default.xml to hbase jar + HADOOP-1941 StopRowFilter throws NPE when passed null row + HADOOP-1966 Make HBase unit tests more reliable in the Hudson environment. + HADOOP-1975 HBase tests failing with java.lang.NumberFormatException + HADOOP-1990 Regression test instability affects nightly and patch builds + HADOOP-1996 TestHStoreFile fails on windows if run multiple times + HADOOP-1937 When the master times out a region server's lease, it is too + aggressive in reclaiming the server's log. + HADOOP-2004 webapp hql formatting bugs + HADOOP_2011 Make hbase daemon scripts take args in same order as hadoop + daemon scripts + HADOOP-2017 TestRegionServerAbort failure in patch build #903 and + nightly #266 + HADOOP-2029 TestLogRolling fails too often in patch and nightlies + HADOOP-2038 TestCleanRegionExit failed in patch build #927 + + IMPROVEMENTS + HADOOP-1737 Make HColumnDescriptor data publically members settable + HADOOP-1746 Clean up findbugs warnings + HADOOP-1757 Bloomfilters: single argument constructor, use enum for bloom + filter types + HADOOP-1760 Use new MapWritable and SortedMapWritable classes from + org.apache.hadoop.io + HADOOP-1793 (Phase 1) Remove TestHClient (Phase2) remove HClient. 
+ HADOOP-1794 Remove deprecated APIs + HADOOP-1802 Startup scripts should wait until hdfs as cleared 'safe mode' + HADOOP-1833 bin/stop_hbase.sh returns before it completes + (Izaak Rubin via Stack) + HADOOP-1835 Updated Documentation for HBase setup/installation + (Izaak Rubin via Stack) + HADOOP-1868 Make default configuration more responsive + HADOOP-1884 Remove useless debugging log messages from hbase.mapred + HADOOP-1856 Add Jar command to hbase shell using Hadoop RunJar util + (Edward Yoon via Stack) + HADOOP-1928 Have master pass the regionserver the filesystem to use + HADOOP-1789 Output formatting + HADOOP-1960 If a region server cannot talk to the master before its lease + times out, it should shut itself down + HADOOP-2035 Add logo to webapps + + +Below are the list of changes before 2007-08-18 + + 1. HADOOP-1384. HBase omnibus patch. (jimk, Vuk Ercegovac, and Michael Stack) + 2. HADOOP-1402. Fix javadoc warnings in hbase contrib. (Michael Stack) + 3. HADOOP-1404. HBase command-line shutdown failing (Michael Stack) + 4. HADOOP-1397. Replace custom hbase locking with + java.util.concurrent.locks.ReentrantLock (Michael Stack) + 5. HADOOP-1403. HBase reliability - make master and region server more fault + tolerant. + 6. HADOOP-1418. HBase miscellaneous: unit test for HClient, client to do + 'Performance Evaluation', etc. + 7. HADOOP-1420, HADOOP-1423. Findbugs changes, remove reference to removed + class HLocking. + 8. HADOOP-1424. TestHBaseCluster fails with IllegalMonitorStateException. Fix + regression introduced by HADOOP-1397. + 9. HADOOP-1426. Make hbase scripts executable + add test classes to CLASSPATH. + 10. HADOOP-1430. HBase shutdown leaves regionservers up. + 11. HADOOP-1392. Part1: includes create/delete table; enable/disable table; + add/remove column. + 12. HADOOP-1392. Part2: includes table compaction by merging adjacent regions + that have shrunk in size. + 13. HADOOP-1445 Support updates across region splits and compactions + 14. HADOOP-1460 On shutdown IOException with complaint 'Cannot cancel lease + that is not held' + 15. HADOOP-1421 Failover detection, split log files. + For the files modified, also clean up javadoc, class, field and method + visibility (HADOOP-1466) + 16. HADOOP-1479 Fix NPE in HStore#get if store file only has keys < passed key. + 17. HADOOP-1476 Distributed version of 'Performance Evaluation' script + 18. HADOOP-1469 Asychronous table creation + 19. HADOOP-1415 Integrate BSD licensed bloom filter implementation. + 20. HADOOP-1465 Add cluster stop/start scripts for hbase + 21. HADOOP-1415 Provide configurable per-column bloom filters - part 2. + 22. HADOOP-1498. Replace boxed types with primitives in many places. + 23. HADOOP-1509. Made methods/inner classes in HRegionServer and HClient protected + instead of private for easier extension. Also made HRegion and HRegionInfo public too. + Added an hbase-default.xml property for specifying what HRegionInterface extension to use + for proxy server connection. (James Kennedy via Jim Kellerman) + 24. HADOOP-1534. [hbase] Memcache scanner fails if start key not present + 25. HADOOP-1537. Catch exceptions in testCleanRegionServerExit so we can see + what is failing. + 26. HADOOP-1543 [hbase] Add HClient.tableExists + 27. HADOOP-1519 [hbase] map/reduce interface for HBase. (Vuk Ercegovac and + Jim Kellerman) + 28. HADOOP-1523 Hung region server waiting on write locks + 29. HADOOP-1560 NPE in MiniHBaseCluster on Windows + 30. 
HADOOP-1531 Add RowFilter to HRegion.HScanner + Adds a row filtering interface and two implemenentations: A page scanner, + and a regex row/column-data matcher. (James Kennedy via Stack) + 31. HADOOP-1566 Key-making utility + 32. HADOOP-1415 Provide configurable per-column bloom filters. + HADOOP-1466 Clean up visibility and javadoc issues in HBase. + 33. HADOOP-1538 Provide capability for client specified time stamps in HBase + HADOOP-1466 Clean up visibility and javadoc issues in HBase. + 34. HADOOP-1589 Exception handling in HBase is broken over client server connections + 35. HADOOP-1375 a simple parser for hbase (Edward Yoon via Stack) + 36. HADOOP-1600 Update license in HBase code + 37. HADOOP-1589 Exception handling in HBase is broken over client server + 38. HADOOP-1574 Concurrent creates of a table named 'X' all succeed + 39. HADOOP-1581 Un-openable tablename bug + 40. HADOOP-1607 [shell] Clear screen command (Edward Yoon via Stack) + 41. HADOOP-1614 [hbase] HClient does not protect itself from simultaneous updates + 42. HADOOP-1468 Add HBase batch update to reduce RPC overhead + 43. HADOOP-1616 Sporadic TestTable failures + 44. HADOOP-1615 Replacing thread notification-based queue with + java.util.concurrent.BlockingQueue in HMaster, HRegionServer + 45. HADOOP-1606 Updated implementation of RowFilterSet, RowFilterInterface + (Izaak Rubin via Stack) + 46. HADOOP-1579 Add new WhileMatchRowFilter and StopRowFilter filters + (Izaak Rubin via Stack) + 47. HADOOP-1637 Fix to HScanner to Support Filters, Add Filter Tests to + TestScanner2 (Izaak Rubin via Stack) + 48. HADOOP-1516 HClient fails to readjust when ROOT or META redeployed on new + region server + 49. HADOOP-1646 RegionServer OOME's under sustained, substantial loading by + 10 concurrent clients + 50. HADOOP-1468 Add HBase batch update to reduce RPC overhead (restrict batches + to a single row at a time) + 51. HADOOP-1528 HClient for multiple tables (phase 1) (James Kennedy & JimK) + 52. HADOOP-1528 HClient for multiple tables (phase 2) all HBase client side code + (except TestHClient and HBaseShell) have been converted to use the new client + side objects (HTable/HBaseAdmin/HConnection) instead of HClient. + 53. HADOOP-1528 HClient for multiple tables - expose close table function + 54. HADOOP-1466 Clean up warnings, visibility and javadoc issues in HBase. + 55. HADOOP-1662 Make region splits faster + 56. HADOOP-1678 On region split, master should designate which host should + serve daughter splits. Phase 1: Master balances load for new regions and + when a region server fails. + 57. HADOOP-1678 On region split, master should designate which host should + serve daughter splits. Phase 2: Master assigns children of split region + instead of HRegionServer serving both children. + 58. HADOOP-1710 All updates should be batch updates + 59. HADOOP-1711 HTable API should use interfaces instead of concrete classes as + method parameters and return values + 60. HADOOP-1644 Compactions should not block updates + 60. HADOOP-1672 HBase Shell should use new client classes + (Edward Yoon via Stack). + 61. HADOOP-1709 Make HRegionInterface more like that of HTable + HADOOP-1725 Client find of table regions should not include offlined, split parents += diff --git NOTICE.txt NOTICE.txt index 337c93b..5e159f6 100644 --- NOTICE.txt +++ NOTICE.txt @@ -15,6 +15,12 @@ Common Public License v1.0. JRuby itself includes libraries variously licensed. 
See its COPYING document for details: https://github.com/jruby/jruby/blob/master/COPYING - + The JRuby community went out of their way to make JRuby compatible with Apache projects: See https://issues.apache.org/jira/browse/HBASE-3374) + +Our Orca logo we got here: http://www.vectorfree.com/jumping-orca +It is licensed Creative Commons Attribution 3.0. +See https://creativecommons.org/licenses/by/3.0/us/ +We changed the logo by stripping the colored background, inverting +it and then rotating it some. diff --git bin/considerAsDead.sh bin/considerAsDead.sh new file mode 100755 index 0000000..a823f9d --- /dev/null +++ bin/considerAsDead.sh @@ -0,0 +1,63 @@ +#!/usr/bin/env bash +# +#/** +# * Copyright 2007 The Apache Software Foundation +# * +# * Licensed to the Apache Software Foundation (ASF) under one +# * or more contributor license agreements. See the NOTICE file +# * distributed with this work for additional information +# * regarding copyright ownership. The ASF licenses this file +# * to you under the Apache License, Version 2.0 (the +# * "License"); you may not use this file except in compliance +# * with the License. You may obtain a copy of the License at +# * +# * http://www.apache.org/licenses/LICENSE-2.0 +# * +# * Unless required by applicable law or agreed to in writing, software +# * distributed under the License is distributed on an "AS IS" BASIS, +# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# * See the License for the specific language governing permissions and +# * limitations under the License. +# */ +# + +usage="Usage: considerAsDead.sh --hostname serverName" + +# if no args specified, show usage +if [ $# -le 1 ]; then + echo $usage + exit 1 +fi + +bin=`dirname "${BASH_SOURCE-$0}"` +bin=`cd "$bin">/dev/null; pwd` + +. $bin/hbase-config.sh + +shift +deadhost=$@ + +remote_cmd="cd ${HBASE_HOME}; $bin/hbase-daemon.sh --config ${HBASE_CONF_DIR} restart" + +zparent=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.parent` +if [ "$zparent" == "null" ]; then zparent="/hbase"; fi + +zkrs=`$bin/hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.rs` +if [ "$zkrs" == "null" ]; then zkrs="rs"; fi + +zkrs="$zparent/$zkrs" +online_regionservers=`$bin/hbase zkcli ls $zkrs 2>&1 | tail -1 | sed "s/\[//" | sed "s/\]//"` +for rs in $online_regionservers +do + rs_parts=(${rs//,/ }) + hostname=${rs_parts[0]} + echo $deadhost + echo $hostname + if [ "$deadhost" == "$hostname" ]; then + znode="$zkrs/$rs" + echo "ZNode Deleting:" $znode + $bin/hbase zkcli delete $znode > /dev/null 2>&1 + sleep 1 + ssh $HBASE_SSH_OPTS $hostname $remote_cmd 2>&1 | sed "s/^/$hostname: /" + fi +done diff --git bin/hirb.rb bin/hirb.rb index 0503c29..94b5cdb 100644 --- bin/hirb.rb +++ bin/hirb.rb @@ -19,6 +19,12 @@ # File passed to org.jruby.Main by bin/hbase. Pollutes jirb with hbase imports # and hbase commands and then loads jirb. Outputs a banner that tells user # where to find help, shell version, and loads up a custom hirb. +# +# In noninteractive mode, runs commands from stdin until completion or an error. +# On success will exit with status 0, on any problem will exit non-zero. Callers +# should only rely on "not equal to 0", because the current error exit code of 1 +# will likely be updated to diffentiate e.g. invalid commands, incorrect args, +# permissions, etc. # TODO: Interrupt a table creation or a connection to a bad master. Currently # has to time out. 
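A minimal usage sketch for the noninteractive shell mode described in the comment above. It assumes an installed HBase whose bin/hbase wrapper forwards its arguments to this script; the table name is illustrative only.

    # Pipe commands on stdin; -n skips IRB and exits non-zero on the first error.
    echo "describe 'usertable'" | hbase shell -n
    status=$?
    if [ $status -ne 0 ]; then
      # As noted above, rely only on "not equal to 0", not on a specific code.
      echo "hbase shell command failed, exit status ${status}" >&2
    fi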
Below we've set down the retries for rpc and hbase but @@ -54,12 +60,16 @@ Usage: shell [OPTIONS] [SCRIPTFILE [ARGUMENTS]] -d | --debug Set DEBUG log levels. -h | --help This help. + -n | --noninteractive Do not run within an IRB session + and exit with non-zero status on + first error. HERE found = [] format = 'console' script2run = nil log_level = org.apache.log4j.Level::ERROR @shell_debug = false +interactive = true for arg in ARGV if arg =~ /^--format=(.+)/i format = $1 @@ -80,6 +90,9 @@ for arg in ARGV @shell_debug = true found.push(arg) puts "Setting DEBUG log level..." + elsif arg == '-n' || arg == '--noninteractive' + interactive = false + found.push(arg) else # Presume it a script. Save it off for running later below # after we've set up some environment. @@ -118,10 +131,11 @@ require 'shell/formatter' @hbase = Hbase::Hbase.new # Setup console -@shell = Shell::Shell.new(@hbase, @formatter) +@shell = Shell::Shell.new(@hbase, @formatter, interactive) @shell.debug = @shell_debug # Add commands to this namespace +# TODO avoid polluting main namespace by using a binding @shell.export_commands(self) # Add help command @@ -158,38 +172,75 @@ end # Include hbase constants include HBaseConstants -# If script2run, try running it. Will go on to run the shell unless +# If script2run, try running it. If we're in interactive mode, will go on to run the shell unless # script calls 'exit' or 'exit 0' or 'exit errcode'. load(script2run) if script2run -# Output a banner message that tells users where to go for help -@shell.print_banner +if interactive + # Output a banner message that tells users where to go for help + @shell.print_banner -require "irb" -require 'irb/hirb' + require "irb" + require 'irb/hirb' -module IRB - def self.start(ap_path = nil) - $0 = File::basename(ap_path, ".rb") if ap_path + module IRB + def self.start(ap_path = nil) + $0 = File::basename(ap_path, ".rb") if ap_path - IRB.setup(ap_path) - @CONF[:IRB_NAME] = 'hbase' - @CONF[:AP_NAME] = 'hbase' - @CONF[:BACK_TRACE_LIMIT] = 0 unless $fullBackTrace + IRB.setup(ap_path) + @CONF[:IRB_NAME] = 'hbase' + @CONF[:AP_NAME] = 'hbase' + @CONF[:BACK_TRACE_LIMIT] = 0 unless $fullBackTrace - if @CONF[:SCRIPT] - hirb = HIRB.new(nil, @CONF[:SCRIPT]) - else - hirb = HIRB.new - end + if @CONF[:SCRIPT] + hirb = HIRB.new(nil, @CONF[:SCRIPT]) + else + hirb = HIRB.new + end - @CONF[:IRB_RC].call(hirb.context) if @CONF[:IRB_RC] - @CONF[:MAIN_CONTEXT] = hirb.context + @CONF[:IRB_RC].call(hirb.context) if @CONF[:IRB_RC] + @CONF[:MAIN_CONTEXT] = hirb.context - catch(:IRB_EXIT) do - hirb.eval_input + catch(:IRB_EXIT) do + hirb.eval_input + end end end -end -IRB.start + IRB.start +else + begin + # Noninteractive mode: if there is input on stdin, do a simple REPL. + # XXX Note that this purposefully uses STDIN and not Kernel.gets + # in order to maintain compatibility with previous behavior where + # a user could pass in script2run and then still pipe commands on + # stdin. + require "irb/ruby-lex" + require "irb/workspace" + workspace = IRB::WorkSpace.new(binding()) + scanner = RubyLex.new + scanner.set_input(STDIN) + scanner.each_top_level_statement do |statement, linenum| + puts(workspace.evaluate(nil, statement, 'stdin', linenum)) + end + # XXX We're catching Exception on purpose, because we want to include + # unwrapped java exceptions, syntax errors, eval failures, etc. 
+ rescue Exception => exception + message = exception.to_s + # exception unwrapping in shell means we'll have to handle Java exceptions + # as a special case in order to format them properly. + if exception.kind_of? java.lang.Exception + $stderr.puts "java exception" + message = exception.get_message + end + # Include the 'ERROR' string to try to make transition easier for scripts that + # may have already been relying on grepping output. + puts "ERROR #{exception.class}: #{message}" + if $fullBacktrace + # re-raising the will include a backtrace and exit. + raise exception + else + exit 1 + end + end +end diff --git bin/region_mover.rb bin/region_mover.rb index 465ffa0..7dfedd1 100644 --- bin/region_mover.rb +++ bin/region_mover.rb @@ -344,7 +344,6 @@ def unloadRegions(options, hostname, port) # Remove those already tried to move rs.removeAll(movedRegions) break if rs.length == 0 - count = 0 $LOG.info("Moving " + rs.length.to_s + " region(s) from " + servername + " on " + servers.length.to_s + " servers using " + options[:maxthreads].to_s + " threads.") counter = 0 @@ -398,7 +397,6 @@ def loadRegions(options, hostname, port) sleep 0.5 end $LOG.info("Moving " + regions.size().to_s + " regions to " + servername) - count = 0 # sleep 20s to make sure the rs finished initialization. sleep 20 counter = 0 @@ -415,13 +413,13 @@ def loadRegions(options, hostname, port) next unless exists currentServer = getServerNameForRegion(admin, r) if currentServer and currentServer == servername - $LOG.info("Region " + r.getRegionNameAsString() + " (" + count.to_s + + $LOG.info("Region " + r.getRegionNameAsString() + " (" + counter.to_s + " of " + regions.length.to_s + ") already on target server=" + servername) counter = counter + 1 next end - pool.launch(r,currentServer,count) do |_r,_currentServer,_count| - $LOG.info("Moving region " + _r.getRegionNameAsString() + " (" + (_count + 1).to_s + + pool.launch(r,currentServer,counter) do |_r,_currentServer,_counter| + $LOG.info("Moving region " + _r.getRegionNameAsString() + " (" + (_counter + 1).to_s + " of " + regions.length.to_s + ") from " + _currentServer.to_s + " to server=" + servername); move(admin, _r, servername, _currentServer) diff --git dev-support/findHangingTest.sh dev-support/findHangingTest.sh deleted file mode 100755 index f7ebe47..0000000 --- dev-support/findHangingTest.sh +++ /dev/null @@ -1,40 +0,0 @@ -#!/bin/bash -## -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-## -# script to find hanging test from Jenkins build output -# usage: ./findHangingTest.sh -# -`curl -k -o jenkins.out "$1"` -expecting=Running -cat jenkins.out | while read line; do - if [[ "$line" =~ "Running org.apache.hadoop" ]]; then - if [[ "$expecting" =~ "Running" ]]; then - expecting=Tests - else - echo "Hanging test: $prevLine" - fi - fi - if [[ "$line" =~ "Tests run" ]]; then - expecting=Running - fi - if [[ "$line" =~ "Forking command line" ]]; then - a=$line - else - prevLine=$line - fi -done diff --git dev-support/findHangingTests.py dev-support/findHangingTests.py new file mode 100644 index 0000000..f51e7f5 --- /dev/null +++ dev-support/findHangingTests.py @@ -0,0 +1,54 @@ +#!/usr/bin/python +## +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +## +# script to find hanging test from Jenkins build output +# usage: ./findHangingTests.py +# +import urllib2 +import sys +import string +if len(sys.argv) != 2 : + print "ERROR : Provide the jenkins job console URL as the only argument." + exit(1) +print "Fetching the console output from the URL" +response = urllib2.urlopen(sys.argv[1]) +i = 0; +tests = {} +failed_tests = {} +while True: + n = response.readline() + if n == "" : + break + if n.find("org.apache.hadoop.hbase") < 0: + continue + test_name = string.strip(n[n.find("org.apache.hadoop.hbase"):len(n)]) + if n.find("Running org.apache.hadoop.hbase") > -1 : + tests[test_name] = False + if n.find("Tests run:") > -1 : + if n.find("FAILURE") > -1 or n.find("ERROR") > -1: + failed_tests[test_name] = True + tests[test_name] = True +response.close() + +print "Printing hanging tests" +for key, value in tests.iteritems(): + if value == False: + print "Hanging test : " + key +print "Printing Failing tests" +for key, value in failed_tests.iteritems(): + print "Failing test : " + key diff --git dev-support/findbugs-exclude.xml dev-support/findbugs-exclude.xml index b2a609a..d89f9b2 100644 --- dev-support/findbugs-exclude.xml +++ dev-support/findbugs-exclude.xml @@ -261,4 +261,13 @@ + + + + + + + + + diff --git dev-support/hbase_docker/Dockerfile dev-support/hbase_docker/Dockerfile index 9f55a44..7829292 100644 --- dev-support/hbase_docker/Dockerfile +++ dev-support/hbase_docker/Dockerfile @@ -38,7 +38,7 @@ ENV MAVEN_HOME /usr/local/apache-maven ENV PATH /usr/java/bin:/usr/local/apache-maven/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin # Pull down HBase and build it into /root/hbase-bin. 
-RUN git clone http://git.apache.org/hbase.git -b branch-1 +RUN git clone http://git.apache.org/hbase.git -b master RUN mvn clean install -DskipTests assembly:single -f ./hbase/pom.xml RUN mkdir -p hbase-bin RUN tar xzf /root/hbase/hbase-assembly/target/*tar.gz --strip-components 1 -C /root/hbase-bin diff --git dev-support/jdiffHBasePublicAPI.sh dev-support/jdiffHBasePublicAPI.sh old mode 100644 new mode 100755 index b6adbbfe..a1d2ebb --- dev-support/jdiffHBasePublicAPI.sh +++ dev-support/jdiffHBasePublicAPI.sh @@ -150,7 +150,7 @@ scenario_template_name=hbase_jdiff_p-$PREVIOUS_BRANCH-c-$CURRENT_BRANCH.xml # Pull down JDiff tool and unpack it if [ ! -d jdiff-1.1.1-with-incompatible-option ]; then - curl -O http://cloud.github.com/downloads/tomwhite/jdiff/jdiff-1.1.1-with-incompatible-option.zip + curl -OL http://cloud.github.com/downloads/tomwhite/jdiff/jdiff-1.1.1-with-incompatible-option.zip unzip jdiff-1.1.1-with-incompatible-option.zip fi @@ -164,7 +164,7 @@ if [[ "$FIRST_SOURCE_TYPE" = "git_repo" ]]; then rm -rf p-$PREVIOUS_BRANCH mkdir -p p-$PREVIOUS_BRANCH cd p-$PREVIOUS_BRANCH - git clone --depth 1 $PREVIOUS_REPO && cd hbase && git checkout origin/$PREVIOUS_BRANCH + git clone --depth 1 --branch $PREVIOUS_BRANCH $PREVIOUS_REPO cd $JDIFF_WORKING_DIRECTORY HBASE_1_HOME=`pwd`/p-$PREVIOUS_BRANCH/hbase else @@ -180,7 +180,7 @@ if [[ "$SECOND_SOURCE_TYPE" = "git_repo" ]]; then rm -rf $JDIFF_WORKING_DIRECTORY/c-$CURRENT_BRANCH mkdir -p $JDIFF_WORKING_DIRECTORY/c-$CURRENT_BRANCH cd $JDIFF_WORKING_DIRECTORY/c-$CURRENT_BRANCH - git clone --depth 1 $CURRENT_REPO && cd hbase && git checkout origin/$CURRENT_BRANCH + git clone --depth 1 --branch $CURRENT_BRANCH $CURRENT_REPO cd $JDIFF_WORKING_DIRECTORY HBASE_2_HOME=`pwd`/c-$CURRENT_BRANCH/hbase else @@ -226,15 +226,24 @@ cp $templateFile $JDIFF_WORKING_DIRECTORY/$scenario_template_name ### Note that PREVIOUS_BRANCH and CURRENT_BRANCH will be the absolute locations of the source. echo "Configuring the jdiff script" -sed -i "s]hbase_jdiff_report]hbase_jdiff_report-p-$PREVIOUS_BRANCH-c-$CURRENT_BRANCH]g" $JDIFF_WORKING_DIRECTORY/$scenario_template_name -sed -i "s]JDIFF_HOME_NAME]$JDIFF_HOME]g" $JDIFF_WORKING_DIRECTORY/$scenario_template_name -sed -i "s]OLD_BRANCH_NAME]$HBASE_1_HOME]g" $JDIFF_WORKING_DIRECTORY/$scenario_template_name -sed -i "s]NEW_BRANCH_NAME]$HBASE_2_HOME]g" $JDIFF_WORKING_DIRECTORY/$scenario_template_name -sed -i "s]V1]$PREVIOUS_BRANCH]g" $JDIFF_WORKING_DIRECTORY/$scenario_template_name -sed -i "s]V2]$CURRENT_BRANCH]g" $JDIFF_WORKING_DIRECTORY/$scenario_template_name - -sed -i "s]JDIFF_FOLDER]$JDIFF_WORKING_DIRECTORY]g" $JDIFF_WORKING_DIRECTORY/$scenario_template_name +# Extension to -i is done to support in-place editing on GNU sed and BSD sed. 
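# For illustration only: the portable pattern relied on below is to give -i an
# attached backup suffix, which both GNU sed and BSD sed accept, and then drop
# the backup file it leaves behind. A minimal sketch against a hypothetical
# scratch file:
#   sed -i.tmp "s]OLD_TOKEN]new-value]g" scratch.xml
#   rm -f scratch.xml.tmp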
+sed -i.tmp "s]hbase_jdiff_report]hbase_jdiff_report-p-$PREVIOUS_BRANCH-c-$CURRENT_BRANCH]g" \ + $JDIFF_WORKING_DIRECTORY/$scenario_template_name +sed -i.tmp "s]JDIFF_HOME_NAME]$JDIFF_HOME]g" \ + $JDIFF_WORKING_DIRECTORY/$scenario_template_name +sed -i.tmp "s]OLD_BRANCH_NAME]$HBASE_1_HOME]g" \ + $JDIFF_WORKING_DIRECTORY/$scenario_template_name +sed -i.tmp "s]NEW_BRANCH_NAME]$HBASE_2_HOME]g" \ + $JDIFF_WORKING_DIRECTORY/$scenario_template_name + +sed -i.tmp "s]V1]$PREVIOUS_BRANCH]g" \ + $JDIFF_WORKING_DIRECTORY/$scenario_template_name +sed -i.tmp "s]V2]$CURRENT_BRANCH]g" \ + $JDIFF_WORKING_DIRECTORY/$scenario_template_name + +sed -i.tmp "s]JDIFF_FOLDER]$JDIFF_WORKING_DIRECTORY]g" \ + $JDIFF_WORKING_DIRECTORY/$scenario_template_name echo "Running jdiff"; ls -la $JDIFF_WORKING_DIRECTORY; diff --git dev-support/publish_hbase_website.sh dev-support/publish_hbase_website.sh index 2763dec..0350a6d 100755 --- dev-support/publish_hbase_website.sh +++ dev-support/publish_hbase_website.sh @@ -90,7 +90,7 @@ if [ $INTERACTIVE ]; then [Yy]* ) mvn clean package javadoc:aggregate site site:stage -DskipTests status=$? - if [ $status != 0 ]; then + if [ $status -ne 0 ]; then echo "The website does not build. Aborting." exit $status fi @@ -234,6 +234,6 @@ else changed the size of the website by $SVN_SIZE_DIFF MB and the number of files \ by $SVN_NUM_DIFF files." |tee /tmp/commit.txt cat /tmp/out.txt >> /tmp/commit.txt - svn commit -F /tmp/commit.txt + svn commit -q -F /tmp/commit.txt fi diff --git dev-support/rebase_all_git_branches.sh dev-support/rebase_all_git_branches.sh new file mode 100755 index 0000000..261faa8 --- /dev/null +++ dev-support/rebase_all_git_branches.sh @@ -0,0 +1,202 @@ +#!/bin/bash + +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +# This script assumes that your remote is called "origin" +# and that your local master branch is called "master". +# I am sure it could be made more abstract but these are the defaults. + +# Edit this line to point to your default directory, +# or always pass a directory to the script. + +DEFAULT_DIR="EDIT_ME" + +function print_usage { + cat << __EOF + +$0: A script to manage your Apache HBase Git repository. + +If run with no arguments, it reads the DEFAULT_DIR variable, which you +can specify by editing the script. + +Usage: $0 [-d ] + $0 -h + + -h Show this screen. + -d The absolute or relative directory of your Git repository. 
+__EOF +} + +function get_all_branches { + # Gets all git branches present locally + all_branches=() + for i in `git branch --list | sed -e "s/\*//g"`; do + all_branches+=("$(echo $i | awk '{print($1)}')") + done +} + +function get_tracking_branches { + # Gets all branches with a remote tracking branch + tracking_branches=() + for i in `git branch -lvv | grep "\[origin/" | sed -e 's/\*//g' | awk {'print $1'}`; do + tracking_branches+=("$(echo $i | awk '{print($1)}')") + done +} + +function check_git_branch_status { + # Checks the current Git branch to see if it's dirty + # Returns 1 if the branch is dirty + git_dirty=$(git diff --shortstat 2> /dev/null | wc -l|awk {'print $1'}) + if [ "$git_dirty" -ne 0 ]; then + echo "Git status is dirty. Commit locally first." >&2 + exit 1 + fi +} + +function get_jira_status { + # This function expects as an argument the JIRA ID, + # and returns 99 if resolved and 1 if it couldn't + # get the status. + + # The JIRA status looks like this in the HTML: + # span id="resolution-val" class="value resolved" > + # The following is a bit brittle, but filters for lines with + # resolution-val returns 99 if it's resolved + jira_url='https://issues.apache.org/jira/rest/api/2/issue' + jira_id="$1" + curl -s "$jira_url/$jira_id?fields=resolution" |grep -q '{"resolution":null}' + status=$? + if [ $status -ne 0 -a $status -ne 1 ]; then + echo "Could not get JIRA status. Check your network." >&2 + exit 1 + fi + if [ $status -ne 0 ]; then + return 99 + fi +} + +# Process the arguments +while getopts ":hd:" opt; do + case $opt in + d) + # A directory was passed in + dir="$OPTARG" + if [ ! -d "$dir/.git/" ]; then + echo "$dir does not exist or is not a Git repository." >&2 + exit 1 + fi + ;; + h) + # Print usage instructions + print_usage + exit 0 + ;; + *) + echo "Invalid argument: $OPTARG" >&2 + print_usage >&2 + exit 1 + ;; + esac +done + +if [ -z "$dir" ]; then + # No directory was passed in + dir="$DEFAULT_DIR" + if [ "$dir" = "EDIT_ME" ]; then + echo "You need to edit the DEFAULT_DIR in $0." >&2 + $0 -h + exit 1 + elif [ ! -d "$DEFAULT_DIR/.git/" ]; then + echo "Default directory $DEFAULT_DIR is not a Git repository." >&2 + exit 1 + fi +fi + +cd "$dir" + +# For each tracking branch, check it out and make sure it's fresh +# This function creates tracking_branches array and stores the tracking branches in it +get_tracking_branches +for i in "${tracking_branches[@]}"; do + git checkout -q "$i" + # Exit if git status is dirty + check_git_branch_status + git pull -q --rebase + status=$? + if [ "$status" -ne 0 ]; then + echo "Unable to pull changes in $i: $status Exiting." >&2 + exit 1 + fi + echo "Refreshed $i from remote." +done + +# Run the function to get the list of all branches +# The function creates array all_branches and stores the branches in it +get_all_branches + +# Declare array to hold deleted branch info +deleted_branches=() + +for i in "${all_branches[@]}"; do + # Check JIRA status to see if we still need this branch + # JIRA expects uppercase + jira_id="$(echo $i | awk '{print toupper($0)'})" + if [[ "$jira_id" == HBASE-* ]]; then + # Returns 1 if the JIRA is closed, 0 otherwise + get_jira_status "$jira_id" + jira_status=$? 
+ if [ $jira_status -eq 99 ]; then + # the JIRA seems to be resolved or is at least not unresolved + deleted_branches+=("$i") + fi + fi + + git checkout -q "$i" + + # Exit if git status is dirty + check_git_branch_status + + # If this branch has a remote, don't rebase it + # If it has a remote, it has a log with at least one entry + git log -n 1 origin/"$i" > /dev/null 2>&1 + status=$? + if [ $status -eq 128 ]; then + # Status 128 means there is no remote branch + # Try to rebase against master + echo "Rebasing $i on origin/master" + git rebase -q origin/master > /dev/null 2>&1 + if [ $? -ne 0 ]; then + echo "Failed. Rolling back. Rebase $i manually." + git rebase --abort + fi + elif [ $status -ne 0 ]; then + # If status is 0 it means there is a remote branch, we already took care of it + echo "Unknown error: $?" >&2 + exit 1 + fi +done + +# Offer to clean up all deleted branches +for i in "${deleted_branches[@]}"; do + read -p "$i's JIRA is resolved. Delete? " yn + case $yn in + [Yy]) + git branch -D $i + ;; + *) + echo "To delete it manually, run git branch -D $deleted_branches" + ;; + esac +done +git checkout -q master +exit 0 diff --git dev-support/smart-apply-patch.sh dev-support/smart-apply-patch.sh index 74a7128..cf1a7b0 100755 --- dev-support/smart-apply-patch.sh +++ dev-support/smart-apply-patch.sh @@ -46,32 +46,36 @@ if $PATCH -p0 -E --dry-run < $PATCH_FILE 2>&1 > $TMP; then # is adding new files and they would apply anywhere. So try to guess the # correct place to put those files. - TMP2=/tmp/tmp.paths.2.$$ - TOCLEAN="$TOCLEAN $TMP2" +# NOTE 2014/07/17: +# Temporarily disabling below check since our jenkins boxes seems to be not defaulting to bash +# causing below checks to fail. Once it is fixed, we can revert the commit and enable this again. - grep '^patching file ' $TMP | awk '{print $3}' | grep -v /dev/null | sort | uniq > $TMP2 - - #first off check that all of the files do not exist - FOUND_ANY=0 - for CHECK_FILE in $(cat $TMP2) - do - if [[ -f $CHECK_FILE ]]; then - FOUND_ANY=1 - fi - done - - if [[ "$FOUND_ANY" = "0" ]]; then - #all of the files are new files so we have to guess where the correct place to put it is. - - # if all of the lines start with a/ or b/, then this is a git patch that - # was generated without --no-prefix - if ! grep -qv '^a/\|^b/' $TMP2 ; then - echo Looks like this is a git patch. Stripping a/ and b/ prefixes - echo and incrementing PLEVEL - PLEVEL=$[$PLEVEL + 1] - sed -i -e 's,^[ab]/,,' $TMP2 - fi - fi +# TMP2=/tmp/tmp.paths.2.$$ +# TOCLEAN="$TOCLEAN $TMP2" +# +# grep '^patching file ' $TMP | awk '{print $3}' | grep -v /dev/null | sort | uniq > $TMP2 +# +# #first off check that all of the files do not exist +# FOUND_ANY=0 +# for CHECK_FILE in $(cat $TMP2) +# do +# if [[ -f $CHECK_FILE ]]; then +# FOUND_ANY=1 +# fi +# done +# +# if [[ "$FOUND_ANY" = "0" ]]; then +# #all of the files are new files so we have to guess where the correct place to put it is. +# +# # if all of the lines start with a/ or b/, then this is a git patch that +# # was generated without --no-prefix +# if ! grep -qv '^a/\|^b/' $TMP2 ; then +# echo Looks like this is a git patch. 
Stripping a/ and b/ prefixes +# echo and incrementing PLEVEL +# PLEVEL=$[$PLEVEL + 1] +# sed -i -e 's,^[ab]/,,' $TMP2 +# fi +# fi elif $PATCH -p1 -E --dry-run < $PATCH_FILE 2>&1 > /dev/null; then PLEVEL=1 elif $PATCH -p2 -E --dry-run < $PATCH_FILE 2>&1 > /dev/null; then diff --git dev-support/test-patch.properties dev-support/test-patch.properties index e9edecb..4ecad34 100644 --- dev-support/test-patch.properties +++ dev-support/test-patch.properties @@ -19,7 +19,7 @@ MAVEN_OPTS="-Xmx3100M" # Please update the per-module test-patch.properties if you update this file. OK_RELEASEAUDIT_WARNINGS=0 -OK_FINDBUGS_WARNINGS=89 +OK_FINDBUGS_WARNINGS=95 # Allow two warnings. Javadoc complains about sun.misc.Unsafe use. See HBASE-7457 OK_JAVADOC_WARNINGS=2 diff --git dev-support/test-patch.sh dev-support/test-patch.sh index 3c01359..f99eefd 100755 --- dev-support/test-patch.sh +++ dev-support/test-patch.sh @@ -15,7 +15,7 @@ #set -x ### Setup some variables. -### SVN_REVISION and BUILD_URL are set by Hudson if it is run by patch process +### GIT_COMMIT and BUILD_URL are set by Hudson if it is run by patch process ### Read variables from properties file bindir=$(dirname $0) @@ -26,6 +26,8 @@ else MVN=$MAVEN_HOME/bin/mvn fi +NEWLINE=$'\n' + PROJECT_NAME=HBase JENKINS=false PATCH_DIR=/tmp @@ -34,7 +36,6 @@ BASEDIR=$(pwd) PS=${PS:-ps} AWK=${AWK:-awk} WGET=${WGET:-wget} -SVN=${SVN:-svn} GREP=${GREP:-grep} EGREP=${EGREP:-egrep} PATCH=${PATCH:-patch} @@ -42,6 +43,7 @@ JIRACLI=${JIRA:-jira} FINDBUGS_HOME=${FINDBUGS_HOME} FORREST_HOME=${FORREST_HOME} ECLIPSE_HOME=${ECLIPSE_HOME} +GIT=${GIT:-git} ############################################################################### printUsage() { @@ -57,12 +59,12 @@ printUsage() { echo "--mvn-cmd= The 'mvn' command to use (default \$MAVEN_HOME/bin/mvn, or 'mvn')" echo "--ps-cmd= The 'ps' command to use (default 'ps')" echo "--awk-cmd= The 'awk' command to use (default 'awk')" - echo "--svn-cmd= The 'svn' command to use (default 'svn')" echo "--grep-cmd= The 'grep' command to use (default 'grep')" echo "--patch-cmd= The 'patch' command to use (default 'patch')" echo "--findbugs-home= Findbugs home directory (default FINDBUGS_HOME environment variable)" echo "--forrest-home= Forrest home directory (default FORREST_HOME environment variable)" - echo "--dirty-workspace Allow the local SVN workspace to have uncommitted changes" + echo "--dirty-workspace Allow the local workspace to have uncommitted changes" + echo "--git-cmd= The 'git' command to use (default 'git')" echo echo "Jenkins-only options:" echo "--jenkins Run by Jenkins (runs tests and posts results to JIRA)" @@ -98,9 +100,6 @@ parseArgs() { --wget-cmd=*) WGET=${i#*=} ;; - --svn-cmd=*) - SVN=${i#*=} - ;; --grep-cmd=*) GREP=${i#*=} ;; @@ -125,6 +124,9 @@ parseArgs() { --dirty-workspace) DIRTY_WORKSPACE=true ;; + --git-cmd=*) + GIT=${i#*=} + ;; *) PATCH_OR_DEFECT=$i ;; @@ -175,19 +177,29 @@ checkout () { echo "" ### When run by a developer, if the workspace contains modifications, do not continue ### unless the --dirty-workspace option was set - status=`$SVN stat --ignore-externals | sed -e '/^X[ ]*/D'` if [[ $JENKINS == "false" ]] ; then - if [[ "$status" != "" && -z $DIRTY_WORKSPACE ]] ; then - echo "ERROR: can't run in a workspace that contains the following modifications" - echo "$status" - cleanupAndExit 1 + if [[ -z $DIRTY_WORKSPACE ]] ; then + # Ref http://stackoverflow.com/a/2659808 for details on checking dirty status + ${GIT} diff-index --quiet HEAD + if [[ $? 
-ne 0 ]] ; then + uncommitted=`${GIT} diff --name-only HEAD` + uncommitted="You have the following files with uncommitted changes:${NEWLINE}${uncommitted}" + fi + untracked="$(${GIT} ls-files --exclude-standard --others)" && test -z "${untracked}" + if [[ $? -ne 0 ]] ; then + untracked="You have untracked and unignored files:${NEWLINE}${untracked}" + fi + if [[ $uncommitted || $untracked ]] ; then + echo "ERROR: can't run in a workspace that contains modifications." + echo "Pass the '--dirty-workspace' flag to bypass." + echo "" + echo "${uncommitted}" + echo "" + echo "${untracked}" + cleanupAndExit 1 + fi fi echo - else - cd $BASEDIR - $SVN revert -R . - rm -rf `$SVN status --no-ignore` - $SVN update fi return $? } @@ -214,10 +226,10 @@ setup () { echo "$defect patch is being downloaded at `date` from" echo "$patchURL" $WGET -q -O $PATCH_DIR/patch $patchURL - VERSION=${SVN_REVISION}_${defect}_PATCH-${patchNum} + VERSION=${GIT_COMMIT}_${defect}_PATCH-${patchNum} JIRA_COMMENT="Here are the results of testing the latest attachment $patchURL - against trunk revision ${SVN_REVISION}. + against master branch at commit ${GIT_COMMIT}. ATTACHMENT ID: ${ATTACHMENT_ID}" ### Copy the patch file to $PATCH_DIR @@ -244,7 +256,7 @@ setup () { echo "" echo "======================================================================" echo "======================================================================" - echo " Pre-build trunk to verify trunk stability and javac warnings" + echo " Pre-build master to verify stability and javac warnings" echo "======================================================================" echo "======================================================================" echo "" @@ -345,6 +357,7 @@ checkCompilationErrors() { Compilation errors resume: $ERRORS " + submitJiraComment 1 cleanupAndExit 1 fi } @@ -445,6 +458,9 @@ checkJavadocWarnings () { JIRA_COMMENT="$JIRA_COMMENT {color:red}-1 javadoc{color}. The javadoc tool appears to have generated `expr $(($javadocWarnings-$OK_JAVADOC_WARNINGS))` warning messages." + # Add javadoc output url + JIRA_COMMENT_FOOTER="Javadoc warnings: $BUILD_URL/artifact/patchprocess/patchJavadocWarnings.txt +$JIRA_COMMENT_FOOTER" return 1 fi JIRA_COMMENT="$JIRA_COMMENT @@ -478,7 +494,7 @@ checkJavacWarnings () { if [[ $patchJavacWarnings -gt $trunkJavacWarnings ]] ; then JIRA_COMMENT="$JIRA_COMMENT - {color:red}-1 javac{color}. The applied patch generated $patchJavacWarnings javac compiler warnings (more than the trunk's current $trunkJavacWarnings warnings)." + {color:red}-1 javac{color}. The applied patch generated $patchJavacWarnings javac compiler warnings (more than the master's current $trunkJavacWarnings warnings)." return 1 fi fi @@ -513,14 +529,18 @@ checkCheckstyleErrors() { JIRA_COMMENT="$JIRA_COMMENT - {color:red}-1 javac{color}. The applied patch generated $patchCheckstyleErrors checkstyle errors (more than the trunk's current $trunkCheckstyleErrors errors)." + {color:red}-1 checkstyle{color}. The applied patch generated $patchCheckstyleErrors checkstyle errors (more than the master's current $trunkCheckstyleErrors errors)." return 1 fi echo "There were $patchCheckstyleErrors checkstyle errors in this patch compared to $trunkCheckstyleErrors on master." fi + JIRA_COMMENT_FOOTER="Checkstyle Errors: $BUILD_URL/artifact/patchprocess/checkstyle-aggregate.html + + $JIRA_COMMENT_FOOTER" + JIRA_COMMENT="$JIRA_COMMENT - {color:green}+1 javac{color}. 
The applied patch does not increase the total number of checkstyle errors" + {color:green}+1 checkstyle{color}. The applied patch does not increase the total number of checkstyle errors" return 0 } @@ -572,7 +592,7 @@ checkReleaseAuditWarnings () { if [[ $patchReleaseAuditWarnings -gt $OK_RELEASEAUDIT_WARNINGS ]] ; then JIRA_COMMENT="$JIRA_COMMENT - {color:red}-1 release audit{color}. The applied patch generated $patchReleaseAuditWarnings release audit warnings (more than the trunk's current $OK_RELEASEAUDIT_WARNINGS warnings)." + {color:red}-1 release audit{color}. The applied patch generated $patchReleaseAuditWarnings release audit warnings (more than the master's current $OK_RELEASEAUDIT_WARNINGS warnings)." $GREP '\!?????' $PATCH_DIR/patchReleaseAuditWarnings.txt > $PATCH_DIR/patchReleaseAuditProblems.txt echo "Lines that start with ????? in the release audit report indicate files that do not have an Apache license header." >> $PATCH_DIR/patchReleaseAuditProblems.txt JIRA_COMMENT_FOOTER="Release audit warnings: $BUILD_URL/artifact/patchprocess/patchReleaseAuditWarnings.txt @@ -628,10 +648,10 @@ checkFindbugsWarnings () { $PATCH_DIR/newPatchFindbugsWarnings${module_suffix}.xml | $AWK '{print $1}'` echo "Found $newFindbugsWarnings Findbugs warnings ($file)" findbugsWarnings=$((findbugsWarnings+newFindbugsWarnings)) - $FINDBUGS_HOME/bin/convertXmlToText -html \ - $PATCH_DIR/newPatchFindbugsWarnings${module_suffix}.xml \ - $PATCH_DIR/newPatchFindbugsWarnings${module_suffix}.html - JIRA_COMMENT_FOOTER="Findbugs warnings: $BUILD_URL/artifact/trunk/patchprocess/newPatchFindbugsWarnings${module_suffix}.html + echo "$FINDBUGS_HOME/bin/convertXmlToText -html $PATCH_DIR/newPatchFindbugsWarnings${module_suffix}.xml $PATCH_DIR/newPatchFindbugsWarnings${module_suffix}.html" + $FINDBUGS_HOME/bin/convertXmlToText -html $PATCH_DIR/newPatchFindbugsWarnings${module_suffix}.xml $PATCH_DIR/newPatchFindbugsWarnings${module_suffix}.html + file $PATCH_DIR/newPatchFindbugsWarnings${module_suffix}.xml $PATCH_DIR/newPatchFindbugsWarnings${module_suffix}.html + JIRA_COMMENT_FOOTER="Findbugs warnings: $BUILD_URL/artifact/patchprocess/newPatchFindbugsWarnings${module_suffix}.html $JIRA_COMMENT_FOOTER" done @@ -701,10 +721,10 @@ runTests () { condemnedCount=`$PS auxwww | $GREP ${PROJECT_NAME}PatchProcess | $AWK '{print $2}' | $AWK 'BEGIN {total = 0} {total += 1} END {print total}'` echo "WARNING: $condemnedCount rogue build processes detected, terminating." $PS auxwww | $GREP ${PROJECT_NAME}PatchProcess | $AWK '{print $2}' | /usr/bin/xargs -t -I {} /bin/kill -9 {} > /dev/null - echo "$MVN clean test -P runAllTests -D${PROJECT_NAME}PatchProcess" + echo "$MVN clean test -Dsurefire.rerunFailingTestsCount=2 -P runAllTests -D${PROJECT_NAME}PatchProcess" export MAVEN_OPTS="${MAVEN_OPTS}" ulimit -a - $MVN clean test -P runAllTests -D${PROJECT_NAME}PatchProcess + $MVN clean test -Dsurefire.rerunFailingTestsCount=2 -P runAllTests -D${PROJECT_NAME}PatchProcess if [[ $? != 0 ]] ; then ### Find and format names of failed tests failed_tests=`find . -name 'TEST*.xml' | xargs $GREP -l -E " hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. 
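A condensed, standalone sketch of the git workspace check that the revised checkout() above performs before applying a patch. The message text here is illustrative; only the two git probes are taken from the script.

    # Tracked files with uncommitted changes make diff-index exit non-zero.
    if ! git diff-index --quiet HEAD; then
      echo "Uncommitted changes:" && git diff --name-only HEAD
      exit 1
    fi
    # Untracked files that are not ignored are listed by ls-files --others.
    untracked="$(git ls-files --exclude-standard --others)"
    if [ -n "${untracked}" ]; then
      echo "Untracked, unignored files:" && echo "${untracked}"
      exit 1
    fi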
diff --git hbase-annotations/src/main/java/org/apache/hadoop/hbase/classification/InterfaceStability.java hbase-annotations/src/main/java/org/apache/hadoop/hbase/classification/InterfaceStability.java index 0573e57..338b375 100644 --- hbase-annotations/src/main/java/org/apache/hadoop/hbase/classification/InterfaceStability.java +++ hbase-annotations/src/main/java/org/apache/hadoop/hbase/classification/InterfaceStability.java @@ -30,11 +30,13 @@ import java.lang.annotation.RetentionPolicy; *
  • All classes that are annotated with * {@link org.apache.hadoop.hbase.classification.InterfaceAudience.Public} or * {@link org.apache.hadoop.hbase.classification.InterfaceAudience.LimitedPrivate} - * must have InterfaceStability annotation.
  • Classes that are + * must have InterfaceStability annotation.
  • + *
  • Classes that are * {@link org.apache.hadoop.hbase.classification.InterfaceAudience.LimitedPrivate} - * are to be considered unstable unless a different InterfaceStability annotation - * states otherwise.
  • Incompatible changes must not be made to classes - * marked as stable.
+ * are to be considered unstable unless a different InterfaceStability annotation + * states otherwise. + *
  • Incompatible changes must not be made to classes marked as stable.
  • + * */ @InterfaceAudience.Public @InterfaceStability.Evolving diff --git hbase-annotations/src/main/java/org/apache/hadoop/hbase/classification/tools/RootDocProcessor.java hbase-annotations/src/main/java/org/apache/hadoop/hbase/classification/tools/RootDocProcessor.java index c6fb74a..2ea1022 100644 --- hbase-annotations/src/main/java/org/apache/hadoop/hbase/classification/tools/RootDocProcessor.java +++ hbase-annotations/src/main/java/org/apache/hadoop/hbase/classification/tools/RootDocProcessor.java @@ -215,15 +215,12 @@ class RootDocProcessor { } private Object unwrap(Object proxy) { - if (proxy instanceof Proxy) - return ((ExcludeHandler) Proxy.getInvocationHandler(proxy)).target; + if (proxy instanceof Proxy) return ((ExcludeHandler) Proxy.getInvocationHandler(proxy)).target; return proxy; } private boolean isFiltered(Object[] args) { return args != null && Boolean.TRUE.equals(args[0]); } - } - } diff --git hbase-annotations/src/main/java/org/apache/hadoop/hbase/classification/tools/StabilityOptions.java hbase-annotations/src/main/java/org/apache/hadoop/hbase/classification/tools/StabilityOptions.java index b79f645..809d96c 100644 --- hbase-annotations/src/main/java/org/apache/hadoop/hbase/classification/tools/StabilityOptions.java +++ hbase-annotations/src/main/java/org/apache/hadoop/hbase/classification/tools/StabilityOptions.java @@ -64,5 +64,4 @@ class StabilityOptions { } return filteredOptions; } - } diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ClientTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ClientTests.java new file mode 100644 index 0000000..ab39591 --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ClientTests.java @@ -0,0 +1,41 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * Tag a test as related to the client, This tests the hbase-client package and all of the client tests in + * hbase-server. 
+ * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + +public interface ClientTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/CoprocessorTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/CoprocessorTests.java new file mode 100644 index 0000000..ff65995 --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/CoprocessorTests.java @@ -0,0 +1,41 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * Tag a test as related to coprocessors. 
+ * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + + +public interface CoprocessorTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FilterTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FilterTests.java new file mode 100644 index 0000000..b4e9c35 --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FilterTests.java @@ -0,0 +1,41 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * Tag a test as related to the filter package. 
+ * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + + +public interface FilterTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FlakeyTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FlakeyTests.java new file mode 100644 index 0000000..ddd92b1 --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/FlakeyTests.java @@ -0,0 +1,40 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * Tag a test as failing commonly on public build infrastructure. 
+ * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + +public interface FlakeyTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IOTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IOTests.java new file mode 100644 index 0000000..cf8bffa --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/IOTests.java @@ -0,0 +1,41 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * Tag a test as related to the io package. Things like HFile and the like. 
+ * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + + +public interface IOTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MapReduceTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MapReduceTests.java new file mode 100644 index 0000000..5f8c9b7 --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MapReduceTests.java @@ -0,0 +1,40 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +/** + * Tag a test as related to mapred or mapreduce, + * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + +public interface MapReduceTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MasterTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MasterTests.java new file mode 100644 index 0000000..19a95f2 --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MasterTests.java @@ -0,0 +1,40 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * Tag a test as related to the master. 
+ * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + +public interface MasterTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MiscTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MiscTests.java new file mode 100644 index 0000000..ef4d3f9 --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/MiscTests.java @@ -0,0 +1,40 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * Tag a test as not easily falling into any of the below categories. 
+ * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + +public interface MiscTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RPCTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RPCTests.java new file mode 100644 index 0000000..eab3375 --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RPCTests.java @@ -0,0 +1,40 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +/** + * Tag a test as related to RPC. 
+ * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + + +public interface RPCTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RegionServerTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RegionServerTests.java new file mode 100644 index 0000000..3b03194 --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RegionServerTests.java @@ -0,0 +1,41 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +/** + * Tag a test as related to the regionserver, + * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + + +public interface RegionServerTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ReplicationTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ReplicationTests.java new file mode 100644 index 0000000..4f86404 --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/ReplicationTests.java @@ -0,0 +1,40 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +/** + * Tag a test as related to replication, + * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + +public interface ReplicationTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RestTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RestTests.java new file mode 100644 index 0000000..16fe1f7 --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/RestTests.java @@ -0,0 +1,41 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * Tag a test as related to the rest capability of HBase. 
+ * + * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + +public interface RestTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SecurityTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SecurityTests.java new file mode 100644 index 0000000..907ae7a --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/SecurityTests.java @@ -0,0 +1,42 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * Tag a test as related to security. 
+ * + * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + + +public interface SecurityTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowMapReduceTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowMapReduceTests.java new file mode 100644 index 0000000..96a5e9a --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowMapReduceTests.java @@ -0,0 +1,42 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/** + * Tag a test as related to mapreduce and taking longer than 5 minutes to run on public build + * infrastructure. 
+ * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + + +public interface VerySlowMapReduceTests { +} diff --git hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowRegionServerTests.java hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowRegionServerTests.java new file mode 100644 index 0000000..3caa218 --- /dev/null +++ hbase-annotations/src/test/java/org/apache/hadoop/hbase/testclassification/VerySlowRegionServerTests.java @@ -0,0 +1,42 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + + +/** + * Tag a test as region tests which takes longer than 5 minutes to run on public build + * infrastructure. 
+ * @see org.apache.hadoop.hbase.testclassification.ClientTests + * @see org.apache.hadoop.hbase.testclassification.CoprocessorTests + * @see org.apache.hadoop.hbase.testclassification.FilterTests + * @see org.apache.hadoop.hbase.testclassification.FlakeyTests + * @see org.apache.hadoop.hbase.testclassification.IOTests + * @see org.apache.hadoop.hbase.testclassification.MapReduceTests + * @see org.apache.hadoop.hbase.testclassification.MasterTests + * @see org.apache.hadoop.hbase.testclassification.MiscTests + * @see org.apache.hadoop.hbase.testclassification.RegionServerTests + * @see org.apache.hadoop.hbase.testclassification.ReplicationTests + * @see org.apache.hadoop.hbase.testclassification.RPCTests + * @see org.apache.hadoop.hbase.testclassification.SecurityTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + * @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + */ +package org.apache.hadoop.hbase.testclassification; + +public interface VerySlowRegionServerTests { +} diff --git hbase-assembly/pom.xml hbase-assembly/pom.xml index 90df652..4aa7759 100644 --- hbase-assembly/pom.xml +++ hbase-assembly/pom.xml @@ -23,7 +23,7 @@ hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. hbase-assembly @@ -102,6 +102,16 @@ ${project.version} + org.apache.hbase + hbase-thrift + ${project.version} + + + org.apache.hbase + hbase-rest + ${project.version} + + org.apache.hbase hbase-testing-util ${project.version} diff --git hbase-checkstyle/pom.xml hbase-checkstyle/pom.xml index 0ea8972..6f3d71c 100644 --- hbase-checkstyle/pom.xml +++ hbase-checkstyle/pom.xml @@ -24,19 +24,20 @@ 4.0.0 org.apache.hbase hbase-checkstyle -1.0.0-SNAPSHOT +2.0.0-SNAPSHOT HBase - Checkstyle Module to hold Checkstyle properties for HBase. hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. + org.apache.maven.plugins maven-site-plugin diff --git hbase-checkstyle/src/main/resources/hbase/checkstyle.xml hbase-checkstyle/src/main/resources/hbase/checkstyle.xml index 34fe5ec..bf84931 100644 --- hbase-checkstyle/src/main/resources/hbase/checkstyle.xml +++ hbase-checkstyle/src/main/resources/hbase/checkstyle.xml @@ -34,10 +34,12 @@ - + + + - + @@ -51,7 +53,10 @@ - + + + + diff --git hbase-client/pom.xml hbase-client/pom.xml index 08223f3..216a6ee 100644 --- hbase-client/pom.xml +++ hbase-client/pom.xml @@ -24,7 +24,7 @@ hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. 
@@ -35,6 +35,25 @@ + maven-compiler-plugin + + + default-compile + + ${java.default.compiler} + true + + + + default-testCompile + + ${java.default.compiler} + true + + + + + org.apache.maven.plugins maven-site-plugin @@ -89,6 +108,12 @@ org.apache.hbase hbase-common + + + com.google.guava + guava + + org.apache.hbase @@ -117,14 +142,6 @@ commons-logging - com.google.guava - guava - - - com.google.protobuf - protobuf-java - - io.netty netty-all @@ -197,18 +214,44 @@ + com.google.code.findbugs + jsr305 + 1.3.9 + true + + org.apache.hadoop hadoop-common + com.github.stephenc.findbugs + findbugs-annotations + + + net.java.dev.jets3t + jets3t + + javax.servlet.jsp jsp-api + org.mortbay.jetty + jetty + + com.sun.jersey jersey-server + com.sun.jersey + jersey-core + + + com.sun.jersey + jersey-json + + javax.servlet servlet-api @@ -224,17 +267,49 @@ org.apache.hadoop - hadoop-auth - - - org.apache.hadoop hadoop-mapreduce-client-core - - com.sun.jersey.jersey-test-framework - jersey-test-framework-grizzly2 - - + + com.sun.jersey.jersey-test-framework + jersey-test-framework-grizzly2 + + + javax.servlet + servlet-api + + + com.sun.jersey + jersey-server + + + com.sun.jersey + jersey-core + + + com.sun.jersey + jersey-json + + + com.sun.jersey.contribs + jersey-guice + + + com.google.inject + guice + + + com.google.inject.extensions + guice-servlet + + + org.codehaus.jackson + jackson-jaxrs + + + org.codehaus.jackson + jackson-xc + + diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterId.java hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterId.java index 5be224c..f835857 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterId.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterId.java @@ -18,14 +18,15 @@ package org.apache.hadoop.hbase; -import com.google.protobuf.InvalidProtocolBufferException; +import java.util.UUID; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.ClusterIdProtos; import org.apache.hadoop.hbase.util.Bytes; -import java.util.UUID; +import com.google.protobuf.InvalidProtocolBufferException; /** * The identifier for this cluster. 
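The testclassification marker interfaces added above under hbase-annotations (ClientTests, CoprocessorTests, RegionServerTests, and so on) are plain empty interfaces; they only do work when a test class names one of them through JUnit 4's category mechanism so the build can select or skip that group. The sketch below shows that usage under the assumption that JUnit 4's @Category annotation is on the test classpath; the test class, package, and method names are invented for illustration and are not part of this patch.

package org.apache.hadoop.hbase.client;

import org.apache.hadoop.hbase.testclassification.ClientTests;
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Hypothetical test class: tagging it with ClientTests lets any JUnit 4
// category-aware runner (for example surefire) include or exclude every
// test in the "client" group as a unit.
@Category(ClientTests.class)
public class TestClientTaggingExample {

  @Test
  public void clientSideBehaviour() {
    // Test body omitted; only the category tagging is of interest here.
  }
}

In a Maven build, the same fully qualified interface name is what would typically go into surefire's <groups> element to run only that category; that wiring is outside the hunks shown here.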
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java index 7599e3e..b93312a 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java @@ -26,7 +26,6 @@ import java.util.Collections; import java.util.HashMap; import java.util.Map; -import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.master.RegionState; @@ -38,6 +37,7 @@ import org.apache.hadoop.hbase.protobuf.generated.FSProtos.HBaseVersionFileConte import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionSpecifier; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionSpecifier.RegionSpecifierType; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.io.VersionedWritable; @@ -160,7 +160,7 @@ public class ClusterStatus extends VersionedWritable { public int getRequestsCount() { int count = 0; for (Map.Entry e: this.liveServers.entrySet()) { - count += e.getValue().getTotalNumberOfRequests(); + count += e.getValue().getNumberOfRequests(); } return count; } @@ -216,6 +216,7 @@ public class ClusterStatus extends VersionedWritable { * @return region server information * @deprecated Use {@link #getServers()} */ + @Deprecated public Collection getServerInfo() { return getServers(); } @@ -378,47 +379,37 @@ public class ClusterStatus extends VersionedWritable { public static ClusterStatus convert(ClusterStatusProtos.ClusterStatus proto) { Map servers = null; - if (proto.getLiveServersList() != null) { - servers = new HashMap(proto.getLiveServersList().size()); - for (LiveServerInfo lsi : proto.getLiveServersList()) { - servers.put(ProtobufUtil.toServerName( - lsi.getServer()), new ServerLoad(lsi.getServerLoad())); - } + servers = new HashMap(proto.getLiveServersList().size()); + for (LiveServerInfo lsi : proto.getLiveServersList()) { + servers.put(ProtobufUtil.toServerName( + lsi.getServer()), new ServerLoad(lsi.getServerLoad())); } Collection deadServers = null; - if (proto.getDeadServersList() != null) { - deadServers = new ArrayList(proto.getDeadServersList().size()); - for (HBaseProtos.ServerName sn : proto.getDeadServersList()) { - deadServers.add(ProtobufUtil.toServerName(sn)); - } + deadServers = new ArrayList(proto.getDeadServersList().size()); + for (HBaseProtos.ServerName sn : proto.getDeadServersList()) { + deadServers.add(ProtobufUtil.toServerName(sn)); } Collection backupMasters = null; - if (proto.getBackupMastersList() != null) { - backupMasters = new ArrayList(proto.getBackupMastersList().size()); - for (HBaseProtos.ServerName sn : proto.getBackupMastersList()) { - backupMasters.add(ProtobufUtil.toServerName(sn)); - } + backupMasters = new ArrayList(proto.getBackupMastersList().size()); + for (HBaseProtos.ServerName sn : proto.getBackupMastersList()) { + backupMasters.add(ProtobufUtil.toServerName(sn)); } Map rit = null; - if (proto.getRegionsInTransitionList() != null) { - rit = new HashMap(proto.getRegionsInTransitionList().size()); - for (RegionInTransition region : proto.getRegionsInTransitionList()) { - String key = new String(region.getSpec().getValue().toByteArray()); - RegionState value = 
RegionState.convert(region.getRegionState()); - rit.put(key, value); - } + rit = new HashMap(proto.getRegionsInTransitionList().size()); + for (RegionInTransition region : proto.getRegionsInTransitionList()) { + String key = new String(region.getSpec().getValue().toByteArray()); + RegionState value = RegionState.convert(region.getRegionState()); + rit.put(key, value); } String[] masterCoprocessors = null; - if (proto.getMasterCoprocessorsList() != null) { - final int numMasterCoprocessors = proto.getMasterCoprocessorsCount(); - masterCoprocessors = new String[numMasterCoprocessors]; - for (int i = 0; i < numMasterCoprocessors; i++) { - masterCoprocessors[i] = proto.getMasterCoprocessors(i).getName(); - } + final int numMasterCoprocessors = proto.getMasterCoprocessorsCount(); + masterCoprocessors = new String[numMasterCoprocessors]; + for (int i = 0; i < numMasterCoprocessors; i++) { + masterCoprocessors[i] = proto.getMasterCoprocessors(i).getName(); } return new ClusterStatus(proto.getHbaseVersion().getVersion(), diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java index d9e2bdc..37f1a33 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java @@ -18,8 +18,8 @@ package org.apache.hadoop.hbase; import java.io.IOException; import java.util.concurrent.ExecutorService; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.HTableInterface; /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java hbase-client/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java index b566fcf..8be2518 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/DoNotRetryIOException.java @@ -20,7 +20,6 @@ package org.apache.hadoop.hbase; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.HBaseIOException; /** * Subclass if exception is not meant to be retried: e.g. 
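Two details of the ClusterStatus hunk above are easy to miss: the null checks around the protobuf list accessors were dropped because generated getXxxList() methods return empty lists rather than null, and getRequestsCount() now sums ServerLoad#getNumberOfRequests() (the count the region server reports for its last load interval) instead of getTotalNumberOfRequests() (the cumulative total since startup). Below is a condensed sketch of that aggregation using only the types and accessors visible in the hunk; the helper class and method names are invented.

import java.util.Map;

import org.apache.hadoop.hbase.ServerLoad;
import org.apache.hadoop.hbase.ServerName;

// Mirrors the patched ClusterStatus#getRequestsCount(): walk the live-server
// map and sum each server's per-interval request counter.
final class RequestCountSketch {
  static int requestsAcrossCluster(Map<ServerName, ServerLoad> liveServers) {
    int count = 0;
    for (Map.Entry<ServerName, ServerLoad> e : liveServers.entrySet()) {
      count += e.getValue().getNumberOfRequests();
    }
    return count;
  }
}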
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java hbase-client/src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java index 830339c..bdb7f53 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java @@ -14,11 +14,11 @@ */ package org.apache.hadoop.hbase; +import java.io.IOException; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import java.io.IOException; - /** * Thrown during flush if the possibility snapshot content was not properly diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java index 401e0da..5335bef 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java @@ -18,9 +18,6 @@ */ package org.apache.hadoop.hbase; -import java.io.DataInput; -import java.io.DataOutput; -import java.io.IOException; import java.util.Collections; import java.util.HashMap; import java.util.HashSet; @@ -30,7 +27,6 @@ import java.util.Set; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; @@ -38,14 +34,12 @@ import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.BytesBytesPair; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ColumnFamilySchema; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair; import org.apache.hadoop.hbase.regionserver.BloomType; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.PrettyPrinter; import org.apache.hadoop.hbase.util.PrettyPrinter.Unit; -import org.apache.hadoop.io.Text; -import org.apache.hadoop.io.WritableComparable; import com.google.common.base.Preconditions; -import org.apache.hadoop.hbase.util.ByteStringer; import com.google.protobuf.InvalidProtocolBufferException; /** @@ -56,7 +50,7 @@ import com.google.protobuf.InvalidProtocolBufferException; */ @InterfaceAudience.Public @InterfaceStability.Evolving -public class HColumnDescriptor implements WritableComparable { +public class HColumnDescriptor implements Comparable { // For future backward compatibility // Version 3 was when column names become byte arrays and when we picked up @@ -236,8 +230,9 @@ public class HColumnDescriptor implements WritableComparable private final static Map DEFAULT_VALUES = new HashMap(); - private final static Set RESERVED_KEYWORDS - = new HashSet(); + private final static Set RESERVED_KEYWORDS + = new HashSet(); + static { DEFAULT_VALUES.put(BLOOMFILTER, DEFAULT_BLOOMFILTER); DEFAULT_VALUES.put(REPLICATION_SCOPE, String.valueOf(DEFAULT_REPLICATION_SCOPE)); @@ -257,10 +252,10 @@ public class HColumnDescriptor implements WritableComparable DEFAULT_VALUES.put(EVICT_BLOCKS_ON_CLOSE, String.valueOf(DEFAULT_EVICT_BLOCKS_ON_CLOSE)); DEFAULT_VALUES.put(PREFETCH_BLOCKS_ON_OPEN, String.valueOf(DEFAULT_PREFETCH_BLOCKS_ON_OPEN)); for (String s 
: DEFAULT_VALUES.keySet()) { - RESERVED_KEYWORDS.add(new ImmutableBytesWritable(Bytes.toBytes(s))); + RESERVED_KEYWORDS.add(new Bytes(Bytes.toBytes(s))); } - RESERVED_KEYWORDS.add(new ImmutableBytesWritable(Bytes.toBytes(ENCRYPTION))); - RESERVED_KEYWORDS.add(new ImmutableBytesWritable(Bytes.toBytes(ENCRYPTION_KEY))); + RESERVED_KEYWORDS.add(new Bytes(Bytes.toBytes(ENCRYPTION))); + RESERVED_KEYWORDS.add(new Bytes(Bytes.toBytes(ENCRYPTION_KEY))); } private static final int UNINITIALIZED = -1; @@ -269,8 +264,8 @@ public class HColumnDescriptor implements WritableComparable private byte [] name; // Column metadata - private final Map values = - new HashMap(); + private final Map values = + new HashMap(); /** * A map which holds the configuration specific to the column family. @@ -329,7 +324,7 @@ public class HColumnDescriptor implements WritableComparable public HColumnDescriptor(HColumnDescriptor desc) { super(); this.name = desc.name.clone(); - for (Map.Entry e: + for (Map.Entry e : desc.values.entrySet()) { this.values.put(e.getKey(), e.getValue()); } @@ -523,7 +518,7 @@ public class HColumnDescriptor implements WritableComparable * @return The value. */ public byte[] getValue(byte[] key) { - ImmutableBytesWritable ibw = values.get(new ImmutableBytesWritable(key)); + Bytes ibw = values.get(new Bytes(key)); if (ibw == null) return null; return ibw.get(); @@ -543,7 +538,7 @@ public class HColumnDescriptor implements WritableComparable /** * @return All values. */ - public Map getValues() { + public Map getValues() { // shallow pointer copy return Collections.unmodifiableMap(values); } @@ -554,8 +549,8 @@ public class HColumnDescriptor implements WritableComparable * @return this (for chained invocation) */ public HColumnDescriptor setValue(byte[] key, byte[] value) { - values.put(new ImmutableBytesWritable(key), - new ImmutableBytesWritable(value)); + values.put(new Bytes(key), + new Bytes(value)); return this; } @@ -563,7 +558,7 @@ public class HColumnDescriptor implements WritableComparable * @param key Key whose key and value we're to remove from HCD parameters. */ public void remove(final byte [] key) { - values.remove(new ImmutableBytesWritable(key)); + values.remove(new Bytes(key)); } /** @@ -638,6 +633,7 @@ public class HColumnDescriptor implements WritableComparable Integer.decode(value): Integer.valueOf(DEFAULT_BLOCKSIZE); } return this.blocksize.intValue(); + } /** @@ -670,7 +666,10 @@ public class HColumnDescriptor implements WritableComparable return setValue(COMPRESSION, type.getName().toUpperCase()); } - /** @return data block encoding algorithm used on disk */ + /** + * @return data block encoding algorithm used on disk + * @deprecated See getDataBlockEncoding() + */ @Deprecated public DataBlockEncoding getDataBlockEncodingOnDisk() { return getDataBlockEncoding(); @@ -680,6 +679,7 @@ public class HColumnDescriptor implements WritableComparable * This method does nothing now. Flag ENCODE_ON_DISK is not used * any more. Data blocks have the same encoding in cache as on disk. * @return this (for chained invocation) + * @deprecated This does nothing now. 
*/ @Deprecated public HColumnDescriptor setEncodeOnDisk(boolean encodeOnDisk) { @@ -1106,7 +1106,7 @@ public class HColumnDescriptor implements WritableComparable boolean hasConfigKeys = false; // print all reserved keys first - for (ImmutableBytesWritable k : values.keySet()) { + for (Bytes k : values.keySet()) { if (!RESERVED_KEYWORDS.contains(k)) { hasConfigKeys = true; continue; @@ -1129,7 +1129,7 @@ public class HColumnDescriptor implements WritableComparable s.append(HConstants.METADATA).append(" => "); s.append('{'); boolean printComma = false; - for (ImmutableBytesWritable k : values.keySet()) { + for (Bytes k : values.keySet()) { if (RESERVED_KEYWORDS.contains(k)) { continue; } @@ -1207,109 +1207,6 @@ public class HColumnDescriptor implements WritableComparable return result; } - /** - * @deprecated Writables are going away. Use pb {@link #parseFrom(byte[])} instead. - */ - @Deprecated - public void readFields(DataInput in) throws IOException { - int version = in.readByte(); - if (version < 6) { - if (version <= 2) { - Text t = new Text(); - t.readFields(in); - this.name = t.getBytes(); -// if(KeyValue.getFamilyDelimiterIndex(this.name, 0, this.name.length) -// > 0) { -// this.name = stripColon(this.name); -// } - } else { - this.name = Bytes.readByteArray(in); - } - this.values.clear(); - setMaxVersions(in.readInt()); - int ordinal = in.readInt(); - setCompressionType(Compression.Algorithm.values()[ordinal]); - setInMemory(in.readBoolean()); - setBloomFilterType(in.readBoolean() ? BloomType.ROW : BloomType.NONE); - if (getBloomFilterType() != BloomType.NONE && version < 5) { - // If a bloomFilter is enabled and the column descriptor is less than - // version 5, we need to skip over it to read the rest of the column - // descriptor. There are no BloomFilterDescriptors written to disk for - // column descriptors with a version number >= 5 - throw new UnsupportedClassVersionError(this.getClass().getName() + - " does not support backward compatibility with versions older " + - "than version 5"); - } - if (version > 1) { - setBlockCacheEnabled(in.readBoolean()); - } - if (version > 2) { - setTimeToLive(in.readInt()); - } - } else { - // version 6+ - this.name = Bytes.readByteArray(in); - this.values.clear(); - int numValues = in.readInt(); - for (int i = 0; i < numValues; i++) { - ImmutableBytesWritable key = new ImmutableBytesWritable(); - ImmutableBytesWritable value = new ImmutableBytesWritable(); - key.readFields(in); - value.readFields(in); - - // in version 8, the BloomFilter setting changed from bool to enum - if (version < 8 && Bytes.toString(key.get()).equals(BLOOMFILTER)) { - value.set(Bytes.toBytes( - Boolean.getBoolean(Bytes.toString(value.get())) - ? BloomType.ROW.toString() - : BloomType.NONE.toString())); - } - - values.put(key, value); - } - if (version == 6) { - // Convert old values. - setValue(COMPRESSION, Compression.Algorithm.NONE.getName()); - } - String value = getValue(HConstants.VERSIONS); - this.cachedMaxVersions = (value != null)? 
- Integer.valueOf(value).intValue(): DEFAULT_VERSIONS; - if (version > 10) { - configuration.clear(); - int numConfigs = in.readInt(); - for (int i = 0; i < numConfigs; i++) { - ImmutableBytesWritable key = new ImmutableBytesWritable(); - ImmutableBytesWritable val = new ImmutableBytesWritable(); - key.readFields(in); - val.readFields(in); - configuration.put( - Bytes.toString(key.get(), key.getOffset(), key.getLength()), - Bytes.toString(val.get(), val.getOffset(), val.getLength())); - } - } - } - } - - /** - * @deprecated Writables are going away. Use {@link #toByteArray()} instead. - */ - @Deprecated - public void write(DataOutput out) throws IOException { - out.writeByte(COLUMN_DESCRIPTOR_VERSION); - Bytes.writeByteArray(out, this.name); - out.writeInt(values.size()); - for (Map.Entry e: - values.entrySet()) { - e.getKey().write(out); - e.getValue().write(out); - } - out.writeInt(configuration.size()); - for (Map.Entry e : configuration.entrySet()) { - new ImmutableBytesWritable(Bytes.toBytes(e.getKey())).write(out); - new ImmutableBytesWritable(Bytes.toBytes(e.getValue())).write(out); - } - } - // Comparable @Override public int compareTo(HColumnDescriptor o) { @@ -1384,7 +1281,7 @@ public class HColumnDescriptor implements WritableComparable public ColumnFamilySchema convert() { ColumnFamilySchema.Builder builder = ColumnFamilySchema.newBuilder(); builder.setName(ByteStringer.wrap(getName())); - for (Map.Entry e: this.values.entrySet()) { + for (Map.Entry e : this.values.entrySet()) { BytesBytesPair.Builder aBuilder = BytesBytesPair.newBuilder(); aBuilder.setFirst(ByteStringer.wrap(e.getKey().get())); aBuilder.setSecond(ByteStringer.wrap(e.getValue().get())); diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java index aaa36f5..82beb0b 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionInfo.java @@ -18,28 +18,25 @@ */ package org.apache.hadoop.hbase; -import java.io.ByteArrayInputStream; -import java.io.DataInput; import java.io.DataInputStream; -import java.io.DataOutput; -import java.io.EOFException; import java.io.IOException; -import java.io.SequenceInputStream; import java.util.ArrayList; import java.util.Arrays; import java.util.List; -import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.KeyValue.KVComparator; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.KeyValue.KVComparator; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.exceptions.DeserializationException; +import org.apache.hadoop.hbase.master.RegionState; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionInfo; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.JenkinsHash; import org.apache.hadoop.hbase.util.MD5Hash; @@ -78,30 +75,7 @@ import com.google.protobuf.InvalidProtocolBufferException; @InterfaceAudience.Public @InterfaceStability.Evolving public class HRegionInfo implements Comparable { - /* - * There are two versions 
associated with HRegionInfo: HRegionInfo.VERSION and - * HConstants.META_VERSION. HRegionInfo.VERSION indicates the data structure's versioning - * while HConstants.META_VERSION indicates the versioning of the serialized HRIs stored in - * the hbase:meta table. - * - * Pre-0.92: - * HRI.VERSION == 0 and HConstants.META_VERSION does not exist (is not stored at hbase:meta table) - * HRegionInfo had an HTableDescriptor reference inside it. - * HRegionInfo is serialized as Writable to hbase:meta table. - * For 0.92.x and 0.94.x: - * HRI.VERSION == 1 and HConstants.META_VERSION == 0 - * HRI no longer has HTableDescriptor in it. - * HRI is serialized as Writable to hbase:meta table. - * For 0.96.x: - * HRI.VERSION == 1 and HConstants.META_VERSION == 1 - * HRI data structure is the same as 0.92 and 0.94 - * HRI is serialized as PB to hbase:meta table. - * - * Versioning of HRegionInfo is deprecated. HRegionInfo does protobuf - * serialization using RegionInfo class, which has it's own versioning. - */ - @Deprecated - public static final byte VERSION = 1; + private static final Log LOG = LogFactory.getLog(HRegionInfo.class); /** @@ -143,7 +117,7 @@ public class HRegionInfo implements Comparable { public static final byte REPLICA_ID_DELIMITER = (byte)'_'; private static final int MAX_REPLICA_ID = 0xFFFF; - static final int DEFAULT_REPLICA_ID = 0; + public static final int DEFAULT_REPLICA_ID = 0; /** * Does region name contain its encoded name? * @param regionName region name @@ -220,6 +194,9 @@ public class HRegionInfo implements Comparable { // Current TableName private TableName tableName = null; + final static String DISPLAY_KEYS_KEY = "hbase.display.keys"; + public final static byte[] HIDDEN_END_KEY = Bytes.toBytes("hidden-end-key"); + public final static byte[] HIDDEN_START_KEY = Bytes.toBytes("hidden-start-key"); /** HRegionInfo for first meta region */ public static final HRegionInfo FIRST_META_REGIONINFO = @@ -824,86 +801,6 @@ public class HRegionInfo implements Comparable { return this.hashCode; } - /** @return the object version number - * @deprecated HRI is no longer a VersionedWritable */ - @Deprecated - public byte getVersion() { - return VERSION; - } - - /** - * @deprecated Use protobuf serialization instead. See {@link #toByteArray()} and - * {@link #toDelimitedByteArray()} - */ - @Deprecated - public void write(DataOutput out) throws IOException { - out.writeByte(getVersion()); - Bytes.writeByteArray(out, endKey); - out.writeBoolean(offLine); - out.writeLong(regionId); - Bytes.writeByteArray(out, regionName); - out.writeBoolean(split); - Bytes.writeByteArray(out, startKey); - Bytes.writeByteArray(out, tableName.getName()); - out.writeInt(hashCode); - } - - /** - * @deprecated Use protobuf deserialization instead. - * @see #parseFrom(byte[]) - */ - @Deprecated - public void readFields(DataInput in) throws IOException { - // Read the single version byte. We don't ask the super class do it - // because freaks out if its not the current classes' version. This method - // can deserialize version 0 and version 1 of HRI. - byte version = in.readByte(); - if (version == 0) { - // This is the old HRI that carried an HTD. Migrate it. The below - // was copied from the old 0.90 HRI readFields. 
- this.endKey = Bytes.readByteArray(in); - this.offLine = in.readBoolean(); - this.regionId = in.readLong(); - this.regionName = Bytes.readByteArray(in); - this.split = in.readBoolean(); - this.startKey = Bytes.readByteArray(in); - try { - HTableDescriptor htd = new HTableDescriptor(); - htd.readFields(in); - this.tableName = htd.getTableName(); - } catch(EOFException eofe) { - throw new IOException("HTD not found in input buffer", eofe); - } - this.hashCode = in.readInt(); - } else if (getVersion() == version) { - this.endKey = Bytes.readByteArray(in); - this.offLine = in.readBoolean(); - this.regionId = in.readLong(); - this.regionName = Bytes.readByteArray(in); - this.split = in.readBoolean(); - this.startKey = Bytes.readByteArray(in); - this.tableName = TableName.valueOf(Bytes.readByteArray(in)); - this.hashCode = in.readInt(); - } else { - throw new IOException("Non-migratable/unknown version=" + getVersion()); - } - } - - @Deprecated - private void readFields(byte[] bytes, int offset, int len) throws IOException { - if (bytes == null || len <= 0) { - throw new IllegalArgumentException("Can't build a writable with empty " + - "bytes array"); - } - DataInputBuffer in = new DataInputBuffer(); - try { - in.reset(bytes, offset, len); - this.readFields(in); - } finally { - in.close(); - } - } - // // Comparable // @@ -1101,13 +998,7 @@ public class HRegionInfo implements Comparable { throw new DeserializationException(e); } } else { - try { - HRegionInfo hri = new HRegionInfo(); - hri.readFields(bytes, offset, len); - return hri; - } catch (IOException e) { - throw new DeserializationException(e); - } + throw new DeserializationException("PB encoded HRegionInfo expected"); } } @@ -1123,6 +1014,104 @@ public class HRegionInfo implements Comparable { } /** + * Get the descriptive name as {@link RegionState} does it but with hidden + * startkey optionally + * @param state + * @param conf + * @return descriptive string + */ + public static String getDescriptiveNameFromRegionStateForDisplay(RegionState state, + Configuration conf) { + if (conf.getBoolean(DISPLAY_KEYS_KEY, true)) return state.toDescriptiveString(); + String descriptiveStringFromState = state.toDescriptiveString(); + int idx = descriptiveStringFromState.lastIndexOf(" state="); + String regionName = getRegionNameAsStringForDisplay(state.getRegion(), conf); + return regionName + descriptiveStringFromState.substring(idx); + } + + /** + * Get the end key for display. Optionally hide the real end key. + * @param hri + * @param conf + * @return the endkey + */ + public static byte[] getEndKeyForDisplay(HRegionInfo hri, Configuration conf) { + boolean displayKey = conf.getBoolean(DISPLAY_KEYS_KEY, true); + if (displayKey) return hri.getEndKey(); + return HIDDEN_END_KEY; + } + + /** + * Get the start key for display. Optionally hide the real start key. + * @param hri + * @param conf + * @return the startkey + */ + public static byte[] getStartKeyForDisplay(HRegionInfo hri, Configuration conf) { + boolean displayKey = conf.getBoolean(DISPLAY_KEYS_KEY, true); + if (displayKey) return hri.getStartKey(); + return HIDDEN_START_KEY; + } + + /** + * Get the region name for display. Optionally hide the start key. + * @param hri + * @param conf + * @return region name as String + */ + public static String getRegionNameAsStringForDisplay(HRegionInfo hri, Configuration conf) { + return Bytes.toStringBinary(getRegionNameForDisplay(hri, conf)); + } + + /** + * Get the region name for display. Optionally hide the start key. 
+ * @param hri + * @param conf + * @return region name bytes + */ + public static byte[] getRegionNameForDisplay(HRegionInfo hri, Configuration conf) { + boolean displayKey = conf.getBoolean(DISPLAY_KEYS_KEY, true); + if (displayKey || hri.getTable().equals(TableName.META_TABLE_NAME)) { + return hri.getRegionName(); + } else { + // create a modified regionname with the startkey replaced but preserving + // the other parts including the encodedname. + try { + byte[][]regionNameParts = parseRegionName(hri.getRegionName()); + regionNameParts[1] = HIDDEN_START_KEY; //replace the real startkey + int len = 0; + // get the total length + for (byte[] b : regionNameParts) { + len += b.length; + } + byte[] encodedRegionName = + Bytes.toBytes(encodeRegionName(hri.getRegionName())); + len += encodedRegionName.length; + //allocate some extra bytes for the delimiters and the last '.' + byte[] modifiedName = new byte[len + regionNameParts.length + 1]; + int lengthSoFar = 0; + int loopCount = 0; + for (byte[] b : regionNameParts) { + System.arraycopy(b, 0, modifiedName, lengthSoFar, b.length); + lengthSoFar += b.length; + if (loopCount++ == 2) modifiedName[lengthSoFar++] = REPLICA_ID_DELIMITER; + else modifiedName[lengthSoFar++] = HConstants.DELIMITER; + } + // replace the last comma with '.' + modifiedName[lengthSoFar - 1] = ENC_SEPARATOR; + System.arraycopy(encodedRegionName, 0, modifiedName, lengthSoFar, + encodedRegionName.length); + lengthSoFar += encodedRegionName.length; + modifiedName[lengthSoFar] = ENC_SEPARATOR; + return modifiedName; + } catch (IOException e) { + //LOG.warn("Encountered exception " + e); + throw new RuntimeException(e); + } + } + } + + /** * Extract a HRegionInfo and ServerName from catalog table {@link Result}. * @param r Result to pull from * @return A pair of the {@link HRegionInfo} and the {@link ServerName} @@ -1251,25 +1240,12 @@ public class HRegionInfo implements Comparable { if (in.markSupported()) { //read it with mark() in.mark(pblen); } - int read = in.read(pbuf); //assumption: if Writable serialization, it should be longer than pblen. + int read = in.read(pbuf); //assumption: it should be longer than pblen. if (read != pblen) throw new IOException("read=" + read + ", wanted=" + pblen); if (ProtobufUtil.isPBMagicPrefix(pbuf)) { return convert(HBaseProtos.RegionInfo.parseDelimitedFrom(in)); } else { - // Presume Writables. Need to reset the stream since it didn't start w/ pb. 
- if (in.markSupported()) { - in.reset(); - HRegionInfo hri = new HRegionInfo(); - hri.readFields(in); - return hri; - } else { - //we cannot use BufferedInputStream, it consumes more than we read from the underlying IS - ByteArrayInputStream bais = new ByteArrayInputStream(pbuf); - SequenceInputStream sis = new SequenceInputStream(bais, in); //concatenate input streams - HRegionInfo hri = new HRegionInfo(); - hri.readFields(new DataInputStream(sis)); - return hri; - } + throw new IOException("PB encoded HRegionInfo expected"); } } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java index 373e76b..edb53dc 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java @@ -104,7 +104,8 @@ public class HRegionLocation implements Comparable { } /** - * @return String made of hostname and port formatted as per {@link Addressing#createHostAndPortStr(String, int)} + * @return String made of hostname and port formatted as + * per {@link Addressing#createHostAndPortStr(String, int)} */ public String getHostnamePort() { return Addressing.createHostAndPortStr(this.getHostname(), this.getPort()); diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java index d16e8ba..7478358 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java @@ -18,8 +18,6 @@ */ package org.apache.hadoop.hbase; -import java.io.DataInput; -import java.io.DataOutput; import java.io.IOException; import java.util.ArrayList; import java.util.Collection; @@ -34,16 +32,13 @@ import java.util.TreeMap; import java.util.TreeSet; import java.util.regex.Matcher; -import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.BytesBytesPair; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ColumnFamilySchema; @@ -51,9 +46,8 @@ import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema; import org.apache.hadoop.hbase.regionserver.BloomType; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.Writables; -import org.apache.hadoop.io.WritableComparable; import com.google.protobuf.InvalidProtocolBufferException; @@ -65,20 +59,10 @@ import com.google.protobuf.InvalidProtocolBufferException; */ @InterfaceAudience.Public @InterfaceStability.Evolving -public class HTableDescriptor implements WritableComparable { +public class HTableDescriptor implements Comparable { private static final Log LOG = 
LogFactory.getLog(HTableDescriptor.class); - /** - * Changes prior to version 3 were not recorded here. - * Version 3 adds metadata as a map where keys and values are byte[]. - * Version 4 adds indexes - * Version 5 removed transactional pollution -- e.g. indexes - * Version 6 changed metadata to BytesBytesPair in PB - * Version 7 adds table-level configuration - */ - private static final byte TABLE_DESCRIPTOR_VERSION = 7; - private TableName name = null; /** @@ -86,8 +70,8 @@ public class HTableDescriptor implements WritableComparable { * includes values like IS_ROOT, IS_META, DEFERRED_LOG_FLUSH, SPLIT_POLICY, * MAX_FILE_SIZE, READONLY, MEMSTORE_FLUSHSIZE etc... */ - private final Map values = - new HashMap(); + private final Map values = + new HashMap(); /** * A map which holds the configuration specific to the table. @@ -106,12 +90,12 @@ public class HTableDescriptor implements WritableComparable { * @see #getMaxFileSize() */ public static final String MAX_FILESIZE = "MAX_FILESIZE"; - private static final ImmutableBytesWritable MAX_FILESIZE_KEY = - new ImmutableBytesWritable(Bytes.toBytes(MAX_FILESIZE)); + private static final Bytes MAX_FILESIZE_KEY = + new Bytes(Bytes.toBytes(MAX_FILESIZE)); public static final String OWNER = "OWNER"; - public static final ImmutableBytesWritable OWNER_KEY = - new ImmutableBytesWritable(Bytes.toBytes(OWNER)); + public static final Bytes OWNER_KEY = + new Bytes(Bytes.toBytes(OWNER)); /** * INTERNAL Used by rest interface to access this metadata @@ -120,8 +104,8 @@ public class HTableDescriptor implements WritableComparable { * @see #isReadOnly() */ public static final String READONLY = "READONLY"; - private static final ImmutableBytesWritable READONLY_KEY = - new ImmutableBytesWritable(Bytes.toBytes(READONLY)); + private static final Bytes READONLY_KEY = + new Bytes(Bytes.toBytes(READONLY)); /** * INTERNAL Used by HBase Shell interface to access this metadata @@ -130,8 +114,8 @@ public class HTableDescriptor implements WritableComparable { * @see #isCompactionEnabled() */ public static final String COMPACTION_ENABLED = "COMPACTION_ENABLED"; - private static final ImmutableBytesWritable COMPACTION_ENABLED_KEY = - new ImmutableBytesWritable(Bytes.toBytes(COMPACTION_ENABLED)); + private static final Bytes COMPACTION_ENABLED_KEY = + new Bytes(Bytes.toBytes(COMPACTION_ENABLED)); /** * INTERNAL Used by HBase Shell interface to access this metadata @@ -141,8 +125,10 @@ public class HTableDescriptor implements WritableComparable { * @see #getMemStoreFlushSize() */ public static final String MEMSTORE_FLUSHSIZE = "MEMSTORE_FLUSHSIZE"; - private static final ImmutableBytesWritable MEMSTORE_FLUSHSIZE_KEY = - new ImmutableBytesWritable(Bytes.toBytes(MEMSTORE_FLUSHSIZE)); + private static final Bytes MEMSTORE_FLUSHSIZE_KEY = + new Bytes(Bytes.toBytes(MEMSTORE_FLUSHSIZE)); + + public static final String FLUSH_POLICY = "FLUSH_POLICY"; /** * INTERNAL Used by rest interface to access this metadata @@ -151,8 +137,8 @@ public class HTableDescriptor implements WritableComparable { * @see #isRootRegion() */ public static final String IS_ROOT = "IS_ROOT"; - private static final ImmutableBytesWritable IS_ROOT_KEY = - new ImmutableBytesWritable(Bytes.toBytes(IS_ROOT)); + private static final Bytes IS_ROOT_KEY = + new Bytes(Bytes.toBytes(IS_ROOT)); /** * INTERNAL Used by rest interface to access this metadata @@ -162,8 +148,8 @@ public class HTableDescriptor implements WritableComparable { * @see #isMetaRegion() */ public static final String IS_META = "IS_META"; - private static 
final ImmutableBytesWritable IS_META_KEY = - new ImmutableBytesWritable(Bytes.toBytes(IS_META)); + private static final Bytes IS_META_KEY = + new Bytes(Bytes.toBytes(IS_META)); /** * INTERNAL Used by HBase Shell interface to access this metadata @@ -173,22 +159,22 @@ public class HTableDescriptor implements WritableComparable { @Deprecated public static final String DEFERRED_LOG_FLUSH = "DEFERRED_LOG_FLUSH"; @Deprecated - private static final ImmutableBytesWritable DEFERRED_LOG_FLUSH_KEY = - new ImmutableBytesWritable(Bytes.toBytes(DEFERRED_LOG_FLUSH)); + private static final Bytes DEFERRED_LOG_FLUSH_KEY = + new Bytes(Bytes.toBytes(DEFERRED_LOG_FLUSH)); /** * INTERNAL {@link Durability} setting for the table. */ public static final String DURABILITY = "DURABILITY"; - private static final ImmutableBytesWritable DURABILITY_KEY = - new ImmutableBytesWritable(Bytes.toBytes("DURABILITY")); + private static final Bytes DURABILITY_KEY = + new Bytes(Bytes.toBytes("DURABILITY")); /** * INTERNAL number of region replicas for the table. */ public static final String REGION_REPLICATION = "REGION_REPLICATION"; - private static final ImmutableBytesWritable REGION_REPLICATION_KEY = - new ImmutableBytesWritable(Bytes.toBytes(REGION_REPLICATION)); + private static final Bytes REGION_REPLICATION_KEY = + new Bytes(Bytes.toBytes(REGION_REPLICATION)); /** Default durability for HTD is USE_DEFAULT, which defaults to HBase-global default value */ private static final Durability DEFAULT_DURABLITY = Durability.USE_DEFAULT; @@ -198,11 +184,11 @@ public class HTableDescriptor implements WritableComparable { * replace booleans being saved as Strings with plain booleans. Need a * migration script to do this. TODO. */ - private static final ImmutableBytesWritable FALSE = - new ImmutableBytesWritable(Bytes.toBytes(Boolean.FALSE.toString())); + private static final Bytes FALSE = + new Bytes(Bytes.toBytes(Boolean.FALSE.toString())); - private static final ImmutableBytesWritable TRUE = - new ImmutableBytesWritable(Bytes.toBytes(Boolean.TRUE.toString())); + private static final Bytes TRUE = + new Bytes(Bytes.toBytes(Boolean.TRUE.toString())); private static final boolean DEFAULT_DEFERRED_LOG_FLUSH = false; @@ -226,8 +212,9 @@ public class HTableDescriptor implements WritableComparable { private final static Map DEFAULT_VALUES = new HashMap(); - private final static Set RESERVED_KEYWORDS - = new HashSet(); + private final static Set RESERVED_KEYWORDS + = new HashSet(); + static { DEFAULT_VALUES.put(MAX_FILESIZE, String.valueOf(HConstants.DEFAULT_MAX_FILE_SIZE)); @@ -239,7 +226,7 @@ public class HTableDescriptor implements WritableComparable { DEFAULT_VALUES.put(DURABILITY, DEFAULT_DURABLITY.name()); //use the enum name DEFAULT_VALUES.put(REGION_REPLICATION, String.valueOf(DEFAULT_REGION_REPLICATION)); for (String s : DEFAULT_VALUES.keySet()) { - RESERVED_KEYWORDS.add(new ImmutableBytesWritable(Bytes.toBytes(s))); + RESERVED_KEYWORDS.add(new Bytes(Bytes.toBytes(s))); } RESERVED_KEYWORDS.add(IS_ROOT_KEY); RESERVED_KEYWORDS.add(IS_META_KEY); @@ -282,12 +269,12 @@ public class HTableDescriptor implements WritableComparable { * catalog tables, hbase:meta and -ROOT-. 
*/ protected HTableDescriptor(final TableName name, HColumnDescriptor[] families, - Map values) { + Map values) { setName(name); for(HColumnDescriptor descriptor : families) { this.families.put(descriptor.getName(), descriptor); } - for (Map.Entry entry: + for (Map.Entry entry : values.entrySet()) { setValue(entry.getKey(), entry.getValue()); } @@ -347,7 +334,7 @@ public class HTableDescriptor implements WritableComparable { for (HColumnDescriptor c: desc.families.values()) { this.families.put(c.getName(), new HColumnDescriptor(c)); } - for (Map.Entry e: + for (Map.Entry e : desc.values.entrySet()) { setValue(e.getKey(), e.getValue()); } @@ -411,7 +398,7 @@ public class HTableDescriptor implements WritableComparable { return (value != null)? Boolean.valueOf(Bytes.toString(value)): Boolean.FALSE; } - private boolean isSomething(final ImmutableBytesWritable key, + private boolean isSomething(final Bytes key, final boolean valueIfNull) { byte [] value = getValue(key); if (value != null) { @@ -449,11 +436,11 @@ public class HTableDescriptor implements WritableComparable { * @see #values */ public byte[] getValue(byte[] key) { - return getValue(new ImmutableBytesWritable(key)); + return getValue(new Bytes(key)); } - private byte[] getValue(final ImmutableBytesWritable key) { - ImmutableBytesWritable ibw = values.get(key); + private byte[] getValue(final Bytes key) { + Bytes ibw = values.get(key); if (ibw == null) return null; return ibw.get(); @@ -479,7 +466,7 @@ public class HTableDescriptor implements WritableComparable { * @return unmodifiable map {@link #values}. * @see #values */ - public Map getValues() { + public Map getValues() { // shallow pointer copy return Collections.unmodifiableMap(values); } @@ -492,7 +479,7 @@ public class HTableDescriptor implements WritableComparable { * @see #values */ public HTableDescriptor setValue(byte[] key, byte[] value) { - setValue(new ImmutableBytesWritable(key), new ImmutableBytesWritable(value)); + setValue(new Bytes(key), new Bytes(value)); return this; } @@ -500,9 +487,9 @@ public class HTableDescriptor implements WritableComparable { * @param key The key. * @param value The value. */ - private HTableDescriptor setValue(final ImmutableBytesWritable key, + private HTableDescriptor setValue(final Bytes key, final String value) { - setValue(key, new ImmutableBytesWritable(Bytes.toBytes(value))); + setValue(key, new Bytes(Bytes.toBytes(value))); return this; } @@ -512,8 +499,8 @@ public class HTableDescriptor implements WritableComparable { * @param key The key. * @param value The value. */ - public HTableDescriptor setValue(final ImmutableBytesWritable key, - final ImmutableBytesWritable value) { + public HTableDescriptor setValue(final Bytes key, + final Bytes value) { if (key.compareTo(DEFERRED_LOG_FLUSH_KEY) == 0) { boolean isDeferredFlush = Boolean.valueOf(Bytes.toString(value.get())); LOG.warn("HTableDescriptor property:" + DEFERRED_LOG_FLUSH + " is deprecated, " + @@ -548,7 +535,7 @@ public class HTableDescriptor implements WritableComparable { * parameters. */ public void remove(final String key) { - remove(new ImmutableBytesWritable(Bytes.toBytes(key))); + remove(new Bytes(Bytes.toBytes(key))); } /** @@ -557,7 +544,7 @@ public class HTableDescriptor implements WritableComparable { * @param key Key whose key and value we're to remove from HTableDescriptor * parameters. 
*/ - public void remove(ImmutableBytesWritable key) { + public void remove(Bytes key) { values.remove(key); } @@ -568,7 +555,7 @@ public class HTableDescriptor implements WritableComparable { * parameters. */ public void remove(final byte [] key) { - remove(new ImmutableBytesWritable(key)); + remove(new Bytes(key)); } /** @@ -779,6 +766,28 @@ public class HTableDescriptor implements WritableComparable { } /** + * This sets the class associated with the flush policy which determines determines the stores + * need to be flushed when flushing a region. The class used by default is defined in + * {@link org.apache.hadoop.hbase.regionserver.FlushPolicy} + * @param clazz the class name + */ + public HTableDescriptor setFlushPolicyClassName(String clazz) { + setValue(FLUSH_POLICY, clazz); + return this; + } + + /** + * This gets the class associated with the flush policy which determines the stores need to be + * flushed when flushing a region. The class used by default is defined in + * {@link org.apache.hadoop.hbase.regionserver.FlushPolicy} + * @return the class name of the flush policy for this table. If this returns null, the default + * flush policy is used. + */ + public String getFlushPolicyClassName() { + return getValue(FLUSH_POLICY); + } + + /** * Adds a column family. * For the updating purpose please use {@link #modifyFamily(HColumnDescriptor)} instead. * @param family HColumnDescriptor of family to add. @@ -855,9 +864,9 @@ public class HTableDescriptor implements WritableComparable { StringBuilder s = new StringBuilder(); // step 1: set partitioning and pruning - Set reservedKeys = new TreeSet(); - Set userKeys = new TreeSet(); - for (ImmutableBytesWritable k : values.keySet()) { + Set reservedKeys = new TreeSet(); + Set userKeys = new TreeSet(); + for (Bytes k : values.keySet()) { if (k == null || k.get() == null) continue; String key = Bytes.toString(k.get()); // in this section, print out reserved keywords + coprocessor info @@ -889,7 +898,7 @@ public class HTableDescriptor implements WritableComparable { // print all reserved keys first boolean printCommaForAttr = false; - for (ImmutableBytesWritable k : reservedKeys) { + for (Bytes k : reservedKeys) { String key = Bytes.toString(k.get()); String value = Bytes.toStringBinary(values.get(k).get()); if (printCommaForAttr) s.append(", "); @@ -906,7 +915,7 @@ public class HTableDescriptor implements WritableComparable { s.append(HConstants.METADATA).append(" => "); s.append("{"); boolean printCommaForCfg = false; - for (ImmutableBytesWritable k : userKeys) { + for (Bytes k : userKeys) { String key = Bytes.toString(k.get()); String value = Bytes.toStringBinary(values.get(k).get()); if (printCommaForCfg) s.append(", "); @@ -969,8 +978,7 @@ public class HTableDescriptor implements WritableComparable { @Override public int hashCode() { int result = this.name.hashCode(); - result ^= Byte.valueOf(TABLE_DESCRIPTOR_VERSION).hashCode(); - if (this.families != null && this.families.size() > 0) { + if (this.families.size() > 0) { for (HColumnDescriptor e: this.families.values()) { result ^= e.hashCode(); } @@ -980,84 +988,6 @@ public class HTableDescriptor implements WritableComparable { return result; } - /** - * INTERNAL This method is a part of {@link WritableComparable} interface - * and is used for de-serialization of the HTableDescriptor over RPC - * @deprecated Writables are going away. Use pb {@link #parseFrom(byte[])} instead. 
- */ - @Deprecated - @Override - public void readFields(DataInput in) throws IOException { - int version = in.readInt(); - if (version < 3) - throw new IOException("versions < 3 are not supported (and never existed!?)"); - // version 3+ - name = TableName.valueOf(Bytes.readByteArray(in)); - setRootRegion(in.readBoolean()); - setMetaRegion(in.readBoolean()); - values.clear(); - configuration.clear(); - int numVals = in.readInt(); - for (int i = 0; i < numVals; i++) { - ImmutableBytesWritable key = new ImmutableBytesWritable(); - ImmutableBytesWritable value = new ImmutableBytesWritable(); - key.readFields(in); - value.readFields(in); - setValue(key, value); - } - families.clear(); - int numFamilies = in.readInt(); - for (int i = 0; i < numFamilies; i++) { - HColumnDescriptor c = new HColumnDescriptor(); - c.readFields(in); - families.put(c.getName(), c); - } - if (version >= 7) { - int numConfigs = in.readInt(); - for (int i = 0; i < numConfigs; i++) { - ImmutableBytesWritable key = new ImmutableBytesWritable(); - ImmutableBytesWritable value = new ImmutableBytesWritable(); - key.readFields(in); - value.readFields(in); - configuration.put( - Bytes.toString(key.get(), key.getOffset(), key.getLength()), - Bytes.toString(value.get(), value.getOffset(), value.getLength())); - } - } - } - - /** - * INTERNAL This method is a part of {@link WritableComparable} interface - * and is used for serialization of the HTableDescriptor over RPC - * @deprecated Writables are going away. - * Use {@link com.google.protobuf.MessageLite#toByteArray} instead. - */ - @Deprecated - @Override - public void write(DataOutput out) throws IOException { - out.writeInt(TABLE_DESCRIPTOR_VERSION); - Bytes.writeByteArray(out, name.toBytes()); - out.writeBoolean(isRootRegion()); - out.writeBoolean(isMetaRegion()); - out.writeInt(values.size()); - for (Map.Entry e: - values.entrySet()) { - e.getKey().write(out); - e.getValue().write(out); - } - out.writeInt(families.size()); - for(Iterator it = families.values().iterator(); - it.hasNext(); ) { - HColumnDescriptor family = it.next(); - family.write(out); - } - out.writeInt(configuration.size()); - for (Map.Entry e : configuration.entrySet()) { - new ImmutableBytesWritable(Bytes.toBytes(e.getKey())).write(out); - new ImmutableBytesWritable(Bytes.toBytes(e.getValue())).write(out); - } - } - // Comparable /** @@ -1065,7 +995,7 @@ public class HTableDescriptor implements WritableComparable { * This compares the content of the two descriptors and not the reference. 
* * @return 0 if the contents of the descriptors are exactly matching, - * 1 if there is a mismatch in the contents + * 1 if there is a mismatch in the contents */ @Override public int compareTo(final HTableDescriptor other) { @@ -1132,7 +1062,7 @@ public class HTableDescriptor implements WritableComparable { */ public HTableDescriptor setRegionReplication(int regionReplication) { setValue(REGION_REPLICATION_KEY, - new ImmutableBytesWritable(Bytes.toBytes(Integer.toString(regionReplication)))); + new Bytes(Bytes.toBytes(Integer.toString(regionReplication)))); return this; } @@ -1247,7 +1177,7 @@ public class HTableDescriptor implements WritableComparable { // generate a coprocessor key int maxCoprocessorNumber = 0; Matcher keyMatcher; - for (Map.Entry e: + for (Map.Entry e : this.values.entrySet()) { keyMatcher = HConstants.CP_HTD_ATTR_KEY_PATTERN.matcher( @@ -1278,7 +1208,7 @@ public class HTableDescriptor implements WritableComparable { public boolean hasCoprocessor(String className) { Matcher keyMatcher; Matcher valueMatcher; - for (Map.Entry e: + for (Map.Entry e : this.values.entrySet()) { keyMatcher = HConstants.CP_HTD_ATTR_KEY_PATTERN.matcher( @@ -1310,7 +1240,7 @@ public class HTableDescriptor implements WritableComparable { List result = new ArrayList(); Matcher keyMatcher; Matcher valueMatcher; - for (Map.Entry e : this.values.entrySet()) { + for (Map.Entry e : this.values.entrySet()) { keyMatcher = HConstants.CP_HTD_ATTR_KEY_PATTERN.matcher(Bytes.toString(e.getKey().get())); if (!keyMatcher.matches()) { continue; @@ -1330,10 +1260,10 @@ public class HTableDescriptor implements WritableComparable { * @param className Class name of the co-processor */ public void removeCoprocessor(String className) { - ImmutableBytesWritable match = null; + Bytes match = null; Matcher keyMatcher; Matcher valueMatcher; - for (Map.Entry e : this.values + for (Map.Entry e : this.values .entrySet()) { keyMatcher = HConstants.CP_HTD_ATTR_KEY_PATTERN.matcher(Bytes.toString(e .getKey().get())); @@ -1377,10 +1307,9 @@ public class HTableDescriptor implements WritableComparable { new Path(name.getNamespaceAsString(), new Path(name.getQualifierAsString())))); } - /** - * Table descriptor for hbase:meta catalog table - * @deprecated Use TableDescriptors#get(TableName.META_TABLE_NAME) or - * HBaseAdmin#getTableDescriptor(TableName.META_TABLE_NAME) instead. + /** Table descriptor for hbase:meta catalog table + * Deprecated, use TableDescriptors#get(TableName.META_TABLE) or + * Admin#getTableDescriptor(TableName.META_TABLE) instead. */ @Deprecated public static final HTableDescriptor META_TABLEDESC = new HTableDescriptor( @@ -1388,9 +1317,9 @@ public class HTableDescriptor implements WritableComparable { new HColumnDescriptor[] { new HColumnDescriptor(HConstants.CATALOG_FAMILY) // Ten is arbitrary number. Keep versions to help debugging. - .setMaxVersions(HConstants.DEFAULT_HBASE_META_VERSIONS) + .setMaxVersions(10) .setInMemory(true) - .setBlocksize(HConstants.DEFAULT_HBASE_META_BLOCK_SIZE) + .setBlocksize(8 * 1024) .setScope(HConstants.REPLICATION_SCOPE_LOCAL) // Disable blooms for meta. Needs work. Seems to mess w/ getClosestOrBefore. 
.setBloomFilterType(BloomType.NONE) @@ -1474,7 +1403,7 @@ public class HTableDescriptor implements WritableComparable { public static HTableDescriptor parseFrom(final byte [] bytes) throws DeserializationException, IOException { if (!ProtobufUtil.isPBMagicPrefix(bytes)) { - return (HTableDescriptor)Writables.getWritable(bytes, new HTableDescriptor()); + throw new DeserializationException("Expected PB encoded HTableDescriptor"); } int pblen = ProtobufUtil.lengthOfPBMagic(); TableSchema.Builder builder = TableSchema.newBuilder(); @@ -1493,7 +1422,7 @@ public class HTableDescriptor implements WritableComparable { public TableSchema convert() { TableSchema.Builder builder = TableSchema.newBuilder(); builder.setTableName(ProtobufUtil.toProtoTableName(getTableName())); - for (Map.Entry e: this.values.entrySet()) { + for (Map.Entry e : this.values.entrySet()) { BytesBytesPair.Builder aBuilder = BytesBytesPair.newBuilder(); aBuilder.setFirst(ByteStringer.wrap(e.getKey().get())); aBuilder.setSecond(ByteStringer.wrap(e.getValue().get())); @@ -1569,26 +1498,4 @@ public class HTableDescriptor implements WritableComparable { public void removeConfiguration(final String key) { configuration.remove(key); } - - public static HTableDescriptor metaTableDescriptor(final Configuration conf) - throws IOException { - HTableDescriptor metaDescriptor = new HTableDescriptor( - TableName.META_TABLE_NAME, - new HColumnDescriptor[] { - new HColumnDescriptor(HConstants.CATALOG_FAMILY) - .setMaxVersions(conf.getInt(HConstants.HBASE_META_VERSIONS, - HConstants.DEFAULT_HBASE_META_VERSIONS)) - .setInMemory(true) - .setBlocksize(conf.getInt(HConstants.HBASE_META_BLOCK_SIZE, - HConstants.DEFAULT_HBASE_META_BLOCK_SIZE)) - .setScope(HConstants.REPLICATION_SCOPE_LOCAL) - // Disable blooms for meta. Needs work. Seems to mess w/ getClosestOrBefore. - .setBloomFilterType(BloomType.NONE) - }); - metaDescriptor.addCoprocessor( - "org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint", - null, Coprocessor.PRIORITY_SYSTEM, null); - return metaDescriptor; - } - } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/InvalidFamilyOperationException.java hbase-client/src/main/java/org/apache/hadoop/hbase/InvalidFamilyOperationException.java index 492633c..5d9c2ed 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/InvalidFamilyOperationException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/InvalidFamilyOperationException.java @@ -21,7 +21,6 @@ package org.apache.hadoop.hbase; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; - /** * Thrown if a request is table schema modification is requested but * made for an invalid family name. 
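/*
 * Illustrative sketch, not part of the patch above: with the Writable read/write paths removed
 * from HTableDescriptor, descriptors round-trip through protobuf only (toByteArray()/parseFrom()),
 * and the new FLUSH_POLICY attribute is set through setFlushPolicyClassName(). The table name,
 * family name and policy class used here are made-up example values, not anything shipped with HBase.
 */
import java.io.IOException;

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.exceptions.DeserializationException;

public class DescriptorPbRoundTrip {
  public static void main(String[] args) throws DeserializationException, IOException {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("example_table"));
    htd.addFamily(new HColumnDescriptor("cf"));
    // Hypothetical custom flush policy class name; stored under the FLUSH_POLICY key added above.
    htd.setFlushPolicyClassName("org.example.MyFlushPolicy");

    byte[] pb = htd.toByteArray();                           // PB-magic-prefixed protobuf bytes
    HTableDescriptor copy = HTableDescriptor.parseFrom(pb);  // now rejects non-PB (old Writable) bytes
    System.out.println("round trip equal: " + (htd.compareTo(copy) == 0));
    System.out.println("flush policy: " + copy.getFlushPolicyClassName());
  }
}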
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/MasterNotRunningException.java hbase-client/src/main/java/org/apache/hadoop/hbase/MasterNotRunningException.java index a85b164..ddd03e8 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/MasterNotRunningException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/MasterNotRunningException.java @@ -18,11 +18,11 @@ */ package org.apache.hadoop.hbase; +import java.io.IOException; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import java.io.IOException; - /** * Thrown if the master is not running */ diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java index 187856e..5abf6a4 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/MetaTableAccessor.java @@ -22,8 +22,9 @@ import com.google.protobuf.ServiceException; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.client.ClusterConnection; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.Delete; @@ -46,8 +47,6 @@ import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.util.PairOfSameType; import org.apache.hadoop.hbase.util.Threads; -import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import java.io.IOException; import java.io.InterruptedIOException; @@ -170,37 +169,31 @@ public class MetaTableAccessor { } /** - * Callers should call close on the returned {@link HTable} instance. - * @param connection connection we're using to access table - * @param tableName Table to get an {@link org.apache.hadoop.hbase.client.HTable} against. - * @return An {@link org.apache.hadoop.hbase.client.HTable} for tableName + * Callers should call close on the returned {@link Table} instance. + * @param connection connection we're using to access Meta + * @return An {@link Table} for hbase:meta * @throws IOException - * @SuppressWarnings("deprecation") */ - private static Table getHTable(final Connection connection, final TableName tableName) + static Table getMetaHTable(final Connection connection) throws IOException { // We used to pass whole CatalogTracker in here, now we just pass in Connection if (connection == null || connection.isClosed()) { throw new NullPointerException("No connection"); } // If the passed in 'connection' is 'managed' -- i.e. every second test uses - // an HTable or an HBaseAdmin with managed connections -- then doing + // a Table or an HBaseAdmin with managed connections -- then doing // connection.getTable will throw an exception saying you are NOT to use // managed connections getting tables. Leaving this as it is for now. Will // revisit when inclined to change all tests. User code probaby makes use of // managed connections too so don't change it till post hbase 1.0. - return new HTable(tableName, connection); - } - - /** - * Callers should call close on the returned {@link HTable} instance. 
- * @param connection connection we're using to access Meta - * @return An {@link HTable} for hbase:meta - * @throws IOException - */ - static Table getMetaHTable(final Connection connection) - throws IOException { - return getHTable(connection, TableName.META_TABLE_NAME); + // + // There should still be a way to use this method with an unmanaged connection. + if (connection instanceof ClusterConnection) { + if (((ClusterConnection) connection).isManaged()) { + return new HTable(TableName.META_TABLE_NAME, (ClusterConnection) connection); + } + } + return connection.getTable(TableName.META_TABLE_NAME); } /** @@ -373,22 +366,21 @@ public class MetaTableAccessor { } /** - * Gets all of the regions of the specified table. - * @param zkw zookeeper connection to access meta table + * Gets all of the regions of the specified table. Do not use this method + * to get meta table regions, use methods in MetaTableLocator instead. * @param connection connection we're using * @param tableName table we're looking for * @return Ordered list of {@link HRegionInfo}. * @throws IOException */ - public static List getTableRegions(ZooKeeperWatcher zkw, - Connection connection, TableName tableName) + public static List getTableRegions(Connection connection, TableName tableName) throws IOException { - return getTableRegions(zkw, connection, tableName, false); + return getTableRegions(connection, tableName, false); } /** - * Gets all of the regions of the specified table. - * @param zkw zookeeper connection to access meta table + * Gets all of the regions of the specified table. Do not use this method + * to get meta table regions, use methods in MetaTableLocator instead. * @param connection connection we're using * @param tableName table we're looking for * @param excludeOfflinedSplitParents If true, do not include offlined split @@ -396,12 +388,14 @@ public class MetaTableAccessor { * @return Ordered list of {@link HRegionInfo}. * @throws IOException */ - public static List getTableRegions(ZooKeeperWatcher zkw, - Connection connection, TableName tableName, final boolean excludeOfflinedSplitParents) - throws IOException { - List> result = null; - result = getTableRegionsAndLocations(zkw, connection, tableName, - excludeOfflinedSplitParents); + public static List getTableRegions(Connection connection, + TableName tableName, final boolean excludeOfflinedSplitParents) + throws IOException { + List> result; + + result = getTableRegionsAndLocations(connection, tableName, + excludeOfflinedSplitParents); + return getListOfHRegionInfos(result); } @@ -459,38 +453,31 @@ public class MetaTableAccessor { } /** - * @param zkw zookeeper connection to access meta table + * Do not use this method to get meta table regions, use methods in MetaTableLocator instead. * @param connection connection we're using * @param tableName table we're looking for * @return Return list of regioninfos and server. * @throws IOException */ public static List> - getTableRegionsAndLocations(ZooKeeperWatcher zkw, - Connection connection, TableName tableName) - throws IOException { - return getTableRegionsAndLocations(zkw, connection, tableName, true); + getTableRegionsAndLocations(Connection connection, TableName tableName) + throws IOException { + return getTableRegionsAndLocations(connection, tableName, true); } /** - * @param zkw ZooKeeperWatcher instance we're using to get hbase:meta location + * Do not use this method to get meta table regions, use methods in MetaTableLocator instead. 
* @param connection connection we're using * @param tableName table to work with * @return Return list of regioninfos and server addresses. * @throws IOException */ public static List> getTableRegionsAndLocations( - ZooKeeperWatcher zkw, Connection connection, final TableName tableName, + Connection connection, final TableName tableName, final boolean excludeOfflinedSplitParents) throws IOException { - if (tableName.equals(TableName.META_TABLE_NAME)) { - // If meta, do a bit of special handling. - ServerName serverName = new MetaTableLocator().getMetaRegionLocation(zkw); - List> list = - new ArrayList>(); - list.add(new Pair(HRegionInfo.FIRST_META_REGIONINFO, - serverName)); - return list; + throw new IOException("This method can't be used to locate meta regions;" + + " use MetaTableLocator instead"); } // Make a version of CollectingVisitor that collects HRegionInfo and ServerAddress CollectingVisitor> visitor = @@ -808,7 +795,7 @@ public class MetaTableAccessor { * @return a pair of HRegionInfo or PairOfSameType(null, null) if the region is not a split * parent */ - public static PairOfSameType getDaughterRegions(Result data) throws IOException { + public static PairOfSameType getDaughterRegions(Result data) { HRegionInfo splitA = getHRegionInfo(data, HConstants.SPLITA_QUALIFIER); HRegionInfo splitB = getHRegionInfo(data, HConstants.SPLITB_QUALIFIER); @@ -822,7 +809,7 @@ public class MetaTableAccessor { * @return a pair of HRegionInfo or PairOfSameType(null, null) if the region is not a split * parent */ - public static PairOfSameType getMergeRegions(Result data) throws IOException { + public static PairOfSameType getMergeRegions(Result data) { HRegionInfo mergeA = getHRegionInfo(data, HConstants.MERGEA_QUALIFIER); HRegionInfo mergeB = getHRegionInfo(data, HConstants.MERGEB_QUALIFIER); @@ -1089,8 +1076,8 @@ public class MetaTableAccessor { /** * Adds a hbase:meta row for the specified new region to the given catalog table. The - * HTable is not flushed or closed. - * @param meta the HTable for META + * Table is not flushed or closed. + * @param meta the Table for META * @param regionInfo region information * @throws IOException if problem connecting or updating meta */ @@ -1105,7 +1092,7 @@ public class MetaTableAccessor { * {@link #splitRegion(org.apache.hadoop.hbase.client.Connection, * HRegionInfo, HRegionInfo, HRegionInfo, ServerName)} * if you want to do that. - * @param meta the HTable for META + * @param meta the Table for META * @param regionInfo region information * @param splitA first split daughter of the parent regionInfo * @param splitB second split daughter of the parent regionInfo diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/NotServingRegionException.java hbase-client/src/main/java/org/apache/hadoop/hbase/NotServingRegionException.java index 1523ff6..8975c74 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/NotServingRegionException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/NotServingRegionException.java @@ -18,12 +18,12 @@ */ package org.apache.hadoop.hbase; +import java.io.IOException; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.util.Bytes; -import java.io.IOException; - /** * Thrown by a region server if it is sent a request for a region it is not * serving. 
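/*
 * Illustrative sketch, not part of the patch above: after the MetaTableAccessor changes, a table's
 * regions are listed through a Connection alone; the ZooKeeperWatcher parameter is gone, and
 * hbase:meta itself must be located with MetaTableLocator instead. "example_table" is a made-up
 * table name.
 */
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListTableRegions {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      List<HRegionInfo> regions =
          MetaTableAccessor.getTableRegions(connection, TableName.valueOf("example_table"));
      for (HRegionInfo hri : regions) {
        System.out.println(hri.getRegionNameAsString());
      }
    }
  }
}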
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/PleaseHoldException.java hbase-client/src/main/java/org/apache/hadoop/hbase/PleaseHoldException.java index 33ff8b1..a5ae44b 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/PleaseHoldException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/PleaseHoldException.java @@ -20,7 +20,6 @@ package org.apache.hadoop.hbase; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.HBaseIOException; /** * This exception is thrown by the master when a region server was shut down and diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/RegionException.java hbase-client/src/main/java/org/apache/hadoop/hbase/RegionException.java index 13d2f80..24ea16c 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/RegionException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/RegionException.java @@ -20,7 +20,6 @@ package org.apache.hadoop.hbase; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.HBaseIOException; /** * Thrown when something happens related to region handling. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/RegionTransition.java hbase-client/src/main/java/org/apache/hadoop/hbase/RegionTransition.java deleted file mode 100644 index c74e11f..0000000 --- hbase-client/src/main/java/org/apache/hadoop/hbase/RegionTransition.java +++ /dev/null @@ -1,139 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase; - -import org.apache.hadoop.hbase.util.ByteStringer; -import com.google.protobuf.InvalidProtocolBufferException; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.protobuf.ProtobufUtil; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; -import org.apache.hadoop.hbase.util.Bytes; - -/** - * Current state of a region in transition. Holds state of a region as it moves through the - * steps that take it from offline to open, etc. Used by regionserver, master, and zk packages. - * Encapsulates protobuf serialization/deserialization so we don't leak generated pb outside this - * class. Create an instance using createRegionTransition(EventType, byte[], ServerName). - *
    Immutable - */ -@InterfaceAudience.Private -public class RegionTransition { - private final ZooKeeperProtos.RegionTransition rt; - - /** - * Shutdown constructor - */ - private RegionTransition() { - this(null); - } - - private RegionTransition(final ZooKeeperProtos.RegionTransition rt) { - this.rt = rt; - } - - public EventType getEventType() { - return EventType.get(this.rt.getEventTypeCode()); - } - - public ServerName getServerName() { - return ProtobufUtil.toServerName(this.rt.getServerName()); - } - - public long getCreateTime() { - return this.rt.getCreateTime(); - } - - /** - * @return Full region name - */ - public byte [] getRegionName() { - return this.rt.getRegionName().toByteArray(); - } - - public byte [] getPayload() { - return this.rt.getPayload().toByteArray(); - } - - @Override - public String toString() { - byte [] payload = getPayload(); - return "region=" + Bytes.toStringBinary(getRegionName()) + ", state=" + getEventType() + - ", servername=" + getServerName() + ", createTime=" + this.getCreateTime() + - ", payload.length=" + (payload == null? 0: payload.length); - } - - /** - * @param type - * @param regionName - * @param sn - * @return a serialized pb {@link RegionTransition} - */ - public static RegionTransition createRegionTransition(final EventType type, - final byte [] regionName, final ServerName sn) { - return createRegionTransition(type, regionName, sn, null); - } - - /** - * @param type - * @param regionName - * @param sn - * @param payload May be null - * @return a serialized pb {@link RegionTransition} - */ - public static RegionTransition createRegionTransition(final EventType type, - final byte [] regionName, final ServerName sn, final byte [] payload) { - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName pbsn = - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.newBuilder(). - setHostName(sn.getHostname()).setPort(sn.getPort()).setStartCode(sn.getStartcode()).build(); - ZooKeeperProtos.RegionTransition.Builder builder = ZooKeeperProtos.RegionTransition.newBuilder(). - setEventTypeCode(type.getCode()).setRegionName(ByteStringer.wrap(regionName)). - setServerName(pbsn); - builder.setCreateTime(System.currentTimeMillis()); - if (payload != null) builder.setPayload(ByteStringer.wrap(payload)); - return new RegionTransition(builder.build()); - } - - /** - * @param data Serialized date to parse. - * @return A RegionTransition instance made of the passed data - * @throws DeserializationException - * @see #toByteArray() - */ - public static RegionTransition parseFrom(final byte [] data) throws DeserializationException { - ProtobufUtil.expectPBMagicPrefix(data); - try { - int prefixLen = ProtobufUtil.lengthOfPBMagic(); - ZooKeeperProtos.RegionTransition rt = ZooKeeperProtos.RegionTransition.newBuilder(). 
- mergeFrom(data, prefixLen, data.length - prefixLen).build(); - return new RegionTransition(rt); - } catch (InvalidProtocolBufferException e) { - throw new DeserializationException(e); - } - } - - /** - * @return This instance serialized into a byte array - * @see #parseFrom(byte[]) - */ - public byte [] toByteArray() { - return ProtobufUtil.prependPBMagic(this.rt.toByteArray()); - } -} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/RemoteExceptionHandler.java hbase-client/src/main/java/org/apache/hadoop/hbase/RemoteExceptionHandler.java deleted file mode 100644 index dbed3e3..0000000 --- hbase-client/src/main/java/org/apache/hadoop/hbase/RemoteExceptionHandler.java +++ /dev/null @@ -1,120 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.ipc.RemoteException; - -import java.io.IOException; -import java.lang.reflect.Constructor; -import java.lang.reflect.InvocationTargetException; - -/** - * An immutable class which contains a static method for handling - * org.apache.hadoop.ipc.RemoteException exceptions. - */ -@InterfaceAudience.Private -public class RemoteExceptionHandler { - /* Not instantiable */ - private RemoteExceptionHandler() {super();} - - /** - * Examine passed Throwable. See if its carrying a RemoteException. If so, - * run {@link #decodeRemoteException(RemoteException)} on it. Otherwise, - * pass back t unaltered. - * @param t Throwable to examine. - * @return Decoded RemoteException carried by t or - * t unaltered. - */ - public static Throwable checkThrowable(final Throwable t) { - Throwable result = t; - if (t instanceof RemoteException) { - try { - result = - RemoteExceptionHandler.decodeRemoteException((RemoteException)t); - } catch (Throwable tt) { - result = tt; - } - } - return result; - } - - /** - * Examine passed IOException. See if its carrying a RemoteException. If so, - * run {@link #decodeRemoteException(RemoteException)} on it. Otherwise, - * pass back e unaltered. - * @param e Exception to examine. - * @return Decoded RemoteException carried by e or - * e unaltered. - */ - public static IOException checkIOException(final IOException e) { - Throwable t = checkThrowable(e); - return t instanceof IOException? (IOException)t: new IOException(t); - } - - /** - * Converts org.apache.hadoop.ipc.RemoteException into original exception, - * if possible. If the original exception is an Error or a RuntimeException, - * throws the original exception. - * - * @param re original exception - * @return decoded RemoteException if it is an instance of or a subclass of - * IOException, or the original RemoteException if it cannot be decoded. 
- * - * @throws IOException indicating a server error ocurred if the decoded - * exception is not an IOException. The decoded exception is set as - * the cause. - * @deprecated Use {@link RemoteException#unwrapRemoteException()} instead. - * In fact we should look into deprecating this whole class - St.Ack 2010929 - */ - public static IOException decodeRemoteException(final RemoteException re) - throws IOException { - IOException i = re; - - try { - Class c = Class.forName(re.getClassName()); - - Class[] parameterTypes = { String.class }; - Constructor ctor = c.getConstructor(parameterTypes); - - Object[] arguments = { re.getMessage() }; - Throwable t = (Throwable) ctor.newInstance(arguments); - - if (t instanceof IOException) { - i = (IOException) t; - } else { - i = new IOException("server error"); - i.initCause(t); - throw i; - } - - } catch (ClassNotFoundException x) { - // continue - } catch (NoSuchMethodException x) { - // continue - } catch (IllegalAccessException x) { - // continue - } catch (InvocationTargetException x) { - // continue - } catch (InstantiationException x) { - // continue - } - return i; - } -} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/ServerLoad.java hbase-client/src/main/java/org/apache/hadoop/hbase/ServerLoad.java index 06a61c0..18e5d67 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/ServerLoad.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/ServerLoad.java @@ -20,6 +20,12 @@ package org.apache.hadoop.hbase; +import java.util.Arrays; +import java.util.List; +import java.util.Map; +import java.util.TreeMap; +import java.util.TreeSet; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos; @@ -27,12 +33,6 @@ import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.Coprocessor; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Strings; -import java.util.Arrays; -import java.util.List; -import java.util.Map; -import java.util.TreeMap; -import java.util.TreeSet; - /** * This class is used for exporting current state of load on a RegionServer. */ diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java deleted file mode 100644 index dc5ba78..0000000 --- hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java +++ /dev/null @@ -1,402 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.apache.hadoop.hbase; - -import com.google.common.net.InetAddresses; -import com.google.protobuf.InvalidProtocolBufferException; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.protobuf.ProtobufUtil; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; -import org.apache.hadoop.hbase.util.Addressing; -import org.apache.hadoop.hbase.util.Bytes; - -import java.io.Serializable; -import java.util.ArrayList; -import java.util.List; -import java.util.regex.Pattern; - -/** - * Instance of an HBase ServerName. - * A server name is used uniquely identifying a server instance in a cluster and is made - * of the combination of hostname, port, and startcode. The startcode distingushes restarted - * servers on same hostname and port (startcode is usually timestamp of server startup). The - * {@link #toString()} format of ServerName is safe to use in the filesystem and as znode name - * up in ZooKeeper. Its format is: - * <hostname> '{@link #SERVERNAME_SEPARATOR}' <port> '{@link #SERVERNAME_SEPARATOR}' <startcode>. - * For example, if hostname is www.example.org, port is 1234, - * and the startcode for the regionserver is 1212121212, then - * the {@link #toString()} would be www.example.org,1234,1212121212. - * - *
    You can obtain a versioned serialized form of this class by calling - * {@link #getVersionedBytes()}. To deserialize, call {@link #parseVersionedServerName(byte[])} - * - *
    Immutable. - */ -@InterfaceAudience.Public -@InterfaceStability.Evolving -public class ServerName implements Comparable, Serializable { - private static final long serialVersionUID = 1367463982557264981L; - - /** - * Version for this class. - * Its a short rather than a byte so I can for sure distinguish between this - * version of this class and the version previous to this which did not have - * a version. - */ - private static final short VERSION = 0; - static final byte [] VERSION_BYTES = Bytes.toBytes(VERSION); - - /** - * What to use if no startcode supplied. - */ - public static final int NON_STARTCODE = -1; - - /** - * This character is used as separator between server hostname, port and - * startcode. - */ - public static final String SERVERNAME_SEPARATOR = ","; - - public static final Pattern SERVERNAME_PATTERN = - Pattern.compile("[^" + SERVERNAME_SEPARATOR + "]+" + - SERVERNAME_SEPARATOR + Addressing.VALID_PORT_REGEX + - SERVERNAME_SEPARATOR + Addressing.VALID_PORT_REGEX + "$"); - - /** - * What to use if server name is unknown. - */ - public static final String UNKNOWN_SERVERNAME = "#unknown#"; - - private final String servername; - private final String hostnameOnly; - private final int port; - private final long startcode; - - /** - * Cached versioned bytes of this ServerName instance. - * @see #getVersionedBytes() - */ - private byte [] bytes; - public static final List EMPTY_SERVER_LIST = new ArrayList(0); - - private ServerName(final String hostname, final int port, final long startcode) { - // Drop the domain is there is one; no need of it in a local cluster. With it, we get long - // unwieldy names. - this.hostnameOnly = hostname; - this.port = port; - this.startcode = startcode; - this.servername = getServerName(this.hostnameOnly, port, startcode); - } - - /** - * @param hostname - * @return hostname minus the domain, if there is one (will do pass-through on ip addresses) - */ - static String getHostNameMinusDomain(final String hostname) { - if (InetAddresses.isInetAddress(hostname)) return hostname; - String [] parts = hostname.split("\\."); - if (parts == null || parts.length == 0) return hostname; - return parts[0]; - } - - private ServerName(final String serverName) { - this(parseHostname(serverName), parsePort(serverName), - parseStartcode(serverName)); - } - - private ServerName(final String hostAndPort, final long startCode) { - this(Addressing.parseHostname(hostAndPort), - Addressing.parsePort(hostAndPort), startCode); - } - - public static String parseHostname(final String serverName) { - if (serverName == null || serverName.length() <= 0) { - throw new IllegalArgumentException("Passed hostname is null or empty"); - } - if (!Character.isLetterOrDigit(serverName.charAt(0))) { - throw new IllegalArgumentException("Bad passed hostname, serverName=" + serverName); - } - int index = serverName.indexOf(SERVERNAME_SEPARATOR); - return serverName.substring(0, index); - } - - public static int parsePort(final String serverName) { - String [] split = serverName.split(SERVERNAME_SEPARATOR); - return Integer.parseInt(split[1]); - } - - public static long parseStartcode(final String serverName) { - int index = serverName.lastIndexOf(SERVERNAME_SEPARATOR); - return Long.parseLong(serverName.substring(index + 1)); - } - - /** - * Retrieve an instance of ServerName. - * Callers should use the equals method to compare returned instances, though we may return - * a shared immutable object as an internal optimization. 
- */ - public static ServerName valueOf(final String hostname, final int port, final long startcode) { - return new ServerName(hostname, port, startcode); - } - - /** - * Retrieve an instance of ServerName. - * Callers should use the equals method to compare returned instances, though we may return - * a shared immutable object as an internal optimization. - */ - public static ServerName valueOf(final String serverName) { - return new ServerName(serverName); - } - - /** - * Retrieve an instance of ServerName. - * Callers should use the equals method to compare returned instances, though we may return - * a shared immutable object as an internal optimization. - */ - public static ServerName valueOf(final String hostAndPort, final long startCode) { - return new ServerName(hostAndPort, startCode); - } - - @Override - public String toString() { - return getServerName(); - } - - /** - * @return Return a SHORT version of {@link ServerName#toString()}, one that has the host only, - * minus the domain, and the port only -- no start code; the String is for us internally mostly - * tying threads to their server. Not for external use. It is lossy and will not work in - * in compares, etc. - */ - public String toShortString() { - return Addressing.createHostAndPortStr(getHostNameMinusDomain(this.hostnameOnly), this.port); - } - - /** - * @return {@link #getServerName()} as bytes with a short-sized prefix with - * the ServerName#VERSION of this class. - */ - public synchronized byte [] getVersionedBytes() { - if (this.bytes == null) { - this.bytes = Bytes.add(VERSION_BYTES, Bytes.toBytes(getServerName())); - } - return this.bytes; - } - - public String getServerName() { - return servername; - } - - public String getHostname() { - return hostnameOnly; - } - - public int getPort() { - return port; - } - - public long getStartcode() { - return startcode; - } - - /** - * For internal use only. 
- * @param hostName - * @param port - * @param startcode - * @return Server name made of the concatenation of hostname, port and - * startcode formatted as <hostname> ',' <port> ',' <startcode> - */ - static String getServerName(String hostName, int port, long startcode) { - final StringBuilder name = new StringBuilder(hostName.length() + 1 + 5 + 1 + 13); - name.append(hostName); - name.append(SERVERNAME_SEPARATOR); - name.append(port); - name.append(SERVERNAME_SEPARATOR); - name.append(startcode); - return name.toString(); - } - - /** - * @param hostAndPort String in form of <hostname> ':' <port> - * @param startcode - * @return Server name made of the concatenation of hostname, port and - * startcode formatted as <hostname> ',' <port> ',' <startcode> - */ - public static String getServerName(final String hostAndPort, - final long startcode) { - int index = hostAndPort.indexOf(":"); - if (index <= 0) throw new IllegalArgumentException("Expected ':' "); - return getServerName(hostAndPort.substring(0, index), - Integer.parseInt(hostAndPort.substring(index + 1)), startcode); - } - - /** - * @return Hostname and port formatted as described at - * {@link Addressing#createHostAndPortStr(String, int)} - */ - public String getHostAndPort() { - return Addressing.createHostAndPortStr(this.hostnameOnly, this.port); - } - - /** - * @param serverName ServerName in form specified by {@link #getServerName()} - * @return The server start code parsed from servername - */ - public static long getServerStartcodeFromServerName(final String serverName) { - int index = serverName.lastIndexOf(SERVERNAME_SEPARATOR); - return Long.parseLong(serverName.substring(index + 1)); - } - - /** - * Utility method to excise the start code from a server name - * @param inServerName full server name - * @return server name less its start code - */ - public static String getServerNameLessStartCode(String inServerName) { - if (inServerName != null && inServerName.length() > 0) { - int index = inServerName.lastIndexOf(SERVERNAME_SEPARATOR); - if (index > 0) { - return inServerName.substring(0, index); - } - } - return inServerName; - } - - @Override - public int compareTo(ServerName other) { - int compare = this.getHostname().compareToIgnoreCase(other.getHostname()); - if (compare != 0) return compare; - compare = this.getPort() - other.getPort(); - if (compare != 0) return compare; - return (int)(this.getStartcode() - other.getStartcode()); - } - - @Override - public int hashCode() { - return getServerName().hashCode(); - } - - @Override - public boolean equals(Object o) { - if (this == o) return true; - if (o == null) return false; - if (!(o instanceof ServerName)) return false; - return this.compareTo((ServerName)o) == 0; - } - - /** - * @param left - * @param right - * @return True if other has same hostname and port. - */ - public static boolean isSameHostnameAndPort(final ServerName left, - final ServerName right) { - if (left == null) return false; - if (right == null) return false; - return left.getHostname().equals(right.getHostname()) && - left.getPort() == right.getPort(); - } - - /** - * Use this method instantiating a {@link ServerName} from bytes - * gotten from a call to {@link #getVersionedBytes()}. Will take care of the - * case where bytes were written by an earlier version of hbase. - * @param versionedBytes Pass bytes gotten from a call to {@link #getVersionedBytes()} - * @return A ServerName instance. 
- * @see #getVersionedBytes() - */ - public static ServerName parseVersionedServerName(final byte [] versionedBytes) { - // Version is a short. - short version = Bytes.toShort(versionedBytes); - if (version == VERSION) { - int length = versionedBytes.length - Bytes.SIZEOF_SHORT; - return valueOf(Bytes.toString(versionedBytes, Bytes.SIZEOF_SHORT, length)); - } - // Presume the bytes were written with an old version of hbase and that the - // bytes are actually a String of the form "'' ':' ''". - return valueOf(Bytes.toString(versionedBytes), NON_STARTCODE); - } - - /** - * @param str Either an instance of {@link ServerName#toString()} or a - * "'' ':' ''". - * @return A ServerName instance. - */ - public static ServerName parseServerName(final String str) { - return SERVERNAME_PATTERN.matcher(str).matches()? valueOf(str) : - valueOf(str, NON_STARTCODE); - } - - - /** - * @return true if the String follows the pattern of {@link ServerName#toString()}, false - * otherwise. - */ - public static boolean isFullServerName(final String str){ - if (str == null ||str.isEmpty()) return false; - return SERVERNAME_PATTERN.matcher(str).matches(); - } - - /** - * Get a ServerName from the passed in data bytes. - * @param data Data with a serialize server name in it; can handle the old style - * servername where servername was host and port. Works too with data that - * begins w/ the pb 'PBUF' magic and that is then followed by a protobuf that - * has a serialized {@link ServerName} in it. - * @return Returns null if data is null else converts passed data - * to a ServerName instance. - * @throws DeserializationException - */ - public static ServerName parseFrom(final byte [] data) throws DeserializationException { - if (data == null || data.length <= 0) return null; - if (ProtobufUtil.isPBMagicPrefix(data)) { - int prefixLen = ProtobufUtil.lengthOfPBMagic(); - try { - ZooKeeperProtos.Master rss = - ZooKeeperProtos.Master.PARSER.parseFrom(data, prefixLen, data.length - prefixLen); - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName sn = rss.getMaster(); - return valueOf(sn.getHostName(), sn.getPort(), sn.getStartCode()); - } catch (InvalidProtocolBufferException e) { - // A failed parse of the znode is pretty catastrophic. Rather than loop - // retrying hoping the bad bytes will changes, and rather than change - // the signature on this method to add an IOE which will send ripples all - // over the code base, throw a RuntimeException. This should "never" happen. - // Fail fast if it does. - throw new DeserializationException(e); - } - } - // The str returned could be old style -- pre hbase-1502 -- which was - // hostname and port seperated by a colon rather than hostname, port and - // startcode delimited by a ','. - String str = Bytes.toString(data); - int index = str.indexOf(ServerName.SERVERNAME_SEPARATOR); - if (index != -1) { - // Presume its ServerName serialized with versioned bytes. - return ServerName.parseVersionedServerName(data); - } - // Presume it a hostname:port format. 
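A round-trip sketch of the serialized forms the parsing methods here accept: the full string form, the versioned bytes, and the legacy host:port form (values are illustrative):

    ServerName sn = ServerName.valueOf("rs1.example.org", 16020, 1412960260000L);

    String full = sn.getServerName();           // "rs1.example.org,16020,1412960260000"
    byte[] versioned = sn.getVersionedBytes();  // short VERSION prefix followed by the string above

    assert ServerName.isFullServerName(full);
    assert sn.equals(ServerName.parseServerName(full));               // full form keeps the startcode
    assert sn.equals(ServerName.parseVersionedServerName(versioned));

    // Legacy "host:port" input is still accepted, but the startcode is lost.
    ServerName legacy = ServerName.parseServerName("rs1.example.org:16020");
    assert legacy.getStartcode() == ServerName.NON_STARTCODE;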
- String hostname = Addressing.parseHostname(str); - int port = Addressing.parsePort(str); - return valueOf(hostname, port, -1L); - } -} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/TableExistsException.java hbase-client/src/main/java/org/apache/hadoop/hbase/TableExistsException.java index 9590e4b..00cabbf 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/TableExistsException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/TableExistsException.java @@ -16,7 +16,6 @@ package org.apache.hadoop.hbase; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.TableName; /** * Thrown when a table exists but should not diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotDisabledException.java hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotDisabledException.java index ea707bf..9b5f728 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotDisabledException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotDisabledException.java @@ -20,7 +20,6 @@ package org.apache.hadoop.hbase; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.util.Bytes; /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotEnabledException.java hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotEnabledException.java index 210b875..0f78ee6 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotEnabledException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotEnabledException.java @@ -20,7 +20,6 @@ package org.apache.hadoop.hbase; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.util.Bytes; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotFoundException.java hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotFoundException.java index 2433a14..8ac5e20 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotFoundException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/TableNotFoundException.java @@ -20,7 +20,6 @@ package org.apache.hadoop.hbase; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.util.Bytes; /** Thrown when a table can not be located */ diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/YouAreDeadException.java hbase-client/src/main/java/org/apache/hadoop/hbase/YouAreDeadException.java index b55fe33..6ef5475 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/YouAreDeadException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/YouAreDeadException.java @@ -18,11 +18,11 @@ */ package org.apache.hadoop.hbase; +import java.io.IOException; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import java.io.IOException; - /** * This exception is thrown by the master when a region server reports and is * already being processed as dead. 
This can happen when a region server loses diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/ZooKeeperConnectionException.java hbase-client/src/main/java/org/apache/hadoop/hbase/ZooKeeperConnectionException.java index 7aebf33..422a659 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/ZooKeeperConnectionException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/ZooKeeperConnectionException.java @@ -18,11 +18,11 @@ */ package org.apache.hadoop.hbase; +import java.io.IOException; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import java.io.IOException; - /** * Thrown if the client can't connect to zookeeper */ diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Action.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Action.java index 2bc5d79..5743fd5 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Action.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Action.java @@ -18,8 +18,8 @@ */ package org.apache.hadoop.hbase.client; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * A Get, Put, Increment, Append, or Delete associated with it's region. Used internally by diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java index eedbdcb..c5d9556 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java @@ -18,6 +18,7 @@ */ package org.apache.hadoop.hbase.client; + import java.io.Closeable; import java.io.IOException; import java.util.List; @@ -41,6 +42,9 @@ import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos; +import org.apache.hadoop.hbase.quotas.QuotaFilter; +import org.apache.hadoop.hbase.quotas.QuotaRetriever; +import org.apache.hadoop.hbase.quotas.QuotaSettings; import org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException; import org.apache.hadoop.hbase.snapshot.HBaseSnapshotException; import org.apache.hadoop.hbase.snapshot.RestoreSnapshotException; @@ -1250,6 +1254,23 @@ public interface Admin extends Abortable, Closeable { void deleteSnapshots(final Pattern pattern) throws IOException; /** + * Apply the new quota settings. + * + * @param quota the quota settings + * @throws IOException if a remote or network exception occurs + */ + void setQuota(final QuotaSettings quota) throws IOException; + + /** + * Return a QuotaRetriever to list the quotas based on the filter. + * + * @param filter the quota settings filter + * @return the quota retriever + * @throws IOException if a remote or network exception occurs + */ + QuotaRetriever getQuotaRetriever(final QuotaFilter filter) throws IOException; + + /** * Creates and returns a {@link com.google.protobuf.RpcChannel} instance connected to the active * master.
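As a usage sketch for the two quota methods added to Admin above; QuotaSettingsFactory, ThrottleType and the QuotaFilter setter used here are assumed to come from the same quota patch series:

    // Throttle user "bob" to 100 requests per second, then list what is stored.
    static void throttleBob(Configuration conf) throws IOException {
      try (Connection connection = ConnectionFactory.createConnection(conf);
           Admin admin = connection.getAdmin()) {
        admin.setQuota(QuotaSettingsFactory.throttleUser("bob",
            ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS));
        try (QuotaRetriever quotas =
            admin.getQuotaRetriever(new QuotaFilter().setUserFilter("bob"))) {
          for (QuotaSettings settings : quotas) {
            System.out.println(settings);  // e.g. the throttle applied above
          }
        }
      }
    }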

    The obtained {@link com.google.protobuf.RpcChannel} instance can be used to access * a published coprocessor {@link com.google.protobuf.Service} using standard protobuf service diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java index 58c204b..d5a4552 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java @@ -23,11 +23,11 @@ import java.util.Map; import java.util.NavigableMap; import java.util.UUID; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.security.access.Permission; import org.apache.hadoop.hbase.security.visibility.CellVisibility; import org.apache.hadoop.hbase.util.Bytes; @@ -143,12 +143,6 @@ public class Append extends Mutation { } @Override - @Deprecated - public Append setWriteToWAL(boolean write) { - return (Append) super.setWriteToWAL(write); - } - - @Override public Append setDurability(Durability d) { return (Append) super.setDurability(d); } @@ -159,12 +153,6 @@ public class Append extends Mutation { } @Override - @Deprecated - public Append setFamilyMap(NavigableMap> map) { - return (Append) super.setFamilyMap(map); - } - - @Override public Append setClusterIds(List clusterIds) { return (Append) super.setClusterIds(clusterIds); } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java index 735c20a..8b1db8f 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java @@ -23,6 +23,7 @@ import java.io.IOException; import java.io.InterruptedIOException; import java.util.ArrayList; import java.util.Collection; +import java.util.Collections; import java.util.Date; import java.util.HashMap; import java.util.Iterator; @@ -48,6 +49,7 @@ import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.backoff.ServerStatistics; import org.apache.hadoop.hbase.client.coprocessor.Batch; import org.apache.hadoop.hbase.ipc.RpcControllerFactory; import org.apache.hadoop.hbase.util.Bytes; @@ -313,7 +315,8 @@ class AsyncProcess { * Uses default ExecutorService for this AP (must have been created with one). */ public AsyncRequestFuture submit(TableName tableName, List rows, - boolean atLeastOne, Batch.Callback callback, boolean needResults) throws InterruptedIOException { + boolean atLeastOne, Batch.Callback callback, boolean needResults) + throws InterruptedIOException { return submit(null, tableName, rows, atLeastOne, callback, needResults); } @@ -374,7 +377,7 @@ class AsyncProcess { locationErrors = new ArrayList(); locationErrorRows = new ArrayList(); LOG.error("Failed to get region location ", ex); - // This action failed before creating ars. Add it to retained but do not add to submit list. + // This action failed before creating ars. 
Retain it, but do not add to submit list. // We will then add it to ars in an already-failed state. retainedActions.add(new Action(r, ++posInList)); locationErrors.add(ex); @@ -918,14 +921,12 @@ class AsyncProcess { return loc; } - - /** * Send a multi action structure to the servers, after a delay depending on the attempt * number. Asynchronous. * * @param actionsByServer the actions structured by regions - * @param numAttempt the attempt number. + * @param numAttempt the attempt number. * @param actionsForReplicaThread original actions for replica thread; null on non-first call. */ private void sendMultiAction(Map> actionsByServer, @@ -935,33 +936,98 @@ class AsyncProcess { int actionsRemaining = actionsByServer.size(); // This iteration is by server (the HRegionLocation comparator is by server portion only). for (Map.Entry> e : actionsByServer.entrySet()) { - final ServerName server = e.getKey(); - final MultiAction multiAction = e.getValue(); + ServerName server = e.getKey(); + MultiAction multiAction = e.getValue(); incTaskCounters(multiAction.getRegions(), server); - Runnable runnable = Trace.wrap("AsyncProcess.sendMultiAction", - new SingleServerRequestRunnable(multiAction, numAttempt, server)); - if ((--actionsRemaining == 0) && reuseThread) { - runnable.run(); - } else { - try { - pool.submit(runnable); - } catch (RejectedExecutionException ree) { - // This should never happen. But as the pool is provided by the end user, let's secure - // this a little. - decTaskCounters(multiAction.getRegions(), server); - LOG.warn("#" + id + ", the task was rejected by the pool. This is unexpected." + - " Server is " + server.getServerName(), ree); - // We're likely to fail again, but this will increment the attempt counter, so it will - // finish. - receiveGlobalFailure(multiAction, server, numAttempt, ree); + Collection runnables = getNewMultiActionRunnable(server, multiAction, + numAttempt); + // make sure we correctly count the number of runnables before we try to reuse the send + // thread, in case we had to split the request into different runnables because of backoff + if (runnables.size() > actionsRemaining) { + actionsRemaining = runnables.size(); + } + + // run all the runnables + for (Runnable runnable : runnables) { + if ((--actionsRemaining == 0) && reuseThread) { + runnable.run(); + } else { + try { + pool.submit(runnable); + } catch (RejectedExecutionException ree) { + // This should never happen. But as the pool is provided by the end user, let's secure + // this a little. + decTaskCounters(multiAction.getRegions(), server); + LOG.warn("#" + id + ", the task was rejected by the pool. This is unexpected." + + " Server is " + server.getServerName(), ree); + // We're likely to fail again, but this will increment the attempt counter, so it will + // finish. 
+ receiveGlobalFailure(multiAction, server, numAttempt, ree); + } } } } + if (actionsForReplicaThread != null) { startWaitingForReplicaCalls(actionsForReplicaThread); } } + private Collection getNewMultiActionRunnable(ServerName server, + MultiAction multiAction, + int numAttempt) { + // no stats to manage, just do the standard action + if (AsyncProcess.this.connection.getStatisticsTracker() == null) { + return Collections.singletonList(Trace.wrap("AsyncProcess.sendMultiAction", + new SingleServerRequestRunnable(multiAction, numAttempt, server))); + } + + // group the actions by the amount of delay + Map actions = new HashMap(multiAction + .size()); + + // split up the actions + for (Map.Entry>> e : multiAction.actions.entrySet()) { + Long backoff = getBackoff(server, e.getKey()); + DelayingRunner runner = actions.get(backoff); + if (runner == null) { + actions.put(backoff, new DelayingRunner(backoff, e)); + } else { + runner.add(e); + } + } + + List toReturn = new ArrayList(actions.size()); + for (DelayingRunner runner : actions.values()) { + String traceText = "AsyncProcess.sendMultiAction"; + Runnable runnable = + new SingleServerRequestRunnable(runner.getActions(), numAttempt, server); + // use a delay runner only if we need to sleep for some time + if (runner.getSleepTime() > 0) { + runner.setRunner(runnable); + traceText = "AsyncProcess.clientBackoff.sendMultiAction"; + runnable = runner; + } + runnable = Trace.wrap(traceText, runnable); + toReturn.add(runnable); + + } + return toReturn; + } + + /** + * @param server server location where the target region is hosted + * @param regionName name of the region which we are going to write some data + * @return the amount of time the client should wait until it submit a request to the + * specified server and region + */ + private Long getBackoff(ServerName server, byte[] regionName) { + ServerStatisticTracker tracker = AsyncProcess.this.connection.getStatisticsTracker(); + ServerStatistics stats = tracker.getStats(server); + return AsyncProcess.this.connection.getBackoffPolicy() + .getBackoffTime(server, regionName, stats); + } + /** * Starts waiting to issue replica calls on a different thread; or issues them immediately. */ @@ -1169,6 +1235,13 @@ class AsyncProcess { ++failed; } } else { + // update the stats about the region, if its a user table. We don't want to slow down + // updates to meta tables, especially from internal updates (master, etc). + if (AsyncProcess.this.connection.getStatisticsTracker() != null) { + result = ResultStatsUtil.updateStats(result, + AsyncProcess.this.connection.getStatisticsTracker(), server, regionName); + } + if (callback != null) { try { //noinspection unchecked @@ -1419,23 +1492,24 @@ class AsyncProcess { } private String buildDetailedErrorMsg(String string, int index) { - String error = string + "; called for " + index + - ", actionsInProgress " + actionsInProgress.get() + "; replica gets: "; + StringBuilder error = new StringBuilder(128); + error.append(string).append("; called for ").append(index).append(", actionsInProgress ") + .append(actionsInProgress.get()).append("; replica gets: "); if (replicaGetIndices != null) { for (int i = 0; i < replicaGetIndices.length; ++i) { - error += replicaGetIndices[i] + ", "; + error.append(replicaGetIndices[i]).append(", "); } } else { - error += (hasAnyReplicaGets ? "all" : "none"); + error.append(hasAnyReplicaGets ? 
"all" : "none"); } - error += "; results "; + error.append("; results "); if (results != null) { for (int i = 0; i < results.length; ++i) { Object o = results[i]; - error += ((o == null) ? "null" : o.toString()) + ", "; + error.append(((o == null) ? "null" : o.toString())).append(", "); } } - return error; + return error.toString(); } @Override @@ -1497,7 +1571,6 @@ class AsyncProcess { } } - @VisibleForTesting /** Create AsyncRequestFuture. Isolated to be easily overridden in the tests. */ protected AsyncRequestFutureImpl createAsyncRequestFuture( diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Attributes.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Attributes.java index dbf9da9..78d3398 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Attributes.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Attributes.java @@ -19,11 +19,11 @@ package org.apache.hadoop.hbase.client; +import java.util.Map; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import java.util.Map; - @InterfaceAudience.Public @InterfaceStability.Stable public interface Attributes { diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Cancellable.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Cancellable.java new file mode 100644 index 0000000..43011e9 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Cancellable.java @@ -0,0 +1,31 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.client; +import org.apache.hadoop.hbase.classification.InterfaceAudience; + +/** + * This should be implemented by the Get/Scan implementations that + * talk to replica regions. When an RPC response is received from one + * of the replicas, the RPCs to the other replicas are cancelled. 
+ */ +@InterfaceAudience.Private +interface Cancellable { + public void cancel(); + public boolean isCancelled(); +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java index 9d59242..afc9bc4 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java @@ -24,7 +24,6 @@ import java.util.concurrent.ExecutorService; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; @@ -35,7 +34,7 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.NotServingRegionException; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.UnknownScannerException; -import org.apache.hadoop.hbase.client.RpcRetryingCallerFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException; import org.apache.hadoop.hbase.ipc.RpcControllerFactory; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; @@ -43,6 +42,10 @@ import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; import org.apache.hadoop.hbase.regionserver.RegionServerStoppedException; import org.apache.hadoop.hbase.util.Bytes; +import com.google.common.annotations.VisibleForTesting; + +import static org.apache.hadoop.hbase.client.ReversedClientScanner.createClosestRowBefore; + /** * Implements the scanner interface for the HBase client. * If there are multiple regions in a table, this scanner will iterate @@ -90,7 +93,8 @@ public class ClientScanner extends AbstractClientScanner { */ public ClientScanner(final Configuration conf, final Scan scan, final TableName tableName, ClusterConnection connection, RpcRetryingCallerFactory rpcFactory, - RpcControllerFactory controllerFactory, ExecutorService pool, int primaryOperationTimeout) throws IOException { + RpcControllerFactory controllerFactory, ExecutorService pool, int primaryOperationTimeout) + throws IOException { if (LOG.isTraceEnabled()) { LOG.trace("Scan table=" + tableName + ", startRow=" + Bytes.toStringBinary(scan.getStartRow())); @@ -227,7 +231,7 @@ public class ClientScanner extends AbstractClientScanner { // Close the previous scanner if it's open if (this.callable != null) { this.callable.setClose(); - call(scan, callable, caller, scannerTimeout); + call(callable, caller, scannerTimeout); this.callable = null; } @@ -264,7 +268,7 @@ public class ClientScanner extends AbstractClientScanner { callable = getScannerCallable(localStartKey, nbRows); // Open a scanner on the region server starting at the // beginning of the region - call(scan, callable, caller, scannerTimeout); + call(callable, caller, scannerTimeout); this.currentRegion = callable.getHRegionInfo(); if (this.scanMetrics != null) { this.scanMetrics.countOfRegions.incrementAndGet(); @@ -276,7 +280,12 @@ public class ClientScanner extends AbstractClientScanner { return true; } - static Result[] call(Scan scan, ScannerCallableWithReplicas callable, + @VisibleForTesting + boolean isAnyRPCcancelled() { + return callable.isAnyRPCcancelled(); + } + + static Result[] call(ScannerCallableWithReplicas callable, RpcRetryingCaller caller, int scannerTimeout) throws IOException, 
RuntimeException { if (Thread.interrupted()) { @@ -303,12 +312,12 @@ public class ClientScanner extends AbstractClientScanner { /** * Publish the scan metrics. For now, we use scan.setAttribute to pass the metrics back to the - * application or TableInputFormat.Later, we could push it to other systems. We don't use metrics - * framework because it doesn't support multi-instances of the same metrics on the same machine; - * for scan/map reduce scenarios, we will have multiple scans running at the same time. + * application or TableInputFormat.Later, we could push it to other systems. We don't use + * metrics framework because it doesn't support multi-instances of the same metrics on the same + * machine; for scan/map reduce scenarios, we will have multiple scans running at the same time. * - * By default, scan metrics are disabled; if the application wants to collect them, this behavior - * can be turned on by calling calling: + * By default, scan metrics are disabled; if the application wants to collect them, this + * behavior can be turned on by calling calling: * * scan.setAttribute(SCAN_ATTRIBUTES_METRICS_ENABLE, Bytes.toBytes(Boolean.TRUE)) */ @@ -336,39 +345,13 @@ public class ClientScanner extends AbstractClientScanner { callable.setCaching(this.caching); // This flag is set when we want to skip the result returned. We do // this when we reset scanner because it split under us. - boolean skipFirst = false; boolean retryAfterOutOfOrderException = true; do { try { - if (skipFirst) { - // Skip only the first row (which was the last row of the last - // already-processed batch). - callable.setCaching(1); - values = call(scan, callable, caller, scannerTimeout); - // When the replica switch happens, we need to do certain operations - // again. The scannercallable will openScanner with the right startkey - // but we need to pick up from there. Bypass the rest of the loop - // and let the catch-up happen in the beginning of the loop as it - // happens for the cases where we see exceptions. Since only openScanner - // would have happened, values would be null - if (values == null && callable.switchedToADifferentReplica()) { - if (this.lastResult != null) { //only skip if there was something read earlier - skipFirst = true; - } - this.currentRegion = callable.getHRegionInfo(); - continue; - } - callable.setCaching(this.caching); - skipFirst = false; - } // Server returns a null values if scanning is to stop. Else, // returns an empty array if scanning is to go on and we've just // exhausted current region. - values = call(scan, callable, caller, scannerTimeout); - if (skipFirst && values != null && values.length == 1) { - skipFirst = false; // Already skipped, unset it before scanning again - values = call(scan, callable, caller, scannerTimeout); - } + values = call(callable, caller, scannerTimeout); // When the replica switch happens, we need to do certain operations // again. The callable will openScanner with the right startkey // but we need to pick up from there. Bypass the rest of the loop @@ -376,9 +359,6 @@ public class ClientScanner extends AbstractClientScanner { // happens for the cases where we see exceptions. 
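As a usage sketch of the scan-metrics note above; SCAN_ATTRIBUTES_METRICS_DATA is assumed to remain the companion constant on Scan for reading the collected metrics back:

    Scan scan = new Scan();
    scan.setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_ENABLE, Bytes.toBytes(Boolean.TRUE));
    // ... run the scan via table.getScanner(scan) and close the ResultScanner ...
    byte[] serializedMetrics = scan.getAttribute(Scan.SCAN_ATTRIBUTES_METRICS_DATA);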
Since only openScanner // would have happened, values would be null if (values == null && callable.switchedToADifferentReplica()) { - if (this.lastResult != null) { //only skip if there was something read earlier - skipFirst = true; - } this.currentRegion = callable.getHRegionInfo(); continue; } @@ -421,11 +401,11 @@ public class ClientScanner extends AbstractClientScanner { // scanner starts at the correct row. Otherwise we may see previously // returned rows again. // (ScannerCallable by now has "relocated" the correct region) - this.scan.setStartRow(this.lastResult.getRow()); - - // Skip first row returned. We already let it out on previous - // invocation. - skipFirst = true; + if(scan.isReversed()){ + scan.setStartRow(createClosestRowBefore(lastResult.getRow())); + }else { + scan.setStartRow(Bytes.add(lastResult.getRow(), new byte[1])); + } } if (e instanceof OutOfOrderScannerNextException) { if (retryAfterOutOfOrderException) { @@ -445,7 +425,7 @@ public class ClientScanner extends AbstractClientScanner { continue; } long currentTime = System.currentTimeMillis(); - if (this.scanMetrics != null ) { + if (this.scanMetrics != null) { this.scanMetrics.sumOfMillisSecBetweenNexts.addAndGet(currentTime-lastNext); } lastNext = currentTime; @@ -480,7 +460,7 @@ public class ClientScanner extends AbstractClientScanner { if (callable != null) { callable.setClose(); try { - call(scan, callable, caller, scannerTimeout); + call(callable, caller, scannerTimeout); } catch (UnknownScannerException e) { // We used to catch this error, interpret, and rethrow. However, we // have since decided that it's not nice for a scanner's close to diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallReversedScanner.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallReversedScanner.java index 2cab830..86ff424 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallReversedScanner.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallReversedScanner.java @@ -20,20 +20,20 @@ package org.apache.hadoop.hbase.client; +import java.io.IOException; +import java.util.concurrent.ExecutorService; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.ipc.RpcControllerFactory; import org.apache.hadoop.hbase.util.Bytes; -import java.io.IOException; -import java.util.concurrent.ExecutorService; - /** * Client scanner for small reversed scan. 
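The restart logic above leans on a byte-ordering detail worth spelling out: appending a single zero byte yields the smallest row key that sorts strictly after the last row returned, so nothing is re-read and nothing is skipped (row keys below are illustrative). For reversed scans, createClosestRowBefore() plays the mirror-image role.

    byte[] lastRow = Bytes.toBytes("row-10");
    byte[] restartRow = Bytes.add(lastRow, new byte[1]);              // "row-10" + 0x00
    assert Bytes.compareTo(lastRow, restartRow) < 0;                  // strictly after the last row seen
    assert Bytes.compareTo(restartRow, Bytes.toBytes("row-10a")) < 0; // still before any real successor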
Generally, only one RPC is called to fetch the * scan results, unless the results cross multiple regions or the row count of diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallScanner.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallScanner.java index 478ba76..9fc9cc6 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallScanner.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientSmallScanner.java @@ -24,14 +24,13 @@ import java.util.concurrent.ExecutorService; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.metrics.ScanMetrics; -import org.apache.hadoop.hbase.ipc.PayloadCarryingRpcController; import org.apache.hadoop.hbase.ipc.RpcControllerFactory; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.RequestConverter; @@ -169,7 +168,7 @@ public class ClientSmallScanner extends ClientScanner { ScanRequest request = RequestConverter.buildScanRequest(getLocation() .getRegionInfo().getRegionName(), getScan(), getCaching(), true); ScanResponse response = null; - PayloadCarryingRpcController controller = controllerFactory.newController(); + controller = controllerFactory.newController(); try { controller.setPriority(getTableName()); controller.setCallTimeout(timeout); @@ -183,8 +182,8 @@ public class ClientSmallScanner extends ClientScanner { @Override public ScannerCallable getScannerCallableForReplica(int id) { - return new SmallScannerCallable((ClusterConnection)connection, tableName, getScan(), scanMetrics, - controllerFactory, getCaching(), id); + return new SmallScannerCallable((ClusterConnection)connection, tableName, getScan(), + scanMetrics, controllerFactory, getCaching(), id); } } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterConnection.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterConnection.java index f72d6fa..45b99eb 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterConnection.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterConnection.java @@ -21,7 +21,6 @@ package org.apache.hadoop.hbase.client; import java.io.IOException; import java.util.List; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.MasterNotRunningException; @@ -29,6 +28,8 @@ import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.ZooKeeperConnectionException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.client.backoff.ClientBackoffPolicy; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.AdminService; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ClientService; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.MasterService; @@ -282,5 +283,20 @@ public interface ClusterConnection extends HConnection { * @return 
RpcRetryingCallerFactory */ RpcRetryingCallerFactory getNewRpcRetryingCallerFactory(Configuration conf); -} + + /** + * + * @return true if this is a managed connection. + */ + boolean isManaged(); + /** + * @return the current statistics tracker associated with this connection + */ + ServerStatisticTracker getStatisticsTracker(); + + /** + * @return the configured client backoff policy + */ + ClientBackoffPolicy getBackoffPolicy(); +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterStatusListener.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterStatusListener.java index 475ae01..2e2ea65 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterStatusListener.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClusterStatusListener.java @@ -30,18 +30,6 @@ import io.netty.channel.nio.NioEventLoopGroup; import io.netty.channel.socket.DatagramChannel; import io.netty.channel.socket.DatagramPacket; import io.netty.channel.socket.nio.NioDatagramChannel; -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.HBaseInterfaceAudience; -import org.apache.hadoop.hbase.ClusterStatus; -import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos; -import org.apache.hadoop.hbase.util.Addressing; -import org.apache.hadoop.hbase.util.ExceptionUtil; -import org.apache.hadoop.hbase.util.Threads; import java.io.Closeable; import java.io.IOException; @@ -49,10 +37,24 @@ import java.lang.reflect.Constructor; import java.lang.reflect.InvocationTargetException; import java.net.InetAddress; import java.net.NetworkInterface; +import java.net.Inet6Address; import java.net.UnknownHostException; import java.util.ArrayList; import java.util.List; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.ClusterStatus; +import org.apache.hadoop.hbase.HBaseInterfaceAudience; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos; +import org.apache.hadoop.hbase.util.Addressing; +import org.apache.hadoop.hbase.util.ExceptionUtil; +import org.apache.hadoop.hbase.util.Threads; + /** * A class that receives the cluster status, and provide it as a set of service to the client. 
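A sketch of how the new ClusterConnection accessors combine, mirroring the getBackoff() helper added to AsyncProcess in this patch (the tracker null check is folded in here so the snippet stands alone):

    static long backoffFor(ClusterConnection conn, ServerName server, byte[] regionName) {
      ServerStatisticTracker tracker = conn.getStatisticsTracker();
      if (tracker == null) {
        return 0L;  // statistics disabled: no client-side backoff
      }
      ServerStatistics stats = tracker.getStats(server);
      return conn.getBackoffPolicy().getBackoffTime(server, regionName, stats);
    }

AsyncProcess groups the per-region actions for a server by this delay and hands each group to a DelayingRunner, introduced later in this patch.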
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Connection.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Connection.java index 92b3f04..55237be 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Connection.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Connection.java @@ -22,11 +22,11 @@ import java.io.Closeable; import java.io.IOException; import java.util.concurrent.ExecutorService; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Abortable; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; /** * A cluster connection encapsulating lower level individual connections to actual servers and diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionAdapter.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionAdapter.java index 394618a..53c1271 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionAdapter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionAdapter.java @@ -21,7 +21,6 @@ import java.io.IOException; import java.util.List; import java.util.concurrent.ExecutorService; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; @@ -30,6 +29,8 @@ import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.ZooKeeperConnectionException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.client.backoff.ClientBackoffPolicy; import org.apache.hadoop.hbase.client.coprocessor.Batch.Callback; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.AdminService; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ClientService; @@ -167,6 +168,11 @@ abstract class ConnectionAdapter implements ClusterConnection { } @Override + public TableState getTableState(TableName tableName) throws IOException { + return wrappedConnection.getTableState(tableName); + } + + @Override public HTableDescriptor[] listTables() throws IOException { return wrappedConnection.listTables(); } @@ -437,4 +443,19 @@ abstract class ConnectionAdapter implements ClusterConnection { public RpcRetryingCallerFactory getNewRpcRetryingCallerFactory(Configuration conf) { return wrappedConnection.getNewRpcRetryingCallerFactory(conf); } + + @Override + public boolean isManaged() { + return wrappedConnection.isManaged(); + } + + @Override + public ServerStatisticTracker getStatisticsTracker() { + return wrappedConnection.getStatisticsTracker(); + } + + @Override + public ClientBackoffPolicy getBackoffPolicy() { + return wrappedConnection.getBackoffPolicy(); + } } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionFactory.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionFactory.java index 3969d2c..89378dd 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionFactory.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionFactory.java @@ -22,10 +22,10 @@ import java.io.IOException; 
import java.lang.reflect.Constructor; import java.util.concurrent.ExecutorService; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.security.UserProvider; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java index df96274..5db92eb 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java @@ -42,7 +42,6 @@ import java.util.concurrent.atomic.AtomicInteger; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Chore; import org.apache.hadoop.hbase.DoNotRetryIOException; @@ -61,9 +60,12 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotEnabledException; import org.apache.hadoop.hbase.TableNotFoundException; import org.apache.hadoop.hbase.ZooKeeperConnectionException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.AsyncProcess.AsyncRequestFuture; import org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitor; import org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitorBase; +import org.apache.hadoop.hbase.client.backoff.ClientBackoffPolicy; +import org.apache.hadoop.hbase.client.backoff.ClientBackoffPolicyFactory; import org.apache.hadoop.hbase.client.coprocessor.Batch; import org.apache.hadoop.hbase.exceptions.RegionMovedException; import org.apache.hadoop.hbase.exceptions.RegionOpeningException; @@ -116,6 +118,8 @@ import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableDescripto import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableDescriptorsResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableNamesRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableNamesResponse; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsCatalogJanitorEnabledRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsCatalogJanitorEnabledResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest; @@ -149,13 +153,15 @@ import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.RunCatalogScanReq import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.RunCatalogScanResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetBalancerRunningRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetBalancerRunningResponse; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ShutdownRequest; import 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ShutdownResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SnapshotRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SnapshotResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.StopMasterRequest; -import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.TruncateTableRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.StopMasterResponse; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.TruncateTableRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.TruncateTableResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.UnassignRegionRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.UnassignRegionResponse; @@ -537,6 +543,8 @@ class ConnectionManager { final int rpcTimeout; private NonceGenerator nonceGenerator = null; private final AsyncProcess asyncProcess; + // single tracker per connection + private final ServerStatisticTracker stats; private volatile boolean closed; private volatile boolean aborted; @@ -592,6 +600,8 @@ class ConnectionManager { */ Registry registry; + private final ClientBackoffPolicy backoffPolicy; + HConnectionImplementation(Configuration conf, boolean managed) throws IOException { this(conf, managed, null, null); } @@ -666,9 +676,11 @@ class ConnectionManager { } else { this.nonceGenerator = new NoNonceGenerator(); } + stats = ServerStatisticTracker.create(conf); this.asyncProcess = createAsyncProcess(this.conf); this.interceptor = (new RetryingCallerInterceptorFactory(conf)).build(); - this.rpcCallerFactory = RpcRetryingCallerFactory.instantiate(conf, interceptor); + this.rpcCallerFactory = RpcRetryingCallerFactory.instantiate(conf, interceptor, this.stats); + this.backoffPolicy = ClientBackoffPolicyFactory.create(conf); } @Override @@ -706,11 +718,7 @@ class ConnectionManager { @Override public RegionLocator getRegionLocator(TableName tableName) throws IOException { - if (managed) { - throw new IOException("The connection has to be unmanaged."); - } - return new HTable( - tableName, this, tableConfig, rpcCallerFactory, rpcControllerFactory, getBatchPool()); + return new HRegionLocator(tableName, this); } @Override @@ -864,7 +872,7 @@ class ConnectionManager { @Override public boolean isTableEnabled(TableName tableName) throws IOException { - return this.registry.isTableOnlineState(tableName, true); + return getTableState(tableName).inStates(TableState.State.ENABLED); } @Override @@ -874,7 +882,7 @@ class ConnectionManager { @Override public boolean isTableDisabled(TableName tableName) throws IOException { - return this.registry.isTableOnlineState(tableName, false); + return getTableState(tableName).inStates(TableState.State.DISABLED); } @Override @@ -906,7 +914,7 @@ class ConnectionManager { return true; } }; - MetaScanner.metaScan(conf, this, visitor, tableName); + MetaScanner.metaScan(this, visitor, tableName); return available.get() && (regionCount.get() > 0); } @@ -951,7 +959,7 @@ class ConnectionManager { return true; } }; - MetaScanner.metaScan(conf, this, visitor, tableName); + MetaScanner.metaScan(this, visitor, tableName); // +1 needs to be added so that the empty start row is also taken into account return available.get() && (regionCount.get() == splitKeys.length + 1); } @@ -993,8 +1001,7 @@ class ConnectionManager { @Override public List locateRegions(final TableName tableName, final boolean useCache, final boolean offlined) throws 
IOException { - NavigableMap regions = - MetaScanner.allTableRegions(conf, this, tableName); + NavigableMap regions = MetaScanner.allTableRegions(this, tableName); final List locations = new ArrayList(); for (HRegionInfo regionInfo : regions.keySet()) { RegionLocations list = locateRegion(tableName, regionInfo.getStartKey(), useCache, true); @@ -1944,6 +1951,13 @@ class ConnectionManager { } @Override + public GetTableStateResponse getTableState( + RpcController controller, GetTableStateRequest request) + throws ServiceException { + return stub.getTableState(controller, request); + } + + @Override public void close() { release(this.mss); } @@ -1975,6 +1989,13 @@ class ConnectionManager { throws ServiceException { return stub.getClusterStatus(controller, request); } + + @Override + public SetQuotaResponse setQuota( + RpcController controller, SetQuotaRequest request) + throws ServiceException { + return stub.setQuota(controller, request); + } }; } @@ -2189,7 +2210,8 @@ class ConnectionManager { protected AsyncProcess createAsyncProcess(Configuration conf) { // No default pool available. return new AsyncProcess(this, conf, this.batchPool, - RpcRetryingCallerFactory.instantiate(conf), false, RpcControllerFactory.instantiate(conf)); + RpcRetryingCallerFactory.instantiate(conf, this.getStatisticsTracker()), false, + RpcControllerFactory.instantiate(conf)); } @Override @@ -2197,6 +2219,16 @@ class ConnectionManager { return asyncProcess; } + @Override + public ServerStatisticTracker getStatisticsTracker() { + return this.stats; + } + + @Override + public ClientBackoffPolicy getBackoffPolicy() { + return this.backoffPolicy; + } + /* * Return the number of cached region for a table. It will only be called * from a unit test. @@ -2473,8 +2505,28 @@ class ConnectionManager { } @Override + public TableState getTableState(TableName tableName) throws IOException { + MasterKeepAliveConnection master = getKeepAliveMasterService(); + try { + GetTableStateResponse resp = master.getTableState(null, + RequestConverter.buildGetTableStateRequest(tableName)); + return TableState.convert(resp.getTableState()); + } catch (ServiceException se) { + throw ProtobufUtil.getRemoteException(se); + } finally { + master.close(); + } + } + + @Override public RpcRetryingCallerFactory getNewRpcRetryingCallerFactory(Configuration conf) { - return RpcRetryingCallerFactory.instantiate(conf, this.interceptor); + return RpcRetryingCallerFactory + .instantiate(conf, this.interceptor, this.getStatisticsTracker()); + } + + @Override + public boolean isManaged() { + return managed; } } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionUtils.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionUtils.java index 249ff7f..4d6a36c 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionUtils.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionUtils.java @@ -22,11 +22,11 @@ import java.util.Random; import java.util.concurrent.ExecutorService; import org.apache.commons.logging.Log; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.AdminService; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ClientService; import 
org.apache.hadoop.hbase.security.User; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/DelayingRunner.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/DelayingRunner.java new file mode 100644 index 0000000..83c73b6 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/DelayingRunner.java @@ -0,0 +1,116 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.client; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; + +import java.util.List; +import java.util.Map; + +/** + * A wrapper for a runnable for a group of actions for a single regionserver. + *
    + * This can be used to build up the actions that should be taken and then + *
    + *
    + * This class exists to simulate using a ScheduledExecutorService with just a regular + * ExecutorService and Runnables. It is used for legacy reasons in the client; it could + * only be removed if we changed the expectations in HTable around the pool the client is able + * to pass in, and even if we deprecated the current APIs we would need to keep this class + * around for the interim to bridge between the legacy ExecutorServices and the scheduled pool. + *
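A minimal sketch of how such a runner might be driven, for orientation only (not part of this patch; backoffMs, regionActionsEntry, sendMultiAction and executorPool are hypothetical placeholders, and the generic parameters elided by the mangled diff are assumed):

    // Hypothetical driver: wrap the actions bound for one regionserver with the backoff to apply,
    // then hand the runner to an ordinary ExecutorService.
    final DelayingRunner runner = new DelayingRunner(backoffMs, regionActionsEntry);
    runner.setRunner(new Runnable() {
      @Override
      public void run() {
        sendMultiAction(runner.getActions()); // placeholder for the real multi-action send
      }
    });
    executorPool.submit(runner); // sleeps roughly backoffMs, then runs the wrapped send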
    + */ +@InterfaceAudience.Private +public class DelayingRunner implements Runnable { + private static final Log LOG = LogFactory.getLog(DelayingRunner.class); + + private final Object sleepLock = new Object(); + private boolean triggerWake = false; + private long sleepTime; + private MultiAction actions = new MultiAction(); + private Runnable runnable; + + public DelayingRunner(long sleepTime, Map.Entry>> e) { + this.sleepTime = sleepTime; + add(e); + } + + public void setRunner(Runnable runner) { + this.runnable = runner; + } + + @Override + public void run() { + if (!sleep()) { + LOG.warn( + "Interrupted while sleeping for expected sleep time " + sleepTime + " ms"); + } + //TODO maybe we should consider switching to a listenableFuture for the actual callable and + // then handling the results/errors as callbacks. That way we can decrement outstanding tasks + // even if we get interrupted here, but for now, we still need to run so we decrement the + // outstanding tasks + this.runnable.run(); + } + + /** + * Sleep for an expected amount of time. + *
    + * This is nearly a copy of what the Sleeper does, but with the ability to know if you + * got interrupted while sleeping. + *
    + * + * @return true if the sleep completely entirely successfully, + * but otherwise false if the sleep was interrupted. + */ + private boolean sleep() { + long now = EnvironmentEdgeManager.currentTime(); + long startTime = now; + long waitTime = sleepTime; + while (waitTime > 0) { + long woke = -1; + try { + synchronized (sleepLock) { + if (triggerWake) break; + sleepLock.wait(waitTime); + } + woke = EnvironmentEdgeManager.currentTime(); + } catch (InterruptedException iex) { + return false; + } + // Recalculate waitTime. + woke = (woke == -1) ? EnvironmentEdgeManager.currentTime() : woke; + waitTime = waitTime - (woke - startTime); + } + return true; + } + + public void add(Map.Entry>> e) { + actions.add(e.getKey(), e.getValue()); + } + + public MultiAction getActions() { + return actions; + } + + public long getSleepTime() { + return sleepTime; + } +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java index 4bbcb27..d947ef8 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java @@ -26,12 +26,12 @@ import java.util.Map; import java.util.NavigableMap; import java.util.UUID; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.security.access.Permission; import org.apache.hadoop.hbase.security.visibility.CellVisibility; import org.apache.hadoop.hbase.util.Bytes; @@ -433,12 +433,6 @@ public class Delete extends Mutation implements Comparable { } @Override - @Deprecated - public Delete setWriteToWAL(boolean write) { - return (Delete) super.setWriteToWAL(write); - } - - @Override public Delete setDurability(Durability d) { return (Delete) super.setDurability(d); } @@ -449,12 +443,6 @@ public class Delete extends Mutation implements Comparable { } @Override - @Deprecated - public Delete setFamilyMap(NavigableMap> map) { - return (Delete) super.setFamilyMap(map); - } - - @Override public Delete setClusterIds(List clusterIds) { return (Delete) super.setClusterIds(clusterIds); } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Get.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Get.java index 1d3310e..701cd9c 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Get.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Get.java @@ -31,9 +31,9 @@ import java.util.TreeSet; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.filter.Filter; import org.apache.hadoop.hbase.io.TimeRange; import org.apache.hadoop.hbase.security.access.Permission; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java index 40ff168..5a9ca74 100644 --- 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java @@ -128,6 +128,9 @@ import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SnapshotResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.StopMasterRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.TruncateTableRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.UnassignRegionRequest; +import org.apache.hadoop.hbase.quotas.QuotaFilter; +import org.apache.hadoop.hbase.quotas.QuotaRetriever; +import org.apache.hadoop.hbase.quotas.QuotaSettings; import org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException; import org.apache.hadoop.hbase.snapshot.ClientSnapshotDescriptionUtils; import org.apache.hadoop.hbase.snapshot.HBaseSnapshotException; @@ -139,11 +142,13 @@ import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.zookeeper.MasterAddressTracker; +import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.hadoop.ipc.RemoteException; import org.apache.hadoop.util.StringUtils; import org.apache.zookeeper.KeeperException; +import com.google.common.annotations.VisibleForTesting; import com.google.protobuf.ByteString; import com.google.protobuf.ServiceException; @@ -576,7 +581,7 @@ public class HBaseAdmin implements Admin { return true; } }; - MetaScanner.metaScan(conf, connection, visitor, desc.getTableName()); + MetaScanner.metaScan(connection, visitor, desc.getTableName()); if (actualRegCount.get() < numRegs) { if (tries == this.numRetries * this.retryLongerMultiplier - 1) { throw new RegionOfflineException("Only " + actualRegCount.get() + @@ -1451,7 +1456,7 @@ public class HBaseAdmin implements Admin { AdminService.BlockingInterface admin = this.connection.getAdmin(sn); // Close the region without updating zk state. CloseRegionRequest request = - RequestConverter.buildCloseRegionRequest(sn, encodedRegionName, false); + RequestConverter.buildCloseRegionRequest(sn, encodedRegionName); try { CloseRegionResponse response = admin.closeRegion(null, request); boolean isRegionClosed = response.getClosed(); @@ -1476,7 +1481,7 @@ public class HBaseAdmin implements Admin { throws IOException { AdminService.BlockingInterface admin = this.connection.getAdmin(sn); // Close the region without updating zk state. 
- ProtobufUtil.closeRegion(admin, sn, hri.getRegionName(), false); + ProtobufUtil.closeRegion(admin, sn, hri.getRegionName()); } /** @@ -1753,8 +1758,12 @@ public class HBaseAdmin implements Admin { checkTableExists(tableName); zookeeper = new ZooKeeperWatcher(conf, ZK_IDENTIFIER_PREFIX + connection.toString(), new ThrowableAbortable()); - List> pairs = - MetaTableAccessor.getTableRegionsAndLocations(zookeeper, connection, tableName); + List> pairs; + if (TableName.META_TABLE_NAME.equals(tableName)) { + pairs = new MetaTableLocator().getMetaRegionsAndLocations(zookeeper); + } else { + pairs = MetaTableAccessor.getTableRegionsAndLocations(connection, tableName); + } for (Pair pair: pairs) { if (pair.getFirst().isOffline()) continue; if (pair.getSecond() == null) continue; @@ -2014,7 +2023,12 @@ public class HBaseAdmin implements Admin { public void mergeRegions(final byte[] encodedNameOfRegionA, final byte[] encodedNameOfRegionB, final boolean forcible) throws IOException { - + Pair pair = getRegion(encodedNameOfRegionA); + if (pair != null && pair.getFirst().getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID) + throw new IllegalArgumentException("Can't invoke merge on non-default regions directly"); + pair = getRegion(encodedNameOfRegionB); + if (pair != null && pair.getFirst().getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID) + throw new IllegalArgumentException("Can't invoke merge on non-default regions directly"); executeCallable(new MasterCallable(getConnection()) { @Override public Void call(int callTimeout) throws ServiceException { @@ -2080,8 +2094,12 @@ public class HBaseAdmin implements Admin { checkTableExists(tableName); zookeeper = new ZooKeeperWatcher(conf, ZK_IDENTIFIER_PREFIX + connection.toString(), new ThrowableAbortable()); - List> pairs = - MetaTableAccessor.getTableRegionsAndLocations(zookeeper, connection, tableName); + List> pairs; + if (TableName.META_TABLE_NAME.equals(tableName)) { + pairs = new MetaTableLocator().getMetaRegionsAndLocations(zookeeper); + } else { + pairs = MetaTableAccessor.getTableRegionsAndLocations(connection, tableName); + } for (Pair pair: pairs) { // May not be a server for a particular row if (pair.getSecond() == null) continue; @@ -2089,7 +2107,8 @@ public class HBaseAdmin implements Admin { // check for parents if (r.isSplitParent()) continue; // if a split point given, only split that particular region - if (splitPoint != null && !r.containsRow(splitPoint)) continue; + if (r.getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID || + (splitPoint != null && !r.containsRow(splitPoint))) continue; // call out to region server to do split now split(pair.getSecond(), pair.getFirst(), splitPoint); } @@ -2110,6 +2129,11 @@ public class HBaseAdmin implements Admin { if (regionServerPair == null) { throw new IllegalArgumentException("Invalid region: " + Bytes.toStringBinary(regionName)); } + if (regionServerPair.getFirst() != null && + regionServerPair.getFirst().getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID) { + throw new IllegalArgumentException("Can't split replicas directly. 
" + + "Replicas are auto-split when their primary is split."); + } if (regionServerPair.getSecond() == null) { throw new NoServerForRegionException(Bytes.toStringBinary(regionName)); } @@ -2141,7 +2165,8 @@ public class HBaseAdmin implements Admin { } } - private void split(final ServerName sn, final HRegionInfo hri, + @VisibleForTesting + public void split(final ServerName sn, final HRegionInfo hri, byte[] splitPoint) throws IOException { if (hri.getStartKey() != null && splitPoint != null && Bytes.compareTo(hri.getStartKey(), splitPoint) == 0) { @@ -2216,14 +2241,23 @@ public class HBaseAdmin implements Admin { LOG.warn("No serialized HRegionInfo in " + data); return true; } - if (!encodedName.equals(info.getEncodedName())) return true; - ServerName sn = HRegionInfo.getServerName(data); + RegionLocations rl = MetaTableAccessor.getRegionLocations(data); + boolean matched = false; + ServerName sn = null; + for (HRegionLocation h : rl.getRegionLocations()) { + if (h != null && encodedName.equals(h.getRegionInfo().getEncodedName())) { + sn = h.getServerName(); + info = h.getRegionInfo(); + matched = true; + } + } + if (!matched) return true; result.set(new Pair(info, sn)); return false; // found the region, stop } }; - MetaScanner.metaScan(conf, connection, visitor, null); + MetaScanner.metaScan(connection, visitor, null); pair = result.get(); } return pair; @@ -2544,13 +2578,17 @@ public class HBaseAdmin implements Admin { ZooKeeperWatcher zookeeper = new ZooKeeperWatcher(conf, ZK_IDENTIFIER_PREFIX + connection.toString(), new ThrowableAbortable()); - List Regions = null; + List regions = null; try { - Regions = MetaTableAccessor.getTableRegions(zookeeper, connection, tableName, true); + if (TableName.META_TABLE_NAME.equals(tableName)) { + regions = new MetaTableLocator().getMetaRegions(zookeeper); + } else { + regions = MetaTableAccessor.getTableRegions(connection, tableName, true); + } } finally { zookeeper.close(); } - return Regions; + return regions; } public List getTableRegions(final byte[] tableName) @@ -2700,8 +2738,12 @@ public class HBaseAdmin implements Admin { new ThrowableAbortable()); try { checkTableExists(tableName); - List> pairs = - MetaTableAccessor.getTableRegionsAndLocations(zookeeper, connection, tableName); + List> pairs; + if (TableName.META_TABLE_NAME.equals(tableName)) { + pairs = new MetaTableLocator().getMetaRegionsAndLocations(zookeeper); + } else { + pairs = MetaTableAccessor.getTableRegionsAndLocations(connection, tableName); + } for (Pair pair: pairs) { if (pair.getFirst().isOffline()) continue; if (pair.getSecond() == null) continue; @@ -3595,6 +3637,35 @@ public class HBaseAdmin implements Admin { }); } + /** + * Apply the new quota settings. + * + * @param quota the quota settings + * @throws IOException if a remote or network exception occurs + */ + @Override + public void setQuota(final QuotaSettings quota) throws IOException { + executeCallable(new MasterCallable(getConnection()) { + @Override + public Void call(int callTimeout) throws ServiceException { + this.master.setQuota(null, QuotaSettings.buildSetQuotaRequestProto(quota)); + return null; + } + }); + } + + /** + * Return a Quota Scanner to list the quotas based on the filter. 
+ * + * @param filter the quota settings filter + * @return the quota scanner + * @throws IOException if a remote or network exception occurs + */ + @Override + public QuotaRetriever getQuotaRetriever(final QuotaFilter filter) throws IOException { + return QuotaRetriever.open(conf, filter); + } + private V executeCallable(MasterCallable callable) throws IOException { RpcRetryingCaller caller = rpcCallerFactory.newCaller(); try { diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectable.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectable.java index 8863bc1..c4f7b10 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectable.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectable.java @@ -21,8 +21,8 @@ package org.apache.hadoop.hbase.client; import java.io.IOException; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * This class makes it convenient for one to execute a command in the context diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnection.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnection.java index e476d5f..9a4ef69 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnection.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnection.java @@ -21,16 +21,15 @@ package org.apache.hadoop.hbase.client; import java.io.IOException; import java.util.List; import java.util.concurrent.ExecutorService; - import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.MasterNotRunningException; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.ZooKeeperConnectionException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.client.coprocessor.Batch; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.AdminService; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ClientService; @@ -213,6 +212,13 @@ public interface HConnection extends Connection { boolean isTableDisabled(byte[] tableName) throws IOException; /** + * Retrieve TableState, represent current table state. 
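For orientation, a hedged sketch of the new state lookup added to HConnection (illustrative only; the connection variable and table name are placeholders, the other names are as shown in the diff):

    // Hypothetical caller-side check using the new table-state API.
    TableState state = connection.getTableState(TableName.valueOf("t1"));
    boolean enabled = state.inStates(TableState.State.ENABLED); // same test the reworked isTableEnabled() performs
    boolean disabled = connection.isTableDisabled(TableName.valueOf("t1"));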
+ * @param tableName table state for + * @return state of the table + */ + public TableState getTableState(TableName tableName) throws IOException; + + /** * @param tableName table name * @return true if all regions of the table are available, false otherwise * @throws IOException if a remote or network exception occurs diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java index 0378fe3..4678092 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java @@ -20,10 +20,11 @@ package org.apache.hadoop.hbase.client; import java.io.IOException; import java.util.concurrent.ExecutorService; + import org.apache.commons.logging.Log; +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.security.User; /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/HRegionLocator.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/HRegionLocator.java new file mode 100644 index 0000000..fa85653 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/HRegionLocator.java @@ -0,0 +1,148 @@ +/** +* +* Licensed to the Apache Software Foundation (ASF) under one +* or more contributor license agreements. See the NOTICE file +* distributed with this work for additional information +* regarding copyright ownership. The ASF licenses this file +* to you under the Apache License, Version 2.0 (the +* "License"); you may not use this file except in compliance +* with the License. You may obtain a copy of the License at +* +* http://www.apache.org/licenses/LICENSE-2.0 +* +* Unless required by applicable law or agreed to in writing, software +* distributed under the License is distributed on an "AS IS" BASIS, +* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +* See the License for the specific language governing permissions and +* limitations under the License. +*/ +package org.apache.hadoop.hbase.client; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.NavigableMap; +import java.util.Map.Entry; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.HRegionLocation; +import org.apache.hadoop.hbase.RegionLocations; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.util.Pair; + +import com.google.common.annotations.VisibleForTesting; + +/** + * An implementation of {@link RegionLocator}. Used to view region location information for a single + * HBase table. Lightweight. Get as needed and just close when done. Instances of this class SHOULD + * NOT be constructed directly. Obtain an instance via {@link Connection}. See + * {@link ConnectionFactory} class comment for an example of how. + * + *
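A hedged sketch of the acquisition pattern the class comment refers to, assuming the Connection/ConnectionFactory entry points this patch series builds on (configuration and table name are placeholders):

    // Illustrative only: obtain the locator from a Connection rather than from HTable.
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         RegionLocator locator = connection.getRegionLocator(TableName.valueOf("t1"))) {
      HRegionLocation location = locator.getRegionLocation(Bytes.toBytes("some-row"));
      Pair<byte[][], byte[][]> startEndKeys = locator.getStartEndKeys();
    }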
    This class is thread safe + */ +@InterfaceAudience.Private +@InterfaceStability.Stable +public class HRegionLocator implements RegionLocator { + + private final TableName tableName; + private final ClusterConnection connection; + + public HRegionLocator(TableName tableName, ClusterConnection connection) { + this.connection = connection; + this.tableName = tableName; + } + + /** + * {@inheritDoc} + */ + @Override + public void close() throws IOException { + // This method is required by the RegionLocator interface. This implementation does not have any + // persistent state, so there is no need to do anything here. + } + + /** + * {@inheritDoc} + */ + @Override + public HRegionLocation getRegionLocation(final byte [] row) + throws IOException { + return connection.getRegionLocation(tableName, row, false); + } + + /** + * {@inheritDoc} + */ + @Override + public HRegionLocation getRegionLocation(final byte [] row, boolean reload) + throws IOException { + return connection.getRegionLocation(tableName, row, reload); + } + + @Override + public List getAllRegionLocations() throws IOException { + NavigableMap locations = + MetaScanner.allTableRegions(this.connection, getName()); + ArrayList regions = new ArrayList<>(locations.size()); + for (Entry entry : locations.entrySet()) { + regions.add(new HRegionLocation(entry.getKey(), entry.getValue())); + } + return regions; + } + + /** + * {@inheritDoc} + */ + @Override + public byte[][] getStartKeys() throws IOException { + return getStartEndKeys().getFirst(); + } + + /** + * {@inheritDoc} + */ + @Override + public byte[][] getEndKeys() throws IOException { + return getStartEndKeys().getSecond(); + } + + /** + * {@inheritDoc} + */ + @Override + public Pair getStartEndKeys() throws IOException { + return getStartEndKeys(listRegionLocations()); + } + + @VisibleForTesting + Pair getStartEndKeys(List regions) { + final byte[][] startKeyList = new byte[regions.size()][]; + final byte[][] endKeyList = new byte[regions.size()][]; + + for (int i = 0; i < regions.size(); i++) { + HRegionInfo region = regions.get(i).getRegionLocation().getRegionInfo(); + startKeyList[i] = region.getStartKey(); + endKeyList[i] = region.getEndKey(); + } + + return new Pair<>(startKeyList, endKeyList); + } + + @Override + public TableName getName() { + return this.tableName; + } + + @VisibleForTesting + List listRegionLocations() throws IOException { + return MetaScanner.listTableRegionLocations(getConfiguration(), this.connection, getName()); + } + + public Configuration getConfiguration() { + return connection.getConfiguration(); + } +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java index 146cf80..68d3f9f 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java @@ -25,7 +25,6 @@ import java.util.Collections; import java.util.LinkedList; import java.util.List; import java.util.Map; -import java.util.Map.Entry; import java.util.NavigableMap; import java.util.TreeMap; import java.util.concurrent.Callable; @@ -38,8 +37,6 @@ import java.util.concurrent.TimeUnit; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import 
org.apache.hadoop.hbase.HBaseConfiguration; @@ -48,7 +45,6 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotFoundException; @@ -110,7 +106,7 @@ import com.google.protobuf.ServiceException; */ @InterfaceAudience.Private @InterfaceStability.Stable -public class HTable implements HTableInterface, RegionLocator { +public class HTable implements HTableInterface { private static final Log LOG = LogFactory.getLog(HTable.class); protected ClusterConnection connection; private final TableName tableName; @@ -127,6 +123,7 @@ public class HTable implements HTableInterface, RegionLocator { private final boolean cleanupPoolOnClose; // shutdown the pool in close() private final boolean cleanupConnectionOnClose; // close the connection in close() private Consistency defaultConsistency = Consistency.STRONG; + private HRegionLocator locator; /** The Async process for puts with autoflush set to false or multiputs */ protected AsyncProcess ap; @@ -326,9 +323,10 @@ public class HTable implements HTableInterface, RegionLocator { /** * For internal testing. + * @throws IOException */ @VisibleForTesting - protected HTable() { + protected HTable() throws IOException { tableName = null; tableConfiguration = new TableConfiguration(); cleanupPoolOnClose = false; @@ -353,8 +351,6 @@ public class HTable implements HTableInterface, RegionLocator { this.operationTimeout = tableName.isSystemTable() ? tableConfiguration.getMetaOperationTimeout() : tableConfiguration.getOperationTimeout(); this.writeBufferSize = tableConfiguration.getWriteBufferSize(); - this.autoFlush = true; - this.currentWriteBufferSize = 0; this.scannerCaching = tableConfiguration.getScannerCaching(); if (this.rpcCallerFactory == null) { @@ -367,8 +363,7 @@ public class HTable implements HTableInterface, RegionLocator { // puts need to track errors globally due to how the APIs currently work. ap = new AsyncProcess(connection, configuration, pool, rpcCallerFactory, true, rpcControllerFactory); multiAp = this.connection.getAsyncProcess(); - - this.closed = false; + this.locator = new HRegionLocator(getName(), connection); } /** @@ -478,25 +473,25 @@ public class HTable implements HTableInterface, RegionLocator { @Deprecated public HRegionLocation getRegionLocation(final String row) throws IOException { - return connection.getRegionLocation(tableName, Bytes.toBytes(row), false); + return getRegionLocation(Bytes.toBytes(row), false); } /** - * {@inheritDoc} + * @deprecated Use {@link RegionLocator#getRegionLocation(byte[])} instead. */ - @Override + @Deprecated public HRegionLocation getRegionLocation(final byte [] row) throws IOException { - return connection.getRegionLocation(tableName, row, false); + return locator.getRegionLocation(row); } /** - * {@inheritDoc} + * @deprecated Use {@link RegionLocator#getRegionLocation(byte[], boolean)} instead. 
*/ - @Override + @Deprecated public HRegionLocation getRegionLocation(final byte [] row, boolean reload) throws IOException { - return connection.getRegionLocation(tableName, row, reload); + return locator.getRegionLocation(row, reload); } /** @@ -602,45 +597,27 @@ public class HTable implements HTableInterface, RegionLocator { } /** - * {@inheritDoc} + * @deprecated Use {@link RegionLocator#getStartEndKeys()} instead; */ - @Override + @Deprecated public byte [][] getStartKeys() throws IOException { - return getStartEndKeys().getFirst(); + return locator.getStartKeys(); } /** - * {@inheritDoc} + * @deprecated Use {@link RegionLocator#getEndKeys()} instead; */ - @Override + @Deprecated public byte[][] getEndKeys() throws IOException { - return getStartEndKeys().getSecond(); + return locator.getEndKeys(); } /** - * {@inheritDoc} + * @deprecated Use {@link RegionLocator#getStartEndKeys()} instead; */ - @Override + @Deprecated public Pair getStartEndKeys() throws IOException { - - List regions = listRegionLocations(); - final List startKeyList = new ArrayList(regions.size()); - final List endKeyList = new ArrayList(regions.size()); - - for (RegionLocations locations : regions) { - HRegionInfo region = locations.getRegionLocation().getRegionInfo(); - startKeyList.add(region.getStartKey()); - endKeyList.add(region.getEndKey()); - } - - return new Pair( - startKeyList.toArray(new byte[startKeyList.size()][]), - endKeyList.toArray(new byte[endKeyList.size()][])); - } - - @VisibleForTesting - List listRegionLocations() throws IOException { - return MetaScanner.listTableRegionLocations(getConfiguration(), this.connection, getName()); + return locator.getStartEndKeys(); } /** @@ -654,7 +631,7 @@ public class HTable implements HTableInterface, RegionLocator { @Deprecated public NavigableMap getRegionLocations() throws IOException { // TODO: Odd that this returns a Map of HRI to SN whereas getRegionLocator, singular, returns an HRegionLocation. - return MetaScanner.allTableRegions(getConfiguration(), this.connection, getName()); + return MetaScanner.allTableRegions(this.connection, getName()); } /** @@ -663,15 +640,12 @@ public class HTable implements HTableInterface, RegionLocator { * This is mainly useful for the MapReduce integration. 
* @return A map of HRegionInfo with it's server address * @throws IOException if a remote or network exception occurs + * + * @deprecated Use {@link RegionLocator#getAllRegionLocations()} instead; */ - @Override + @Deprecated public List getAllRegionLocations() throws IOException { - NavigableMap locations = getRegionLocations(); - ArrayList regions = new ArrayList<>(locations.size()); - for (Entry entry : locations.entrySet()) { - regions.add(new HRegionLocation(entry.getKey(), entry.getValue())); - } - return regions; + return locator.getAllRegionLocations(); } /** @@ -1892,8 +1866,9 @@ public class HTable implements HTableInterface, RegionLocator { AsyncProcess asyncProcess = new AsyncProcess(connection, configuration, pool, - RpcRetryingCallerFactory.instantiate(configuration), true, - RpcControllerFactory.instantiate(configuration)); + RpcRetryingCallerFactory.instantiate(configuration, connection.getStatisticsTracker()), + true, RpcControllerFactory.instantiate(configuration)); + AsyncRequestFuture future = asyncProcess.submitAll(tableName, execs, new Callback() { @Override @@ -1928,4 +1903,8 @@ public class HTable implements HTableInterface, RegionLocator { callbackErrorServers); } } + + public RegionLocator getRegionLocator() { + return this.locator; + } } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableFactory.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableFactory.java index d053e66..6970333 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableFactory.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableFactory.java @@ -18,12 +18,12 @@ */ package org.apache.hadoop.hbase.client; +import java.io.IOException; + +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.conf.Configuration; - -import java.io.IOException; /** * Factory for creating HTable instances. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableInterfaceFactory.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableInterfaceFactory.java index ada3aee..b6349c2 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableInterfaceFactory.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableInterfaceFactory.java @@ -18,11 +18,11 @@ */ package org.apache.hadoop.hbase.client; +import java.io.IOException; + +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.conf.Configuration; - -import java.io.IOException; /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTablePool.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTablePool.java deleted file mode 100644 index 4b998a6..0000000 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTablePool.java +++ /dev/null @@ -1,676 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.client; - -import java.io.Closeable; -import java.io.IOException; -import java.util.Collection; -import java.util.List; -import java.util.Map; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.HBaseConfiguration; -import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.client.coprocessor.Batch; -import org.apache.hadoop.hbase.client.coprocessor.Batch.Callback; -import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; -import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.PoolMap; -import org.apache.hadoop.hbase.util.PoolMap.PoolType; - -import com.google.protobuf.Descriptors; -import com.google.protobuf.Message; -import com.google.protobuf.Service; -import com.google.protobuf.ServiceException; - -/** - * A simple pool of HTable instances. - * - * Each HTablePool acts as a pool for all tables. To use, instantiate an - * HTablePool and use {@link #getTable(String)} to get an HTable from the pool. - * - * This method is not needed anymore, clients should call - * HTableInterface.close() rather than returning the tables to the pool - * - * Once you are done with it, close your instance of {@link HTableInterface} - * by calling {@link HTableInterface#close()} rather than returning the tables - * to the pool with (deprecated) {@link #putTable(HTableInterface)}. - * - *
    - * A pool can be created with a maxSize which defines the most HTable - * references that will ever be retained for each table. Otherwise the default - * is {@link Integer#MAX_VALUE}. - * - *
    - * Pool will manage its own connections to the cluster. See - * {@link HConnectionManager}. - * @deprecated as of 0.98.1. See {@link HConnection#getTable(String)}. - */ -@InterfaceAudience.Private -@Deprecated -public class HTablePool implements Closeable { - private final PoolMap tables; - private final int maxSize; - private final PoolType poolType; - private final Configuration config; - private final HTableInterfaceFactory tableFactory; - - /** - * Default Constructor. Default HBaseConfiguration and no limit on pool size. - */ - public HTablePool() { - this(HBaseConfiguration.create(), Integer.MAX_VALUE); - } - - /** - * Constructor to set maximum versions and use the specified configuration. - * - * @param config - * configuration - * @param maxSize - * maximum number of references to keep for each table - */ - public HTablePool(final Configuration config, final int maxSize) { - this(config, maxSize, null, null); - } - - /** - * Constructor to set maximum versions and use the specified configuration and - * table factory. - * - * @param config - * configuration - * @param maxSize - * maximum number of references to keep for each table - * @param tableFactory - * table factory - */ - public HTablePool(final Configuration config, final int maxSize, - final HTableInterfaceFactory tableFactory) { - this(config, maxSize, tableFactory, PoolType.Reusable); - } - - /** - * Constructor to set maximum versions and use the specified configuration and - * pool type. - * - * @param config - * configuration - * @param maxSize - * maximum number of references to keep for each table - * @param poolType - * pool type which is one of {@link PoolType#Reusable} or - * {@link PoolType#ThreadLocal} - */ - public HTablePool(final Configuration config, final int maxSize, - final PoolType poolType) { - this(config, maxSize, null, poolType); - } - - /** - * Constructor to set maximum versions and use the specified configuration, - * table factory and pool type. The HTablePool supports the - * {@link PoolType#Reusable} and {@link PoolType#ThreadLocal}. If the pool - * type is null or not one of those two values, then it will default to - * {@link PoolType#Reusable}. - * - * @param config - * configuration - * @param maxSize - * maximum number of references to keep for each table - * @param tableFactory - * table factory - * @param poolType - * pool type which is one of {@link PoolType#Reusable} or - * {@link PoolType#ThreadLocal} - */ - public HTablePool(final Configuration config, final int maxSize, - final HTableInterfaceFactory tableFactory, PoolType poolType) { - // Make a new configuration instance so I can safely cleanup when - // done with the pool. - this.config = config == null ? HBaseConfiguration.create() : config; - this.maxSize = maxSize; - this.tableFactory = tableFactory == null ? new HTableFactory() - : tableFactory; - if (poolType == null) { - this.poolType = PoolType.Reusable; - } else { - switch (poolType) { - case Reusable: - case ThreadLocal: - this.poolType = poolType; - break; - default: - this.poolType = PoolType.Reusable; - break; - } - } - this.tables = new PoolMap(this.poolType, - this.maxSize); - } - - /** - * Get a reference to the specified table from the pool. - *
    - *
    - * - * @param tableName - * table name - * @return a reference to the specified table - * @throws RuntimeException - * if there is a problem instantiating the HTable - */ - public HTableInterface getTable(String tableName) { - // call the old getTable implementation renamed to findOrCreateTable - HTableInterface table = findOrCreateTable(tableName); - // return a proxy table so when user closes the proxy, the actual table - // will be returned to the pool - return new PooledHTable(table); - } - - /** - * Get a reference to the specified table from the pool. - *
    - * - * Create a new one if one is not available. - * - * @param tableName - * table name - * @return a reference to the specified table - * @throws RuntimeException - * if there is a problem instantiating the HTable - */ - private HTableInterface findOrCreateTable(String tableName) { - HTableInterface table = tables.get(tableName); - if (table == null) { - table = createHTable(tableName); - } - return table; - } - - /** - * Get a reference to the specified table from the pool. - *
    - * - * Create a new one if one is not available. - * - * @param tableName - * table name - * @return a reference to the specified table - * @throws RuntimeException - * if there is a problem instantiating the HTable - */ - public HTableInterface getTable(byte[] tableName) { - return getTable(Bytes.toString(tableName)); - } - - /** - * This method is not needed anymore, clients should call - * HTableInterface.close() rather than returning the tables to the pool - * - * @param table - * the proxy table user got from pool - * @deprecated - */ - public void putTable(HTableInterface table) throws IOException { - // we need to be sure nobody puts a proxy implementation in the pool - // but if the client code is not updated - // and it will continue to call putTable() instead of calling close() - // then we need to return the wrapped table to the pool instead of the - // proxy - // table - if (table instanceof PooledHTable) { - returnTable(((PooledHTable) table).getWrappedTable()); - } else { - // normally this should not happen if clients pass back the same - // table - // object they got from the pool - // but if it happens then it's better to reject it - throw new IllegalArgumentException("not a pooled table: " + table); - } - } - - /** - * Puts the specified HTable back into the pool. - *
    - * - * If the pool already contains maxSize references to the table, then - * the table instance gets closed after flushing buffered edits. - * - * @param table - * table - */ - private void returnTable(HTableInterface table) throws IOException { - // this is the old putTable method renamed and made private - String tableName = Bytes.toString(table.getTableName()); - if (tables.size(tableName) >= maxSize) { - // release table instance since we're not reusing it - this.tables.removeValue(tableName, table); - this.tableFactory.releaseHTableInterface(table); - return; - } - tables.put(tableName, table); - } - - protected HTableInterface createHTable(String tableName) { - return this.tableFactory.createHTableInterface(config, - Bytes.toBytes(tableName)); - } - - /** - * Closes all the HTable instances , belonging to the given table, in the - * table pool. - *
    - * Note: this is a 'shutdown' of the given table pool and different from - * {@link #putTable(HTableInterface)}, that is used to return the table - * instance to the pool for future re-use. - * - * @param tableName - */ - public void closeTablePool(final String tableName) throws IOException { - Collection tables = this.tables.values(tableName); - if (tables != null) { - for (HTableInterface table : tables) { - this.tableFactory.releaseHTableInterface(table); - } - } - this.tables.remove(tableName); - } - - /** - * See {@link #closeTablePool(String)}. - * - * @param tableName - */ - public void closeTablePool(final byte[] tableName) throws IOException { - closeTablePool(Bytes.toString(tableName)); - } - - /** - * Closes all the HTable instances , belonging to all tables in the table - * pool. - *
    - * Note: this is a 'shutdown' of all the table pools. - */ - public void close() throws IOException { - for (String tableName : tables.keySet()) { - closeTablePool(tableName); - } - this.tables.clear(); - } - - public int getCurrentPoolSize(String tableName) { - return tables.size(tableName); - } - - /** - * A proxy class that implements HTableInterface.close method to return the - * wrapped table back to the table pool - * - */ - class PooledHTable implements HTableInterface { - - private boolean open = false; - - private HTableInterface table; // actual table implementation - - public PooledHTable(HTableInterface table) { - this.table = table; - this.open = true; - } - - @Override - public byte[] getTableName() { - checkState(); - return table.getTableName(); - } - - @Override - public TableName getName() { - return table.getName(); - } - - @Override - public Configuration getConfiguration() { - checkState(); - return table.getConfiguration(); - } - - @Override - public HTableDescriptor getTableDescriptor() throws IOException { - checkState(); - return table.getTableDescriptor(); - } - - @Override - public boolean exists(Get get) throws IOException { - checkState(); - return table.exists(get); - } - - @Override - public boolean[] existsAll(List gets) throws IOException { - checkState(); - return table.existsAll(gets); - } - - @Override - public Boolean[] exists(List gets) throws IOException { - checkState(); - return table.exists(gets); - } - - @Override - public void batch(List actions, Object[] results) throws IOException, - InterruptedException { - checkState(); - table.batch(actions, results); - } - - /** - * {@inheritDoc} - * @deprecated If any exception is thrown by one of the actions, there is no way to - * retrieve the partially executed results. Use {@link #batch(List, Object[])} instead. 
- */ - @Override - public Object[] batch(List actions) throws IOException, - InterruptedException { - checkState(); - return table.batch(actions); - } - - @Override - public Result get(Get get) throws IOException { - checkState(); - return table.get(get); - } - - @Override - public Result[] get(List gets) throws IOException { - checkState(); - return table.get(gets); - } - - @Override - @SuppressWarnings("deprecation") - @Deprecated - public Result getRowOrBefore(byte[] row, byte[] family) throws IOException { - checkState(); - return table.getRowOrBefore(row, family); - } - - @Override - public ResultScanner getScanner(Scan scan) throws IOException { - checkState(); - return table.getScanner(scan); - } - - @Override - public ResultScanner getScanner(byte[] family) throws IOException { - checkState(); - return table.getScanner(family); - } - - @Override - public ResultScanner getScanner(byte[] family, byte[] qualifier) - throws IOException { - checkState(); - return table.getScanner(family, qualifier); - } - - @Override - public void put(Put put) throws IOException { - checkState(); - table.put(put); - } - - @Override - public void put(List puts) throws IOException { - checkState(); - table.put(puts); - } - - @Override - public boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier, - byte[] value, Put put) throws IOException { - checkState(); - return table.checkAndPut(row, family, qualifier, value, put); - } - - @Override - public boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier, - CompareOp compareOp, byte[] value, Put put) throws IOException { - checkState(); - return table.checkAndPut(row, family, qualifier, compareOp, value, put); - } - - @Override - public void delete(Delete delete) throws IOException { - checkState(); - table.delete(delete); - } - - @Override - public void delete(List deletes) throws IOException { - checkState(); - table.delete(deletes); - } - - @Override - public boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier, - byte[] value, Delete delete) throws IOException { - checkState(); - return table.checkAndDelete(row, family, qualifier, value, delete); - } - - @Override - public boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier, - CompareOp compareOp, byte[] value, Delete delete) throws IOException { - checkState(); - return table.checkAndDelete(row, family, qualifier, compareOp, value, delete); - } - - @Override - public Result increment(Increment increment) throws IOException { - checkState(); - return table.increment(increment); - } - - @Override - public long incrementColumnValue(byte[] row, byte[] family, - byte[] qualifier, long amount) throws IOException { - checkState(); - return table.incrementColumnValue(row, family, qualifier, amount); - } - - @Override - public long incrementColumnValue(byte[] row, byte[] family, - byte[] qualifier, long amount, Durability durability) throws IOException { - checkState(); - return table.incrementColumnValue(row, family, qualifier, amount, - durability); - } - - @Override - public boolean isAutoFlush() { - checkState(); - return table.isAutoFlush(); - } - - @Override - public void flushCommits() throws IOException { - checkState(); - table.flushCommits(); - } - - /** - * Returns the actual table back to the pool - * - * @throws IOException - */ - public void close() throws IOException { - checkState(); - open = false; - returnTable(table); - } - - @Override - public CoprocessorRpcChannel coprocessorService(byte[] row) { - checkState(); - return 
table.coprocessorService(row); - } - - @Override - public Map coprocessorService(Class service, - byte[] startKey, byte[] endKey, Batch.Call callable) - throws ServiceException, Throwable { - checkState(); - return table.coprocessorService(service, startKey, endKey, callable); - } - - @Override - public void coprocessorService(Class service, - byte[] startKey, byte[] endKey, Batch.Call callable, Callback callback) - throws ServiceException, Throwable { - checkState(); - table.coprocessorService(service, startKey, endKey, callable, callback); - } - - @Override - public String toString() { - return "PooledHTable{" + ", table=" + table + '}'; - } - - /** - * Expose the wrapped HTable to tests in the same package - * - * @return wrapped htable - */ - HTableInterface getWrappedTable() { - return table; - } - - @Override - public void batchCallback(List actions, - Object[] results, Callback callback) throws IOException, - InterruptedException { - checkState(); - table.batchCallback(actions, results, callback); - } - - /** - * {@inheritDoc} - * @deprecated If any exception is thrown by one of the actions, there is no way to - * retrieve the partially executed results. Use - * {@link #batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)} - * instead. - */ - @Override - public Object[] batchCallback(List actions, - Callback callback) throws IOException, InterruptedException { - checkState(); - return table.batchCallback(actions, callback); - } - - @Override - public void mutateRow(RowMutations rm) throws IOException { - checkState(); - table.mutateRow(rm); - } - - @Override - public Result append(Append append) throws IOException { - checkState(); - return table.append(append); - } - - @Override - public void setAutoFlush(boolean autoFlush) { - checkState(); - table.setAutoFlush(autoFlush, autoFlush); - } - - @Override - public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) { - checkState(); - table.setAutoFlush(autoFlush, clearBufferOnFail); - } - - @Override - public void setAutoFlushTo(boolean autoFlush) { - table.setAutoFlushTo(autoFlush); - } - - @Override - public long getWriteBufferSize() { - checkState(); - return table.getWriteBufferSize(); - } - - @Override - public void setWriteBufferSize(long writeBufferSize) throws IOException { - checkState(); - table.setWriteBufferSize(writeBufferSize); - } - - boolean isOpen() { - return open; - } - - private void checkState() { - if (!isOpen()) { - throw new IllegalStateException("Table=" + new String(table.getTableName()) + " already closed"); - } - } - - @Override - public long incrementColumnValue(byte[] row, byte[] family, - byte[] qualifier, long amount, boolean writeToWAL) throws IOException { - return table.incrementColumnValue(row, family, qualifier, amount, writeToWAL); - } - - @Override - public Map batchCoprocessorService( - Descriptors.MethodDescriptor method, Message request, - byte[] startKey, byte[] endKey, R responsePrototype) throws ServiceException, Throwable { - checkState(); - return table.batchCoprocessorService(method, request, startKey, endKey, - responsePrototype); - } - - @Override - public void batchCoprocessorService( - Descriptors.MethodDescriptor method, Message request, - byte[] startKey, byte[] endKey, R responsePrototype, Callback callback) - throws ServiceException, Throwable { - checkState(); - table.batchCoprocessorService(method, request, startKey, endKey, responsePrototype, callback); - } - - @Override - public boolean checkAndMutate(byte[] row, byte[] family, 
byte[] qualifier, CompareOp compareOp, - byte[] value, RowMutations mutation) throws IOException { - checkState(); - return table.checkAndMutate(row, family, qualifier, compareOp, value, mutation); - } - } -} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableUtil.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableUtil.java deleted file mode 100644 index ab77ceb..0000000 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableUtil.java +++ /dev/null @@ -1,137 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.client; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.HRegionLocation; - -import java.io.IOException; -import java.io.InterruptedIOException; -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; - -/** - * Utility class for HTable. - * - * @deprecated since 1.0 - */ -@InterfaceAudience.Private -@Deprecated -public class HTableUtil { - - private static final int INITIAL_LIST_SIZE = 250; - - /** - * Processes a List of Puts and writes them to an HTable instance in RegionServer buckets via the htable.put method. - * This will utilize the writeBuffer, thus the writeBuffer flush frequency may be tuned accordingly via htable.setWriteBufferSize. - *
    - * The benefit of submitting Puts in this manner is to minimize the number of RegionServer RPCs in each flush. - *
    - * Assumption #1: Regions have been pre-created for the table. If they haven't, then all of the Puts will go to the same region, - * defeating the purpose of this utility method. See the Apache HBase book for an explanation of how to do this. - *
    - * Assumption #2: Row-keys are not monotonically increasing. See the Apache HBase book for an explanation of this problem. - *
    - * Assumption #3: That the input list of Puts is big enough to be useful (in the thousands or more). The intent of this - * method is to process larger chunks of data. - *
    - * Assumption #4: htable.setAutoFlush(false) has been set. This is a requirement to use the writeBuffer. - *
    - * @param htable HTable instance for target HBase table - * @param puts List of Put instances - * @throws IOException if a remote or network exception occurs - * - */ - public static void bucketRsPut(HTable htable, List puts) throws IOException { - - Map> putMap = createRsPutMap(htable, puts); - for (List rsPuts: putMap.values()) { - htable.put( rsPuts ); - } - htable.flushCommits(); - } - - /** - * Processes a List of Rows (Put, Delete) and writes them to an HTable instance in RegionServer buckets via the htable.batch method. - *
    - * The benefit of submitting Puts in this manner is to minimize the number of RegionServer RPCs, thus this will - * produce one RPC of Puts per RegionServer. - *
    - * Assumption #1: Regions have been pre-created for the table. If they haven't, then all of the Puts will go to the same region, - * defeating the purpose of this utility method. See the Apache HBase book for an explanation of how to do this. - *
    - * Assumption #2: Row-keys are not monotonically increasing. See the Apache HBase book for an explanation of this problem. - *
    - * Assumption #3: That the input list of Rows is big enough to be useful (in the thousands or more). The intent of this - * method is to process larger chunks of data. - *
    - * This method accepts a list of Row objects because the underlying .batch method accepts a list of Row objects. - *

    - * @param htable HTable instance for target HBase table - * @param rows List of Row instances - * @throws IOException if a remote or network exception occurs - */ - public static void bucketRsBatch(HTable htable, List rows) throws IOException { - - try { - Map> rowMap = createRsRowMap(htable, rows); - for (List rsRows: rowMap.values()) { - htable.batch( rsRows ); - } - } catch (InterruptedException e) { - throw (InterruptedIOException)new InterruptedIOException().initCause(e); - } - - } - - private static Map> createRsPutMap(RegionLocator htable, List puts) throws IOException { - - Map> putMap = new HashMap>(); - for (Put put: puts) { - HRegionLocation rl = htable.getRegionLocation( put.getRow() ); - String hostname = rl.getHostname(); - List recs = putMap.get( hostname); - if (recs == null) { - recs = new ArrayList(INITIAL_LIST_SIZE); - putMap.put( hostname, recs); - } - recs.add(put); - } - return putMap; - } - - private static Map> createRsRowMap(RegionLocator htable, List rows) throws IOException { - - Map> rowMap = new HashMap>(); - for (Row row: rows) { - HRegionLocation rl = htable.getRegionLocation( row.getRow() ); - String hostname = rl.getHostname(); - List recs = rowMap.get( hostname); - if (recs == null) { - recs = new ArrayList(INITIAL_LIST_SIZE); - rowMap.put( hostname, recs); - } - recs.add(row); - } - return rowMap; - } - -} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java index af0ea56..b6e6a52 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java @@ -25,11 +25,11 @@ import java.util.NavigableMap; import java.util.TreeMap; import java.util.UUID; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.io.TimeRange; import org.apache.hadoop.hbase.security.access.Permission; import org.apache.hadoop.hbase.security.visibility.CellVisibility; @@ -289,12 +289,6 @@ public class Increment extends Mutation implements Comparable { } @Override - @Deprecated - public Increment setWriteToWAL(boolean write) { - return (Increment) super.setWriteToWAL(write); - } - - @Override public Increment setDurability(Durability d) { return (Increment) super.setDurability(d); } @@ -305,12 +299,6 @@ public class Increment extends Mutation implements Comparable { } @Override - @Deprecated - public Increment setFamilyMap(NavigableMap> map) { - return (Increment) super.setFamilyMap(map); - } - - @Override public Increment setClusterIds(List clusterIds) { return (Increment) super.setClusterIds(clusterIds); } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaCache.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaCache.java index 0f59b8a..a49f95c 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaCache.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaCache.java @@ -25,15 +25,16 @@ import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.concurrent.ConcurrentSkipListMap; import 
java.util.concurrent.ConcurrentSkipListSet; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java index fdffec4..3bc4000 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/MetaScanner.java @@ -28,7 +28,6 @@ import java.util.TreeMap; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; @@ -38,6 +37,7 @@ import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotFoundException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.ExceptionUtil; @@ -63,32 +63,30 @@ public class MetaScanner { * start row value as table name. * *

    Visible for testing. Use {@link - * #metaScan(Configuration, Connection, MetaScannerVisitor, TableName)} instead. + * #metaScan(Connection, MetaScannerVisitor, TableName)} instead. * - * @param configuration conf * @param visitor A custom visitor * @throws IOException e */ @VisibleForTesting // Do not use. Used by tests only and hbck. - public static void metaScan(Configuration configuration, MetaScannerVisitor visitor) - throws IOException { - metaScan(configuration, visitor, null, null, Integer.MAX_VALUE); + public static void metaScan(Connection connection, + MetaScannerVisitor visitor) throws IOException { + metaScan(connection, visitor, null, null, Integer.MAX_VALUE); } /** * Scans the meta table and calls a visitor on each RowResult. Uses a table * name to locate meta regions. * - * @param configuration config * @param connection connection to use internally (null to use a new instance) * @param visitor visitor object * @param userTableName User table name in meta table to start scan at. Pass * null if not interested in a particular table. * @throws IOException e */ - public static void metaScan(Configuration configuration, Connection connection, + public static void metaScan(Connection connection, MetaScannerVisitor visitor, TableName userTableName) throws IOException { - metaScan(configuration, connection, visitor, userTableName, null, Integer.MAX_VALUE, + metaScan(connection, visitor, userTableName, null, Integer.MAX_VALUE, TableName.META_TABLE_NAME); } @@ -98,9 +96,9 @@ public class MetaScanner { * rowLimit of rows. * *

    Visible for testing. Use {@link - * #metaScan(Configuration, Connection, MetaScannerVisitor, TableName)} instead. + * #metaScan(Connection, MetaScannerVisitor, TableName)} instead. * - * @param configuration HBase configuration. + * @param connection to scan on * @param visitor Visitor object. * @param userTableName User table name in meta table to start scan at. Pass * null if not interested in a particular table. @@ -111,11 +109,12 @@ public class MetaScanner { * @throws IOException e */ @VisibleForTesting // Do not use. Used by Master but by a method that is used testing. - public static void metaScan(Configuration configuration, + public static void metaScan(Connection connection, MetaScannerVisitor visitor, TableName userTableName, byte[] row, int rowLimit) throws IOException { - metaScan(configuration, null, visitor, userTableName, row, rowLimit, TableName.META_TABLE_NAME); + metaScan(connection, visitor, userTableName, row, rowLimit, TableName + .META_TABLE_NAME); } /** @@ -123,7 +122,6 @@ public class MetaScanner { * name and a row name to locate meta regions. And it only scans at most * rowLimit of rows. * - * @param configuration HBase configuration. * @param connection connection to use internally (null to use a new instance) * @param visitor Visitor object. Closes the visitor before returning. * @param tableName User table name in meta table to start scan at. Pass @@ -135,17 +133,11 @@ public class MetaScanner { * @param metaTableName Meta table to scan, root or meta. * @throws IOException e */ - static void metaScan(Configuration configuration, Connection connection, + static void metaScan(Connection connection, final MetaScannerVisitor visitor, final TableName tableName, final byte[] row, final int rowLimit, final TableName metaTableName) throws IOException { - boolean closeConnection = false; - if (connection == null) { - connection = ConnectionFactory.createConnection(configuration); - closeConnection = true; - } - int rowUpperLimit = rowLimit > 0 ? rowLimit: Integer.MAX_VALUE; // Calculate startrow for scan. byte[] startRow; @@ -179,8 +171,9 @@ public class MetaScanner { HConstants.ZEROES, false); } final Scan scan = new Scan(startRow).addFamily(HConstants.CATALOG_FAMILY); - int scannerCaching = configuration.getInt(HConstants.HBASE_META_SCANNER_CACHING, - HConstants.DEFAULT_HBASE_META_SCANNER_CACHING); + int scannerCaching = connection.getConfiguration() + .getInt(HConstants.HBASE_META_SCANNER_CACHING, + HConstants.DEFAULT_HBASE_META_SCANNER_CACHING); if (rowUpperLimit <= scannerCaching) { scan.setSmall(true); } @@ -211,9 +204,6 @@ public class MetaScanner { LOG.debug("Got exception in closing the meta scanner visitor", t); } } - if (closeConnection) { - if (connection != null) connection.close(); - } } } @@ -246,14 +236,16 @@ public class MetaScanner { /** * Lists all of the regions currently in META. - * @param conf + * @param conf configuration + * @param connection to connect with * @param offlined True if we are to include offlined regions, false and we'll * leave out offlined regions from returned list. * @return List of all user-space regions. * @throws IOException */ @VisibleForTesting // And for hbck. 
- public static List listAllRegions(Configuration conf, final boolean offlined) + public static List listAllRegions(Configuration conf, Connection connection, + final boolean offlined) throws IOException { final List regions = new ArrayList(); MetaScannerVisitor visitor = new MetaScannerVisitorBase() { @@ -276,7 +268,7 @@ public class MetaScanner { return true; } }; - metaScan(conf, visitor); + metaScan(connection, visitor); return regions; } @@ -287,23 +279,22 @@ public class MetaScanner { * leave out offlined regions from returned list. * @return Map of all user-space regions to servers * @throws IOException - * @deprecated Use {@link #allTableRegions(Configuration, Connection, TableName)} instead + * @deprecated Use {@link #allTableRegions(Connection, TableName)} instead */ @Deprecated public static NavigableMap allTableRegions(Configuration conf, Connection connection, final TableName tableName, boolean offlined) throws IOException { - return allTableRegions(conf, connection, tableName); + return allTableRegions(connection, tableName); } /** * Lists all of the table regions currently in META. - * @param conf * @param connection * @param tableName * @return Map of all user-space regions to servers * @throws IOException */ - public static NavigableMap allTableRegions(Configuration conf, + public static NavigableMap allTableRegions( Connection connection, final TableName tableName) throws IOException { final NavigableMap regions = new TreeMap(); @@ -321,7 +312,7 @@ public class MetaScanner { return true; } }; - metaScan(conf, connection, visitor, tableName); + metaScan(connection, visitor, tableName); return regions; } @@ -340,7 +331,7 @@ public class MetaScanner { return true; } }; - metaScan(conf, connection, visitor, tableName); + metaScan(connection, visitor, tableName); return regions; } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiAction.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiAction.java index eefe40d..16ab852 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiAction.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiAction.java @@ -19,13 +19,14 @@ package org.apache.hadoop.hbase.client; import java.util.ArrayList; +import java.util.Arrays; import java.util.List; import java.util.Map; import java.util.Set; import java.util.TreeMap; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; /** @@ -68,12 +69,24 @@ public final class MultiAction { * @param a */ public void add(byte[] regionName, Action a) { + add(regionName, Arrays.asList(a)); + } + + /** + * Add an Action to this container based on it's regionName. If the regionName + * is wrong, the initial execution will fail, but will be automatically + * retried after looking up the correct region. 
+ * + * @param regionName + * @param actionList list of actions to add for the region + */ + public void add(byte[] regionName, List> actionList){ List> rsActions = actions.get(regionName); if (rsActions == null) { - rsActions = new ArrayList>(); + rsActions = new ArrayList>(actionList.size()); actions.put(regionName, rsActions); } - rsActions.add(a); + rsActions.addAll(actionList); } public void setNonceGroup(long nonceGroup) { diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java index dbc1317..28284e5 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java @@ -28,16 +28,15 @@ import java.util.NavigableMap; import java.util.TreeMap; import java.util.UUID; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScannable; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.KeyValueUtil; import org.apache.hadoop.hbase.Tag; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.io.HeapSize; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; @@ -50,7 +49,6 @@ import org.apache.hadoop.hbase.util.ClassSize; import com.google.common.collect.ArrayListMultimap; import com.google.common.collect.ListMultimap; -import com.google.common.collect.Lists; import com.google.common.io.ByteArrayDataInput; import com.google.common.io.ByteArrayDataOutput; import com.google.common.io.ByteStreams; @@ -233,28 +231,6 @@ public abstract class Mutation extends OperationWithAttributes implements Row, C } /** - * @deprecated Use {@link #getDurability()} instead. - * @return true if edits should be applied to WAL, false if not - */ - @Deprecated - public boolean getWriteToWAL() { - return this.durability == Durability.SKIP_WAL; - } - - /** - * Set whether this Delete should be written to the WAL or not. - * Not writing the WAL means you may lose edits on server crash. - * This method will reset any changes made via {@link #setDurability(Durability)} - * @param write true if edits should be written to WAL, false if not - * @deprecated Use {@link #setDurability(Durability)} instead. - */ - @Deprecated - public Mutation setWriteToWAL(boolean write) { - setDurability(write ? Durability.USE_DEFAULT : Durability.SKIP_WAL); - return this; - } - - /** * Set the durability for this mutation * @param d */ @@ -287,39 +263,6 @@ public abstract class Mutation extends OperationWithAttributes implements Row, C } /** - * Method for retrieving the put's familyMap that is deprecated and inefficient. - * @return the map - * @deprecated use {@link #getFamilyCellMap()} instead. 
- */ - @Deprecated - public NavigableMap> getFamilyMap() { - TreeMap> fm = - new TreeMap>(Bytes.BYTES_COMPARATOR); - for (Map.Entry> e : familyMap.entrySet()) { - List kvl = new ArrayList(e.getValue().size()); - for (Cell c : e.getValue()) { - kvl.add(KeyValueUtil.ensureKeyValue(c)); - } - fm.put(e.getKey(), kvl); - } - return fm; - } - - /** - * Method for setting the put's familyMap that is deprecated and inefficient. - * @deprecated use {@link #setFamilyCellMap(NavigableMap)} instead. - */ - @Deprecated - public Mutation setFamilyMap(NavigableMap> map) { - TreeMap> fm = new TreeMap>(Bytes.BYTES_COMPARATOR); - for (Map.Entry> e : map.entrySet()) { - fm.put(e.getKey(), Lists.newArrayList(e.getValue())); - } - this.familyMap = fm; - return this; - } - - /** * Method to check if the familyMap is empty * @return true if empty, false otherwise */ diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/OperationWithAttributes.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/OperationWithAttributes.java index d9d54ea..9fdd577 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/OperationWithAttributes.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/OperationWithAttributes.java @@ -19,15 +19,15 @@ package org.apache.hadoop.hbase.client; +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.ClassSize; -import java.util.Collections; -import java.util.HashMap; -import java.util.Map; - @InterfaceAudience.Public @InterfaceStability.Evolving public abstract class OperationWithAttributes extends Operation implements Attributes { diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/PerClientRandomNonceGenerator.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/PerClientRandomNonceGenerator.java index 7ac4546..875e1f6 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/PerClientRandomNonceGenerator.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/PerClientRandomNonceGenerator.java @@ -21,8 +21,8 @@ package org.apache.hadoop.hbase.client; import java.util.Arrays; import java.util.Random; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * NonceGenerator implementation that uses client ID hash + random int as nonce group, diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java index 0fb0118..b9d652d 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java @@ -28,13 +28,13 @@ import java.util.NavigableMap; import java.util.TreeMap; import java.util.UUID; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.Tag; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import 
org.apache.hadoop.hbase.io.HeapSize; import org.apache.hadoop.hbase.security.access.Permission; import org.apache.hadoop.hbase.security.visibility.CellVisibility; @@ -432,12 +432,6 @@ public class Put extends Mutation implements HeapSize, Comparable { } @Override - @Deprecated - public Put setWriteToWAL(boolean write) { - return (Put) super.setWriteToWAL(write); - } - - @Override public Put setDurability(Durability d) { return (Put) super.setDurability(d); } @@ -448,12 +442,6 @@ public class Put extends Mutation implements HeapSize, Comparable { } @Override - @Deprecated - public Put setFamilyMap(NavigableMap> map) { - return (Put) super.setFamilyMap(map); - } - - @Override public Put setClusterIds(List clusterIds) { return (Put) super.setClusterIds(clusterIds); } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Query.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Query.java index 26e36e5..9245f81 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Query.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Query.java @@ -28,6 +28,7 @@ import org.apache.hadoop.hbase.security.access.AccessControlConstants; import org.apache.hadoop.hbase.security.access.Permission; import org.apache.hadoop.hbase.security.visibility.Authorizations; import org.apache.hadoop.hbase.security.visibility.VisibilityConstants; + import com.google.common.collect.ArrayListMultimap; import com.google.common.collect.ListMultimap; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionAdminServiceCallable.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionAdminServiceCallable.java new file mode 100644 index 0000000..66dcdce --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionAdminServiceCallable.java @@ -0,0 +1,135 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.client; + +import java.io.IOException; +import java.io.InterruptedIOException; +import java.net.ConnectException; +import java.net.SocketTimeoutException; + +import org.apache.hadoop.hbase.HBaseIOException; +import org.apache.hadoop.hbase.HRegionLocation; +import org.apache.hadoop.hbase.NotServingRegionException; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.exceptions.RegionMovedException; +import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.AdminService; + +/** + * Similar to {@link RegionServerCallable} but for the AdminService interface. This service callable + * assumes a Table and row and thus does region locating similar to RegionServerCallable. 
+ */ +@edu.umd.cs.findbugs.annotations.SuppressWarnings(value="URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD", + justification="stub used by ipc") +@InterfaceAudience.Private +public abstract class RegionAdminServiceCallable implements RetryingCallable { + + protected final ClusterConnection connection; + + protected AdminService.BlockingInterface stub; + + protected HRegionLocation location; + + protected final TableName tableName; + protected final byte[] row; + + protected final static int MIN_WAIT_DEAD_SERVER = 10000; + + public RegionAdminServiceCallable(ClusterConnection connection, TableName tableName, byte[] row) { + this(connection, null, tableName, row); + } + + public RegionAdminServiceCallable(ClusterConnection connection, HRegionLocation location, + TableName tableName, byte[] row) { + this.connection = connection; + this.location = location; + this.tableName = tableName; + this.row = row; + } + + @Override + public void prepare(boolean reload) throws IOException { + if (Thread.interrupted()) { + throw new InterruptedIOException(); + } + + if (reload || location == null) { + location = getLocation(!reload); + } + + if (location == null) { + // With this exception, there will be a retry. + throw new HBaseIOException(getExceptionMessage()); + } + + this.setStub(connection.getAdmin(location.getServerName())); + } + + protected void setStub(AdminService.BlockingInterface stub) { + this.stub = stub; + } + + public abstract HRegionLocation getLocation(boolean useCache) throws IOException; + + @Override + public void throwable(Throwable t, boolean retrying) { + if (t instanceof SocketTimeoutException || + t instanceof ConnectException || + t instanceof RetriesExhaustedException || + (location != null && getConnection().isDeadServer(location.getServerName()))) { + // if thrown these exceptions, we clear all the cache entries that + // map to that slow/dead server; otherwise, let cache miss and ask + // hbase:meta again to find the new location + if (this.location != null) getConnection().clearCaches(location.getServerName()); + } else if (t instanceof RegionMovedException) { + getConnection().updateCachedLocations(tableName, row, t, location); + } else if (t instanceof NotServingRegionException) { + // Purge cache entries for this specific region from hbase:meta cache + // since we don't call connect(true) when number of retries is 1. + getConnection().deleteCachedRegionLocation(location); + } + } + + /** + * @return {@link HConnection} instance used by this Callable. + */ + HConnection getConnection() { + return this.connection; + } + + //subclasses can override this. 
+ protected String getExceptionMessage() { + return "There is no location"; + } + + @Override + public String getExceptionMessageAdditionalDetail() { + return null; + } + + @Override + public long sleep(long pause, int tries) { + long sleep = ConnectionUtils.getPauseTime(pause, tries + 1); + if (sleep < MIN_WAIT_DEAD_SERVER + && (location == null || connection.isDeadServer(location.getServerName()))) { + sleep = ConnectionUtils.addJitter(MIN_WAIT_DEAD_SERVER, 0.10f); + } + return sleep; + } +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionCoprocessorServiceExec.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionCoprocessorServiceExec.java index 2d62332..ad1d2a1 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionCoprocessorServiceExec.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionCoprocessorServiceExec.java @@ -19,13 +19,13 @@ package org.apache.hadoop.hbase.client; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.util.Bytes; + import com.google.common.base.Objects; import com.google.protobuf.Descriptors.MethodDescriptor; import com.google.protobuf.Message; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.util.Bytes; - /** * Represents a coprocessor service method execution against a single region. While coprocessor diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionLocator.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionLocator.java index fd5348f..39518a6 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionLocator.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionLocator.java @@ -18,16 +18,16 @@ */ package org.apache.hadoop.hbase.client; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.HRegionLocation; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.util.Pair; - import java.io.Closeable; import java.io.IOException; import java.util.List; +import org.apache.hadoop.hbase.HRegionLocation; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.util.Pair; + /** * Used to view region location information for a single HBase table. * Obtain an instance from an {@link Connection}. 
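For reference, the Connection/RegionLocator pattern that the client classes above are being moved onto can be exercised from caller code roughly as follows. This is a minimal sketch and not part of the patch; the table name "testtable" and the row key are illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionLocatorSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Obtain the locator from a Connection rather than through the older HConnection calls.
    try (Connection connection = ConnectionFactory.createConnection(conf);
         RegionLocator locator = connection.getRegionLocator(TableName.valueOf("testtable"))) {
      // Ask which region (and therefore which server) currently hosts a given row.
      HRegionLocation location = locator.getRegionLocation(Bytes.toBytes("row-0001"));
      System.out.println("row-0001 is served by " + location.getServerName());
    }
  }
}
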
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionOfflineException.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionOfflineException.java index 6c1d1cd..d6cceb9 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionOfflineException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionOfflineException.java @@ -18,9 +18,9 @@ */ package org.apache.hadoop.hbase.client; +import org.apache.hadoop.hbase.RegionException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.RegionException; /** Thrown when a table can not be located */ @InterfaceAudience.Public diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionReplicaUtil.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionReplicaUtil.java index 01c5234..91d1f9b 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionReplicaUtil.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionReplicaUtil.java @@ -21,8 +21,8 @@ package org.apache.hadoop.hbase.client; import java.util.Collection; import java.util.Iterator; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * Utility methods which contain the logic for regions and replicas. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionServerCallable.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionServerCallable.java index 40ca4a4..74d699f 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionServerCallable.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionServerCallable.java @@ -25,11 +25,11 @@ import java.net.SocketTimeoutException; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.NotServingRegionException; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.exceptions.RegionMovedException; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ClientService; import org.apache.hadoop.hbase.util.Bytes; @@ -48,7 +48,7 @@ import org.apache.hadoop.hbase.util.Bytes; public abstract class RegionServerCallable implements RetryingCallable { // Public because used outside of this package over in ipc. static final Log LOG = LogFactory.getLog(RegionServerCallable.class); - protected final HConnection connection; + protected final Connection connection; protected final TableName tableName; protected final byte[] row; protected HRegionLocation location; @@ -61,7 +61,7 @@ public abstract class RegionServerCallable implements RetryingCallable { * @param tableName Table name to which row belongs. * @param row The row we want in tableName. 
*/ - public RegionServerCallable(HConnection connection, TableName tableName, byte [] row) { + public RegionServerCallable(Connection connection, TableName tableName, byte [] row) { this.connection = connection; this.tableName = tableName; this.row = row; @@ -75,7 +75,9 @@ public abstract class RegionServerCallable implements RetryingCallable { */ @Override public void prepare(final boolean reload) throws IOException { - this.location = connection.getRegionLocation(tableName, row, reload); + try (RegionLocator regionLocator = connection.getRegionLocator(tableName)) { + this.location = regionLocator.getRegionLocation(row, reload); + } if (this.location == null) { throw new IOException("Failed to find location, tableName=" + tableName + ", row=" + Bytes.toString(row) + ", reload=" + reload); @@ -87,7 +89,7 @@ public abstract class RegionServerCallable implements RetryingCallable { * @return {@link HConnection} instance used by this Callable. */ HConnection getConnection() { - return this.connection; + return (HConnection) this.connection; } protected ClientService.BlockingInterface getStub() { diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Registry.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Registry.java index 58ec3c4..412e4fa 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Registry.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Registry.java @@ -19,9 +19,8 @@ package org.apache.hadoop.hbase.client; import java.io.IOException; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.RegionLocations; -import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * Cluster registry. @@ -47,14 +46,8 @@ interface Registry { String getClusterId(); /** - * @param enabled Return true if table is enabled - * @throws IOException - */ - boolean isTableOnlineState(TableName tableName, boolean enabled) throws IOException; - - /** * @return Count of 'running' regionservers * @throws IOException */ int getCurrentNrHRS() throws IOException; -} +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java index 401710e..08d9b80 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java @@ -29,14 +29,15 @@ import java.util.Map; import java.util.NavigableMap; import java.util.TreeMap; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScannable; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; import org.apache.hadoop.hbase.util.Bytes; /** @@ -94,6 +95,7 @@ public class Result implements CellScannable, CellScanner { * Index for where we are when Result is acting as a {@link CellScanner}. 
*/ private int cellScannerIndex = INITIAL_CELLSCANNER_INDEX; + private ClientProtos.RegionLoadStats stats; /** * Creates an empty Result w/ no KeyValue payload; returns null if you call {@link #rawCells()}. @@ -106,23 +108,6 @@ public class Result implements CellScannable, CellScanner { } /** - * @deprecated Use {@link #create(List)} instead. - */ - @Deprecated - public Result(KeyValue [] cells) { - this.cells = cells; - } - - /** - * @deprecated Use {@link #create(List)} instead. - */ - @Deprecated - public Result(List kvs) { - // TODO: Here we presume the passed in Cells are KVs. One day this won't always be so. - this(kvs.toArray(new Cell[kvs.size()]), null, false); - } - - /** * Instantiate a Result with the specified List of KeyValues. *
    Note: You must ensure that the keyvalues are already sorted. * @param cells List of cells @@ -202,25 +187,6 @@ public class Result implements CellScannable, CellScanner { } /** - * Return an cells of a Result as an array of KeyValues - * - * WARNING do not use, expensive. This does an arraycopy of the cell[]'s value. - * - * Added to ease transition from 0.94 -> 0.96. - * - * @deprecated as of 0.96, use {@link #rawCells()} - * @return array of KeyValues, empty array if nothing in result. - */ - @Deprecated - public KeyValue[] raw() { - KeyValue[] kvs = new KeyValue[cells.length]; - for (int i = 0 ; i < kvs.length; i++) { - kvs[i] = KeyValueUtil.ensureKeyValue(cells[i]); - } - return kvs; - } - - /** * Create a sorted list of the Cell's in this result. * * Since HBase 0.20.5 this is equivalent to raw(). @@ -232,29 +198,6 @@ public class Result implements CellScannable, CellScanner { } /** - * Return an cells of a Result as an array of KeyValues - * - * WARNING do not use, expensive. This does an arraycopy of the cell[]'s value. - * - * Added to ease transition from 0.94 -> 0.96. - * - * @deprecated as of 0.96, use {@link #listCells()} - * @return all sorted List of KeyValues; can be null if no cells in the result - */ - @Deprecated - public List list() { - return isEmpty() ? null : Arrays.asList(raw()); - } - - /** - * @deprecated Use {@link #getColumnCells(byte[], byte[])} instead. - */ - @Deprecated - public List getColumn(byte [] family, byte [] qualifier) { - return KeyValueUtil.ensureKeyValues(getColumnCells(family, qualifier)); - } - - /** * Return the Cells for the specific column. The Cells are sorted in * the {@link KeyValue#COMPARATOR} order. That implies the first entry in * the list is the most recent column. If the query (Scan or Get) only @@ -359,14 +302,6 @@ public class Result implements CellScannable, CellScanner { } /** - * @deprecated Use {@link #getColumnLatestCell(byte[], byte[])} instead. - */ - @Deprecated - public KeyValue getColumnLatest(byte [] family, byte [] qualifier) { - return KeyValueUtil.ensureKeyValue(getColumnLatestCell(family, qualifier)); - } - - /** * The Cell for the most recent timestamp for a given column. * * @param family @@ -391,16 +326,6 @@ public class Result implements CellScannable, CellScanner { } /** - * @deprecated Use {@link #getColumnLatestCell(byte[], int, int, byte[], int, int)} instead. - */ - @Deprecated - public KeyValue getColumnLatest(byte [] family, int foffset, int flength, - byte [] qualifier, int qoffset, int qlength) { - return KeyValueUtil.ensureKeyValue( - getColumnLatestCell(family, foffset, flength, qualifier, qoffset, qlength)); - } - - /** * The Cell for the most recent timestamp for a given column. * * @param family family name @@ -871,4 +796,20 @@ public class Result implements CellScannable, CellScanner { public boolean isStale() { return stale; } -} + + /** + * Add load information about the region to the information about the result + * @param loadStats statistics about the current region from which this was returned + */ + public void addResults(ClientProtos.RegionLoadStats loadStats) { + this.stats = loadStats; + } + + /** + * @return the associated statistics about the region from which this was returned. Can be + * null if stats are disabled. 
+ */ + public ClientProtos.RegionLoadStats getStats() { + return stats; + } +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultBoundedCompletionService.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultBoundedCompletionService.java new file mode 100644 index 0000000..1dab776 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultBoundedCompletionService.java @@ -0,0 +1,165 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.client; + +import java.util.concurrent.ExecutionException; +import java.util.concurrent.Executor; +import java.util.concurrent.RunnableFuture; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.TimeoutException; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.htrace.Trace; + +/** + * A completion service for the RpcRetryingCallerFactory. + * Keeps the list of the futures, and allows to cancel them all. + * This means as well that it can be used for a small set of tasks only. + *
    Implementation is not Thread safe. + */ +@InterfaceAudience.Private +public class ResultBoundedCompletionService { + private final RpcRetryingCallerFactory retryingCallerFactory; + private final Executor executor; + private final QueueingFuture[] tasks; // all the tasks + private volatile QueueingFuture completed = null; + + class QueueingFuture implements RunnableFuture { + private final RetryingCallable future; + private T result = null; + private ExecutionException exeEx = null; + private volatile boolean cancelled; + private final int callTimeout; + private final RpcRetryingCaller retryingCaller; + private boolean resultObtained = false; + + + public QueueingFuture(RetryingCallable future, int callTimeout) { + this.future = future; + this.callTimeout = callTimeout; + this.retryingCaller = retryingCallerFactory.newCaller(); + } + + @SuppressWarnings("unchecked") + @Override + public void run() { + try { + if (!cancelled) { + result = + this.retryingCaller.callWithRetries(future, callTimeout); + resultObtained = true; + } + } catch (Throwable t) { + exeEx = new ExecutionException(t); + } finally { + if (!cancelled && completed == null) { + completed = (QueueingFuture) QueueingFuture.this; + synchronized (tasks) { + tasks.notify(); + } + } + } + } + @Override + public boolean cancel(boolean mayInterruptIfRunning) { + if (resultObtained || exeEx != null) return false; + retryingCaller.cancel(); + if (future instanceof Cancellable) ((Cancellable)future).cancel(); + cancelled = true; + return true; + } + + @Override + public boolean isCancelled() { + return cancelled; + } + + @Override + public boolean isDone() { + return resultObtained || exeEx != null; + } + + @Override + public T get() throws InterruptedException, ExecutionException { + try { + return get(1000, TimeUnit.DAYS); + } catch (TimeoutException e) { + throw new RuntimeException("You did wait for 1000 days here?", e); + } + } + + @Override + public T get(long timeout, TimeUnit unit) + throws InterruptedException, ExecutionException, TimeoutException { + synchronized (tasks) { + if (resultObtained) { + return result; + } + if (exeEx != null) { + throw exeEx; + } + unit.timedWait(tasks, timeout); + } + if (resultObtained) { + return result; + } + if (exeEx != null) { + throw exeEx; + } + + throw new TimeoutException("timeout=" + timeout + ", " + unit); + } + } + + @SuppressWarnings("unchecked") + public ResultBoundedCompletionService( + RpcRetryingCallerFactory retryingCallerFactory, Executor executor, + int maxTasks) { + this.retryingCallerFactory = retryingCallerFactory; + this.executor = executor; + this.tasks = new QueueingFuture[maxTasks]; + } + + + public void submit(RetryingCallable task, int callTimeout, int id) { + QueueingFuture newFuture = new QueueingFuture(task, callTimeout); + executor.execute(Trace.wrap(newFuture)); + tasks[id] = newFuture; + } + + public QueueingFuture take() throws InterruptedException { + synchronized (tasks) { + while (completed == null) tasks.wait(); + } + return completed; + } + + public QueueingFuture poll(long timeout, TimeUnit unit) throws InterruptedException { + synchronized (tasks) { + if (completed == null) unit.timedWait(tasks, timeout); + } + return completed; + } + + public void cancelAll() { + for (QueueingFuture future : tasks) { + if (future != null) future.cancel(true); + } + } +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultScanner.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultScanner.java index 
8fd28b3..381505c 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultScanner.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultScanner.java @@ -18,12 +18,12 @@ */ package org.apache.hadoop.hbase.client; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; - import java.io.Closeable; import java.io.IOException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; + /** * Interface for client-side scanning. * Go to {@link Table} to obtain instances. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultStatsUtil.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultStatsUtil.java new file mode 100644 index 0000000..3caa63e --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ResultStatsUtil.java @@ -0,0 +1,76 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.client; + +import org.apache.hadoop.hbase.HRegionLocation; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; + +/** + * A {@link Result} with some statistics about the server/region status + */ +@InterfaceAudience.Private +public final class ResultStatsUtil { + + private ResultStatsUtil() { + //private ctor for util class + } + + /** + * Update the stats for the specified region if the result is an instance of {@link + * ResultStatsUtil} + * + * @param r object that contains the result and possibly the statistics about the region + * @param serverStats stats tracker to update from the result + * @param server server from which the result was obtained + * @param regionName full region name for the stats. 
+ * @return the underlying {@link Result} if the passed result is an {@link + * ResultStatsUtil} or just returns the result; + */ + public static T updateStats(T r, ServerStatisticTracker serverStats, + ServerName server, byte[] regionName) { + if (!(r instanceof Result)) { + return r; + } + Result result = (Result) r; + // early exit if there are no stats to collect + ClientProtos.RegionLoadStats stats = result.getStats(); + if(stats == null){ + return r; + } + + if (regionName != null) { + serverStats.updateRegionStats(server, regionName, stats); + } + + return r; + } + + public static T updateStats(T r, ServerStatisticTracker stats, + HRegionLocation regionLocation) { + byte[] regionName = null; + ServerName server = null; + if (regionLocation != null) { + server = regionLocation.getServerName(); + regionName = regionLocation.getRegionInfo().getRegionName(); + } + + return updateStats(r, stats, server, regionName); + } +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedException.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedException.java index 0d79921..3c4b39f 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedException.java @@ -14,13 +14,13 @@ */ package org.apache.hadoop.hbase.client; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; - import java.io.IOException; import java.util.Date; import java.util.List; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; + /** * Exception thrown by HTable methods when an attempt to do something (like * commit changes) fails after a bunch of retries. 
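As the RetriesExhaustedException class comment above notes, this exception only surfaces after the client has exhausted its configured retries. A minimal, hypothetical sketch of handling it around a single put (the table, family, and qualifier names are made up for illustration and are not taken from this patch):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedException;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class RetriesExhaustedSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("testtable"))) {
      Put put = new Put(Bytes.toBytes("row-0001"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      try {
        table.put(put);
      } catch (RetriesExhaustedException e) {
        // All configured retry attempts failed; the message carries the per-attempt history.
        System.err.println("Put gave up after retries: " + e.getMessage());
      }
    }
  }
}
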
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedWithDetailsException.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedWithDetailsException.java index 253ff8b..650b5a3d 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedWithDetailsException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedWithDetailsException.java @@ -19,11 +19,6 @@ package org.apache.hadoop.hbase.client; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.DoNotRetryIOException; -import org.apache.hadoop.hbase.util.Bytes; - import java.io.PrintWriter; import java.io.StringWriter; import java.util.Collection; @@ -33,6 +28,11 @@ import java.util.List; import java.util.Map; import java.util.Set; +import org.apache.hadoop.hbase.DoNotRetryIOException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.util.Bytes; + /** * This subclass of {@link org.apache.hadoop.hbase.client.RetriesExhaustedException} * is thrown when we have more information about which rows were causing which diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ReversedClientScanner.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ReversedClientScanner.java index a03858e..0f244e0 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ReversedClientScanner.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ReversedClientScanner.java @@ -24,10 +24,10 @@ import java.util.concurrent.ExecutorService; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.ipc.RpcControllerFactory; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.ExceptionUtil; @@ -57,7 +57,8 @@ public class ReversedClientScanner extends ClientScanner { TableName tableName, ClusterConnection connection, RpcRetryingCallerFactory rpcFactory, RpcControllerFactory controllerFactory, ExecutorService pool, int primaryOperationTimeout) throws IOException { - super(conf, scan, tableName, connection, rpcFactory, controllerFactory, pool, primaryOperationTimeout); + super(conf, scan, tableName, connection, rpcFactory, controllerFactory, pool, + primaryOperationTimeout); } @Override @@ -166,7 +167,7 @@ public class ReversedClientScanner extends ClientScanner { * @param row * @return a new byte array which is the closest front row of the specified one */ - protected byte[] createClosestRowBefore(byte[] row) { + protected static byte[] createClosestRowBefore(byte[] row) { if (row == null) { throw new IllegalArgumentException("The passed row is empty"); } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ReversedScannerCallable.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ReversedScannerCallable.java index f400e83..e7c1acb 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ReversedScannerCallable.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ReversedScannerCallable.java @@ -23,8 
+23,6 @@ import java.io.InterruptedIOException; import java.util.ArrayList; import java.util.List; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.DoNotRetryIOException; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionLocation; @@ -54,8 +52,8 @@ public class ReversedScannerCallable extends ScannerCallable { * @param scan * @param scanMetrics * @param locateStartRow The start row for locating regions - * @param rpcFactory to create an - * {@link com.google.protobuf.RpcController} to talk to the regionserver + * @param rpcFactory to create an {@link com.google.protobuf.RpcController} + * to talk to the regionserver */ public ReversedScannerCallable(ClusterConnection connection, TableName tableName, Scan scan, ScanMetrics scanMetrics, byte[] locateStartRow, RpcControllerFactory rpcFactory) { @@ -69,8 +67,8 @@ public class ReversedScannerCallable extends ScannerCallable { * @param scan * @param scanMetrics * @param locateStartRow The start row for locating regions - * @param rpcFactory to create an - * {@link com.google.protobuf.RpcController} to talk to the regionserver + * @param rpcFactory to create an {@link com.google.protobuf.RpcController} + * to talk to the regionserver * @param replicaId the replica id */ public ReversedScannerCallable(ClusterConnection connection, TableName tableName, Scan scan, @@ -81,7 +79,8 @@ public class ReversedScannerCallable extends ScannerCallable { /** * @deprecated use - * {@link #ReversedScannerCallable(ClusterConnection, TableName, Scan, ScanMetrics, byte[], RpcControllerFactory )} + * {@link #ReversedScannerCallable(ClusterConnection, TableName, Scan, + * ScanMetrics, byte[], RpcControllerFactory )} */ @Deprecated public ReversedScannerCallable(ClusterConnection connection, TableName tableName, @@ -168,7 +167,7 @@ public class ReversedScannerCallable extends ScannerCallable { } else { throw new DoNotRetryIOException("Does hbase:meta exist hole? Locating row " + Bytes.toStringBinary(currentKey) + " returns incorrect region " - + regionLocation.getRegionInfo()); + + (regionLocation == null ? null : regionLocation.getRegionInfo())); } currentKey = regionLocation.getRegionInfo().getEndKey(); } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW) diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowTooBigException.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowTooBigException.java index d83f14f..69b57b0 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowTooBigException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowTooBigException.java @@ -18,6 +18,7 @@ */ package org.apache.hadoop.hbase.client; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCaller.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCaller.java index 49c7efd..807c227 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCaller.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCaller.java @@ -1,5 +1,4 @@ /** - * * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. 
See the NOTICE file * distributed with this work for additional information @@ -16,93 +15,20 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.client; -import java.io.IOException; -import java.io.InterruptedIOException; -import java.lang.reflect.UndeclaredThrowableException; -import java.net.SocketTimeoutException; -import java.util.ArrayList; -import java.util.List; -import java.util.concurrent.atomic.AtomicBoolean; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.DoNotRetryIOException; -import org.apache.hadoop.hbase.exceptions.PreemptiveFastFailException; -import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; -import org.apache.hadoop.hbase.util.ExceptionUtil; -import org.apache.hadoop.ipc.RemoteException; +import org.apache.hadoop.hbase.classification.InterfaceStability; -import com.google.protobuf.ServiceException; +import java.io.IOException; /** - * Runs an rpc'ing {@link RetryingCallable}. Sets into rpc client - * threadlocal outstanding timeouts as so we don't persist too much. - * Dynamic rather than static so can set the generic appropriately. * - * This object has a state. It should not be used by in parallel by different threads. - * Reusing it is possible however, even between multiple threads. However, the user will - * have to manage the synchronization on its side: there is no synchronization inside the class. */ -@InterfaceAudience.Private -public class RpcRetryingCaller { - static final Log LOG = LogFactory.getLog(RpcRetryingCaller.class); - /** - * When we started making calls. - */ - private long globalStartTime; - /** - * Start and end times for a single call. - */ - private final static int MIN_RPC_TIMEOUT = 2000; - /** How many retries are allowed before we start to log */ - private final int startLogErrorsCnt; - - private final long pause; - private final int retries; - private final AtomicBoolean cancelled = new AtomicBoolean(false); - private final RetryingCallerInterceptor interceptor; - private final RetryingCallerInterceptorContext context; - - public RpcRetryingCaller(long pause, int retries, int startLogErrorsCnt) { - this(pause, retries, RetryingCallerInterceptorFactory.NO_OP_INTERCEPTOR, startLogErrorsCnt); - } - - public RpcRetryingCaller(long pause, int retries, - RetryingCallerInterceptor interceptor, int startLogErrorsCnt) { - this.pause = pause; - this.retries = retries; - this.interceptor = interceptor; - context = interceptor.createEmptyContext(); - this.startLogErrorsCnt = startLogErrorsCnt; - } - - private int getRemainingTime(int callTimeout) { - if (callTimeout <= 0) { - return 0; - } else { - if (callTimeout == Integer.MAX_VALUE) return Integer.MAX_VALUE; - int remainingTime = (int) (callTimeout - - (EnvironmentEdgeManager.currentTime() - this.globalStartTime)); - if (remainingTime < MIN_RPC_TIMEOUT) { - // If there is no time left, we're trying anyway. It's too late. - // 0 means no timeout, and it's not the intent here. So we secure both cases by - // resetting to the minimum. - remainingTime = MIN_RPC_TIMEOUT; - } - return remainingTime; - } - } - - public void cancel(){ - cancelled.set(true); - synchronized (cancelled){ - cancelled.notifyAll(); - } - } +@InterfaceAudience.Public +@InterfaceStability.Evolving +public interface RpcRetryingCaller { + void cancel(); /** * Retries if invocation fails. 
@@ -112,75 +38,8 @@ public class RpcRetryingCaller { * @throws IOException if a remote or network exception occurs * @throws RuntimeException other unspecified error */ - public T callWithRetries(RetryingCallable callable, int callTimeout) - throws IOException, RuntimeException { - List exceptions = - new ArrayList(); - this.globalStartTime = EnvironmentEdgeManager.currentTime(); - context.clear(); - for (int tries = 0;; tries++) { - long expectedSleep; - try { - callable.prepare(tries != 0); // if called with false, check table status on ZK - interceptor.intercept(context.prepare(callable, tries)); - return callable.call(getRemainingTime(callTimeout)); - } catch (PreemptiveFastFailException e) { - throw e; - } catch (Throwable t) { - ExceptionUtil.rethrowIfInterrupt(t); - if (tries > startLogErrorsCnt) { - LOG.info("Call exception, tries=" + tries + ", retries=" + retries + ", started=" + - (EnvironmentEdgeManager.currentTime() - this.globalStartTime) + " ms ago, " - + "cancelled=" + cancelled.get() + ", msg=" - + callable.getExceptionMessageAdditionalDetail()); - } - - // translateException throws exception when should not retry: i.e. when request is bad. - interceptor.handleFailure(context, t); - t = translateException(t); - callable.throwable(t, retries != 1); - RetriesExhaustedException.ThrowableWithExtraContext qt = - new RetriesExhaustedException.ThrowableWithExtraContext(t, - EnvironmentEdgeManager.currentTime(), toString()); - exceptions.add(qt); - if (tries >= retries - 1) { - throw new RetriesExhaustedException(tries, exceptions); - } - // If the server is dead, we need to wait a little before retrying, to give - // a chance to the regions to be - // tries hasn't been bumped up yet so we use "tries + 1" to get right pause time - expectedSleep = callable.sleep(pause, tries + 1); - - // If, after the planned sleep, there won't be enough time left, we stop now. - long duration = singleCallDuration(expectedSleep); - if (duration > callTimeout) { - String msg = "callTimeout=" + callTimeout + ", callDuration=" + duration + - ": " + callable.getExceptionMessageAdditionalDetail(); - throw (SocketTimeoutException)(new SocketTimeoutException(msg).initCause(t)); - } - } finally { - interceptor.updateFailureInfo(context); - } - try { - if (expectedSleep > 0) { - synchronized (cancelled) { - if (cancelled.get()) return null; - cancelled.wait(expectedSleep); - } - } - if (cancelled.get()) return null; - } catch (InterruptedException e) { - throw new InterruptedIOException("Interrupted after " + tries + " tries on " + retries); - } - } - } - - /** - * @return Calculate how long a single call took - */ - private long singleCallDuration(final long expectedSleep) { - return (EnvironmentEdgeManager.currentTime() - this.globalStartTime) + expectedSleep; - } + T callWithRetries(RetryingCallable callable, int callTimeout) + throws IOException, RuntimeException; /** * Call the server once only. @@ -191,62 +50,6 @@ public class RpcRetryingCaller { * @throws IOException if a remote or network exception occurs * @throws RuntimeException other unspecified error */ - public T callWithoutRetries(RetryingCallable callable, int callTimeout) - throws IOException, RuntimeException { - // The code of this method should be shared with withRetries. 
- this.globalStartTime = EnvironmentEdgeManager.currentTime(); - try { - callable.prepare(false); - return callable.call(callTimeout); - } catch (Throwable t) { - Throwable t2 = translateException(t); - ExceptionUtil.rethrowIfInterrupt(t2); - // It would be nice to clear the location cache here. - if (t2 instanceof IOException) { - throw (IOException)t2; - } else { - throw new RuntimeException(t2); - } - } - } - - /** - * Get the good or the remote exception if any, throws the DoNotRetryIOException. - * @param t the throwable to analyze - * @return the translated exception, if it's not a DoNotRetryIOException - * @throws DoNotRetryIOException - if we find it, we throw it instead of translating. - */ - static Throwable translateException(Throwable t) throws DoNotRetryIOException { - if (t instanceof UndeclaredThrowableException) { - if (t.getCause() != null) { - t = t.getCause(); - } - } - if (t instanceof RemoteException) { - t = ((RemoteException)t).unwrapRemoteException(); - } - if (t instanceof LinkageError) { - throw new DoNotRetryIOException(t); - } - if (t instanceof ServiceException) { - ServiceException se = (ServiceException)t; - Throwable cause = se.getCause(); - if (cause != null && cause instanceof DoNotRetryIOException) { - throw (DoNotRetryIOException)cause; - } - // Don't let ServiceException out; its rpc specific. - t = cause; - // t could be a RemoteException so go aaround again. - translateException(t); - } else if (t instanceof DoNotRetryIOException) { - throw (DoNotRetryIOException)t; - } - return t; - } - - @Override - public String toString() { - return "RpcRetryingCaller{" + "globalStartTime=" + globalStartTime + - ", pause=" + pause + ", retries=" + retries + '}'; - } + T callWithoutRetries(RetryingCallable callable, int callTimeout) + throws IOException, RuntimeException; } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerFactory.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerFactory.java index 9f05997..0af8210 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerFactory.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerFactory.java @@ -35,6 +35,8 @@ public class RpcRetryingCallerFactory { private final int retries; private final RetryingCallerInterceptor interceptor; private final int startLogErrorsCnt; + private final boolean enableBackPressure; + private ServerStatisticTracker stats; public RpcRetryingCallerFactory(Configuration conf) { this(conf, RetryingCallerInterceptorFactory.NO_OP_INTERCEPTOR); @@ -49,27 +51,56 @@ public class RpcRetryingCallerFactory { startLogErrorsCnt = conf.getInt(AsyncProcess.START_LOG_ERRORS_AFTER_COUNT_KEY, AsyncProcess.DEFAULT_START_LOG_ERRORS_AFTER_COUNT); this.interceptor = interceptor; + enableBackPressure = conf.getBoolean(HConstants.ENABLE_CLIENT_BACKPRESSURE, + HConstants.DEFAULT_ENABLE_CLIENT_BACKPRESSURE); + } + + /** + * Set the tracker that should be used for tracking statistics about the server + */ + public void setStatisticTracker(ServerStatisticTracker statisticTracker) { + this.stats = statisticTracker; } public RpcRetryingCaller newCaller() { // We store the values in the factory instance. This way, constructing new objects // is cheap as it does not require parsing a complex structure. 
- return new RpcRetryingCaller(pause, retries, interceptor, startLogErrorsCnt); + RpcRetryingCaller caller = new RpcRetryingCallerImpl(pause, retries, interceptor, + startLogErrorsCnt); + + // wrap it with stats, if we are tracking them + if (enableBackPressure && this.stats != null) { + caller = new StatsTrackingRpcRetryingCaller(caller, this.stats); + } + + return caller; } public static RpcRetryingCallerFactory instantiate(Configuration configuration) { - return instantiate(configuration, RetryingCallerInterceptorFactory.NO_OP_INTERCEPTOR); + return instantiate(configuration, RetryingCallerInterceptorFactory.NO_OP_INTERCEPTOR, null); } - + public static RpcRetryingCallerFactory instantiate(Configuration configuration, - RetryingCallerInterceptor interceptor) { + ServerStatisticTracker stats) { + return instantiate(configuration, RetryingCallerInterceptorFactory.NO_OP_INTERCEPTOR, stats); + } + + public static RpcRetryingCallerFactory instantiate(Configuration configuration, + RetryingCallerInterceptor interceptor, ServerStatisticTracker stats) { String clazzName = RpcRetryingCallerFactory.class.getName(); String rpcCallerFactoryClazz = configuration.get(RpcRetryingCallerFactory.CUSTOM_CALLER_CONF_KEY, clazzName); + RpcRetryingCallerFactory factory; if (rpcCallerFactoryClazz.equals(clazzName)) { - return new RpcRetryingCallerFactory(configuration, interceptor); + factory = new RpcRetryingCallerFactory(configuration, interceptor); + } else { + factory = ReflectionUtils.instantiateWithCustomCtor( + rpcCallerFactoryClazz, new Class[] { Configuration.class }, + new Object[] { configuration }); } - return ReflectionUtils.instantiateWithCustomCtor(rpcCallerFactoryClazz, - new Class[] { Configuration.class }, new Object[] { configuration }); + + // setting for backwards compat with existing caller factories, rather than in the ctor + factory.setStatisticTracker(stats); + return factory; } -} +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerImpl.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerImpl.java new file mode 100644 index 0000000..1d037bc --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerImpl.java @@ -0,0 +1,238 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.hbase.client; + +import java.io.IOException; +import java.io.InterruptedIOException; +import java.lang.reflect.UndeclaredThrowableException; +import java.net.SocketTimeoutException; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.atomic.AtomicBoolean; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.DoNotRetryIOException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.exceptions.PreemptiveFastFailException; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.hbase.util.ExceptionUtil; +import org.apache.hadoop.ipc.RemoteException; + +import com.google.protobuf.ServiceException; + +/** + * Runs an rpc'ing {@link RetryingCallable}. Sets into rpc client + * threadlocal outstanding timeouts as so we don't persist too much. + * Dynamic rather than static so can set the generic appropriately. + * + * This object has a state. It should not be used by in parallel by different threads. + * Reusing it is possible however, even between multiple threads. However, the user will + * have to manage the synchronization on its side: there is no synchronization inside the class. + */ +@InterfaceAudience.Private +public class RpcRetryingCallerImpl implements RpcRetryingCaller { + public static final Log LOG = LogFactory.getLog(RpcRetryingCallerImpl.class); + /** + * When we started making calls. + */ + private long globalStartTime; + /** + * Start and end times for a single call. + */ + private final static int MIN_RPC_TIMEOUT = 2000; + /** How many retries are allowed before we start to log */ + private final int startLogErrorsCnt; + + private final long pause; + private final int retries; + private final AtomicBoolean cancelled = new AtomicBoolean(false); + private final RetryingCallerInterceptor interceptor; + private final RetryingCallerInterceptorContext context; + + public RpcRetryingCallerImpl(long pause, int retries, int startLogErrorsCnt) { + this(pause, retries, RetryingCallerInterceptorFactory.NO_OP_INTERCEPTOR, startLogErrorsCnt); + } + + public RpcRetryingCallerImpl(long pause, int retries, + RetryingCallerInterceptor interceptor, int startLogErrorsCnt) { + this.pause = pause; + this.retries = retries; + this.interceptor = interceptor; + context = interceptor.createEmptyContext(); + this.startLogErrorsCnt = startLogErrorsCnt; + } + + private int getRemainingTime(int callTimeout) { + if (callTimeout <= 0) { + return 0; + } else { + if (callTimeout == Integer.MAX_VALUE) return Integer.MAX_VALUE; + int remainingTime = (int) (callTimeout - + (EnvironmentEdgeManager.currentTime() - this.globalStartTime)); + if (remainingTime < MIN_RPC_TIMEOUT) { + // If there is no time left, we're trying anyway. It's too late. + // 0 means no timeout, and it's not the intent here. So we secure both cases by + // resetting to the minimum. 
+ remainingTime = MIN_RPC_TIMEOUT; + } + return remainingTime; + } + } + + @Override + public void cancel(){ + cancelled.set(true); + synchronized (cancelled){ + cancelled.notifyAll(); + } + } + + @Override + public T callWithRetries(RetryingCallable callable, int callTimeout) + throws IOException, RuntimeException { + List exceptions = + new ArrayList(); + this.globalStartTime = EnvironmentEdgeManager.currentTime(); + context.clear(); + for (int tries = 0;; tries++) { + long expectedSleep; + try { + callable.prepare(tries != 0); // if called with false, check table status on ZK + interceptor.intercept(context.prepare(callable, tries)); + return callable.call(getRemainingTime(callTimeout)); + } catch (PreemptiveFastFailException e) { + throw e; + } catch (Throwable t) { + ExceptionUtil.rethrowIfInterrupt(t); + if (tries > startLogErrorsCnt) { + LOG.info("Call exception, tries=" + tries + ", retries=" + retries + ", started=" + + (EnvironmentEdgeManager.currentTime() - this.globalStartTime) + " ms ago, " + + "cancelled=" + cancelled.get() + ", msg=" + + callable.getExceptionMessageAdditionalDetail()); + } + + // translateException throws exception when should not retry: i.e. when request is bad. + interceptor.handleFailure(context, t); + t = translateException(t); + callable.throwable(t, retries != 1); + RetriesExhaustedException.ThrowableWithExtraContext qt = + new RetriesExhaustedException.ThrowableWithExtraContext(t, + EnvironmentEdgeManager.currentTime(), toString()); + exceptions.add(qt); + if (tries >= retries - 1) { + throw new RetriesExhaustedException(tries, exceptions); + } + // If the server is dead, we need to wait a little before retrying, to give + // a chance to the regions to be + // tries hasn't been bumped up yet so we use "tries + 1" to get right pause time + expectedSleep = callable.sleep(pause, tries + 1); + + // If, after the planned sleep, there won't be enough time left, we stop now. + long duration = singleCallDuration(expectedSleep); + if (duration > callTimeout) { + String msg = "callTimeout=" + callTimeout + ", callDuration=" + duration + + ": " + callable.getExceptionMessageAdditionalDetail(); + throw (SocketTimeoutException)(new SocketTimeoutException(msg).initCause(t)); + } + } finally { + interceptor.updateFailureInfo(context); + } + try { + if (expectedSleep > 0) { + synchronized (cancelled) { + if (cancelled.get()) return null; + cancelled.wait(expectedSleep); + } + } + if (cancelled.get()) return null; + } catch (InterruptedException e) { + throw new InterruptedIOException("Interrupted after " + tries + " tries on " + retries); + } + } + } + + /** + * @return Calculate how long a single call took + */ + private long singleCallDuration(final long expectedSleep) { + return (EnvironmentEdgeManager.currentTime() - this.globalStartTime) + expectedSleep; + } + + @Override + public T callWithoutRetries(RetryingCallable callable, int callTimeout) + throws IOException, RuntimeException { + // The code of this method should be shared with withRetries. + this.globalStartTime = EnvironmentEdgeManager.currentTime(); + try { + callable.prepare(false); + return callable.call(callTimeout); + } catch (Throwable t) { + Throwable t2 = translateException(t); + ExceptionUtil.rethrowIfInterrupt(t2); + // It would be nice to clear the location cache here. + if (t2 instanceof IOException) { + throw (IOException)t2; + } else { + throw new RuntimeException(t2); + } + } + } + + /** + * Get the good or the remote exception if any, throws the DoNotRetryIOException. 
+ * @param t the throwable to analyze + * @return the translated exception, if it's not a DoNotRetryIOException + * @throws DoNotRetryIOException - if we find it, we throw it instead of translating. + */ + static Throwable translateException(Throwable t) throws DoNotRetryIOException { + if (t instanceof UndeclaredThrowableException) { + if (t.getCause() != null) { + t = t.getCause(); + } + } + if (t instanceof RemoteException) { + t = ((RemoteException)t).unwrapRemoteException(); + } + if (t instanceof LinkageError) { + throw new DoNotRetryIOException(t); + } + if (t instanceof ServiceException) { + ServiceException se = (ServiceException)t; + Throwable cause = se.getCause(); + if (cause != null && cause instanceof DoNotRetryIOException) { + throw (DoNotRetryIOException)cause; + } + // Don't let ServiceException out; its rpc specific. + t = cause; + // t could be a RemoteException so go aaround again. + translateException(t); + } else if (t instanceof DoNotRetryIOException) { + throw (DoNotRetryIOException)t; + } + return t; + } + + @Override + public String toString() { + return "RpcRetryingCaller{" + "globalStartTime=" + globalStartTime + + ", pause=" + pause + ", retries=" + retries + '}'; + } +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java index 8d937aa..273a1e1 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/RpcRetryingCallerWithReadReplicas.java @@ -21,7 +21,16 @@ package org.apache.hadoop.hbase.client; -import com.google.protobuf.ServiceException; +import java.io.IOException; +import java.io.InterruptedIOException; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.CancellationException; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Future; +import java.util.concurrent.TimeUnit; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; @@ -41,20 +50,6 @@ import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import com.google.protobuf.ServiceException; -import org.htrace.Trace; - -import java.io.IOException; -import java.io.InterruptedIOException; -import java.util.Collections; -import java.util.List; -import java.util.concurrent.CancellationException; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.Executor; -import java.util.concurrent.ExecutorService; -import java.util.concurrent.Future; -import java.util.concurrent.RunnableFuture; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.TimeoutException; /** * Caller that goes to replica if the primary region does no answer within a configurable @@ -100,7 +95,7 @@ public class RpcRetryingCallerWithReadReplicas { * - we need to stop retrying when the call is completed * - we can be interrupted */ - class ReplicaRegionServerCallable extends RegionServerCallable { + class ReplicaRegionServerCallable extends RegionServerCallable implements Cancellable { final int id; private final PayloadCarryingRpcController controller; @@ -113,7 +108,8 @@ public class RpcRetryingCallerWithReadReplicas { controller.setPriority(tableName); } - public void startCancel() { + @Override + public void cancel() { controller.startCancel(); } @@ -170,6 
+166,11 @@ public class RpcRetryingCallerWithReadReplicas { throw ProtobufUtil.getRemoteException(se); } } + + @Override + public boolean isCancelled() { + return controller.isCanceled(); + } } /** @@ -195,7 +196,8 @@ public class RpcRetryingCallerWithReadReplicas { RegionLocations rl = getRegionLocations(true, (isTargetReplicaSpecified ? get.getReplicaId() : RegionReplicaUtil.DEFAULT_REPLICA_ID), cConnection, tableName, get.getRow()); - ResultBoundedCompletionService cs = new ResultBoundedCompletionService(pool, rl.size()); + ResultBoundedCompletionService cs = + new ResultBoundedCompletionService(this.rpcRetryingCallerFactory, pool, rl.size()); if(isTargetReplicaSpecified) { addCallsForReplica(cs, rl, get.getReplicaId(), get.getReplicaId()); @@ -274,12 +276,12 @@ public class RpcRetryingCallerWithReadReplicas { * @param min - the id of the first replica, inclusive * @param max - the id of the last replica, inclusive. */ - private void addCallsForReplica(ResultBoundedCompletionService cs, + private void addCallsForReplica(ResultBoundedCompletionService cs, RegionLocations rl, int min, int max) { for (int id = min; id <= max; id++) { HRegionLocation hrl = rl.getRegionLocation(id); ReplicaRegionServerCallable callOnReplica = new ReplicaRegionServerCallable(id, hrl); - cs.submit(callOnReplica, callTimeout); + cs.submit(callOnReplica, callTimeout, id); } } @@ -309,137 +311,4 @@ public class RpcRetryingCallerWithReadReplicas { return rl; } - - - /** - * A completion service for the RpcRetryingCallerFactory. - * Keeps the list of the futures, and allows to cancel them all. - * This means as well that it can be used for a small set of tasks only. - *
    Implementation is not Thread safe. - */ - public class ResultBoundedCompletionService { - private final Executor executor; - private final QueueingFuture[] tasks; // all the tasks - private volatile QueueingFuture completed = null; - - class QueueingFuture implements RunnableFuture { - private final ReplicaRegionServerCallable future; - private Result result = null; - private ExecutionException exeEx = null; - private volatile boolean canceled; - private final int callTimeout; - private final RpcRetryingCaller retryingCaller; - - - public QueueingFuture(ReplicaRegionServerCallable future, int callTimeout) { - this.future = future; - this.callTimeout = callTimeout; - this.retryingCaller = rpcRetryingCallerFactory.newCaller(); - } - - @Override - public void run() { - try { - if (!canceled) { - result = - rpcRetryingCallerFactory.newCaller().callWithRetries(future, callTimeout); - } - } catch (Throwable t) { - exeEx = new ExecutionException(t); - } finally { - if (!canceled && completed == null) { - completed = QueueingFuture.this; - synchronized (tasks) { - tasks.notify(); - } - } - } - } - - @Override - public boolean cancel(boolean mayInterruptIfRunning) { - if (result != null || exeEx != null) return false; - retryingCaller.cancel(); - future.startCancel(); - canceled = true; - return true; - } - - @Override - public boolean isCancelled() { - return canceled; - } - - @Override - public boolean isDone() { - return result != null || exeEx != null; - } - - @Override - public Result get() throws InterruptedException, ExecutionException { - try { - return get(1000, TimeUnit.DAYS); - } catch (TimeoutException e) { - throw new RuntimeException("You did wait for 1000 days here?", e); - } - } - - @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="RCN_REDUNDANT_NULLCHECK_OF_NULL_VALUE", - justification="Is this an issue?") - @Override - public Result get(long timeout, TimeUnit unit) - throws InterruptedException, ExecutionException, TimeoutException { - synchronized (tasks) { - if (result != null) { - return result; - } - if (exeEx != null) { - throw exeEx; - } - unit.timedWait(tasks, timeout); - } - // Findbugs says this null check is redundant. Will result be set across the wait above? 
- if (result != null) { - return result; - } - if (exeEx != null) { - throw exeEx; - } - - throw new TimeoutException("timeout=" + timeout + ", " + unit); - } - } - - public ResultBoundedCompletionService(Executor executor, int maxTasks) { - this.executor = executor; - this.tasks = new QueueingFuture[maxTasks]; - } - - - public void submit(ReplicaRegionServerCallable task, int callTimeout) { - QueueingFuture newFuture = new QueueingFuture(task, callTimeout); - executor.execute(Trace.wrap(newFuture)); - tasks[task.id] = newFuture; - } - - public QueueingFuture take() throws InterruptedException { - synchronized (tasks) { - while (completed == null) tasks.wait(); - } - return completed; - } - - public QueueingFuture poll(long timeout, TimeUnit unit) throws InterruptedException { - synchronized (tasks) { - if (completed == null) unit.timedWait(tasks, timeout); - } - return completed; - } - - public void cancelAll() { - for (QueueingFuture future : tasks) { - if (future != null) future.cancel(true); - } - } - } } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java index b06e398..d2dd770 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java @@ -31,9 +31,9 @@ import java.util.TreeSet; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.filter.Filter; import org.apache.hadoop.hbase.filter.IncompatibleFilterException; import org.apache.hadoop.hbase.io.TimeRange; @@ -338,8 +338,10 @@ public class Scan extends Query { /** * Set the start row of the scan. - * @param startRow row to start scan on (inclusive) - * Note: In order to make startRow exclusive add a trailing 0 byte + *

+ * If the specified row does not exist, the Scanner will start from the + * next closest row after the specified row. + * @param startRow row to start scanner at or after + * @return this */ public Scan setStartRow(byte [] startRow) { @@ -348,9 +350,11 @@ public class Scan extends Query { } /** - * Set the stop row. + * Set the stop row of the scan. * @param stopRow row to end at (exclusive) - * Note: In order to make stopRow inclusive add a trailing 0 byte + * + * The scan will include rows that are lexicographically less than + * the provided stopRow. * Note: When doing a filter for a rowKey Prefix * use {@link #setRowPrefixFilter(byte[])}. * The 'trailing 0' will not yield the desired result.
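A minimal sketch (not part of the patch) of the start/stop-row semantics described in the javadoc change above; the table name, row keys, connection bootstrap, and the sketch class name are assumptions made only for illustration:

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.util.Bytes;

  // Hypothetical sketch class; not part of the patch.
  public class ScanBoundsSketch {
    public static void main(String[] args) throws IOException {
      Configuration conf = HBaseConfiguration.create();
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Table table = conn.getTable(TableName.valueOf("t1"))) {   // "t1" is a placeholder table
        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("row-010")); // starts at row-010, or the next closest row if absent
        scan.setStopRow(Bytes.toBytes("row-020"));  // exclusive: only rows lexicographically < "row-020"
        try (ResultScanner rs = table.getScanner(scan)) {
          for (Result r : rs) {
            System.out.println(Bytes.toStringBinary(r.getRow()));
          }
        }
        // For a row-key prefix scan, prefer setRowPrefixFilter over appending a trailing 0 byte.
        Scan prefixScan = new Scan();
        prefixScan.setRowPrefixFilter(Bytes.toBytes("row-01"));
        try (ResultScanner rs = table.getScanner(prefixScan)) {
          for (Result r : rs) {
            System.out.println(Bytes.toStringBinary(r.getRow()));
          }
        }
      }
    }
  }

The stop row stays exclusive after this change, so prefix scans should go through setRowPrefixFilter rather than a manually appended trailing 0 byte.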

    @@ -912,4 +916,4 @@ public class Scan extends Query { scan.setCaching(1); return scan; } -} +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java index 5ecc363..22f98a3 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallable.java @@ -24,7 +24,6 @@ import java.net.UnknownHostException; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScanner; @@ -35,10 +34,10 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.NotServingRegionException; import org.apache.hadoop.hbase.RegionLocations; -import org.apache.hadoop.hbase.RemoteExceptionHandler; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.UnknownScannerException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.metrics.ScanMetrics; import org.apache.hadoop.hbase.ipc.PayloadCarryingRpcController; import org.apache.hadoop.hbase.ipc.RpcControllerFactory; @@ -89,13 +88,14 @@ public class ScannerCallable extends RegionServerCallable { protected boolean isRegionServerRemote = true; private long nextCallSeq = 0; protected RpcControllerFactory controllerFactory; + protected PayloadCarryingRpcController controller; /** * @param connection which connection * @param tableName table callable is on * @param scan the scan to execute - * @param scanMetrics the ScanMetrics to used, if it is null, - * ScannerCallable won't collect metrics + * @param scanMetrics the ScanMetrics to used, if it is null, ScannerCallable won't collect + * metrics * @param rpcControllerFactory factory to use when creating * {@link com.google.protobuf.RpcController} */ @@ -124,6 +124,10 @@ public class ScannerCallable extends RegionServerCallable { this.controllerFactory = rpcControllerFactory; } + PayloadCarryingRpcController getController() { + return controller; + } + /** * @param reload force reload of server location * @throws IOException @@ -192,7 +196,7 @@ public class ScannerCallable extends RegionServerCallable { incRPCcallsMetrics(); request = RequestConverter.buildScanRequest(scannerId, caching, false, nextCallSeq); ScanResponse response = null; - PayloadCarryingRpcController controller = controllerFactory.newController(); + controller = controllerFactory.newController(); controller.setPriority(getTableName()); controller.setCallTimeout(callTimeout); try { @@ -236,7 +240,7 @@ public class ScannerCallable extends RegionServerCallable { } IOException ioe = e; if (e instanceof RemoteException) { - ioe = RemoteExceptionHandler.decodeRemoteException((RemoteException)e); + ioe = ((RemoteException) e).unwrapRemoteException(); } if (logScannerActivity && (ioe instanceof UnknownScannerException)) { try { diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java index 440cddf..92293f2 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java 
+++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java @@ -34,13 +34,18 @@ import java.util.concurrent.atomic.AtomicBoolean; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.util.BoundedCompletionService; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; + +import com.google.common.annotations.VisibleForTesting; + +import static org.apache.hadoop.hbase.client.ReversedClientScanner.createClosestRowBefore; + /** * This class has the logic for handling scanners for regions with and without replicas. * 1. A scan is attempted on the default (primary) region @@ -69,8 +74,9 @@ class ScannerCallableWithReplicas implements RetryingCallable { private Configuration conf; private int scannerTimeout; private Set outstandingCallables = new HashSet(); + private boolean someRPCcancelled = false; //required for testing purposes only - public ScannerCallableWithReplicas (TableName tableName, ClusterConnection cConnection, + public ScannerCallableWithReplicas(TableName tableName, ClusterConnection cConnection, ScannerCallable baseCallable, ExecutorService pool, int timeBeforeReplicas, Scan scan, int retries, int scannerTimeout, int caching, Configuration conf, RpcRetryingCaller caller) { @@ -134,8 +140,10 @@ class ScannerCallableWithReplicas implements RetryingCallable { // allocate a boundedcompletion pool of some multiple of number of replicas. // We want to accomodate some RPCs for redundant replica scans (but are still in progress) - BoundedCompletionService> cs = - new BoundedCompletionService>(pool, rl.size() * 5); + ResultBoundedCompletionService> cs = + new ResultBoundedCompletionService>( + new RpcRetryingCallerFactory(ScannerCallableWithReplicas.this.conf), pool, + rl.size() * 5); List exceptions = null; int submitted = 0, completed = 0; @@ -192,7 +200,7 @@ class ScannerCallableWithReplicas implements RetryingCallable { } finally { // We get there because we were interrupted or because one or more of the // calls succeeded or failed. In all case, we stop all our tasks. - cs.cancelAll(true); + cs.cancelAll(); } if (exceptions != null && !exceptions.isEmpty()) { @@ -226,8 +234,14 @@ class ScannerCallableWithReplicas implements RetryingCallable { // want to wait for the "close" to happen yet. 
The "wait" will happen when // the table is closed (when the awaitTermination of the underlying pool is called) s.setClose(); - RetryingRPC r = new RetryingRPC(s); - pool.submit(r); + final RetryingRPC r = new RetryingRPC(s); + pool.submit(new Callable(){ + @Override + public Void call() throws Exception { + r.call(scannerTimeout); + return null; + } + }); } // now clear outstandingCallables since we scheduled a close for all the contained scanners outstandingCallables.clear(); @@ -244,16 +258,16 @@ class ScannerCallableWithReplicas implements RetryingCallable { } private int addCallsForCurrentReplica( - BoundedCompletionService> cs, RegionLocations rl) { + ResultBoundedCompletionService> cs, RegionLocations rl) { RetryingRPC retryingOnReplica = new RetryingRPC(currentScannerCallable); outstandingCallables.add(currentScannerCallable); - cs.submit(retryingOnReplica); + cs.submit(retryingOnReplica, scannerTimeout, currentScannerCallable.id); return 1; } private int addCallsForOtherReplicas( - BoundedCompletionService> cs, RegionLocations rl, int min, - int max) { + ResultBoundedCompletionService> cs, RegionLocations rl, + int min, int max) { if (scan.getConsistency() == Consistency.STRONG) { return 0; // not scheduling on other replicas for strong consistency } @@ -262,37 +276,95 @@ class ScannerCallableWithReplicas implements RetryingCallable { continue; //this was already scheduled earlier } ScannerCallable s = currentScannerCallable.getScannerCallableForReplica(id); + if (this.lastResult != null) { - s.getScan().setStartRow(this.lastResult.getRow()); + if(s.getScan().isReversed()){ + s.getScan().setStartRow(createClosestRowBefore(this.lastResult.getRow())); + }else { + s.getScan().setStartRow(Bytes.add(this.lastResult.getRow(), new byte[1])); + } } outstandingCallables.add(s); RetryingRPC retryingOnReplica = new RetryingRPC(s); - cs.submit(retryingOnReplica); + cs.submit(retryingOnReplica, scannerTimeout, id); } return max - min + 1; } - class RetryingRPC implements Callable> { + @VisibleForTesting + boolean isAnyRPCcancelled() { + return someRPCcancelled; + } + + class RetryingRPC implements RetryingCallable>, Cancellable { final ScannerCallable callable; + RpcRetryingCaller caller; + private volatile boolean cancelled = false; RetryingRPC(ScannerCallable callable) { this.callable = callable; - } - - @Override - public Pair call() throws IOException { // For the Consistency.STRONG (default case), we reuse the caller // to keep compatibility with what is done in the past // For the Consistency.TIMELINE case, we can't reuse the caller // since we could be making parallel RPCs (caller.callWithRetries is synchronized // and we can't invoke it multiple times at the same time) - RpcRetryingCaller caller = ScannerCallableWithReplicas.this.caller; + this.caller = ScannerCallableWithReplicas.this.caller; if (scan.getConsistency() == Consistency.TIMELINE) { - caller = new RpcRetryingCallerFactory(ScannerCallableWithReplicas.this.conf). + this.caller = new RpcRetryingCallerFactory(ScannerCallableWithReplicas.this.conf). 
newCaller(); } - Result[] res = caller.callWithRetries(callable, scannerTimeout); - return new Pair(res, callable); + } + + @Override + public Pair call(int callTimeout) throws IOException { + // since the retries is done within the ResultBoundedCompletionService, + // we don't invoke callWithRetries here + if (cancelled) { + return null; + } + Result[] res = this.caller.callWithoutRetries(this.callable, callTimeout); + return new Pair(res, this.callable); + } + + @Override + public void prepare(boolean reload) throws IOException { + if (cancelled) return; + + if (Thread.interrupted()) { + throw new InterruptedIOException(); + } + + callable.prepare(reload); + } + + @Override + public void throwable(Throwable t, boolean retrying) { + callable.throwable(t, retrying); + } + + @Override + public String getExceptionMessageAdditionalDetail() { + return callable.getExceptionMessageAdditionalDetail(); + } + + @Override + public long sleep(long pause, int tries) { + return callable.sleep(pause, tries); + } + + @Override + public void cancel() { + cancelled = true; + caller.cancel(); + if (callable.getController() != null) { + callable.getController().startCancel(); + } + someRPCcancelled = true; + } + + @Override + public boolean isCancelled() { + return cancelled; } } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerTimeoutException.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerTimeoutException.java index 63ec370..9e0827c 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerTimeoutException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerTimeoutException.java @@ -19,9 +19,9 @@ package org.apache.hadoop.hbase.client; +import org.apache.hadoop.hbase.DoNotRetryIOException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.DoNotRetryIOException; /** * Thrown when a scanner has timed out. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ServerStatisticTracker.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ServerStatisticTracker.java new file mode 100644 index 0000000..0c7b683 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ServerStatisticTracker.java @@ -0,0 +1,74 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.client; + +import com.google.common.annotations.VisibleForTesting; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.client.backoff.ServerStatistics; +import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; + +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +/** + * Tracks the statistics for multiple regions + */ +@InterfaceAudience.Private +public class ServerStatisticTracker { + + private final Map stats = + new ConcurrentHashMap(); + + public void updateRegionStats(ServerName server, byte[] region, ClientProtos.RegionLoadStats + currentStats) { + ServerStatistics stat = stats.get(server); + + if (stat == null) { + // create a stats object and update the stats + synchronized (this) { + stat = stats.get(server); + // we don't have stats for that server yet, so we need to make some + if (stat == null) { + stat = new ServerStatistics(); + stats.put(server, stat); + } + } + } + stat.update(region, currentStats); + } + + public ServerStatistics getStats(ServerName server) { + return this.stats.get(server); + } + + public static ServerStatisticTracker create(Configuration conf) { + if (!conf.getBoolean(HConstants.ENABLE_CLIENT_BACKPRESSURE, + HConstants.DEFAULT_ENABLE_CLIENT_BACKPRESSURE)) { + return null; + } + return new ServerStatisticTracker(); + } + + @VisibleForTesting + ServerStatistics getServerStatsForTesting(ServerName server) { + return stats.get(server); + } +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/StatsTrackingRpcRetryingCaller.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/StatsTrackingRpcRetryingCaller.java new file mode 100644 index 0000000..e82f1e8 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/StatsTrackingRpcRetryingCaller.java @@ -0,0 +1,77 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.client; + +import org.apache.hadoop.hbase.HRegionLocation; +import org.apache.hadoop.hbase.classification.InterfaceAudience; + +import java.io.IOException; + +/** + * An {@link RpcRetryingCaller} that will update the per-region stats for the call on return, + * if stats are available + */ +@InterfaceAudience.Private +public class StatsTrackingRpcRetryingCaller implements RpcRetryingCaller { + private final ServerStatisticTracker stats; + private final RpcRetryingCaller delegate; + + public StatsTrackingRpcRetryingCaller(RpcRetryingCaller delegate, + ServerStatisticTracker stats) { + this.delegate = delegate; + this.stats = stats; + } + + @Override + public void cancel() { + delegate.cancel(); + } + + @Override + public T callWithRetries(RetryingCallable callable, int callTimeout) + throws IOException, RuntimeException { + T result = delegate.callWithRetries(callable, callTimeout); + return updateStatsAndUnwrap(result, callable); + } + + @Override + public T callWithoutRetries(RetryingCallable callable, int callTimeout) + throws IOException, RuntimeException { + T result = delegate.callWithRetries(callable, callTimeout); + return updateStatsAndUnwrap(result, callable); + } + + private T updateStatsAndUnwrap(T result, RetryingCallable callable) { + // don't track stats about requests that aren't to regionservers + if (!(callable instanceof RegionServerCallable)) { + return result; + } + + // mutli-server callables span multiple regions, so they don't have a location, + // but they are region server callables, so we have to handle them when we process the + // result in AsyncProcess#receiveMultiAction, not in here + if (callable instanceof MultiServerCallable) { + return result; + } + + // update the stats for the single server callable + RegionServerCallable regionCallable = (RegionServerCallable) callable; + HRegionLocation location = regionCallable.getLocation(); + return ResultStatsUtil.updateStats(result, stats, location); + } +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java index 07e4c08..a408b1d 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java @@ -23,20 +23,20 @@ import java.io.IOException; import java.util.List; import java.util.Map; -import com.google.protobuf.Descriptors; -import com.google.protobuf.Message; -import com.google.protobuf.Service; -import com.google.protobuf.ServiceException; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.client.coprocessor.Batch; import org.apache.hadoop.hbase.filter.CompareFilter; import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel; +import com.google.protobuf.Descriptors; +import com.google.protobuf.Message; +import com.google.protobuf.Service; +import com.google.protobuf.ServiceException; + /** * Used to communicate with a single HBase table. * Obtain an instance from a {@link Connection} and call {@link #close()} afterwards. 
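For context on the statistics plumbing introduced above (ServerStatisticTracker, StatsTrackingRpcRetryingCaller, and the new RpcRetryingCallerFactory hooks), a sketch of the expected wiring; this is illustrative only, uses internal client classes, the sketch class name is hypothetical, and the generic form of newCaller() is assumed where the flattened diff does not show it:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HConstants;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.RpcRetryingCaller;
  import org.apache.hadoop.hbase.client.RpcRetryingCallerFactory;
  import org.apache.hadoop.hbase.client.ServerStatisticTracker;

  // Hypothetical sketch class; not part of the patch.
  public class BackpressureWiringSketch {
    public static void main(String[] args) {
      Configuration conf = HBaseConfiguration.create();
      // New flag read by RpcRetryingCallerFactory above; back-pressure tracking is off by default.
      conf.setBoolean(HConstants.ENABLE_CLIENT_BACKPRESSURE, true);
      // create() returns null when back-pressure is disabled, so no wrapping happens in that case.
      ServerStatisticTracker stats = ServerStatisticTracker.create(conf);
      RpcRetryingCallerFactory factory = RpcRetryingCallerFactory.instantiate(conf, stats);
      // With the flag on and a non-null tracker, the factory wraps RpcRetryingCallerImpl in
      // StatsTrackingRpcRetryingCaller, which records per-region load stats after each call.
      // The generic parameter here is assumed; the flattened diff shows the raw type.
      RpcRetryingCaller<Result> caller = factory.newCaller();
      System.out.println("caller class: " + caller.getClass().getName());
    }
  }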
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java new file mode 100644 index 0000000..be9b80c --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableState.java @@ -0,0 +1,203 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.client; + +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; + +/** + * Represents table state. + */ +@InterfaceAudience.Private +public class TableState { + + @InterfaceAudience.Public + @InterfaceStability.Evolving + public static enum State { + ENABLED, + DISABLED, + DISABLING, + ENABLING; + + /** + * Covert from PB version of State + * + * @param state convert from + * @return POJO + */ + public static State convert(HBaseProtos.TableState.State state) { + State ret; + switch (state) { + case ENABLED: + ret = State.ENABLED; + break; + case DISABLED: + ret = State.DISABLED; + break; + case DISABLING: + ret = State.DISABLING; + break; + case ENABLING: + ret = State.ENABLING; + break; + default: + throw new IllegalStateException(state.toString()); + } + return ret; + } + + /** + * Covert to PB version of State + * + * @return PB + */ + public HBaseProtos.TableState.State convert() { + HBaseProtos.TableState.State state; + switch (this) { + case ENABLED: + state = HBaseProtos.TableState.State.ENABLED; + break; + case DISABLED: + state = HBaseProtos.TableState.State.DISABLED; + break; + case DISABLING: + state = HBaseProtos.TableState.State.DISABLING; + break; + case ENABLING: + state = HBaseProtos.TableState.State.ENABLING; + break; + default: + throw new IllegalStateException(this.toString()); + } + return state; + } + + } + + private final long timestamp; + private final TableName tableName; + private final State state; + + /** + * Create instance of TableState. 
+ * @param state table state + */ + public TableState(TableName tableName, State state, long timestamp) { + this.tableName = tableName; + this.state = state; + this.timestamp = timestamp; + } + + /** + * Create instance of TableState with current timestamp + * + * @param tableName table for which state is created + * @param state state of the table + */ + public TableState(TableName tableName, State state) { + this(tableName, state, System.currentTimeMillis()); + } + + /** + * @return table state + */ + public State getState() { + return state; + } + + /** + * Timestamp of table state + * + * @return milliseconds + */ + public long getTimestamp() { + return timestamp; + } + + /** + * Table name for state + * + * @return table name + */ + public TableName getTableName() { + return tableName; + } + + /** + * Check that table is in given state + * @param state state + * @return true if satisfies + */ + public boolean inStates(State state) { + return this.state.equals(state); + } + + /** + * Check that table is in one of the given states + * @param states state list + * @return true if satisfies + */ + public boolean inStates(State... states) { + for (State s : states) { + if (s.equals(this.state)) + return true; + } + return false; + } + + + /** + * Convert to PB version of TableState + * @return PB + */ + public HBaseProtos.TableState convert() { + return HBaseProtos.TableState.newBuilder() + .setState(this.state.convert()) + .setTable(ProtobufUtil.toProtoTableName(this.tableName)) + .setTimestamp(this.timestamp) + .build(); + } + + /** + * Convert from PB version of TableState + * @param tableState convert from + * @return POJO + */ + public static TableState convert(HBaseProtos.TableState tableState) { + TableState.State state = State.convert(tableState.getState()); + return new TableState(ProtobufUtil.toTableName(tableState.getTable()), + state, tableState.getTimestamp()); + } + + /** + * Static version of state checker + * @param state desired + * @param target equals to any of + * @return true if satisfies + */ + public static boolean isInStates(State state, State...
target) { + for (State tableState : target) { + if (state.equals(tableState)) + return true; + } + return false; + } +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHRegionInfo.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHRegionInfo.java index 7fe2a73..33aef79 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHRegionInfo.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHRegionInfo.java @@ -19,9 +19,9 @@ package org.apache.hadoop.hbase.client; +import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.HRegionInfo; @InterfaceAudience.Public @InterfaceStability.Evolving diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHTableDescriptor.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHTableDescriptor.java index c5a93e1..55a81d6 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHTableDescriptor.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/UnmodifyableHTableDescriptor.java @@ -19,10 +19,10 @@ package org.apache.hadoop.hbase.client; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; /** * Read-only table descriptor. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/WrongRowIOException.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/WrongRowIOException.java index 09c1d64..e0609da 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/WrongRowIOException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/WrongRowIOException.java @@ -17,9 +17,9 @@ */ package org.apache.hadoop.hbase.client; +import org.apache.hadoop.hbase.HBaseIOException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.HBaseIOException; @InterfaceAudience.Public @InterfaceStability.Evolving diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperKeepAliveConnection.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperKeepAliveConnection.java index 9b987b5..04fd20f 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperKeepAliveConnection.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperKeepAliveConnection.java @@ -20,11 +20,11 @@ package org.apache.hadoop.hbase.client; +import java.io.IOException; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import java.io.IOException; - /** * We inherit the current ZooKeeperWatcher implementation to change the semantic * of the close: the new close won't immediately close the connection but diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperRegistry.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperRegistry.java index c43b4e2..11a095e 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperRegistry.java +++ 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ZooKeeperRegistry.java @@ -18,7 +18,6 @@ package org.apache.hadoop.hbase.client; import java.io.IOException; -import java.io.InterruptedIOException; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; @@ -26,10 +25,8 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; import org.apache.hadoop.hbase.zookeeper.ZKClusterId; -import org.apache.hadoop.hbase.zookeeper.ZKTableStateClientSideReader; import org.apache.hadoop.hbase.zookeeper.ZKUtil; import org.apache.zookeeper.KeeperException; @@ -98,24 +95,6 @@ class ZooKeeperRegistry implements Registry { } @Override - public boolean isTableOnlineState(TableName tableName, boolean enabled) - throws IOException { - ZooKeeperKeepAliveConnection zkw = hci.getKeepAliveZooKeeperWatcher(); - try { - if (enabled) { - return ZKTableStateClientSideReader.isEnabledTable(zkw, tableName); - } - return ZKTableStateClientSideReader.isDisabledTable(zkw, tableName); - } catch (KeeperException e) { - throw new IOException("Enable/Disable failed", e); - } catch (InterruptedException e) { - throw new InterruptedIOException(); - } finally { - zkw.close(); - } - } - - @Override public int getCurrentNrHRS() throws IOException { ZooKeeperKeepAliveConnection zkw = hci.getKeepAliveZooKeeperWatcher(); try { diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ClientBackoffPolicy.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ClientBackoffPolicy.java new file mode 100644 index 0000000..94e434f --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ClientBackoffPolicy.java @@ -0,0 +1,42 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.client.backoff; + +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; + +/** + * Configurable policy for the amount of time a client should wait for a new request to the + * server when given the server load statistics. + *
 + * Must have a single-argument constructor that takes a {@link org.apache.hadoop.conf.Configuration} + *
    + */ +@InterfaceAudience.Public +@InterfaceStability.Unstable +public interface ClientBackoffPolicy { + + public static final String BACKOFF_POLICY_CLASS = + "hbase.client.statistics.backoff-policy"; + + /** + * @return the number of ms to wait on the client based on the + */ + public long getBackoffTime(ServerName serverName, byte[] region, ServerStatistics stats); +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ClientBackoffPolicyFactory.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ClientBackoffPolicyFactory.java new file mode 100644 index 0000000..879a0e2 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ClientBackoffPolicyFactory.java @@ -0,0 +1,59 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.client.backoff; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.util.ReflectionUtils; + +@InterfaceAudience.Private +@InterfaceStability.Evolving +public final class ClientBackoffPolicyFactory { + + private static final Log LOG = LogFactory.getLog(ClientBackoffPolicyFactory.class); + + private ClientBackoffPolicyFactory() { + } + + public static ClientBackoffPolicy create(Configuration conf) { + // create the backoff policy + String className = + conf.get(ClientBackoffPolicy.BACKOFF_POLICY_CLASS, NoBackoffPolicy.class + .getName()); + return ReflectionUtils.instantiateWithCustomCtor(className, + new Class[] { Configuration.class }, new Object[] { conf }); + } + + /** + * Default backoff policy that doesn't create any backoff for the client, regardless of load + */ + public static class NoBackoffPolicy implements ClientBackoffPolicy { + public NoBackoffPolicy(Configuration conf){ + // necessary to meet contract + } + + @Override + public long getBackoffTime(ServerName serverName, byte[] region, ServerStatistics stats) { + return 0; + } + } +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ExponentialClientBackoffPolicy.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ExponentialClientBackoffPolicy.java new file mode 100644 index 0000000..6e75670 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ExponentialClientBackoffPolicy.java @@ -0,0 +1,71 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
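For reference, the policy is pluggable: ClientBackoffPolicyFactory#create reflectively instantiates whatever class the hbase.client.statistics.backoff-policy key names, provided it has the single-argument Configuration constructor noted above. A minimal sketch of a custom policy follows (the class name and the example.backoff.step.ms key are hypothetical, not part of this patch):

package org.example.backoff; // hypothetical package

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.backoff.ClientBackoffPolicy;
import org.apache.hadoop.hbase.client.backoff.ServerStatistics;

/** Backs off one fixed step per 10% of reported memstore fill; illustrative only. */
public class FixedStepBackoffPolicy implements ClientBackoffPolicy {
  private final long stepMillis;

  // Single-argument Configuration constructor required by ClientBackoffPolicyFactory.
  public FixedStepBackoffPolicy(Configuration conf) {
    this.stepMillis = conf.getLong("example.backoff.step.ms", 100); // hypothetical key
  }

  @Override
  public long getBackoffTime(ServerName serverName, byte[] region, ServerStatistics stats) {
    ServerStatistics.RegionStatistics regionStats =
        stats == null ? null : stats.getStatsForRegion(region);
    if (regionStats == null) {
      return 0; // no statistics for this server/region yet, so do not back off
    }
    return (regionStats.getMemstoreLoadPercent() / 10) * stepMillis;
  }
}

Such a class would be selected with conf.set(ClientBackoffPolicy.BACKOFF_POLICY_CLASS, FixedStepBackoffPolicy.class.getName()); the default remains the NoBackoffPolicy shown in the factory above.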
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.client.backoff; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; + +/** + * Simple exponential backoff policy on for the client that uses a percent^4 times the + * max backoff to generate the backoff time. + */ +@InterfaceAudience.Public +@InterfaceStability.Unstable +public class ExponentialClientBackoffPolicy implements ClientBackoffPolicy { + + private static final Log LOG = LogFactory.getLog(ExponentialClientBackoffPolicy.class); + + private static final long ONE_MINUTE = 60 * 1000; + public static final long DEFAULT_MAX_BACKOFF = 5 * ONE_MINUTE; + public static final String MAX_BACKOFF_KEY = "hbase.client.exponential-backoff.max"; + private long maxBackoff; + + public ExponentialClientBackoffPolicy(Configuration conf) { + this.maxBackoff = conf.getLong(MAX_BACKOFF_KEY, DEFAULT_MAX_BACKOFF); + } + + @Override + public long getBackoffTime(ServerName serverName, byte[] region, ServerStatistics stats) { + // no stats for the server yet, so don't backoff + if (stats == null) { + return 0; + } + + ServerStatistics.RegionStatistics regionStats = stats.getStatsForRegion(region); + // no stats for the region yet - don't backoff + if (regionStats == null) { + return 0; + } + + // square the percent as a value less than 1. Closer we move to 100 percent, + // the percent moves to 1, but squaring causes the exponential curve + double percent = regionStats.getMemstoreLoadPercent() / 100.0; + double multiplier = Math.pow(percent, 4.0); + // shouldn't ever happen, but just incase something changes in the statistic data + if (multiplier > 1) { + LOG.warn("Somehow got a backoff multiplier greater than the allowed backoff. Forcing back " + + "down to the max backoff"); + multiplier = 1; + } + return (long) (multiplier * maxBackoff); + } +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ServerStatistics.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ServerStatistics.java new file mode 100644 index 0000000..a3b8e11 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/backoff/ServerStatistics.java @@ -0,0 +1,68 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
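As a worked example of the curve implemented above, with the default hbase.client.exponential-backoff.max of five minutes (300,000 ms): a region reporting 50% memstore load backs the client off by 0.5^4 * 300,000 ≈ 18,750 ms, 80% gives 0.8^4 * 300,000 ≈ 122,880 ms, and 100% gives the full 300,000 ms, while a server or region with no statistics yet backs off 0 ms.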
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.client.backoff; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; +import org.apache.hadoop.hbase.util.Bytes; + +import java.util.Map; +import java.util.TreeMap; + +/** + * Track the statistics for a single region + */ +@InterfaceAudience.Private +public class ServerStatistics { + + private Map + stats = new TreeMap(Bytes.BYTES_COMPARATOR); + + /** + * Good enough attempt. Last writer wins. It doesn't really matter which one gets to update, + * as something gets set + * @param region + * @param currentStats + */ + public void update(byte[] region, ClientProtos.RegionLoadStats currentStats) { + RegionStatistics regionStat = this.stats.get(region); + if(regionStat == null){ + regionStat = new RegionStatistics(); + this.stats.put(region, regionStat); + } + + regionStat.update(currentStats); + } + + @InterfaceAudience.Private + public RegionStatistics getStatsForRegion(byte[] regionName){ + return stats.get(regionName); + } + + public static class RegionStatistics{ + private int memstoreLoad = 0; + + public void update(ClientProtos.RegionLoadStats currentStats) { + this.memstoreLoad = currentStats.getMemstoreLoad(); + } + + public int getMemstoreLoadPercent(){ + return this.memstoreLoad; + } + } +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/AggregationClient.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/AggregationClient.java index 1e378e7..5421e57 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/AggregationClient.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/AggregationClient.java @@ -32,7 +32,6 @@ import java.util.concurrent.atomic.AtomicLong; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HConstants; @@ -135,7 +134,7 @@ public class AggregationClient implements Closeable { * The caller is supposed to handle the exception as they are thrown * & propagated to it. 
*/ - public + public R max(final Table table, final ColumnInterpreter ci, final Scan scan) throws Throwable { final AggregateRequest requestArg = validateArgAndGetPB(scan, ci, false); @@ -157,7 +156,7 @@ public class AggregationClient implements Closeable { @Override public R call(AggregateService instance) throws IOException { ServerRpcController controller = new ServerRpcController(); - BlockingRpcCallback rpcCallback = + BlockingRpcCallback rpcCallback = new BlockingRpcCallback(); instance.getMax(controller, requestArg, rpcCallback); AggregateResponse response = rpcCallback.get(); @@ -181,12 +180,11 @@ public class AggregationClient implements Closeable { */ private void validateParameters(Scan scan, boolean canFamilyBeAbsent) throws IOException { if (scan == null - || (Bytes.equals(scan.getStartRow(), scan.getStopRow()) && !Bytes - .equals(scan.getStartRow(), HConstants.EMPTY_START_ROW)) - || ((Bytes.compareTo(scan.getStartRow(), scan.getStopRow()) > 0) && - !Bytes.equals(scan.getStopRow(), HConstants.EMPTY_END_ROW))) { - throw new IOException( - "Agg client Exception: Startrow should be smaller than Stoprow"); + || (Bytes.equals(scan.getStartRow(), scan.getStopRow()) && !Bytes.equals( + scan.getStartRow(), HConstants.EMPTY_START_ROW)) + || ((Bytes.compareTo(scan.getStartRow(), scan.getStopRow()) > 0) && !Bytes.equals( + scan.getStopRow(), HConstants.EMPTY_END_ROW))) { + throw new IOException("Agg client Exception: Startrow should be smaller than Stoprow"); } else if (!canFamilyBeAbsent) { if (scan.getFamilyMap().size() != 1) { throw new IOException("There must be only one family."); @@ -222,7 +220,7 @@ public class AggregationClient implements Closeable { * @return min val * @throws Throwable */ - public + public R min(final Table table, final ColumnInterpreter ci, final Scan scan) throws Throwable { final AggregateRequest requestArg = validateArgAndGetPB(scan, ci, false); @@ -246,7 +244,7 @@ public class AggregationClient implements Closeable { @Override public R call(AggregateService instance) throws IOException { ServerRpcController controller = new ServerRpcController(); - BlockingRpcCallback rpcCallback = + BlockingRpcCallback rpcCallback = new BlockingRpcCallback(); instance.getMin(controller, requestArg, rpcCallback); AggregateResponse response = rpcCallback.get(); @@ -299,7 +297,7 @@ public class AggregationClient implements Closeable { * @return * @throws Throwable */ - public + public long rowCount(final Table table, final ColumnInterpreter ci, final Scan scan) throws Throwable { final AggregateRequest requestArg = validateArgAndGetPB(scan, ci, true); @@ -321,7 +319,7 @@ public class AggregationClient implements Closeable { @Override public Long call(AggregateService instance) throws IOException { ServerRpcController controller = new ServerRpcController(); - BlockingRpcCallback rpcCallback = + BlockingRpcCallback rpcCallback = new BlockingRpcCallback(); instance.getRowNum(controller, requestArg, rpcCallback); AggregateResponse response = rpcCallback.get(); @@ -363,11 +361,11 @@ public class AggregationClient implements Closeable { * @return sum * @throws Throwable */ - public + public S sum(final Table table, final ColumnInterpreter ci, final Scan scan) throws Throwable { final AggregateRequest requestArg = validateArgAndGetPB(scan, ci, false); - + class SumCallBack implements Batch.Callback { S sumVal = null; @@ -386,7 +384,7 @@ public class AggregationClient implements Closeable { @Override public S call(AggregateService instance) throws IOException { ServerRpcController 
controller = new ServerRpcController(); - BlockingRpcCallback rpcCallback = + BlockingRpcCallback rpcCallback = new BlockingRpcCallback(); instance.getSum(controller, requestArg, rpcCallback); AggregateResponse response = rpcCallback.get(); @@ -453,7 +451,7 @@ public class AggregationClient implements Closeable { @Override public Pair call(AggregateService instance) throws IOException { ServerRpcController controller = new ServerRpcController(); - BlockingRpcCallback rpcCallback = + BlockingRpcCallback rpcCallback = new BlockingRpcCallback(); instance.getAvg(controller, requestArg, rpcCallback); AggregateResponse response = rpcCallback.get(); @@ -557,7 +555,7 @@ public class AggregationClient implements Closeable { @Override public Pair, Long> call(AggregateService instance) throws IOException { ServerRpcController controller = new ServerRpcController(); - BlockingRpcCallback rpcCallback = + BlockingRpcCallback rpcCallback = new BlockingRpcCallback(); instance.getStd(controller, requestArg, rpcCallback); AggregateResponse response = rpcCallback.get(); @@ -630,7 +628,7 @@ public class AggregationClient implements Closeable { } /** - * It helps locate the region with median for a given column whose weight + * It helps locate the region with median for a given column whose weight * is specified in an optional column. * From individual regions, it obtains sum of values and sum of weights. * @param table @@ -673,7 +671,7 @@ public class AggregationClient implements Closeable { @Override public List call(AggregateService instance) throws IOException { ServerRpcController controller = new ServerRpcController(); - BlockingRpcCallback rpcCallback = + BlockingRpcCallback rpcCallback = new BlockingRpcCallback(); instance.getMedian(controller, requestArg, rpcCallback); AggregateResponse response = rpcCallback.get(); @@ -740,7 +738,7 @@ public class AggregationClient implements Closeable { weighted = true; halfSumVal = ci.divideForAvg(sumWeights, 2L); } - + for (Map.Entry> entry : map.entrySet()) { S s = weighted ? 
entry.getValue().get(1) : entry.getValue().get(0); double newSumVal = movingSumVal + ci.divideForAvg(s, 1L); @@ -772,7 +770,7 @@ public class AggregationClient implements Closeable { for (int i = 0; i < results.length; i++) { Result r = results[i]; // retrieve weight - Cell kv = r.getColumnLatest(colFamily, weightQualifier); + Cell kv = r.getColumnLatestCell(colFamily, weightQualifier); R newValue = ci.getValue(colFamily, weightQualifier, kv); S s = ci.castToReturnType(newValue); double newSumVal = movingSumVal + ci.divideForAvg(s, 1L); @@ -781,7 +779,7 @@ public class AggregationClient implements Closeable { return value; } movingSumVal = newSumVal; - kv = r.getColumnLatest(colFamily, qualifier); + kv = r.getColumnLatestCell(colFamily, qualifier); value = ci.getValue(colFamily, qualifier, kv); } } @@ -794,15 +792,15 @@ public class AggregationClient implements Closeable { return null; } - AggregateRequest + AggregateRequest validateArgAndGetPB(Scan scan, ColumnInterpreter ci, boolean canFamilyBeAbsent) throws IOException { validateParameters(scan, canFamilyBeAbsent); - final AggregateRequest.Builder requestBuilder = + final AggregateRequest.Builder requestBuilder = AggregateRequest.newBuilder(); requestBuilder.setInterpreterClassName(ci.getClass().getCanonicalName()); P columnInterpreterSpecificData = null; - if ((columnInterpreterSpecificData = ci.getRequestData()) + if ((columnInterpreterSpecificData = ci.getRequestData()) != null) { requestBuilder.setInterpreterSpecificBytes(columnInterpreterSpecificData.toByteString()); } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/Batch.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/Batch.java index 55343ac..f8a0e1c 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/Batch.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/Batch.java @@ -19,11 +19,11 @@ package org.apache.hadoop.hbase.client.coprocessor; +import java.io.IOException; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import java.io.IOException; - /** * A collection of interfaces and utilities used for interacting with custom RPC * interfaces exposed by Coprocessors. 
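Since the aggregation entry points above now take a Table, a call site looks roughly like the following sketch (table, family and qualifier names are illustrative, values read by LongColumnInterpreter must be 8-byte longs, the AggregateImplementation endpoint coprocessor must be deployed on the table, and AggregationClient's Configuration constructor is assumed from the existing class rather than shown in this hunk):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class AggregationExample {
  // The aggregation methods declare "throws Throwable", so it is passed straight on here.
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("myLittleHBaseTable"));
         AggregationClient aggregationClient = new AggregationClient(conf)) {
      Scan scan = new Scan();
      scan.addColumn(Bytes.toBytes("myLittleFamily"), Bytes.toBytes("someQualifier"));
      long rows = aggregationClient.rowCount(table, new LongColumnInterpreter(), scan);
      Long max = aggregationClient.max(table, new LongColumnInterpreter(), scan);
      System.out.println("rows=" + rows + ", max=" + max);
    }
  }
}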
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/BigDecimalColumnInterpreter.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/BigDecimalColumnInterpreter.java index d693f0c..97724bd 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/BigDecimalColumnInterpreter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/BigDecimalColumnInterpreter.java @@ -22,13 +22,13 @@ import java.io.IOException; import java.math.BigDecimal; import java.math.RoundingMode; -import org.apache.hadoop.hbase.util.ByteStringer; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.coprocessor.ColumnInterpreter; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.BigDecimalMsg; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.EmptyMsg; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/DoubleColumnInterpreter.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/DoubleColumnInterpreter.java index 6db94d2..8b0c690 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/DoubleColumnInterpreter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/DoubleColumnInterpreter.java @@ -20,12 +20,12 @@ package org.apache.hadoop.hbase.client.coprocessor; import java.io.IOException; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.coprocessor.ColumnInterpreter; -import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.EmptyMsg; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.DoubleMsg; +import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.EmptyMsg; import org.apache.hadoop.hbase.util.Bytes; /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/LongColumnInterpreter.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/LongColumnInterpreter.java index e63fd3b..e8e5e3a 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/LongColumnInterpreter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/LongColumnInterpreter.java @@ -20,8 +20,8 @@ package org.apache.hadoop.hbase.client.coprocessor; import java.io.IOException; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.coprocessor.ColumnInterpreter; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.EmptyMsg; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.LongMsg; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/SecureBulkLoadClient.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/SecureBulkLoadClient.java index e2e87c2..c27322a 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/SecureBulkLoadClient.java +++ 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/coprocessor/SecureBulkLoadClient.java @@ -18,16 +18,15 @@ package org.apache.hadoop.hbase.client.coprocessor; -import org.apache.hadoop.hbase.client.Table; -import org.apache.hadoop.hbase.util.ByteStringer; import java.io.IOException; import java.util.ArrayList; import java.util.List; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.ipc.BlockingRpcCallback; import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel; import org.apache.hadoop.hbase.ipc.ServerRpcController; @@ -35,13 +34,10 @@ import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; import org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos; import org.apache.hadoop.hbase.security.SecureBulkLoadUtil; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.security.token.Token; -import java.io.IOException; -import java.util.ArrayList; -import java.util.List; - /** * Client proxy for SecureBulkLoadProtocol * used in conjunction with SecureBulkLoadEndpoint @@ -77,7 +73,7 @@ public class SecureBulkLoadClient { if (controller.failedOnException()) { throw controller.getFailedOn(); } - + return response.getBulkToken(); } catch (Throwable throwable) { throw new IOException(throwable); diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/package-info.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/package-info.java index 10261cd..ecf4595 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/package-info.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/package-info.java @@ -82,9 +82,9 @@ import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.util.Bytes; - // Class that has nothing but a main. // Does a Put, Get and a Scan against an hbase table. +// The API described here is since HBase 1.0. public class MyLittleHBaseClient { public static void main(String[] args) throws IOException { // You need a configuration object to tell the client where to connect. @@ -94,15 +94,24 @@ public class MyLittleHBaseClient { Configuration config = HBaseConfiguration.create(); // Next you need a Connection to the cluster. Create one. When done with it, - // close it (Should start a try/finally after this creation so it gets closed - // for sure but leaving this out for readibility's sake). + // close it. A try/finally is a good way to ensure it gets closed or use + // the jdk7 idiom, try-with-resources: see + // https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html + // + // Connections are heavyweight. Create one once and keep it around. From a Connection + // you get a Table instance to access Tables, an Admin instance to administer the cluster, + // and RegionLocator to find where regions are out on the cluster. As opposed to Connections, + // Table, Admin and RegionLocator instances are lightweight; create as you need them and then + // close when done. 
+ // Connection connection = ConnectionFactory.createConnection(config); try { - // This instantiates a Table object that connects you to - // the "myLittleHBaseTable" table (TableName.valueOf turns String into TableName instance). + // The below instantiates a Table object that connects you to the "myLittleHBaseTable" table + // (TableName.valueOf turns String into a TableName instance). // When done with it, close it (Should start a try/finally after this creation so it gets - // closed for sure but leaving this out for readibility's sake). + // closed for sure the jdk7 idiom, try-with-resources: see + // https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) Table table = connection.getTable(TableName.valueOf("myLittleHBaseTable")); try { @@ -112,7 +121,7 @@ public class MyLittleHBaseClient { // below, we are converting the String "myLittleRow" into a byte array to // use as a row key for our update. Once you have a Put instance, you can // adorn it by setting the names of columns you want to update on the row, - // the timestamp to use in your update, etc.If no timestamp, the server + // the timestamp to use in your update, etc. If no timestamp, the server // applies current time to the edits. Put p = new Put(Bytes.toBytes("myLittleRow")); @@ -138,6 +147,7 @@ public class MyLittleHBaseClient { Result r = table.get(g); byte [] value = r.getValue(Bytes.toBytes("myLittleFamily"), Bytes.toBytes("someQualifier")); + // If we convert the value bytes, we should get back 'Some Value', the // value we inserted at this location. String valueStr = Bytes.toString(value); diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java index 74b413b..3db8c1c 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java @@ -32,8 +32,6 @@ import java.util.Set; import org.apache.commons.lang.StringUtils; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Abortable; import org.apache.hadoop.hbase.HColumnDescriptor; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java index 8a0cb9f..43efb66 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java @@ -21,8 +21,8 @@ package org.apache.hadoop.hbase.coprocessor; import java.io.IOException; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import com.google.protobuf.Message; @@ -30,9 +30,9 @@ import com.google.protobuf.Message; * Defines how value for specific column is interpreted and provides utility * methods like compare, add, multiply etc for them. Takes column family, column * qualifier and return the cell value. Its concrete implementation should - * handle null case gracefully. 
Refer to - * {@link org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter} for an - * example. + * handle null case gracefully. + * Refer to {@link org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter} + * for an example. *
    * Takes two generic parameters and three Message parameters. * The cell value type of the interpreter is . @@ -128,8 +128,8 @@ Q extends Message, R extends Message> { * server side to construct the ColumnInterpreter. The server * will pass this to the {@link #initialize} * method. If there is no ColumnInterpreter specific data (for e.g., - * {@link org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter}) - * then null should be returned. + * {@link org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter}) + * then null should be returned. * @return the PB message */ public abstract P getRequestData(); diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorException.java hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorException.java index 330b9d5..9946d97 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorException.java @@ -18,9 +18,9 @@ */ package org.apache.hadoop.hbase.coprocessor; +import org.apache.hadoop.hbase.DoNotRetryIOException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.DoNotRetryIOException; /** * Thrown if a coprocessor encounters any exception. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/DeserializationException.java hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/DeserializationException.java deleted file mode 100644 index 0ce0219..0000000 --- hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/DeserializationException.java +++ /dev/null @@ -1,43 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.exceptions; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; - -/** - * Failed deserialization. 
- */ -@InterfaceAudience.Private -@SuppressWarnings("serial") -public class DeserializationException extends HBaseException { - public DeserializationException() { - super(); - } - - public DeserializationException(final String message) { - super(message); - } - - public DeserializationException(final String message, final Throwable t) { - super(message, t); - } - - public DeserializationException(final Throwable t) { - super(t); - } -} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/HBaseException.java hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/HBaseException.java deleted file mode 100644 index fe0d7d7..0000000 --- hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/HBaseException.java +++ /dev/null @@ -1,44 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.exceptions; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; - -/** - * Base checked exception in HBase. - * @see HBASE-5796 - */ -@SuppressWarnings("serial") -@InterfaceAudience.Private -public class HBaseException extends Exception { - public HBaseException() { - super(); - } - - public HBaseException(final String message) { - super(message); - } - - public HBaseException(final String message, final Throwable t) { - super(message, t); - } - - public HBaseException(final Throwable t) { - super(t); - } -} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/OperationConflictException.java hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/OperationConflictException.java index ca5eeb0..c40b8d9 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/OperationConflictException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/OperationConflictException.java @@ -18,9 +18,9 @@ */ package org.apache.hadoop.hbase.exceptions; +import org.apache.hadoop.hbase.DoNotRetryIOException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.DoNotRetryIOException; /** * The exception that is thrown if there's duplicate execution of non-idempotent operation. 
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/OutOfOrderScannerNextException.java hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/OutOfOrderScannerNextException.java index 5a49be1..6357bd6 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/OutOfOrderScannerNextException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/OutOfOrderScannerNextException.java @@ -17,8 +17,8 @@ */ package org.apache.hadoop.hbase.exceptions; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.DoNotRetryIOException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * Thrown by a RegionServer while doing next() calls on a ResultScanner. Both client and server diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionInRecoveryException.java hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionInRecoveryException.java index 78b7c42..06db472 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionInRecoveryException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionInRecoveryException.java @@ -18,9 +18,9 @@ */ package org.apache.hadoop.hbase.exceptions; +import org.apache.hadoop.hbase.NotServingRegionException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.NotServingRegionException; /** * Thrown when a read request issued against a region which is in recovering state. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionMovedException.java hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionMovedException.java index c7bd3f0..c76be0f 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionMovedException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionMovedException.java @@ -19,11 +19,11 @@ package org.apache.hadoop.hbase.exceptions; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.NotServingRegionException; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; /** * Subclass if the server knows the region is now on another server. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionOpeningException.java hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionOpeningException.java index d0bf5c8..8833372 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionOpeningException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/exceptions/RegionOpeningException.java @@ -20,9 +20,9 @@ package org.apache.hadoop.hbase.exceptions; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.NotServingRegionException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.NotServingRegionException; /** * Subclass if the server knows the region is now on another server. 
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java index d5c2613..0fc624f 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryComparator.java @@ -19,13 +19,14 @@ package org.apache.hadoop.hbase.filter; -import com.google.protobuf.InvalidProtocolBufferException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.ComparatorProtos; import org.apache.hadoop.hbase.util.Bytes; +import com.google.protobuf.InvalidProtocolBufferException; + /** * A binary comparator which lexicographically compares against the specified * byte array using {@link org.apache.hadoop.hbase.util.Bytes#compareTo(byte[], byte[])}. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryPrefixComparator.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryPrefixComparator.java index c05eb8f..b6311b0 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryPrefixComparator.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BinaryPrefixComparator.java @@ -19,13 +19,14 @@ package org.apache.hadoop.hbase.filter; -import com.google.protobuf.InvalidProtocolBufferException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.ComparatorProtos; import org.apache.hadoop.hbase.util.Bytes; +import com.google.protobuf.InvalidProtocolBufferException; + /** * A comparator which compares against a specified byte array, but only compares * up to the length of this byte array. For the rest it is similar to diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BitComparator.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BitComparator.java index 0b7c52d..d527a2c 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BitComparator.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/BitComparator.java @@ -19,12 +19,13 @@ package org.apache.hadoop.hbase.filter; -import com.google.protobuf.InvalidProtocolBufferException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.ComparatorProtos; +import com.google.protobuf.InvalidProtocolBufferException; + /** * A bit comparator which performs the specified bitwise operation on each of the bytes * with the specified byte array. Then returns whether the result is non-zero. 
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ByteArrayComparable.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ByteArrayComparable.java index c6e42d4..4642ab9 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ByteArrayComparable.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ByteArrayComparable.java @@ -18,11 +18,11 @@ */ package org.apache.hadoop.hbase.filter; -import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.ComparatorProtos; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java index 18f49f6..572de9f 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnCountGetFilter.java @@ -21,9 +21,9 @@ package org.apache.hadoop.hbase.filter; import java.util.ArrayList; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; @@ -62,13 +62,6 @@ public class ColumnCountGetFilter extends FilterBase { return filterAllRemaining() ? ReturnCode.NEXT_COL : ReturnCode.INCLUDE_AND_NEXT_COL; } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. - // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - @Override public void reset() { this.count = 0; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java index 14ccacb..673ca6e 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPaginationFilter.java @@ -20,13 +20,13 @@ package org.apache.hadoop.hbase.filter; import java.util.ArrayList; -import org.apache.hadoop.hbase.util.ByteStringer; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import com.google.common.base.Preconditions; @@ -143,18 +143,11 @@ public class ColumnPaginationFilter extends FilterBase } } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. 
- // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - @Override - public Cell getNextCellHint(Cell kv) { + public Cell getNextCellHint(Cell cell) { return KeyValueUtil.createFirstOnRow( - kv.getRowArray(), kv.getRowOffset(), kv.getRowLength(), kv.getFamilyArray(), - kv.getFamilyOffset(), kv.getFamilyLength(), columnOffset, 0, columnOffset.length); + cell.getRowArray(), cell.getRowOffset(), cell.getRowLength(), cell.getFamilyArray(), + cell.getFamilyOffset(), cell.getFamilyLength(), columnOffset, 0, columnOffset.length); } @Override diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java index 6a9e6e9..d2f058a 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnPrefixFilter.java @@ -21,13 +21,13 @@ package org.apache.hadoop.hbase.filter; import java.util.ArrayList; -import org.apache.hadoop.hbase.util.ByteStringer; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import com.google.common.base.Preconditions; @@ -60,13 +60,6 @@ public class ColumnPrefixFilter extends FilterBase { } } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. 
- // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - public ReturnCode filterColumn(byte[] buffer, int qualifierOffset, int qualifierLength) { if (qualifierLength < prefix.length) { int cmp = Bytes.compareTo(buffer, qualifierOffset, qualifierLength, this.prefix, 0, @@ -137,10 +130,10 @@ public class ColumnPrefixFilter extends FilterBase { } @Override - public Cell getNextCellHint(Cell kv) { + public Cell getNextCellHint(Cell cell) { return KeyValueUtil.createFirstOnRow( - kv.getRowArray(), kv.getRowOffset(), kv.getRowLength(), kv.getFamilyArray(), - kv.getFamilyOffset(), kv.getFamilyLength(), prefix, 0, prefix.length); + cell.getRowArray(), cell.getRowOffset(), cell.getRowLength(), cell.getFamilyArray(), + cell.getFamilyOffset(), cell.getFamilyLength(), prefix, 0, prefix.length); } @Override diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java index fb627fd..9963af6 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ColumnRangeFilter.java @@ -21,19 +21,19 @@ package org.apache.hadoop.hbase.filter; import static org.apache.hadoop.hbase.util.Bytes.len; -import com.google.common.base.Preconditions; -import org.apache.hadoop.hbase.util.ByteStringer; -import com.google.protobuf.InvalidProtocolBufferException; +import java.util.ArrayList; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; -import java.util.ArrayList; +import com.google.common.base.Preconditions; +import com.google.protobuf.InvalidProtocolBufferException; /** * This filter is used for selecting only those keys with columns that are @@ -151,13 +151,6 @@ public class ColumnRangeFilter extends FilterBase { return ReturnCode.NEXT_ROW; } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. 
- // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - public static Filter createFilterFromArguments(ArrayList filterArguments) { Preconditions.checkArgument(filterArguments.size() == 4, "Expected 4 but got: %s", filterArguments.size()); @@ -223,9 +216,9 @@ public class ColumnRangeFilter extends FilterBase { } @Override - public Cell getNextCellHint(Cell kv) { - return KeyValueUtil.createFirstOnRow(kv.getRowArray(), kv.getRowOffset(), kv - .getRowLength(), kv.getFamilyArray(), kv.getFamilyOffset(), kv + public Cell getNextCellHint(Cell cell) { + return KeyValueUtil.createFirstOnRow(cell.getRowArray(), cell.getRowOffset(), cell + .getRowLength(), cell.getFamilyArray(), cell.getFamilyOffset(), cell .getFamilyLength(), this.minColumn, 0, len(this.minColumn)); } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java index 9987e23..319e123 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/CompareFilter.java @@ -19,9 +19,9 @@ package org.apache.hadoop.hbase.filter; -import com.google.common.base.Preconditions; +import java.util.ArrayList; + import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; @@ -29,7 +29,7 @@ import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.CompareType; import org.apache.hadoop.hbase.util.Bytes; -import java.util.ArrayList; +import com.google.common.base.Preconditions; /** * This is a generic filter to be used to filter by comparison. It takes an * operator (equal, greater, not equal, etc) and a byte [] comparator. @@ -123,13 +123,6 @@ public abstract class CompareFilter extends FilterBase { } } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. 
- // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - // returns an array of heterogeneous objects public static ArrayList extractArguments(ArrayList filterArguments) { Preconditions.checkArgument(filterArguments.size() == 2, diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java index 2843751..6d19842 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/DependentColumnFilter.java @@ -25,14 +25,14 @@ import java.util.Iterator; import java.util.List; import java.util.Set; -import org.apache.hadoop.hbase.util.ByteStringer; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import com.google.common.base.Preconditions; @@ -69,7 +69,7 @@ public class DependentColumnFilter extends CompareFilter { */ public DependentColumnFilter(final byte [] family, final byte[] qualifier, final boolean dropDependentColumn, final CompareOp valueCompareOp, - final ByteArrayComparable valueComparator) { + final ByteArrayComparable valueComparator) { // set up the comparator super(valueCompareOp, valueComparator); this.columnFamily = family; @@ -137,15 +137,15 @@ public class DependentColumnFilter extends CompareFilter { public ReturnCode filterKeyValue(Cell c) { // Check if the column and qualifier match if (!CellUtil.matchingColumn(c, this.columnFamily, this.columnQualifier)) { - // include non-matches for the time being, they'll be discarded afterwards - return ReturnCode.INCLUDE; + // include non-matches for the time being, they'll be discarded afterwards + return ReturnCode.INCLUDE; } // If it doesn't pass the op, skip it if (comparator != null && doCompare(compareOp, comparator, c.getValueArray(), c.getValueOffset(), c.getValueLength())) return ReturnCode.SKIP; - + stampSet.add(c.getTimestamp()); if(dropDependentColumn) { return ReturnCode.SKIP; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java index c7b0b66..e289026 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FamilyFilter.java @@ -22,9 +22,9 @@ package org.apache.hadoop.hbase.filter; import java.io.IOException; import java.util.ArrayList; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java index 729afe1..d66ad50 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java @@ -22,10 +22,9 @@ package org.apache.hadoop.hbase.filter; import java.io.IOException; import java.util.List; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; -import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.exceptions.DeserializationException; /** @@ -36,10 +35,10 @@ import org.apache.hadoop.hbase.exceptions.DeserializationException; *
 *  <li> {@link #reset()} : reset the filter state before filtering a new row. </li>
 *  <li> {@link #filterAllRemaining()}: true means row scan is over; false means keep going. </li>
 *  <li> {@link #filterRowKey(byte[],int,int)}: true means drop this row; false means include.</li>
- *  <li> {@link #filterKeyValue(Cell)}: decides whether to include or exclude this KeyValue.
+ *  <li> {@link #filterKeyValue(Cell)}: decides whether to include or exclude this Cell.
 *  See {@link ReturnCode}. </li>
- *  <li> {@link #transform(KeyValue)}: if the KeyValue is included, let the filter transform the
- *  KeyValue. </li>
+ *  <li> {@link #transformCell(Cell)}: if the Cell is included, let the filter transform the
+ *  Cell. </li>
 *  <li> {@link #filterRowCells(List)}: allows direct modification of the final list to be submitted
 *  <li> {@link #filterRow()}: last chance to drop entire row based on the sequence of
 *     filter calls. Eg: filter a row if it doesn't contain a specified column. </li>
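The list above is the read-path lifecycle a custom filter plugs into. A minimal sketch against the Cell-based API (class name and prefix are illustrative; a real filter generally also implements toByteArray()/parseFrom() so it can be serialized to the region servers):

import java.io.IOException;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.filter.FilterBase;
import org.apache.hadoop.hbase.util.Bytes;

/** Includes only Cells whose value starts with a fixed prefix; illustrative only. */
public class ValuePrefixFilter extends FilterBase {
  private final byte[] prefix = Bytes.toBytes("abc");

  @Override
  public ReturnCode filterKeyValue(Cell c) throws IOException {
    if (c.getValueLength() >= prefix.length
        && Bytes.compareTo(c.getValueArray(), c.getValueOffset(), prefix.length,
            prefix, 0, prefix.length) == 0) {
      return ReturnCode.INCLUDE;
    }
    return ReturnCode.NEXT_COL;
  }

  // reset(), filterAllRemaining(), transformCell(), getNextCellHint(), etc. fall back to the
  // FilterBase defaults shown later in this patch.
}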
  • @@ -123,7 +122,7 @@ public abstract class Filter { * @see org.apache.hadoop.hbase.KeyValue#shallowCopy() * The transformed KeyValue is what is eventually returned to the client. Most filters will * return the passed KeyValue unchanged. - * @see org.apache.hadoop.hbase.filter.KeyOnlyFilter#transform(KeyValue) for an example of a + * @see org.apache.hadoop.hbase.filter.KeyOnlyFilter#transformCell(Cell) for an example of a * transformation. * * Concrete implementers can signal a failure condition in their code by throwing an @@ -136,14 +135,6 @@ public abstract class Filter { abstract public Cell transformCell(final Cell v) throws IOException; /** - * WARNING: please to not override this method. Instead override {@link #transformCell(Cell)}. - * This is for transition from 0.94 -> 0.96 - **/ - @Deprecated // use Cell transformCell(final Cell) - abstract public KeyValue transform(final KeyValue currentKV) throws IOException; - - - /** * Return codes for filterValue(). */ @InterfaceAudience.Public @@ -209,16 +200,6 @@ public abstract class Filter { abstract public boolean filterRow() throws IOException; /** - * @param currentKV - * @return KeyValue which must be next seeked. return null if the filter is not sure which key to - * seek to next. - * @throws IOException - * Function is Deprecated. Use {@link #getNextCellHint(Cell)} instead. - */ - @Deprecated - abstract public KeyValue getNextKeyHint(final KeyValue currentKV) throws IOException; - - /** * If the filter returns the match code SEEK_NEXT_USING_HINT, then it should also tell which is * the next key it must seek to. After receiving the match code SEEK_NEXT_USING_HINT, the * QueryMatcher would call this function to find out which key it must next seek to. @@ -230,7 +211,7 @@ public abstract class Filter { * seek to next. * @throws IOException in case an I/O or an filter specific failure needs to be signaled. */ - abstract public Cell getNextCellHint(final Cell currentKV) throws IOException; + abstract public Cell getNextCellHint(final Cell currentCell) throws IOException; /** * Check that given column family is essential for filter to check row. Most filters always return diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java index e9fad92..a04dd89 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterBase.java @@ -23,10 +23,8 @@ import java.io.IOException; import java.util.ArrayList; import java.util.List; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; -import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * Abstract base class to help you implement new Filters. Common "ignore" or NOOP type @@ -78,20 +76,7 @@ public abstract class FilterBase extends Filter { */ @Override public Cell transformCell(Cell v) throws IOException { - // Old filters based off of this class will override KeyValue transform(KeyValue). - // Thus to maintain compatibility we need to call the old version. - return transform(KeyValueUtil.ensureKeyValue(v)); - } - - /** - * WARNING: please to not override this method. Instead override {@link #transformCell(Cell)}. 
- * - * This is for transition from 0.94 -> 0.96 - */ - @Override - @Deprecated - public KeyValue transform(KeyValue currentKV) throws IOException { - return currentKV; + return v; } /** @@ -128,24 +113,13 @@ public abstract class FilterBase extends Filter { } /** - * This method is deprecated and you should override Cell getNextKeyHint(Cell) instead. - */ - @Override - @Deprecated - public KeyValue getNextKeyHint(KeyValue currentKV) throws IOException { - return null; - } - - /** * Filters that are not sure which key must be next seeked to, can inherit * this implementation that, by default, returns a null Cell. * * @inheritDoc */ - public Cell getNextCellHint(Cell currentKV) throws IOException { - // Old filters based off of this class will override KeyValue getNextKeyHint(KeyValue). - // Thus to maintain compatibility we need to call the old version. - return getNextKeyHint(KeyValueUtil.ensureKeyValue(currentKV)); + public Cell getNextCellHint(Cell currentCell) throws IOException { + return null; } /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java index 579fe2c..ba1a818 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java @@ -23,11 +23,11 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.List; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.CellComparator; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; @@ -70,7 +70,7 @@ final public class FilterList extends Filter { private Filter seekHintFilter = null; /** Reference Cell used by {@link #transformCell(Cell)} for validation purpose. */ - private Cell referenceKV = null; + private Cell referenceCell = null; /** * When filtering a given Cell in {@link #filterKeyValue(Cell)}, @@ -79,7 +79,7 @@ final public class FilterList extends Filter { * Individual filters transformation are applied only when the filter includes the Cell. * Transformations are composed in the order specified by {@link #filters}. */ - private Cell transformedKV = null; + private Cell transformedCell = null; /** * Constructor that takes a set of {@link Filter}s. The default operator @@ -215,42 +215,22 @@ final public class FilterList extends Filter { } @Override - public Cell transformCell(Cell v) throws IOException { - // transformCell() is expected to follow an inclusive filterKeyValue() immediately: - if (!v.equals(this.referenceKV)) { - throw new IllegalStateException("Reference Cell: " + this.referenceKV + " does not match: " - + v); + public Cell transformCell(Cell c) throws IOException { + if (!CellComparator.equals(c, referenceCell)) { + throw new IllegalStateException("Reference Cell: " + this.referenceCell + " does not match: " + + c); } - return this.transformedKV; - } - - /** - * WARNING: please to not override this method. Instead override {@link #transformCell(Cell)}. 
- * - * When removing this, its body should be placed in transformCell. - * - * This is for transition from 0.94 -> 0.96 - */ - @Deprecated - @Override - public KeyValue transform(KeyValue v) throws IOException { - // transform() is expected to follow an inclusive filterKeyValue() immediately: - if (!v.equals(this.referenceKV)) { - throw new IllegalStateException( - "Reference Cell: " + this.referenceKV + " does not match: " + v); - } - return KeyValueUtil.ensureKeyValue(this.transformedKV); + return this.transformedCell; } - @Override @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="SF_SWITCH_FALLTHROUGH", justification="Intentional") - public ReturnCode filterKeyValue(Cell v) throws IOException { - this.referenceKV = v; + public ReturnCode filterKeyValue(Cell c) throws IOException { + this.referenceCell = c; // Accumulates successive transformation of every filter that includes the Cell: - Cell transformed = v; + Cell transformed = c; ReturnCode rc = operator == Operator.MUST_PASS_ONE? ReturnCode.SKIP: ReturnCode.INCLUDE; @@ -261,7 +241,7 @@ final public class FilterList extends Filter { if (filter.filterAllRemaining()) { return ReturnCode.NEXT_ROW; } - ReturnCode code = filter.filterKeyValue(v); + ReturnCode code = filter.filterKeyValue(c); switch (code) { // Override INCLUDE and continue to evaluate. case INCLUDE_AND_NEXT_COL: @@ -280,7 +260,7 @@ final public class FilterList extends Filter { continue; } - switch (filter.filterKeyValue(v)) { + switch (filter.filterKeyValue(c)) { case INCLUDE: if (rc != ReturnCode.INCLUDE_AND_NEXT_COL) { rc = ReturnCode.INCLUDE; @@ -307,7 +287,7 @@ final public class FilterList extends Filter { } // Save the transformed Cell for transform(): - this.transformedKV = transformed; + this.transformedCell = transformed; return rc; } @@ -414,23 +394,17 @@ final public class FilterList extends Filter { } @Override - @Deprecated - public KeyValue getNextKeyHint(KeyValue currentKV) throws IOException { - return KeyValueUtil.ensureKeyValue(getNextCellHint((Cell)currentKV)); - } - - @Override - public Cell getNextCellHint(Cell currentKV) throws IOException { + public Cell getNextCellHint(Cell currentCell) throws IOException { Cell keyHint = null; if (operator == Operator.MUST_PASS_ALL) { - keyHint = seekHintFilter.getNextCellHint(currentKV); + keyHint = seekHintFilter.getNextCellHint(currentCell); return keyHint; } // If any condition can pass, we need to keep the min hint int listize = filters.size(); for (int i = 0; i < listize; i++) { - Cell curKeyHint = filters.get(i).getNextCellHint(currentKV); + Cell curKeyHint = filters.get(i).getNextCellHint(currentCell); if (curKeyHint == null) { // If we ever don't have a hint and this is must-pass-one, then no hint return null; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterWrapper.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterWrapper.java index a48c515..5176115 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterWrapper.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterWrapper.java @@ -22,10 +22,8 @@ package org.apache.hadoop.hbase.filter; import java.io.IOException; import java.util.List; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; -import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.exceptions.DeserializationException; import 
org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; @@ -97,19 +95,9 @@ final public class FilterWrapper extends Filter { return this.filter.filterRow(); } - /** - * This method is deprecated and you should override Cell getNextKeyHint(Cell) instead. - */ - @Override - @Deprecated - public KeyValue getNextKeyHint(KeyValue currentKV) throws IOException { - // This will never get called. - return KeyValueUtil.ensureKeyValue(this.filter.getNextCellHint((Cell)currentKV)); - } - @Override - public Cell getNextCellHint(Cell currentKV) throws IOException { - return this.filter.getNextCellHint(currentKV); + public Cell getNextCellHint(Cell currentCell) throws IOException { + return this.filter.getNextCellHint(currentCell); } @Override @@ -127,18 +115,6 @@ final public class FilterWrapper extends Filter { return this.filter.transformCell(v); } - /** - * WARNING: please to not override this method. Instead override {@link #transformCell(Cell)}. - * - * This is for transition from 0.94 -> 0.96 - */ - @Override - @Deprecated - public KeyValue transform(KeyValue currentKV) throws IOException { - // This will never get called. - return KeyValueUtil.ensureKeyValue(this.filter.transformCell(currentKV)); - } - @Override public boolean hasFilterRow() { return this.filter.hasFilterRow(); diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java index dafb485..77ed7d9 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.java @@ -20,9 +20,9 @@ package org.apache.hadoop.hbase.filter; import java.util.ArrayList; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; @@ -53,13 +53,6 @@ public class FirstKeyOnlyFilter extends FilterBase { return ReturnCode.INCLUDE; } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. 
- // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - public static Filter createFilterFromArguments(ArrayList filterArguments) { Preconditions.checkArgument(filterArguments.size() == 0, "Expected 0 but got: %s", filterArguments.size()); diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyValueMatchingQualifiersFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyValueMatchingQualifiersFilter.java index fc40982..622f5ab 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyValueMatchingQualifiersFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FirstKeyValueMatchingQualifiersFilter.java @@ -18,10 +18,13 @@ package org.apache.hadoop.hbase.filter; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; +import java.util.Set; +import java.util.TreeSet; + import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; import org.apache.hadoop.hbase.util.ByteStringer; @@ -30,9 +33,6 @@ import org.apache.hadoop.hbase.util.Bytes; import com.google.protobuf.ByteString; import com.google.protobuf.InvalidProtocolBufferException; -import java.util.Set; -import java.util.TreeSet; - /** * The filter looks for the given columns in KeyValue. Once there is a match for * any one of the columns, it returns ReturnCode.NEXT_ROW for remaining diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java index 870e0ff..9b99b71 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java @@ -21,12 +21,10 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.List; -import com.google.common.annotations.VisibleForTesting; -import com.google.protobuf.InvalidProtocolBufferException; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.BytesBytesPair; @@ -34,9 +32,8 @@ import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.List; +import com.google.common.annotations.VisibleForTesting; +import com.google.protobuf.InvalidProtocolBufferException; /** * Filters data based on fuzzy row key. Performs fast-forwards during scanning. 
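
(Illustrative sketch, not part of the patch: the FuzzyRowFilter hunk below replaces the KeyValue-based getNextKeyHint with the Cell-based getNextCellHint. For orientation, a minimal, hypothetical client-side use of this filter, assuming the 0.99-era API in which each fuzzy entry is a (row key, mask) Pair and a mask byte of 0 marks a fixed position while 1 marks a free one; the class name FuzzyScanSketch and the key layout are invented for illustration.)

import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class FuzzyScanSketch {
  // Builds a Scan that matches row keys shaped like "????_0001":
  // the first four bytes may be anything, the "_0001" suffix must match.
  public static Scan buildScan() {
    byte[] rowKey = Bytes.toBytes("????_0001");             // '?' bytes are placeholders
    byte[] mask   = new byte[] {1, 1, 1, 1, 0, 0, 0, 0, 0}; // 1 = any byte, 0 = fixed
    List<Pair<byte[], byte[]>> fuzzyKeys =
        Arrays.asList(new Pair<byte[], byte[]>(rowKey, mask));
    Scan scan = new Scan();
    scan.setFilter(new FuzzyRowFilter(fuzzyKeys));          // server fast-forwards via getNextCellHint
    return scan;
  }
}
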
@@ -88,12 +85,12 @@ public class FuzzyRowFilter extends FilterBase { // TODO: possible improvement: save which fuzzy row key to use when providing a hint @Override - public ReturnCode filterKeyValue(Cell cell) { + public ReturnCode filterKeyValue(Cell c) { // assigning "worst" result first and looking for better options SatisfiesCode bestOption = SatisfiesCode.NO_NEXT; for (Pair fuzzyData : fuzzyKeysData) { - SatisfiesCode satisfiesCode = satisfies(isReversed(), cell.getRowArray(), - cell.getRowOffset(), cell.getRowLength(), fuzzyData.getFirst(), fuzzyData.getSecond()); + SatisfiesCode satisfiesCode = satisfies(isReversed(), c.getRowArray(), c.getRowOffset(), + c.getRowLength(), fuzzyData.getFirst(), fuzzyData.getSecond()); if (satisfiesCode == SatisfiesCode.YES) { return ReturnCode.INCLUDE; } @@ -112,21 +109,14 @@ public class FuzzyRowFilter extends FilterBase { return ReturnCode.NEXT_ROW; } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. - // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - @Override - public Cell getNextCellHint(Cell curCell) { + public Cell getNextCellHint(Cell currentCell) { byte[] nextRowKey = null; // Searching for the "smallest" row key that satisfies at least one fuzzy row key for (Pair fuzzyData : fuzzyKeysData) { - byte[] nextRowKeyCandidate = getNextForFuzzyRule(isReversed(), curCell.getRowArray(), - curCell.getRowOffset(), curCell.getRowLength(), fuzzyData.getFirst(), - fuzzyData.getSecond()); + byte[] nextRowKeyCandidate = getNextForFuzzyRule(isReversed(), currentCell.getRowArray(), + currentCell.getRowOffset(), currentCell.getRowLength(), fuzzyData.getFirst(), + fuzzyData.getSecond()); if (nextRowKeyCandidate == null) { continue; } @@ -142,10 +132,9 @@ public class FuzzyRowFilter extends FilterBase { // Can happen in reversed scanner when currentKV is just before the next possible match; in // this case, fall back on scanner simply calling KeyValueHeap.next() // TODO: is there a better way than throw exception? (stop the scanner?) - throw new IllegalStateException("No next row key that satisfies fuzzy exists when" + - " getNextKeyHint() is invoked." + - " Filter: " + this.toString() + - " currentKV: " + curCell); + throw new IllegalStateException("No next row key that satisfies fuzzy exists when" + + " getNextKeyHint() is invoked." + " Filter: " + this.toString() + " currentKV: " + + currentCell); } return nextRowKey == null ? 
null : KeyValueUtil.createFirstOnRow(nextRowKey); diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java index ebe72ee..cf2d153 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/InclusiveStopFilter.java @@ -21,12 +21,12 @@ package org.apache.hadoop.hbase.filter; import java.util.ArrayList; -import org.apache.hadoop.hbase.util.ByteStringer; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import com.google.common.base.Preconditions; @@ -58,13 +58,6 @@ public class InclusiveStopFilter extends FilterBase { return ReturnCode.INCLUDE; } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. - // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - public boolean filterRowKey(byte[] buffer, int offset, int length) { if (buffer == null) { //noinspection RedundantIfStatement diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java index cebb26a..2a2b525 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/KeyOnlyFilter.java @@ -22,11 +22,11 @@ package org.apache.hadoop.hbase.filter; import java.io.IOException; import java.util.ArrayList; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; import org.apache.hadoop.hbase.util.Bytes; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java index d3eb642..b7ec11a 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultipleColumnPrefixFilter.java @@ -17,20 +17,21 @@ */ package org.apache.hadoop.hbase.filter; -import com.google.protobuf.InvalidProtocolBufferException; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Comparator; +import java.util.TreeSet; + import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import 
org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Comparator; -import java.util.TreeSet; +import com.google.protobuf.InvalidProtocolBufferException; /** * This filter is used for selecting only those keys with columns that matches @@ -71,13 +72,6 @@ public class MultipleColumnPrefixFilter extends FilterBase { } } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. - // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - public ReturnCode filterColumn(byte[] buffer, int qualifierOffset, int qualifierLength) { byte [] qualifier = Arrays.copyOfRange(buffer, qualifierOffset, qualifierLength + qualifierOffset); @@ -161,10 +155,10 @@ public class MultipleColumnPrefixFilter extends FilterBase { } @Override - public Cell getNextCellHint(Cell kv) { + public Cell getNextCellHint(Cell cell) { return KeyValueUtil.createFirstOnRow( - kv.getRowArray(), kv.getRowOffset(), kv.getRowLength(), kv.getFamilyArray(), - kv.getFamilyOffset(), kv.getFamilyLength(), hint, 0, hint.length); + cell.getRowArray(), cell.getRowOffset(), cell.getRowLength(), cell.getFamilyArray(), + cell.getFamilyOffset(), cell.getFamilyLength(), hint, 0, hint.length); } public TreeSet createTreeSet() { diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java index d3bb89d..0dbd97b 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PageFilter.java @@ -18,17 +18,17 @@ */ package org.apache.hadoop.hbase.filter; -import com.google.common.base.Preconditions; -import com.google.protobuf.InvalidProtocolBufferException; +import java.io.IOException; +import java.util.ArrayList; + +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.filter.Filter.ReturnCode; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; -import java.io.IOException; -import java.util.ArrayList; +import com.google.common.base.Preconditions; +import com.google.protobuf.InvalidProtocolBufferException; /** * Implementation of Filter interface that limits results to a specific page * size. It terminates scanning once the number of filter-passed rows is > @@ -64,14 +64,7 @@ public class PageFilter extends FilterBase { public ReturnCode filterKeyValue(Cell ignored) throws IOException { return ReturnCode.INCLUDE; } - - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. 
- // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - + public boolean filterAllRemaining() { return this.rowsAccepted >= this.pageSize; } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java index 8e6cfae..3a20772 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseConstants.java @@ -18,11 +18,11 @@ */ package org.apache.hadoop.hbase.filter; +import java.nio.ByteBuffer; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import java.nio.ByteBuffer; - /** * ParseConstants holds a bunch of constants related to parsing Filter Strings * Used by {@link ParseFilter} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java index 9d6bd3c..8101f4a 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/ParseFilter.java @@ -18,14 +18,6 @@ */ package org.apache.hadoop.hbase.filter; -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; -import org.apache.hadoop.hbase.util.Bytes; - import java.lang.reflect.InvocationTargetException; import java.lang.reflect.Method; import java.nio.ByteBuffer; @@ -38,6 +30,14 @@ import java.util.Map; import java.util.Set; import java.util.Stack; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; +import org.apache.hadoop.hbase.util.Bytes; + /** * This class allows a user to specify a filter via a string * The string is parsed using the methods of this class and diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java index 8030ff6..5b56748 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/PrefixFilter.java @@ -19,18 +19,18 @@ package org.apache.hadoop.hbase.filter; -import com.google.common.base.Preconditions; -import org.apache.hadoop.hbase.util.ByteStringer; -import com.google.protobuf.InvalidProtocolBufferException; +import java.util.ArrayList; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; -import java.util.ArrayList; +import com.google.common.base.Preconditions; +import com.google.protobuf.InvalidProtocolBufferException; /** * 
Pass results that have same row prefix. @@ -73,13 +73,6 @@ public class PrefixFilter extends FilterBase { return ReturnCode.INCLUDE; } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. - // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - public boolean filterRow() { return filterRow; } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java index bf3a5f9..fb183f1 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/QualifierFilter.java @@ -22,9 +22,9 @@ package org.apache.hadoop.hbase.filter; import java.io.IOException; import java.util.ArrayList; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; @@ -41,8 +41,8 @@ import com.google.protobuf.InvalidProtocolBufferException; *

    * Multiple filters can be combined using {@link FilterList}. *

    - * If an already known column qualifier is looked for, use - * {@link org.apache.hadoop.hbase.client.Get#addColumn} + * If an already known column qualifier is looked for, + * use {@link org.apache.hadoop.hbase.client.Get#addColumn} * directly rather than a filter. */ @InterfaceAudience.Public diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RandomRowFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RandomRowFilter.java index 243923f..2a25b32 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RandomRowFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RandomRowFilter.java @@ -21,9 +21,9 @@ package org.apache.hadoop.hbase.filter; import java.util.Random; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; @@ -79,13 +79,6 @@ public class RandomRowFilter extends FilterBase { return ReturnCode.INCLUDE; } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. - // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - @Override public boolean filterRow() { return filterOutRow; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java index 0bc20f3..70dd1f9 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RegexStringComparator.java @@ -18,8 +18,6 @@ */ package org.apache.hadoop.hbase.filter; -import com.google.protobuf.InvalidProtocolBufferException; - import java.nio.charset.Charset; import java.nio.charset.IllegalCharsetNameException; import java.util.Arrays; @@ -32,7 +30,6 @@ import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.ComparatorProtos; import org.apache.hadoop.hbase.util.Bytes; - import org.jcodings.Encoding; import org.jcodings.EncodingDB; import org.jcodings.specific.UTF8Encoding; @@ -41,6 +38,8 @@ import org.joni.Option; import org.joni.Regex; import org.joni.Syntax; +import com.google.protobuf.InvalidProtocolBufferException; + /** * This comparator is for use with {@link CompareFilter} implementations, such * as {@link RowFilter}, {@link QualifierFilter}, and {@link ValueFilter}, for diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java index 23a1e5d..cb4337e 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/RowFilter.java @@ -22,9 +22,9 @@ package org.apache.hadoop.hbase.filter; import java.io.IOException; import java.util.ArrayList; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import 
org.apache.hadoop.hbase.protobuf.generated.FilterProtos; @@ -40,8 +40,8 @@ import com.google.protobuf.InvalidProtocolBufferException; *

    * Multiple filters can be combined using {@link FilterList}. *

    - * If an already known row range needs to be scanned, use - * {@link org.apache.hadoop.hbase.CellScanner} start + * If an already known row range needs to be scanned, + * use {@link org.apache.hadoop.hbase.CellScanner} start * and stop rows directly rather than a filter. */ @InterfaceAudience.Public diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java index 5c8668b..d030fd2 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueExcludeFilter.java @@ -24,10 +24,10 @@ import java.util.ArrayList; import java.util.Iterator; import java.util.List; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; @@ -106,10 +106,9 @@ public class SingleColumnValueExcludeFilter extends SingleColumnValueFilter { public void filterRowCells(List kvs) { Iterator it = kvs.iterator(); while (it.hasNext()) { - Cell cell = it.next(); // If the current column is actually the tested column, // we will skip it instead. - if (CellUtil.matchingColumn(cell, this.columnFamily, this.columnQualifier)) { + if (CellUtil.matchingColumn(it.next(), this.columnFamily, this.columnQualifier)) { it.remove(); } } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java index 55e75ba..d905868 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.java @@ -22,19 +22,19 @@ package org.apache.hadoop.hbase.filter; import java.io.IOException; import java.util.ArrayList; -import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.CompareType; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import com.google.common.base.Preconditions; @@ -170,6 +170,7 @@ public class SingleColumnValueFilter extends FilterBase { @Override public ReturnCode filterKeyValue(Cell c) { + // System.out.println("REMOVE KEY=" + keyValue.toString() + ", 
value=" + Bytes.toString(keyValue.getValue())); if (this.matchedColumn) { // We already found and matched the single column, all keys now pass return ReturnCode.INCLUDE; @@ -181,20 +182,14 @@ public class SingleColumnValueFilter extends FilterBase { return ReturnCode.INCLUDE; } foundColumn = true; - if (filterColumnValue(c.getValueArray(), c.getValueOffset(), c.getValueLength())) { + if (filterColumnValue(c.getValueArray(), + c.getValueOffset(), c.getValueLength())) { return this.latestVersionOnly? ReturnCode.NEXT_ROW: ReturnCode.INCLUDE; } this.matchedColumn = true; return ReturnCode.INCLUDE; } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. - // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - private boolean filterColumnValue(final byte [] data, final int offset, final int length) { int compareResult = this.comparator.compareTo(data, offset, length); diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SkipFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SkipFilter.java index f6d120b..ce8e511 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SkipFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SkipFilter.java @@ -21,9 +21,9 @@ package org.apache.hadoop.hbase.filter; import java.io.IOException; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SubstringComparator.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SubstringComparator.java index 5eb3703..63fd0a3 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SubstringComparator.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/SubstringComparator.java @@ -18,13 +18,14 @@ */ package org.apache.hadoop.hbase.filter; -import com.google.protobuf.InvalidProtocolBufferException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.ComparatorProtos; import org.apache.hadoop.hbase.util.Bytes; +import com.google.protobuf.InvalidProtocolBufferException; + /** * This comparator is for use with SingleColumnValueFilter, for filtering based on diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java index 27896ea..32a3d73 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/filter/TimestampsFilter.java @@ -17,17 +17,18 @@ */ package org.apache.hadoop.hbase.filter; -import com.google.common.base.Preconditions; -import com.google.protobuf.InvalidProtocolBufferException; +import java.util.ArrayList; +import java.util.List; +import java.util.TreeSet; + +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import 
org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; -import java.util.ArrayList; -import java.util.List; -import java.util.TreeSet; +import com.google.common.base.Preconditions; +import com.google.protobuf.InvalidProtocolBufferException; /** * Filter that returns only cells whose timestamp (version) is @@ -99,13 +100,6 @@ public class TimestampsFilter extends FilterBase { return ReturnCode.SKIP; } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. - // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - public static Filter createFilterFromArguments(ArrayList filterArguments) { ArrayList timestamps = new ArrayList(); for (int i = 0; i * When implementing {@link com.google.protobuf.Service} defined methods, - * coprocessor endpoints can use the following - * pattern to pass exceptions back to the RPC client: + * coprocessor endpoints can use the following pattern to pass exceptions back to the RPC client: * * public void myMethod(RpcController controller, MyRequest request, RpcCallback done) { * MyResponse response = null; @@ -58,7 +54,7 @@ public class ServerRpcController implements RpcController { /** * The exception thrown within * {@link com.google.protobuf.Service#callMethod( - * Descriptors.MethodDescriptor, RpcController, Message, RpcCallback)}, + * Descriptors.MethodDescriptor, RpcController, Message, RpcCallback)}, * if any. */ // TODO: it would be good widen this to just Throwable, but IOException is what we allow now diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/StoppedRpcClientException.java hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/StoppedRpcClientException.java index c11cb28..a224a12 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/StoppedRpcClientException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/StoppedRpcClientException.java @@ -17,9 +17,9 @@ */ package org.apache.hadoop.hbase.ipc; +import org.apache.hadoop.hbase.HBaseIOException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.HBaseIOException; @InterfaceAudience.Public @InterfaceStability.Evolving diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/master/RegionState.java hbase-client/src/main/java/org/apache/hadoop/hbase/master/RegionState.java index 40fe38a..fd1c432 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/master/RegionState.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/master/RegionState.java @@ -18,18 +18,16 @@ package org.apache.hadoop.hbase.master; import java.util.Date; -import java.util.concurrent.atomic.AtomicLong; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos; /** * State of a Region while undergoing transitions. - * Region state cannot be modified except the stamp field. - * So it is almost immutable. + * This class is immutable. 
*/ @InterfaceAudience.Private public class RegionState { @@ -38,10 +36,10 @@ public class RegionState { @InterfaceStability.Evolving public enum State { OFFLINE, // region is in an offline state - PENDING_OPEN, // sent rpc to server to open but has not begun + PENDING_OPEN, // same as OPENING, to be removed OPENING, // server has begun to open but not yet done OPEN, // server opened region and updated meta - PENDING_CLOSE, // sent rpc to server to close but has not begun + PENDING_CLOSE, // same as CLOSING, to be removed CLOSING, // server has begun to close but not yet done CLOSED, // server closed region and updated meta SPLITTING, // server started split of a region @@ -174,16 +172,10 @@ public class RegionState { } } - // Many threads can update the state at the stamp at the same time - private final AtomicLong stamp; - private HRegionInfo hri; - - private volatile ServerName serverName; - private volatile State state; - - public RegionState() { - this.stamp = new AtomicLong(System.currentTimeMillis()); - } + private final long stamp; + private final HRegionInfo hri; + private final ServerName serverName; + private final State state; public RegionState(HRegionInfo region, State state) { this(region, state, System.currentTimeMillis(), null); @@ -198,20 +190,16 @@ public class RegionState { State state, long stamp, ServerName serverName) { this.hri = region; this.state = state; - this.stamp = new AtomicLong(stamp); + this.stamp = stamp; this.serverName = serverName; } - public void updateTimestampToNow() { - setTimestamp(System.currentTimeMillis()); - } - public State getState() { return state; } public long getStamp() { - return stamp.get(); + return stamp; } public HRegionInfo getRegion() { @@ -222,30 +210,28 @@ public class RegionState { return serverName; } + /** + * PENDING_CLOSE (to be removed) is the same as CLOSING + */ public boolean isClosing() { - return state == State.CLOSING; + return state == State.PENDING_CLOSE || state == State.CLOSING; } public boolean isClosed() { return state == State.CLOSED; } - public boolean isPendingClose() { - return state == State.PENDING_CLOSE; - } - + /** + * PENDING_OPEN (to be removed) is the same as OPENING + */ public boolean isOpening() { - return state == State.OPENING; + return state == State.PENDING_OPEN || state == State.OPENING; } public boolean isOpened() { return state == State.OPEN; } - public boolean isPendingOpen() { - return state == State.PENDING_OPEN; - } - public boolean isOffline() { return state == State.OFFLINE; } @@ -282,42 +268,56 @@ public class RegionState { return state == State.MERGING_NEW; } - public boolean isOpenOrMergingOnServer(final ServerName sn) { - return isOnServer(sn) && (isOpened() || isMerging()); + public boolean isOnServer(final ServerName sn) { + return serverName != null && serverName.equals(sn); } - public boolean isOpenOrMergingNewOnServer(final ServerName sn) { - return isOnServer(sn) && (isOpened() || isMergingNew()); + public boolean isMergingOnServer(final ServerName sn) { + return isOnServer(sn) && isMerging(); } - public boolean isOpenOrSplittingOnServer(final ServerName sn) { - return isOnServer(sn) && (isOpened() || isSplitting()); + public boolean isMergingNewOnServer(final ServerName sn) { + return isOnServer(sn) && isMergingNew(); } - public boolean isOpenOrSplittingNewOnServer(final ServerName sn) { - return isOnServer(sn) && (isOpened() || isSplittingNew()); + public boolean isMergingNewOrOpenedOnServer(final ServerName sn) { + return isOnServer(sn) && (isMergingNew() || isOpened()); } 
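
(Illustrative sketch, not part of the patch: the rest of this RegionState hunk, continued just below, replaces the composite isPendingOpenOrOpening*/isPendingCloseOrClosing* predicates with per-state helpers, and isOpening()/isClosing() now also cover PENDING_OPEN/PENDING_CLOSE. A rough mapping of the old checks onto the new helpers; the wrapper class and method names are invented for illustration, and the FAILED_OPEN/FAILED_CLOSE helpers are assumed unchanged by this patch.)

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.master.RegionState;

final class RegionStateChecks {
  private RegionStateChecks() {}

  // Old: state.isPendingOpenOrOpeningOnServer(sn)
  //      i.e. isOnServer(sn) && (isPendingOpen() || isOpening() || isFailedOpen())
  // New: PENDING_OPEN is folded into isOpening(), so the check becomes:
  static boolean isComingUpOn(RegionState state, ServerName sn) {
    return state.isOpeningOrFailedOpenOnServer(sn);
  }

  // Old: state.isPendingCloseOrClosingOnServer(sn)
  //      i.e. isOnServer(sn) && (isPendingClose() || isClosing() || isFailedClose())
  // New: PENDING_CLOSE is folded into isClosing():
  static boolean isGoingDownOn(RegionState state, ServerName sn) {
    return state.isOnServer(sn) && (state.isClosing() || state.isFailedClose());
  }
}
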
- public boolean isPendingOpenOrOpeningOnServer(final ServerName sn) { - return isOnServer(sn) && isPendingOpenOrOpening(); + public boolean isMergingNewOrOfflineOnServer(final ServerName sn) { + return isOnServer(sn) && (isMergingNew() || isOffline()); } - // Failed open is also kind of pending open - public boolean isPendingOpenOrOpening() { - return isPendingOpen() || isOpening() || isFailedOpen(); + public boolean isSplittingOnServer(final ServerName sn) { + return isOnServer(sn) && isSplitting(); } - public boolean isPendingCloseOrClosingOnServer(final ServerName sn) { - return isOnServer(sn) && isPendingCloseOrClosing(); + public boolean isSplittingNewOnServer(final ServerName sn) { + return isOnServer(sn) && isSplittingNew(); } - // Failed close is also kind of pending close - public boolean isPendingCloseOrClosing() { - return isPendingClose() || isClosing() || isFailedClose(); + public boolean isSplittingOrOpenedOnServer(final ServerName sn) { + return isOnServer(sn) && (isSplitting() || isOpened()); } - public boolean isOnServer(final ServerName sn) { - return serverName != null && serverName.equals(sn); + public boolean isSplittingOrSplitOnServer(final ServerName sn) { + return isOnServer(sn) && (isSplitting() || isSplit()); + } + + public boolean isClosingOrClosedOnServer(final ServerName sn) { + return isOnServer(sn) && (isClosing() || isClosed()); + } + + public boolean isOpeningOrFailedOpenOnServer(final ServerName sn) { + return isOnServer(sn) && (isOpening() || isFailedOpen()); + } + + public boolean isOpeningOrOpenedOnServer(final ServerName sn) { + return isOnServer(sn) && (isOpening() || isOpened()); + } + + public boolean isOpenedOnServer(final ServerName sn) { + return isOnServer(sn) && isOpened(); } /** @@ -364,12 +364,10 @@ public class RegionState { * A slower (but more easy-to-read) stringification */ public String toDescriptiveString() { - long lstamp = stamp.get(); - long relTime = System.currentTimeMillis() - lstamp; - + long relTime = System.currentTimeMillis() - stamp; return hri.getRegionNameAsString() + " state=" + state - + ", ts=" + new Date(lstamp) + " (" + (relTime/1000) + "s ago)" + + ", ts=" + new Date(stamp) + " (" + (relTime/1000) + "s ago)" + ", server=" + serverName; } @@ -396,10 +394,6 @@ public class RegionState { State.convert(proto.getState()), proto.getStamp(), null); } - protected void setTimestamp(final long timestamp) { - stamp.set(timestamp); - } - /** * Check if two states are the same, except timestamp */ diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java index 6d82444..9c383f8 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java @@ -35,6 +35,7 @@ import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.NavigableSet; +import java.util.concurrent.TimeUnit; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; @@ -67,6 +68,7 @@ import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.filter.ByteArrayComparable; import org.apache.hadoop.hbase.filter.Filter; import org.apache.hadoop.hbase.io.TimeRange; +import org.apache.hadoop.hbase.protobuf.ProtobufMagic; import org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos; import 
org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.AdminService; @@ -114,6 +116,7 @@ import org.apache.hadoop.hbase.protobuf.generated.MapReduceProtos; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.CreateTableRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableDescriptorsResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.MasterService; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionServerReportRequest; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionServerStartupRequest; import org.apache.hadoop.hbase.protobuf.generated.WALProtos; @@ -122,6 +125,9 @@ import org.apache.hadoop.hbase.protobuf.generated.WALProtos.FlushDescriptor; import org.apache.hadoop.hbase.protobuf.generated.WALProtos.FlushDescriptor.FlushAction; import org.apache.hadoop.hbase.protobuf.generated.WALProtos.RegionEventDescriptor; import org.apache.hadoop.hbase.protobuf.generated.WALProtos.RegionEventDescriptor.EventType; +import org.apache.hadoop.hbase.quotas.QuotaScope; +import org.apache.hadoop.hbase.quotas.QuotaType; +import org.apache.hadoop.hbase.quotas.ThrottleType; import org.apache.hadoop.hbase.security.access.Permission; import org.apache.hadoop.hbase.security.access.TablePermission; import org.apache.hadoop.hbase.security.access.UserPermission; @@ -167,7 +173,6 @@ public final class ProtobufUtil { private final static Map> PRIMITIVES = new HashMap>(); - /** * Many results are simple: no cell, exists true or false. To save on object creations, * we reuse them across calls. @@ -236,27 +241,20 @@ public final class ProtobufUtil { } /** - * Magic we put ahead of a serialized protobuf message. - * For example, all znode content is protobuf messages with the below magic - * for preamble. - */ - public static final byte [] PB_MAGIC = new byte [] {'P', 'B', 'U', 'F'}; - private static final String PB_MAGIC_STR = Bytes.toString(PB_MAGIC); - - /** - * Prepend the passed bytes with four bytes of magic, {@link #PB_MAGIC}, to flag what - * follows as a protobuf in hbase. Prepend these bytes to all content written to znodes, etc. + * Prepend the passed bytes with four bytes of magic, {@link ProtobufMagic#PB_MAGIC}, + * to flag what follows as a protobuf in hbase. Prepend these bytes to all content written to + * znodes, etc. * @param bytes Bytes to decorate * @return The passed bytes with magic prepended (Creates a new - * byte array that is bytes.length plus {@link #PB_MAGIC}.length. + * byte array that is bytes.length plus {@link ProtobufMagic#PB_MAGIC}.length. */ public static byte [] prependPBMagic(final byte [] bytes) { - return Bytes.add(PB_MAGIC, bytes); + return Bytes.add(ProtobufMagic.PB_MAGIC, bytes); } /** * @param bytes Bytes to check. - * @return True if passed bytes has {@link #PB_MAGIC} for a prefix. + * @return True if passed bytes has {@link ProtobufMagic#PB_MAGIC} for a prefix. */ public static boolean isPBMagicPrefix(final byte [] bytes) { if (bytes == null) return false; @@ -265,11 +263,12 @@ public final class ProtobufUtil { /** * @param bytes Bytes to check. - * @return True if passed bytes has {@link #PB_MAGIC} for a prefix. + * @return True if passed bytes has {@link ProtobufMagic#PB_MAGIC} for a prefix. 
*/ public static boolean isPBMagicPrefix(final byte [] bytes, int offset, int len) { - if (bytes == null || len < PB_MAGIC.length) return false; - return Bytes.compareTo(PB_MAGIC, 0, PB_MAGIC.length, bytes, offset, PB_MAGIC.length) == 0; + if (bytes == null || len < ProtobufMagic.PB_MAGIC.length) return false; + return Bytes.compareTo(ProtobufMagic.PB_MAGIC, 0, ProtobufMagic.PB_MAGIC.length, + bytes, offset, ProtobufMagic.PB_MAGIC.length) == 0; } /** @@ -278,15 +277,16 @@ public final class ProtobufUtil { */ public static void expectPBMagicPrefix(final byte [] bytes) throws DeserializationException { if (!isPBMagicPrefix(bytes)) { - throw new DeserializationException("Missing pb magic " + PB_MAGIC_STR + " prefix"); + throw new DeserializationException("Missing pb magic " + + Bytes.toString(ProtobufMagic.PB_MAGIC) + " prefix"); } } /** - * @return Length of {@link #PB_MAGIC} + * @return Length of {@link ProtobufMagic#PB_MAGIC} */ public static int lengthOfPBMagic() { - return PB_MAGIC.length; + return ProtobufMagic.PB_MAGIC.length; } /** @@ -1160,6 +1160,7 @@ public final class ProtobufUtil { return toMutation(type, mutation, builder, HConstants.NO_NONCE); } + @SuppressWarnings("deprecation") public static MutationProto toMutation(final MutationType type, final Mutation mutation, MutationProto.Builder builder, long nonce) throws IOException { @@ -1684,13 +1685,12 @@ public final class ProtobufUtil { * * @param admin * @param regionName - * @param transitionInZK * @throws IOException */ public static void closeRegion(final AdminService.BlockingInterface admin, - final ServerName server, final byte[] regionName, final boolean transitionInZK) throws IOException { + final ServerName server, final byte[] regionName) throws IOException { CloseRegionRequest closeRegionRequest = - RequestConverter.buildCloseRegionRequest(server, regionName, transitionInZK); + RequestConverter.buildCloseRegionRequest(server, regionName); try { admin.closeRegion(null, closeRegionRequest); } catch (ServiceException se) { @@ -1704,18 +1704,15 @@ public final class ProtobufUtil { * * @param admin * @param regionName - * @param versionOfClosingNode * @return true if the region is closed * @throws IOException */ public static boolean closeRegion(final AdminService.BlockingInterface admin, - final ServerName server, - final byte[] regionName, - final int versionOfClosingNode, final ServerName destinationServer, - final boolean transitionInZK) throws IOException { + final ServerName server, final byte[] regionName, + final ServerName destinationServer) throws IOException { CloseRegionRequest closeRegionRequest = RequestConverter.buildCloseRegionRequest(server, - regionName, versionOfClosingNode, destinationServer, transitionInZK); + regionName, destinationServer); try { CloseRegionResponse response = admin.closeRegion(null, closeRegionRequest); return ResponseConverter.isClosed(response); @@ -1734,7 +1731,7 @@ public final class ProtobufUtil { public static void openRegion(final AdminService.BlockingInterface admin, ServerName server, final HRegionInfo region) throws IOException { OpenRegionRequest request = - RequestConverter.buildOpenRegionRequest(server, region, -1, null, null); + RequestConverter.buildOpenRegionRequest(server, region, null, null); try { admin.openRegion(null, request); } catch (ServiceException se) { @@ -1881,7 +1878,7 @@ public final class ProtobufUtil { public static byte [] toDelimitedByteArray(final Message m) throws IOException { // Allocate arbitrary big size so we avoid resizing. 
ByteArrayOutputStream baos = new ByteArrayOutputStream(4096); - baos.write(PB_MAGIC); + baos.write(ProtobufMagic.PB_MAGIC); m.writeDelimitedTo(baos); return baos.toByteArray(); } @@ -2556,6 +2553,7 @@ public final class ProtobufUtil { } } + @SuppressWarnings("deprecation") public static CompactionDescriptor toCompactionDescriptor(HRegionInfo info, byte[] family, List inputPaths, List outputPaths, Path storeDir) { // compaction descriptor contains relative paths. @@ -2828,4 +2826,141 @@ public final class ProtobufUtil { } return result; } + + /** + * Convert a protocol buffer TimeUnit to a client TimeUnit + * + * @param proto + * @return the converted client TimeUnit + */ + public static TimeUnit toTimeUnit(final HBaseProtos.TimeUnit proto) { + switch (proto) { + case NANOSECONDS: return TimeUnit.NANOSECONDS; + case MICROSECONDS: return TimeUnit.MICROSECONDS; + case MILLISECONDS: return TimeUnit.MILLISECONDS; + case SECONDS: return TimeUnit.SECONDS; + case MINUTES: return TimeUnit.MINUTES; + case HOURS: return TimeUnit.HOURS; + case DAYS: return TimeUnit.DAYS; + } + throw new RuntimeException("Invalid TimeUnit " + proto); + } + + /** + * Convert a client TimeUnit to a protocol buffer TimeUnit + * + * @param timeUnit + * @return the converted protocol buffer TimeUnit + */ + public static HBaseProtos.TimeUnit toProtoTimeUnit(final TimeUnit timeUnit) { + switch (timeUnit) { + case NANOSECONDS: return HBaseProtos.TimeUnit.NANOSECONDS; + case MICROSECONDS: return HBaseProtos.TimeUnit.MICROSECONDS; + case MILLISECONDS: return HBaseProtos.TimeUnit.MILLISECONDS; + case SECONDS: return HBaseProtos.TimeUnit.SECONDS; + case MINUTES: return HBaseProtos.TimeUnit.MINUTES; + case HOURS: return HBaseProtos.TimeUnit.HOURS; + case DAYS: return HBaseProtos.TimeUnit.DAYS; + } + throw new RuntimeException("Invalid TimeUnit " + timeUnit); + } + + /** + * Convert a protocol buffer ThrottleType to a client ThrottleType + * + * @param proto + * @return the converted client ThrottleType + */ + public static ThrottleType toThrottleType(final QuotaProtos.ThrottleType proto) { + switch (proto) { + case REQUEST_NUMBER: return ThrottleType.REQUEST_NUMBER; + case REQUEST_SIZE: return ThrottleType.REQUEST_SIZE; + } + throw new RuntimeException("Invalid ThrottleType " + proto); + } + + /** + * Convert a client ThrottleType to a protocol buffer ThrottleType + * + * @param type + * @return the converted protocol buffer ThrottleType + */ + public static QuotaProtos.ThrottleType toProtoThrottleType(final ThrottleType type) { + switch (type) { + case REQUEST_NUMBER: return QuotaProtos.ThrottleType.REQUEST_NUMBER; + case REQUEST_SIZE: return QuotaProtos.ThrottleType.REQUEST_SIZE; + } + throw new RuntimeException("Invalid ThrottleType " + type); + } + + /** + * Convert a protocol buffer QuotaScope to a client QuotaScope + * + * @param proto + * @return the converted client QuotaScope + */ + public static QuotaScope toQuotaScope(final QuotaProtos.QuotaScope proto) { + switch (proto) { + case CLUSTER: return QuotaScope.CLUSTER; + case MACHINE: return QuotaScope.MACHINE; + } + throw new RuntimeException("Invalid QuotaScope " + proto); + } + + /** + * Convert a client QuotaScope to a protocol buffer QuotaScope + * + * @param scope + * @return the converted protocol buffer QuotaScope + */ + public static QuotaProtos.QuotaScope toProtoQuotaScope(final QuotaScope scope) { + switch (scope) { + case CLUSTER: return QuotaProtos.QuotaScope.CLUSTER; + case MACHINE: return QuotaProtos.QuotaScope.MACHINE; + } + throw new 
RuntimeException("Invalid QuotaScope " + scope); + } + + /** + * Convert a protocol buffer QuotaType to a client QuotaType + * + * @param proto + * @return the converted client QuotaType + */ + public static QuotaType toQuotaScope(final QuotaProtos.QuotaType proto) { + switch (proto) { + case THROTTLE: return QuotaType.THROTTLE; + } + throw new RuntimeException("Invalid QuotaType " + proto); + } + + /** + * Convert a client QuotaType to a protocol buffer QuotaType + * + * @param type + * @return the converted protocol buffer QuotaType + */ + public static QuotaProtos.QuotaType toProtoQuotaScope(final QuotaType type) { + switch (type) { + case THROTTLE: return QuotaProtos.QuotaType.THROTTLE; + } + throw new RuntimeException("Invalid QuotaType " + type); + } + + /** + * Build a protocol buffer TimedQuota + * + * @param limit the allowed number of request/data per timeUnit + * @param timeUnit the limit time unit + * @param scope the quota scope + * @return the protocol buffer TimedQuota + */ + public static QuotaProtos.TimedQuota toTimedQuota(final long limit, final TimeUnit timeUnit, + final QuotaScope scope) { + return QuotaProtos.TimedQuota.newBuilder() + .setSoftLimit(limit) + .setTimeUnit(toProtoTimeUnit(timeUnit)) + .setScope(toProtoQuotaScope(scope)) + .build(); + } } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java index 7095fbd..d23aa02 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java @@ -21,9 +21,6 @@ import java.io.IOException; import java.util.List; import java.util.regex.Pattern; -import org.apache.hadoop.hbase.util.ByteStringer; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.CellScannable; import org.apache.hadoop.hbase.DoNotRetryIOException; import org.apache.hadoop.hbase.HColumnDescriptor; @@ -32,6 +29,7 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.Action; import org.apache.hadoop.hbase.client.Append; import org.apache.hadoop.hbase.client.Delete; @@ -92,6 +90,7 @@ import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusR import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetSchemaAlterStatusRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableDescriptorsRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableNamesRequest; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsCatalogJanitorEnabledRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ModifyColumnRequest; @@ -106,7 +105,6 @@ import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.GetLa import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; -import org.apache.hadoop.hbase.util.Triple; import com.google.protobuf.ByteString; @@ -757,14 +755,12 @@ public final class RequestConverter { * @return a 
protocol buffer OpenRegionRequest */ public static OpenRegionRequest - buildOpenRegionRequest(ServerName server, final List>> regionOpenInfos, Boolean openForReplay) { OpenRegionRequest.Builder builder = OpenRegionRequest.newBuilder(); - for (Triple> regionOpenInfo: regionOpenInfos) { - Integer second = regionOpenInfo.getSecond(); - int versionOfOfflineNode = second == null ? -1 : second.intValue(); - builder.addOpenInfo(buildRegionOpenInfo(regionOpenInfo.getFirst(), versionOfOfflineNode, - regionOpenInfo.getThird(), openForReplay)); + for (Pair> regionOpenInfo: regionOpenInfos) { + builder.addOpenInfo(buildRegionOpenInfo(regionOpenInfo.getFirst(), + regionOpenInfo.getSecond(), openForReplay)); } if (server != null) { builder.setServerStartCode(server.getStartcode()); @@ -777,16 +773,15 @@ public final class RequestConverter { * * @param server the serverName for the RPC * @param region the region to open - * @param versionOfOfflineNode that needs to be present in the offline node * @param favoredNodes * @param openForReplay * @return a protocol buffer OpenRegionRequest */ public static OpenRegionRequest buildOpenRegionRequest(ServerName server, - final HRegionInfo region, final int versionOfOfflineNode, List favoredNodes, + final HRegionInfo region, List favoredNodes, Boolean openForReplay) { OpenRegionRequest.Builder builder = OpenRegionRequest.newBuilder(); - builder.addOpenInfo(buildRegionOpenInfo(region, versionOfOfflineNode, favoredNodes, + builder.addOpenInfo(buildRegionOpenInfo(region, favoredNodes, openForReplay)); if (server != null) { builder.setServerStartCode(server.getStartcode()); @@ -817,33 +812,21 @@ public final class RequestConverter { * Create a CloseRegionRequest for a given region name * * @param regionName the name of the region to close - * @param transitionInZK indicator if to transition in ZK * @return a CloseRegionRequest */ public static CloseRegionRequest buildCloseRegionRequest(ServerName server, - final byte[] regionName, final boolean transitionInZK) { - CloseRegionRequest.Builder builder = CloseRegionRequest.newBuilder(); - RegionSpecifier region = buildRegionSpecifier( - RegionSpecifierType.REGION_NAME, regionName); - builder.setRegion(region); - builder.setTransitionInZK(transitionInZK); - if (server != null) { - builder.setServerStartCode(server.getStartcode()); - } - return builder.build(); + final byte[] regionName) { + return buildCloseRegionRequest(server, regionName, null); } public static CloseRegionRequest buildCloseRegionRequest(ServerName server, - final byte[] regionName, final int versionOfClosingNode, - ServerName destinationServer, final boolean transitionInZK) { + final byte[] regionName, ServerName destinationServer) { CloseRegionRequest.Builder builder = CloseRegionRequest.newBuilder(); RegionSpecifier region = buildRegionSpecifier( RegionSpecifierType.REGION_NAME, regionName); builder.setRegion(region); - builder.setVersionOfClosingNode(versionOfClosingNode); - builder.setTransitionInZK(transitionInZK); if (destinationServer != null){ - builder.setDestinationServer(ProtobufUtil.toServerName( destinationServer) ); + builder.setDestinationServer(ProtobufUtil.toServerName(destinationServer)); } if (server != null) { builder.setServerStartCode(server.getStartcode()); @@ -855,18 +838,15 @@ public final class RequestConverter { * Create a CloseRegionRequest for a given encoded region name * * @param encodedRegionName the name of the region to close - * @param transitionInZK indicator if to transition in ZK * @return a CloseRegionRequest */ 
public static CloseRegionRequest - buildCloseRegionRequest(ServerName server, final String encodedRegionName, - final boolean transitionInZK) { + buildCloseRegionRequest(ServerName server, final String encodedRegionName) { CloseRegionRequest.Builder builder = CloseRegionRequest.newBuilder(); RegionSpecifier region = buildRegionSpecifier( RegionSpecifierType.ENCODED_REGION_NAME, Bytes.toBytes(encodedRegionName)); builder.setRegion(region); - builder.setTransitionInZK(transitionInZK); if (server != null) { builder.setServerStartCode(server.getStartcode()); } @@ -1279,6 +1259,19 @@ public final class RequestConverter { } /** + * Creates a protocol buffer GetTableStateRequest + * + * @param tableName table to get request for + * @return a GetTableStateRequest + */ + public static GetTableStateRequest buildGetTableStateRequest( + final TableName tableName) { + return GetTableStateRequest.newBuilder() + .setTableName(ProtobufUtil.toProtoTableName(tableName)) + .build(); + } + + /** * Creates a protocol buffer GetTableDescriptorsRequest for a single table * * @param tableName the table name @@ -1580,13 +1573,10 @@ public final class RequestConverter { * Create a RegionOpenInfo based on given region info and version of offline node */ private static RegionOpenInfo buildRegionOpenInfo( - final HRegionInfo region, final int versionOfOfflineNode, + final HRegionInfo region, final List favoredNodes, Boolean openForReplay) { RegionOpenInfo.Builder builder = RegionOpenInfo.newBuilder(); builder.setRegion(HRegionInfo.convert(region)); - if (versionOfOfflineNode >= 0) { - builder.setVersionOfOfflineNode(versionOfOfflineNode); - } if (favoredNodes != null) { for (ServerName server : favoredNodes) { builder.addFavoredNodes(ProtobufUtil.toServerName(server)); diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ResponseConverter.java hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ResponseConverter.java index 725736a..1d42a82 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ResponseConverter.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ResponseConverter.java @@ -23,12 +23,12 @@ import java.util.List; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.DoNotRetryIOException; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.ipc.ServerRpcController; import org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsResponse; @@ -39,11 +39,11 @@ import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.OpenRegionResponse import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.ServerInfo; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MultiRequest; +import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MultiResponse; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionAction; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionActionResult; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ResultOrException; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanResponse; 
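A minimal usage sketch, not part of the patch, for the simplified close-region path above now that transitionInZK and versionOfClosingNode are gone; the AdminService stub, server names and region name are assumed to be supplied by the caller.

import java.io.IOException;

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.protobuf.RequestConverter;
import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.AdminService;
import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.CloseRegionRequest;

public class CloseRegionSketch {
  // One-call path: ProtobufUtil builds the request, issues the RPC and
  // rethrows any ServiceException as an IOException.
  static void closeViaProtobufUtil(AdminService.BlockingInterface admin,
      ServerName server, byte[] regionName) throws IOException {
    ProtobufUtil.closeRegion(admin, server, regionName);
  }

  // Building-block path: construct the request yourself; destination may be null.
  static CloseRegionRequest buildRequest(ServerName server, byte[] regionName,
      ServerName destination) {
    return RequestConverter.buildCloseRegionRequest(server, regionName, destination);
  }
}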
-import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MultiResponse; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameBytesPair; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.EnableCatalogJanitorResponse; @@ -114,17 +114,23 @@ public final class ResponseConverter { } for (ResultOrException roe : actionResult.getResultOrExceptionList()) { + Object responseValue; if (roe.hasException()) { - results.add(regionName, roe.getIndex(), ProtobufUtil.toException(roe.getException())); + responseValue = ProtobufUtil.toException(roe.getException()); } else if (roe.hasResult()) { - results.add(regionName, roe.getIndex(), ProtobufUtil.toResult(roe.getResult(), cells)); + responseValue = ProtobufUtil.toResult(roe.getResult(), cells); + // add the load stats, if we got any + if (roe.hasLoadStats()) { + ((Result) responseValue).addResults(roe.getLoadStats()); + } } else if (roe.hasServiceResult()) { - results.add(regionName, roe.getIndex(), roe.getServiceResult()); + responseValue = roe.getServiceResult(); } else { // no result & no exception. Unexpected. throw new IllegalStateException("No result & no exception roe=" + roe + " for region " + actions.getRegion()); } + results.add(regionName, roe.getIndex(), responseValue); } } @@ -149,9 +155,11 @@ public final class ResponseConverter { * @param r * @return an action result builder */ - public static ResultOrException.Builder buildActionResult(final ClientProtos.Result r) { + public static ResultOrException.Builder buildActionResult(final ClientProtos.Result r, + ClientProtos.RegionLoadStats stats) { ResultOrException.Builder builder = ResultOrException.newBuilder(); if (r != null) builder.setResult(r); + if(stats != null) builder.setLoadStats(stats); return builder; } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/InvalidQuotaSettingsException.java hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/InvalidQuotaSettingsException.java new file mode 100644 index 0000000..54a1545 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/InvalidQuotaSettingsException.java @@ -0,0 +1,32 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
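A hedged server-side sketch of the new buildActionResult overload shown above; the protobuf Result and RegionLoadStats values are assumed to come from the region server's own bookkeeping, and stats may be null.

import org.apache.hadoop.hbase.protobuf.ResponseConverter;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ResultOrException;

public class ActionResultSketch {
  // buildActionResult attaches the result and, when present, the load stats;
  // the index ties the entry back to the action in the multi request.
  static ResultOrException toResultOrException(int index, ClientProtos.Result pbResult,
      ClientProtos.RegionLoadStats stats) {
    return ResponseConverter.buildActionResult(pbResult, stats)
        .setIndex(index)
        .build();
  }
}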
+ */ + +package org.apache.hadoop.hbase.quotas; + +import org.apache.hadoop.hbase.DoNotRetryIOException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; + +/** + * Generic quota exceeded exception for invalid settings + */ +@InterfaceAudience.Private +public class InvalidQuotaSettingsException extends DoNotRetryIOException { + public InvalidQuotaSettingsException(String msg) { + super(msg); + } +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaExceededException.java hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaExceededException.java new file mode 100644 index 0000000..e0386b5 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaExceededException.java @@ -0,0 +1,34 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import org.apache.hadoop.hbase.DoNotRetryIOException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; + +/** + * Generic quota exceeded exception + */ +@InterfaceAudience.Public +@InterfaceStability.Evolving +public class QuotaExceededException extends DoNotRetryIOException { + public QuotaExceededException(String msg) { + super(msg); + } +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaFilter.java hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaFilter.java new file mode 100644 index 0000000..c3db6ee --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaFilter.java @@ -0,0 +1,110 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.quotas; + +import java.util.HashSet; +import java.util.Set; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.util.Strings; + +/** + * Filter to use to filter the QuotaRetriever results. 
+ */ +@InterfaceAudience.Public +@InterfaceStability.Evolving +public class QuotaFilter { + private Set types = new HashSet(); + private boolean hasFilters = false; + private String namespaceRegex; + private String tableRegex; + private String userRegex; + + public QuotaFilter() { + } + + /** + * Set the user filter regex + * @param regex the user filter + * @return the quota filter object + */ + public QuotaFilter setUserFilter(final String regex) { + this.userRegex = regex; + hasFilters |= !Strings.isEmpty(regex); + return this; + } + + /** + * Set the table filter regex + * @param regex the table filter + * @return the quota filter object + */ + public QuotaFilter setTableFilter(final String regex) { + this.tableRegex = regex; + hasFilters |= !Strings.isEmpty(regex); + return this; + } + + /** + * Set the namespace filter regex + * @param regex the namespace filter + * @return the quota filter object + */ + public QuotaFilter setNamespaceFilter(final String regex) { + this.namespaceRegex = regex; + hasFilters |= !Strings.isEmpty(regex); + return this; + } + + /** + * Add a type to the filter list + * @param type the type to filter on + * @return the quota filter object + */ + public QuotaFilter addTypeFilter(final QuotaType type) { + this.types.add(type); + hasFilters |= true; + return this; + } + + /** @return true if the filter is empty */ + public boolean isNull() { + return !hasFilters; + } + + /** @return the QuotaType types that we want to filter one */ + public Set getTypeFilters() { + return types; + } + + /** @return the Namespace filter regex */ + public String getNamespaceFilter() { + return namespaceRegex; + } + + /** @return the Table filter regex */ + public String getTableFilter() { + return tableRegex; + } + + /** @return the User filter regex */ + public String getUserFilter() { + return userRegex; + } +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaRetriever.java hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaRetriever.java new file mode 100644 index 0000000..68c8e0a --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaRetriever.java @@ -0,0 +1,185 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
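A short sketch of how the fluent QuotaFilter above is meant to be composed; the regex values are illustrative only.

import org.apache.hadoop.hbase.quotas.QuotaFilter;
import org.apache.hadoop.hbase.quotas.QuotaType;

public class QuotaFilterSketch {
  // Restrict the retriever to throttle settings for users matching "bob.*"
  // on tables matching "ns:tbl.*"; both patterns are made-up examples.
  static QuotaFilter throttlesForUser() {
    return new QuotaFilter()
        .setUserFilter("bob.*")
        .setTableFilter("ns:tbl.*")
        .addTypeFilter(QuotaType.THROTTLE);
  }
}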
+ */ + +package org.apache.hadoop.hbase.quotas; + +import java.io.Closeable; +import java.io.IOException; +import java.util.Iterator; +import java.util.LinkedList; +import java.util.Queue; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.ResultScanner; +import org.apache.hadoop.hbase.client.Scan; +import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; +import org.apache.hadoop.util.StringUtils; + +/** + * Scanner to iterate over the quota settings. + */ +@InterfaceAudience.Public +@InterfaceStability.Evolving +public class QuotaRetriever implements Closeable, Iterable { + private static final Log LOG = LogFactory.getLog(QuotaRetriever.class); + + private final Queue cache = new LinkedList(); + private ResultScanner scanner; + /** + * Connection to use. + * Could pass one in and have this class use it but this class wants to be standalone. + */ + private Connection connection; + private Table table; + + private QuotaRetriever() { + } + + void init(final Configuration conf, final Scan scan) throws IOException { + this.connection = ConnectionFactory.createConnection(conf); + this.table = this.connection.getTable(QuotaTableUtil.QUOTA_TABLE_NAME); + try { + scanner = table.getScanner(scan); + } catch (IOException e) { + try { + close(); + } catch (IOException ioe) { + LOG.warn("Failed getting scanner and then failed close on cleanup", e); + } + throw e; + } + } + + public void close() throws IOException { + if (this.table != null) { + this.table.close(); + this.table = null; + } + if (this.connection != null) { + this.connection.close(); + this.connection = null; + } + } + + public QuotaSettings next() throws IOException { + if (cache.isEmpty()) { + Result result = scanner.next(); + if (result == null) return null; + + QuotaTableUtil.parseResult(result, new QuotaTableUtil.QuotasVisitor() { + @Override + public void visitUserQuotas(String userName, Quotas quotas) { + cache.addAll(QuotaSettingsFactory.fromUserQuotas(userName, quotas)); + } + + @Override + public void visitUserQuotas(String userName, TableName table, Quotas quotas) { + cache.addAll(QuotaSettingsFactory.fromUserQuotas(userName, table, quotas)); + } + + @Override + public void visitUserQuotas(String userName, String namespace, Quotas quotas) { + cache.addAll(QuotaSettingsFactory.fromUserQuotas(userName, namespace, quotas)); + } + + @Override + public void visitTableQuotas(TableName tableName, Quotas quotas) { + cache.addAll(QuotaSettingsFactory.fromTableQuotas(tableName, quotas)); + } + + @Override + public void visitNamespaceQuotas(String namespace, Quotas quotas) { + cache.addAll(QuotaSettingsFactory.fromNamespaceQuotas(namespace, quotas)); + } + }); + } + return cache.poll(); + } + + @Override + public Iterator iterator() { + return new Iter(); + } + + private class Iter implements Iterator { + QuotaSettings cache; + + public Iter() { + try { + cache = QuotaRetriever.this.next(); + } catch (IOException e) { + LOG.warn(StringUtils.stringifyException(e)); + } + } + + @Override + public boolean hasNext() { + return cache != 
null; + } + + @Override + public QuotaSettings next() { + QuotaSettings result = cache; + try { + cache = QuotaRetriever.this.next(); + } catch (IOException e) { + LOG.warn(StringUtils.stringifyException(e)); + } + return result; + } + + @Override + public void remove() { + throw new RuntimeException("remove() not supported"); + } + } + + /** + * Open a QuotaRetriever with no filter, all the quota settings will be returned. + * @param conf Configuration object to use. + * @return the QuotaRetriever + * @throws IOException if a remote or network exception occurs + */ + public static QuotaRetriever open(final Configuration conf) throws IOException { + return open(conf, null); + } + + /** + * Open a QuotaRetriever with the specified filter. + * @param conf Configuration object to use. + * @param filter the QuotaFilter + * @return the QuotaRetriever + * @throws IOException if a remote or network exception occurs + */ + public static QuotaRetriever open(final Configuration conf, final QuotaFilter filter) + throws IOException { + Scan scan = QuotaTableUtil.makeScan(filter); + QuotaRetriever scanner = new QuotaRetriever(); + scanner.init(conf, scan); + return scanner; + } +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaScope.java hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaScope.java new file mode 100644 index 0000000..2e215b6 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaScope.java @@ -0,0 +1,43 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.quotas; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; + +/** + * Describe the Scope of the quota rules. + * The quota can be enforced at the cluster level or at machine level. + */ +@InterfaceAudience.Public +@InterfaceStability.Evolving +public enum QuotaScope { + /** + * The specified throttling rules will be applied at the cluster level. + * A limit of 100req/min means 100req/min in total. + * If you execute 50req on a machine and then 50req on another machine + * then you have to wait your quota to fill up. + */ + CLUSTER, + + /** + * The specified throttling rules will be applied on the machine level. + * A limit of 100req/min means that each machine can execute 100req/min. 
+ */ + MACHINE, +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettings.java hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettings.java new file mode 100644 index 0000000..592c4db --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettings.java @@ -0,0 +1,124 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.quotas; + +import java.util.concurrent.TimeUnit; + +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest; + +@InterfaceAudience.Public +@InterfaceStability.Evolving +public abstract class QuotaSettings { + private final String userName; + private final String namespace; + private final TableName tableName; + + protected QuotaSettings(final String userName, final TableName tableName, + final String namespace) { + this.userName = userName; + this.namespace = namespace; + this.tableName = tableName; + } + + public abstract QuotaType getQuotaType(); + + public String getUserName() { + return userName; + } + + public TableName getTableName() { + return tableName; + } + + public String getNamespace() { + return namespace; + } + + /** + * Convert a QuotaSettings to a protocol buffer SetQuotaRequest. + * This is used internally by the Admin client to serialize the quota settings + * and send them to the master. + */ + public static SetQuotaRequest buildSetQuotaRequestProto(final QuotaSettings settings) { + SetQuotaRequest.Builder builder = SetQuotaRequest.newBuilder(); + if (settings.getUserName() != null) { + builder.setUserName(settings.getUserName()); + } + if (settings.getTableName() != null) { + builder.setTableName(ProtobufUtil.toProtoTableName(settings.getTableName())); + } + if (settings.getNamespace() != null) { + builder.setNamespace(settings.getNamespace()); + } + settings.setupSetQuotaRequest(builder); + return builder.build(); + } + + /** + * Called by toSetQuotaRequestProto() + * the subclass should implement this method to set the specific SetQuotaRequest + * properties. 
+ */ + protected abstract void setupSetQuotaRequest(SetQuotaRequest.Builder builder); + + protected String ownerToString() { + StringBuilder builder = new StringBuilder(); + if (userName != null) { + builder.append("USER => '"); + builder.append(userName); + builder.append("', "); + } + if (tableName != null) { + builder.append("TABLE => '"); + builder.append(tableName.toString()); + builder.append("', "); + } + if (namespace != null) { + builder.append("NAMESPACE => '"); + builder.append(namespace); + builder.append("', "); + } + return builder.toString(); + } + + protected static String sizeToString(final long size) { + if (size >= (1L << 50)) return String.format("%dP", size / (1L << 50)); + if (size >= (1L << 40)) return String.format("%dT", size / (1L << 40)); + if (size >= (1L << 30)) return String.format("%dG", size / (1L << 30)); + if (size >= (1L << 20)) return String.format("%dM", size / (1L << 20)); + if (size >= (1L << 10)) return String.format("%dK", size / (1L << 10)); + return String.format("%dB", size); + } + + protected static String timeToString(final TimeUnit timeUnit) { + switch (timeUnit) { + case NANOSECONDS: return "nsec"; + case MICROSECONDS: return "usec"; + case MILLISECONDS: return "msec"; + case SECONDS: return "sec"; + case MINUTES: return "min"; + case HOURS: return "hour"; + case DAYS: return "day"; + } + throw new RuntimeException("Invalid TimeUnit " + timeUnit); + } +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java new file mode 100644 index 0000000..e29fef1 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaSettingsFactory.java @@ -0,0 +1,267 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.quotas; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.TimeUnit; + +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; + +@InterfaceAudience.Public +@InterfaceStability.Evolving +public class QuotaSettingsFactory { + static class QuotaGlobalsSettingsBypass extends QuotaSettings { + private final boolean bypassGlobals; + + QuotaGlobalsSettingsBypass(final String userName, final TableName tableName, + final String namespace, final boolean bypassGlobals) { + super(userName, tableName, namespace); + this.bypassGlobals = bypassGlobals; + } + + @Override + public QuotaType getQuotaType() { + return QuotaType.GLOBAL_BYPASS; + } + + @Override + protected void setupSetQuotaRequest(SetQuotaRequest.Builder builder) { + builder.setBypassGlobals(bypassGlobals); + } + + @Override + public String toString() { + return "GLOBAL_BYPASS => " + bypassGlobals; + } + } + + /* ========================================================================== + * QuotaSettings from the Quotas object + */ + static List fromUserQuotas(final String userName, final Quotas quotas) { + return fromQuotas(userName, null, null, quotas); + } + + static List fromUserQuotas(final String userName, final TableName tableName, + final Quotas quotas) { + return fromQuotas(userName, tableName, null, quotas); + } + + static List fromUserQuotas(final String userName, final String namespace, + final Quotas quotas) { + return fromQuotas(userName, null, namespace, quotas); + } + + static List fromTableQuotas(final TableName tableName, final Quotas quotas) { + return fromQuotas(null, tableName, null, quotas); + } + + static List fromNamespaceQuotas(final String namespace, final Quotas quotas) { + return fromQuotas(null, null, namespace, quotas); + } + + private static List fromQuotas(final String userName, final TableName tableName, + final String namespace, final Quotas quotas) { + List settings = new ArrayList(); + if (quotas.hasThrottle()) { + settings.addAll(fromThrottle(userName, tableName, namespace, quotas.getThrottle())); + } + if (quotas.getBypassGlobals() == true) { + settings.add(new QuotaGlobalsSettingsBypass(userName, tableName, namespace, true)); + } + return settings; + } + + private static List fromThrottle(final String userName, final TableName tableName, + final String namespace, final QuotaProtos.Throttle throttle) { + List settings = new ArrayList(); + if (throttle.hasReqNum()) { + settings.add(ThrottleSettings.fromTimedQuota(userName, tableName, namespace, + ThrottleType.REQUEST_NUMBER, throttle.getReqNum())); + } + if (throttle.hasReqSize()) { + settings.add(ThrottleSettings.fromTimedQuota(userName, tableName, namespace, + ThrottleType.REQUEST_SIZE, throttle.getReqSize())); + } + return settings; + } + + /* ========================================================================== + * RPC Throttle + */ + + /** + * Throttle the specified user. 
+ * + * @param userName the user to throttle + * @param type the type of throttling + * @param limit the allowed number of request/data per timeUnit + * @param timeUnit the limit time unit + * @return the quota settings + */ + public static QuotaSettings throttleUser(final String userName, final ThrottleType type, + final long limit, final TimeUnit timeUnit) { + return throttle(userName, null, null, type, limit, timeUnit); + } + + /** + * Throttle the specified user on the specified table. + * + * @param userName the user to throttle + * @param tableName the table to throttle + * @param type the type of throttling + * @param limit the allowed number of request/data per timeUnit + * @param timeUnit the limit time unit + * @return the quota settings + */ + public static QuotaSettings throttleUser(final String userName, final TableName tableName, + final ThrottleType type, final long limit, final TimeUnit timeUnit) { + return throttle(userName, tableName, null, type, limit, timeUnit); + } + + /** + * Throttle the specified user on the specified namespace. + * + * @param userName the user to throttle + * @param namespace the namespace to throttle + * @param type the type of throttling + * @param limit the allowed number of request/data per timeUnit + * @param timeUnit the limit time unit + * @return the quota settings + */ + public static QuotaSettings throttleUser(final String userName, final String namespace, + final ThrottleType type, final long limit, final TimeUnit timeUnit) { + return throttle(userName, null, namespace, type, limit, timeUnit); + } + + /** + * Remove the throttling for the specified user. + * + * @param userName the user + * @return the quota settings + */ + public static QuotaSettings unthrottleUser(final String userName) { + return throttle(userName, null, null, null, 0, null); + } + + /** + * Remove the throttling for the specified user on the specified table. + * + * @param userName the user + * @param tableName the table + * @return the quota settings + */ + public static QuotaSettings unthrottleUser(final String userName, final TableName tableName) { + return throttle(userName, tableName, null, null, 0, null); + } + + /** + * Remove the throttling for the specified user on the specified namespace. + * + * @param userName the user + * @param namespace the namespace + * @return the quota settings + */ + public static QuotaSettings unthrottleUser(final String userName, final String namespace) { + return throttle(userName, null, namespace, null, 0, null); + } + + /** + * Throttle the specified table. + * + * @param tableName the table to throttle + * @param type the type of throttling + * @param limit the allowed number of request/data per timeUnit + * @param timeUnit the limit time unit + * @return the quota settings + */ + public static QuotaSettings throttleTable(final TableName tableName, final ThrottleType type, + final long limit, final TimeUnit timeUnit) { + return throttle(null, tableName, null, type, limit, timeUnit); + } + + /** + * Remove the throttling for the specified table. + * + * @param tableName the table + * @return the quota settings + */ + public static QuotaSettings unthrottleTable(final TableName tableName) { + return throttle(null, tableName, null, null, 0, null); + } + + /** + * Throttle the specified namespace. 
+ * + * @param namespace the namespace to throttle + * @param type the type of throttling + * @param limit the allowed number of request/data per timeUnit + * @param timeUnit the limit time unit + * @return the quota settings + */ + public static QuotaSettings throttleNamespace(final String namespace, final ThrottleType type, + final long limit, final TimeUnit timeUnit) { + return throttle(null, null, namespace, type, limit, timeUnit); + } + + /** + * Remove the throttling for the specified namespace. + * + * @param namespace the namespace + * @return the quota settings + */ + public static QuotaSettings unthrottleNamespace(final String namespace) { + return throttle(null, null, namespace, null, 0, null); + } + + /* Throttle helper */ + private static QuotaSettings throttle(final String userName, final TableName tableName, + final String namespace, final ThrottleType type, final long limit, + final TimeUnit timeUnit) { + QuotaProtos.ThrottleRequest.Builder builder = QuotaProtos.ThrottleRequest.newBuilder(); + if (type != null) { + builder.setType(ProtobufUtil.toProtoThrottleType(type)); + } + if (timeUnit != null) { + builder.setTimedQuota(ProtobufUtil.toTimedQuota(limit, timeUnit, QuotaScope.MACHINE)); + } + return new ThrottleSettings(userName, tableName, namespace, builder.build()); + } + + /* ========================================================================== + * Global Settings + */ + + /** + * Set the "bypass global settings" for the specified user + * + * @param userName the user to throttle + * @param bypassGlobals true if the global settings should be bypassed + * @return the quota settings + */ + public static QuotaSettings bypassGlobals(final String userName, final boolean bypassGlobals) { + return new QuotaGlobalsSettingsBypass(userName, null, null, bypassGlobals); + } +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java new file mode 100644 index 0000000..9491795 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/QuotaTableUtil.java @@ -0,0 +1,412 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
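A small sketch, not part of the patch, combining the factory methods above with QuotaSettings.buildSetQuotaRequestProto; the user name and limit are illustrative values.

import java.util.concurrent.TimeUnit;

import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest;
import org.apache.hadoop.hbase.quotas.QuotaSettings;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class ThrottleQuotaSketch {
  // Limit user "bob" to 100 requests per minute and serialize the setting
  // into the SetQuotaRequest the Admin client sends to the master.
  static SetQuotaRequest limitUserRequests() {
    QuotaSettings settings = QuotaSettingsFactory.throttleUser(
        "bob", ThrottleType.REQUEST_NUMBER, 100, TimeUnit.MINUTES);
    return QuotaSettings.buildSetQuotaRequestProto(settings);
  }

  // Remove the throttle again for the same user.
  static SetQuotaRequest removeUserThrottle() {
    return QuotaSettings.buildSetQuotaRequestProto(
        QuotaSettingsFactory.unthrottleUser("bob"));
  }
}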
+ */ + +package org.apache.hadoop.hbase.quotas; + +import java.io.ByteArrayInputStream; +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.util.List; +import java.util.Map; +import java.util.regex.Pattern; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.NamespaceDescriptor; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.Scan; +import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.filter.CompareFilter; +import org.apache.hadoop.hbase.filter.Filter; +import org.apache.hadoop.hbase.filter.FilterList; +import org.apache.hadoop.hbase.filter.QualifierFilter; +import org.apache.hadoop.hbase.filter.RegexStringComparator; +import org.apache.hadoop.hbase.filter.RowFilter; +import org.apache.hadoop.hbase.protobuf.ProtobufMagic; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.Strings; + +/** + * Helper class to interact with the quota table. + *

    + *     ROW-KEY      FAM/QUAL        DATA
    + *   n.<namespace> q:s         <global-quotas>
    + *   t.<table>     q:s         <global-quotas>
    + *   u.<user>      q:s         <global-quotas>
    + *   u.<user>      q:s.<table> <table-quotas>
    + * u. q:s.: + * + */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +public class QuotaTableUtil { + private static final Log LOG = LogFactory.getLog(QuotaTableUtil.class); + + /** System table for quotas */ + public static final TableName QUOTA_TABLE_NAME = + TableName.valueOf(NamespaceDescriptor.SYSTEM_NAMESPACE_NAME_STR, "quota"); + + protected static final byte[] QUOTA_FAMILY_INFO = Bytes.toBytes("q"); + protected static final byte[] QUOTA_FAMILY_USAGE = Bytes.toBytes("u"); + protected static final byte[] QUOTA_QUALIFIER_SETTINGS = Bytes.toBytes("s"); + protected static final byte[] QUOTA_QUALIFIER_SETTINGS_PREFIX = Bytes.toBytes("s."); + protected static final byte[] QUOTA_USER_ROW_KEY_PREFIX = Bytes.toBytes("u."); + protected static final byte[] QUOTA_TABLE_ROW_KEY_PREFIX = Bytes.toBytes("t."); + protected static final byte[] QUOTA_NAMESPACE_ROW_KEY_PREFIX = Bytes.toBytes("n."); + + /* ========================================================================= + * Quota "settings" helpers + */ + public static Quotas getTableQuota(final Connection connection, final TableName table) + throws IOException { + return getQuotas(connection, getTableRowKey(table)); + } + + public static Quotas getNamespaceQuota(final Connection connection, final String namespace) + throws IOException { + return getQuotas(connection, getNamespaceRowKey(namespace)); + } + + public static Quotas getUserQuota(final Connection connection, final String user) + throws IOException { + return getQuotas(connection, getUserRowKey(user)); + } + + public static Quotas getUserQuota(final Connection connection, final String user, + final TableName table) throws IOException { + return getQuotas(connection, getUserRowKey(user), getSettingsQualifierForUserTable(table)); + } + + public static Quotas getUserQuota(final Connection connection, final String user, + final String namespace) throws IOException { + return getQuotas(connection, getUserRowKey(user), + getSettingsQualifierForUserNamespace(namespace)); + } + + private static Quotas getQuotas(final Connection connection, final byte[] rowKey) + throws IOException { + return getQuotas(connection, rowKey, QUOTA_QUALIFIER_SETTINGS); + } + + private static Quotas getQuotas(final Connection connection, final byte[] rowKey, + final byte[] qualifier) throws IOException { + Get get = new Get(rowKey); + get.addColumn(QUOTA_FAMILY_INFO, qualifier); + Result result = doGet(connection, get); + if (result.isEmpty()) { + return null; + } + return quotasFromData(result.getValue(QUOTA_FAMILY_INFO, qualifier)); + } + + public static Get makeGetForTableQuotas(final TableName table) { + Get get = new Get(getTableRowKey(table)); + get.addFamily(QUOTA_FAMILY_INFO); + return get; + } + + public static Get makeGetForNamespaceQuotas(final String namespace) { + Get get = new Get(getNamespaceRowKey(namespace)); + get.addFamily(QUOTA_FAMILY_INFO); + return get; + } + + public static Get makeGetForUserQuotas(final String user, final Iterable tables, + final Iterable namespaces) { + Get get = new Get(getUserRowKey(user)); + get.addColumn(QUOTA_FAMILY_INFO, QUOTA_QUALIFIER_SETTINGS); + for (final TableName table: tables) { + get.addColumn(QUOTA_FAMILY_INFO, getSettingsQualifierForUserTable(table)); + } + for (final String ns: namespaces) { + get.addColumn(QUOTA_FAMILY_INFO, getSettingsQualifierForUserNamespace(ns)); + } + return get; + } + + public static Scan makeScan(final QuotaFilter filter) { + Scan scan = new Scan(); + scan.addFamily(QUOTA_FAMILY_INFO); + if (filter != null && 
!filter.isNull()) { + scan.setFilter(makeFilter(filter)); + } + return scan; + } + + /** + * converts quotafilter to serializeable filterlists. + */ + public static Filter makeFilter(final QuotaFilter filter) { + FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL); + if (!Strings.isEmpty(filter.getUserFilter())) { + FilterList userFilters = new FilterList(FilterList.Operator.MUST_PASS_ONE); + boolean hasFilter = false; + + if (!Strings.isEmpty(filter.getNamespaceFilter())) { + FilterList nsFilters = new FilterList(FilterList.Operator.MUST_PASS_ALL); + nsFilters.addFilter(new RowFilter(CompareFilter.CompareOp.EQUAL, + new RegexStringComparator(getUserRowKeyRegex(filter.getUserFilter()), 0))); + nsFilters.addFilter(new QualifierFilter(CompareFilter.CompareOp.EQUAL, + new RegexStringComparator( + getSettingsQualifierRegexForUserNamespace(filter.getNamespaceFilter()), 0))); + userFilters.addFilter(nsFilters); + hasFilter = true; + } + if (!Strings.isEmpty(filter.getTableFilter())) { + FilterList tableFilters = new FilterList(FilterList.Operator.MUST_PASS_ALL); + tableFilters.addFilter(new RowFilter(CompareFilter.CompareOp.EQUAL, + new RegexStringComparator(getUserRowKeyRegex(filter.getUserFilter()), 0))); + tableFilters.addFilter(new QualifierFilter(CompareFilter.CompareOp.EQUAL, + new RegexStringComparator( + getSettingsQualifierRegexForUserTable(filter.getTableFilter()), 0))); + userFilters.addFilter(tableFilters); + hasFilter = true; + } + if (!hasFilter) { + userFilters.addFilter(new RowFilter(CompareFilter.CompareOp.EQUAL, + new RegexStringComparator(getUserRowKeyRegex(filter.getUserFilter()), 0))); + } + + filterList.addFilter(userFilters); + } else if (!Strings.isEmpty(filter.getTableFilter())) { + filterList.addFilter(new RowFilter(CompareFilter.CompareOp.EQUAL, + new RegexStringComparator(getTableRowKeyRegex(filter.getTableFilter()), 0))); + } else if (!Strings.isEmpty(filter.getNamespaceFilter())) { + filterList.addFilter(new RowFilter(CompareFilter.CompareOp.EQUAL, + new RegexStringComparator(getNamespaceRowKeyRegex(filter.getNamespaceFilter()), 0))); + } + return filterList; + } + + public static interface UserQuotasVisitor { + void visitUserQuotas(final String userName, final Quotas quotas) + throws IOException; + void visitUserQuotas(final String userName, final TableName table, final Quotas quotas) + throws IOException; + void visitUserQuotas(final String userName, final String namespace, final Quotas quotas) + throws IOException; + } + + public static interface TableQuotasVisitor { + void visitTableQuotas(final TableName tableName, final Quotas quotas) + throws IOException; + } + + public static interface NamespaceQuotasVisitor { + void visitNamespaceQuotas(final String namespace, final Quotas quotas) + throws IOException; + } + + public static interface QuotasVisitor extends UserQuotasVisitor, + TableQuotasVisitor, NamespaceQuotasVisitor { + } + + public static void parseResult(final Result result, final QuotasVisitor visitor) + throws IOException { + byte[] row = result.getRow(); + if (isNamespaceRowKey(row)) { + parseNamespaceResult(result, visitor); + } else if (isTableRowKey(row)) { + parseTableResult(result, visitor); + } else if (isUserRowKey(row)) { + parseUserResult(result, visitor); + } else { + LOG.warn("unexpected row-key: " + Bytes.toString(row)); + } + } + + public static void parseNamespaceResult(final Result result, + final NamespaceQuotasVisitor visitor) throws IOException { + String namespace = getNamespaceFromRowKey(result.getRow()); + 
parseNamespaceResult(namespace, result, visitor); + } + + protected static void parseNamespaceResult(final String namespace, final Result result, + final NamespaceQuotasVisitor visitor) throws IOException { + byte[] data = result.getValue(QUOTA_FAMILY_INFO, QUOTA_QUALIFIER_SETTINGS); + if (data != null) { + Quotas quotas = quotasFromData(data); + visitor.visitNamespaceQuotas(namespace, quotas); + } + } + + public static void parseTableResult(final Result result, final TableQuotasVisitor visitor) + throws IOException { + TableName table = getTableFromRowKey(result.getRow()); + parseTableResult(table, result, visitor); + } + + protected static void parseTableResult(final TableName table, final Result result, + final TableQuotasVisitor visitor) throws IOException { + byte[] data = result.getValue(QUOTA_FAMILY_INFO, QUOTA_QUALIFIER_SETTINGS); + if (data != null) { + Quotas quotas = quotasFromData(data); + visitor.visitTableQuotas(table, quotas); + } + } + + public static void parseUserResult(final Result result, final UserQuotasVisitor visitor) + throws IOException { + String userName = getUserFromRowKey(result.getRow()); + parseUserResult(userName, result, visitor); + } + + protected static void parseUserResult(final String userName, final Result result, + final UserQuotasVisitor visitor) throws IOException { + Map familyMap = result.getFamilyMap(QUOTA_FAMILY_INFO); + if (familyMap == null || familyMap.isEmpty()) return; + + for (Map.Entry entry: familyMap.entrySet()) { + Quotas quotas = quotasFromData(entry.getValue()); + if (Bytes.startsWith(entry.getKey(), QUOTA_QUALIFIER_SETTINGS_PREFIX)) { + String name = Bytes.toString(entry.getKey(), QUOTA_QUALIFIER_SETTINGS_PREFIX.length); + if (name.charAt(name.length() - 1) == TableName.NAMESPACE_DELIM) { + String namespace = name.substring(0, name.length() - 1); + visitor.visitUserQuotas(userName, namespace, quotas); + } else { + TableName table = TableName.valueOf(name); + visitor.visitUserQuotas(userName, table, quotas); + } + } else if (Bytes.equals(entry.getKey(), QUOTA_QUALIFIER_SETTINGS)) { + visitor.visitUserQuotas(userName, quotas); + } + } + } + + /* ========================================================================= + * Quotas protobuf helpers + */ + protected static Quotas quotasFromData(final byte[] data) throws IOException { + int magicLen = ProtobufMagic.lengthOfPBMagic(); + if (!ProtobufMagic.isPBMagicPrefix(data, 0, magicLen)) { + throw new IOException("Missing pb magic prefix"); + } + return Quotas.parseFrom(new ByteArrayInputStream(data, magicLen, data.length - magicLen)); + } + + protected static byte[] quotasToData(final Quotas data) throws IOException { + ByteArrayOutputStream stream = new ByteArrayOutputStream(); + stream.write(ProtobufMagic.PB_MAGIC); + data.writeTo(stream); + return stream.toByteArray(); + } + + public static boolean isEmptyQuota(final Quotas quotas) { + boolean hasSettings = false; + hasSettings |= quotas.hasThrottle(); + hasSettings |= quotas.hasBypassGlobals(); + return !hasSettings; + } + + /* ========================================================================= + * HTable helpers + */ + protected static Result doGet(final Connection connection, final Get get) + throws IOException { + try (Table table = connection.getTable(QUOTA_TABLE_NAME)) { + return table.get(get); + } + } + + protected static Result[] doGet(final Connection connection, final List gets) + throws IOException { + try (Table table = connection.getTable(QUOTA_TABLE_NAME)) { + return table.get(gets); + } + } + + /* 
========================================================================= + * Quota table row key helpers + */ + protected static byte[] getUserRowKey(final String user) { + return Bytes.add(QUOTA_USER_ROW_KEY_PREFIX, Bytes.toBytes(user)); + } + + protected static byte[] getTableRowKey(final TableName table) { + return Bytes.add(QUOTA_TABLE_ROW_KEY_PREFIX, table.getName()); + } + + protected static byte[] getNamespaceRowKey(final String namespace) { + return Bytes.add(QUOTA_NAMESPACE_ROW_KEY_PREFIX, Bytes.toBytes(namespace)); + } + + protected static byte[] getSettingsQualifierForUserTable(final TableName tableName) { + return Bytes.add(QUOTA_QUALIFIER_SETTINGS_PREFIX, tableName.getName()); + } + + protected static byte[] getSettingsQualifierForUserNamespace(final String namespace) { + return Bytes.add(QUOTA_QUALIFIER_SETTINGS_PREFIX, + Bytes.toBytes(namespace + TableName.NAMESPACE_DELIM)); + } + + protected static String getUserRowKeyRegex(final String user) { + return getRowKeyRegEx(QUOTA_USER_ROW_KEY_PREFIX, user); + } + + protected static String getTableRowKeyRegex(final String table) { + return getRowKeyRegEx(QUOTA_TABLE_ROW_KEY_PREFIX, table); + } + + protected static String getNamespaceRowKeyRegex(final String namespace) { + return getRowKeyRegEx(QUOTA_NAMESPACE_ROW_KEY_PREFIX, namespace); + } + + private static String getRowKeyRegEx(final byte[] prefix, final String regex) { + return '^' + Pattern.quote(Bytes.toString(prefix)) + regex + '$'; + } + + protected static String getSettingsQualifierRegexForUserTable(final String table) { + return '^' + Pattern.quote(Bytes.toString(QUOTA_QUALIFIER_SETTINGS_PREFIX)) + + table + "(? THROTTLE"); + if (proto.hasType()) { + builder.append(", THROTTLE_TYPE => "); + builder.append(proto.getType().toString()); + } + if (proto.hasTimedQuota()) { + QuotaProtos.TimedQuota timedQuota = proto.getTimedQuota(); + builder.append(", LIMIT => "); + if (timedQuota.hasSoftLimit()) { + switch (getThrottleType()) { + case REQUEST_NUMBER: + builder.append(String.format("%dreq", timedQuota.getSoftLimit())); + break; + case REQUEST_SIZE: + builder.append(sizeToString(timedQuota.getSoftLimit())); + break; + } + } else if (timedQuota.hasShare()) { + builder.append(String.format("%.2f%%", timedQuota.getShare())); + } + builder.append('/'); + builder.append(timeToString(ProtobufUtil.toTimeUnit(timedQuota.getTimeUnit()))); + if (timedQuota.hasScope()) { + builder.append(", SCOPE => "); + builder.append(timedQuota.getScope().toString()); + } + } else { + builder.append(", LIMIT => NONE"); + } + return builder.toString(); + } + + static ThrottleSettings fromTimedQuota(final String userName, + final TableName tableName, final String namespace, + ThrottleType type, QuotaProtos.TimedQuota timedQuota) { + QuotaProtos.ThrottleRequest.Builder builder = QuotaProtos.ThrottleRequest.newBuilder(); + builder.setType(ProtobufUtil.toProtoThrottleType(type)); + builder.setTimedQuota(timedQuota); + return new ThrottleSettings(userName, tableName, namespace, builder.build()); + } +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/ThrottleType.java hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/ThrottleType.java new file mode 100644 index 0000000..bb5c093 --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/ThrottleType.java @@ -0,0 +1,34 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
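A hedged sketch built on the QuotaTableUtil getters and row-key helpers earlier in this patch; the Connection, user and table are assumed to exist.

import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas;
import org.apache.hadoop.hbase.quotas.QuotaTableUtil;

public class QuotaLookupSketch {
  // Read the per-user, per-table quota row directly from the quota table and
  // report whether any setting is actually stored there.
  static boolean userHasTableQuota(Connection connection, String user, TableName table)
      throws IOException {
    Quotas quotas = QuotaTableUtil.getUserQuota(connection, user, table);
    return quotas != null && !QuotaTableUtil.isEmptyQuota(quotas);
  }
}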
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.quotas; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; + +/** + * Describe the Throttle Type. + */ +@InterfaceAudience.Public +@InterfaceStability.Evolving +public enum ThrottleType { + /** Throttling based on the number of requests per time-unit */ + REQUEST_NUMBER, + + /** Throttling based on the read+write data size */ + REQUEST_SIZE, +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/ThrottlingException.java hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/ThrottlingException.java new file mode 100644 index 0000000..dad1edd --- /dev/null +++ hbase-client/src/main/java/org/apache/hadoop/hbase/quotas/ThrottlingException.java @@ -0,0 +1,170 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import java.util.regex.Matcher; +import java.util.regex.Pattern; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; + +/** + * Describe the throttling result. + * + * TODO: At some point this will be handled on the client side to prevent + * the operation from going to the server if the waitInterval is greater than the one got + * as a result of this exception. 
+ */ +@InterfaceAudience.Public +@InterfaceStability.Evolving +public class ThrottlingException extends QuotaExceededException { + private static final long serialVersionUID = 1406576492085155743L; + + private static final Log LOG = LogFactory.getLog(ThrottlingException.class); + + @InterfaceAudience.Public + @InterfaceStability.Evolving + public enum Type { + NumRequestsExceeded, + NumReadRequestsExceeded, + NumWriteRequestsExceeded, + WriteSizeExceeded, + ReadSizeExceeded, + } + + private static final String[] MSG_TYPE = new String[] { + "number of requests exceeded", + "number of read requests exceeded", + "number of write requests exceeded", + "write size limit exceeded", + "read size limit exceeded", + }; + + private static final String MSG_WAIT = " - wait "; + + private long waitInterval; + private Type type; + + public ThrottlingException(String msg) { + super(msg); + + // Dirty workaround to get the information after + // ((RemoteException)e.getCause()).unwrapRemoteException() + for (int i = 0; i < MSG_TYPE.length; ++i) { + int index = msg.indexOf(MSG_TYPE[i]); + if (index >= 0) { + String waitTimeStr = msg.substring(index + MSG_TYPE[i].length() + MSG_WAIT.length()); + type = Type.values()[i];; + waitInterval = timeFromString(waitTimeStr); + break; + } + } + } + + public ThrottlingException(final Type type, final long waitInterval, final String msg) { + super(msg); + this.waitInterval = waitInterval; + this.type = type; + } + + public Type getType() { + return this.type; + } + + public long getWaitInterval() { + return this.waitInterval; + } + + public static void throwNumRequestsExceeded(final long waitInterval) + throws ThrottlingException { + throwThrottlingException(Type.NumRequestsExceeded, waitInterval); + } + + public static void throwNumReadRequestsExceeded(final long waitInterval) + throws ThrottlingException { + throwThrottlingException(Type.NumReadRequestsExceeded, waitInterval); + } + + public static void throwNumWriteRequestsExceeded(final long waitInterval) + throws ThrottlingException { + throwThrottlingException(Type.NumWriteRequestsExceeded, waitInterval); + } + + public static void throwWriteSizeExceeded(final long waitInterval) + throws ThrottlingException { + throwThrottlingException(Type.WriteSizeExceeded, waitInterval); + } + + public static void throwReadSizeExceeded(final long waitInterval) + throws ThrottlingException { + throwThrottlingException(Type.ReadSizeExceeded, waitInterval); + } + + private static void throwThrottlingException(final Type type, final long waitInterval) + throws ThrottlingException { + String msg = MSG_TYPE[type.ordinal()] + MSG_WAIT + formatTime(waitInterval); + throw new ThrottlingException(type, waitInterval, msg); + } + + public static String formatTime(long timeDiff) { + StringBuilder buf = new StringBuilder(); + long hours = timeDiff / (60*60*1000); + long rem = (timeDiff % (60*60*1000)); + long minutes = rem / (60*1000); + rem = rem % (60*1000); + float seconds = rem / 1000.0f; + + if (hours != 0){ + buf.append(hours); + buf.append("hrs, "); + } + if (minutes != 0){ + buf.append(minutes); + buf.append("mins, "); + } + buf.append(String.format("%.2fsec", seconds)); + return buf.toString(); + } + + private static long timeFromString(String timeDiff) { + Pattern[] patterns = new Pattern[] { + Pattern.compile("^(\\d+\\.\\d\\d)sec"), + Pattern.compile("^(\\d+)mins, (\\d+\\.\\d\\d)sec"), + Pattern.compile("^(\\d+)hrs, (\\d+)mins, (\\d+\\.\\d\\d)sec") + }; + + for (int i = 0; i < patterns.length; ++i) { + Matcher m = 
patterns[i].matcher(timeDiff); + if (m.find()) { + long time = Math.round(Float.parseFloat(m.group(1 + i)) * 1000); + if (i > 0) { + time += Long.parseLong(m.group(i)) * (60 * 1000); + } + if (i > 1) { + time += Long.parseLong(m.group(i - 1)) * (60 * 60 * 1000); + } + return time; + } + } + + return -1; + } +} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/LeaseException.java hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/LeaseException.java index de1726b..d1fdae3 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/LeaseException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/LeaseException.java @@ -18,9 +18,9 @@ */ package org.apache.hadoop.hbase.regionserver; +import org.apache.hadoop.hbase.DoNotRetryIOException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.DoNotRetryIOException; /** * Reports a problem with a lease diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/NoSuchColumnFamilyException.java hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/NoSuchColumnFamilyException.java index 4b541c8..d3b1ec1 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/NoSuchColumnFamilyException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/NoSuchColumnFamilyException.java @@ -18,9 +18,9 @@ */ package org.apache.hadoop.hbase.regionserver; +import org.apache.hadoop.hbase.DoNotRetryIOException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.DoNotRetryIOException; /** * Thrown if request for nonexistent column family. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/RegionAlreadyInTransitionException.java hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/RegionAlreadyInTransitionException.java deleted file mode 100644 index 3d54827..0000000 --- hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/RegionAlreadyInTransitionException.java +++ /dev/null @@ -1,39 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.apache.hadoop.hbase.regionserver; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; - -import java.io.IOException; - -/** - * This exception is thrown when a region server is asked to open or close - * a region but it's already processing it - */ -@SuppressWarnings("serial") -@InterfaceAudience.Public -@InterfaceStability.Stable -public class RegionAlreadyInTransitionException extends IOException { - - public RegionAlreadyInTransitionException(String s) { - super(s); - } - -} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/WrongRegionException.java hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/WrongRegionException.java index 4338539..c2460d4 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/WrongRegionException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/WrongRegionException.java @@ -18,11 +18,11 @@ */ package org.apache.hadoop.hbase.regionserver; +import java.io.IOException; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import java.io.IOException; - /** * Thrown when a request contains a key which is not part of this region */ diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FailedLogCloseException.java hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FailedLogCloseException.java index 8af8d75..cc42819 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FailedLogCloseException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FailedLogCloseException.java @@ -18,11 +18,11 @@ */ package org.apache.hadoop.hbase.regionserver.wal; +import java.io.IOException; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import java.io.IOException; - /** * Thrown when we fail close of the write-ahead-log file. * Package private. Only used inside this package. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeer.java hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeer.java index 6925778..b8b5b22 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeer.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeer.java @@ -20,10 +20,10 @@ package org.apache.hadoop.hbase.replication; import java.util.List; import java.util.Map; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseInterfaceAudience; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * ReplicationPeer manages enabled / disabled state for the peer. 
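A usage note on the quota classes introduced above (ThrottleType, ThrottlingException): a throttled client can read the suggested wait interval off the exception and back off before retrying. The sketch below is illustrative only and not part of the patch; the table name "t1", the row key "row1", and the three-attempt retry policy are assumptions, and whether the exception reaches the caller directly or wrapped depends on the client retry path.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.quotas.ThrottlingException;
import org.apache.hadoop.hbase.util.Bytes;

public class ThrottleAwareGet {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("t1"))) {   // "t1" is an assumed table
      Get get = new Get(Bytes.toBytes("row1"));                          // "row1" is an assumed row
      Result result = null;
      for (int attempt = 0; attempt < 3 && result == null; ++attempt) {
        try {
          result = table.get(get);
        } catch (ThrottlingException e) {
          // getWaitInterval() carries the server-suggested backoff in milliseconds.
          Thread.sleep(Math.max(e.getWaitInterval(), 1000L));
        }
      }
    }
  }
}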
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java index d20ab44..6d42d5b 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeerZKImpl.java @@ -26,10 +26,10 @@ import java.util.Map; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Abortable; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.replication.ReplicationAdmin; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeers.java hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeers.java index a601fb5..359dbff 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeers.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeers.java @@ -22,9 +22,9 @@ import java.util.List; import java.util.Map; import java.util.Set; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Pair; /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java index de6f79e..8f01a76 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationPeersZKImpl.java @@ -43,8 +43,8 @@ import org.apache.hadoop.hbase.replication.ReplicationPeer.PeerState; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.hadoop.hbase.zookeeper.ZKUtil.ZKUtilOp; +import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.zookeeper.KeeperException; import com.google.protobuf.ByteString; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueInfo.java hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueInfo.java index fa0c654..ab9a2c2 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueInfo.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueueInfo.java @@ -19,16 +19,16 @@ package org.apache.hadoop.hbase.replication; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.ServerName; -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; - /** * This class is responsible for the parsing logic for a znode representing a queue. 
* It will extract the peerId if it's recovered as well as the dead region servers diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java index 6a30511..635b021 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationQueuesZKImpl.java @@ -36,8 +36,8 @@ import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.hadoop.hbase.zookeeper.ZKUtil.ZKUtilOp; +import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.zookeeper.KeeperException; /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java index b469df4..51d7473 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/replication/ReplicationTracker.java @@ -18,10 +18,10 @@ */ package org.apache.hadoop.hbase.replication; -import org.apache.hadoop.hbase.classification.InterfaceAudience; - import java.util.List; +import org.apache.hadoop.hbase.classification.InterfaceAudience; + /** * This is the interface for a Replication Tracker. A replication tracker provides the facility to * subscribe and track events that reflect a change in replication state. These events are used by diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/AccessDeniedException.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/AccessDeniedException.java index 8b446f5..07b871d 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/AccessDeniedException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/AccessDeniedException.java @@ -17,9 +17,9 @@ */ package org.apache.hadoop.hbase.security; +import org.apache.hadoop.hbase.DoNotRetryIOException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.DoNotRetryIOException; /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/AuthMethod.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/AuthMethod.java index 38065fd..e2cc3cf 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/AuthMethod.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/AuthMethod.java @@ -19,13 +19,13 @@ package org.apache.hadoop.hbase.security; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.security.UserGroupInformation; - import java.io.DataInput; import java.io.DataOutput; import java.io.IOException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.security.UserGroupInformation; + /** Authentication method */ @InterfaceAudience.Private public enum AuthMethod { diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/EncryptionUtil.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/EncryptionUtil.java index f446c66..f4bc3e9 100644 --- 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/EncryptionUtil.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/EncryptionUtil.java @@ -26,14 +26,14 @@ import java.security.SecureRandom; import javax.crypto.spec.SecretKeySpec; -import org.apache.hadoop.hbase.util.ByteStringer; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.io.crypto.Cipher; import org.apache.hadoop.hbase.io.crypto.Encryption; import org.apache.hadoop.hbase.protobuf.generated.EncryptionProtos; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/HBaseSaslRpcClient.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/HBaseSaslRpcClient.java index 8f6e8e1..5a31f26 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/HBaseSaslRpcClient.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/HBaseSaslRpcClient.java @@ -18,15 +18,13 @@ package org.apache.hadoop.hbase.security; -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.io.WritableUtils; -import org.apache.hadoop.ipc.RemoteException; -import org.apache.hadoop.security.SaslInputStream; -import org.apache.hadoop.security.SaslOutputStream; -import org.apache.hadoop.security.token.Token; -import org.apache.hadoop.security.token.TokenIdentifier; +import java.io.BufferedInputStream; +import java.io.BufferedOutputStream; +import java.io.DataInputStream; +import java.io.DataOutputStream; +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; import javax.security.auth.callback.Callback; import javax.security.auth.callback.CallbackHandler; @@ -39,13 +37,15 @@ import javax.security.sasl.Sasl; import javax.security.sasl.SaslClient; import javax.security.sasl.SaslException; -import java.io.BufferedInputStream; -import java.io.BufferedOutputStream; -import java.io.DataInputStream; -import java.io.DataOutputStream; -import java.io.IOException; -import java.io.InputStream; -import java.io.OutputStream; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.io.WritableUtils; +import org.apache.hadoop.ipc.RemoteException; +import org.apache.hadoop.security.SaslInputStream; +import org.apache.hadoop.security.SaslOutputStream; +import org.apache.hadoop.security.token.Token; +import org.apache.hadoop.security.token.TokenIdentifier; import com.google.common.annotations.VisibleForTesting; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUtil.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUtil.java index 9cde790..726a753 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUtil.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/SaslUtil.java @@ -18,14 +18,14 @@ */ package org.apache.hadoop.hbase.security; -import org.apache.commons.codec.binary.Base64; -import 
org.apache.hadoop.hbase.classification.InterfaceAudience; - import java.util.Map; import java.util.TreeMap; import javax.security.sasl.Sasl; +import org.apache.commons.codec.binary.Base64; +import org.apache.hadoop.hbase.classification.InterfaceAudience; + @InterfaceAudience.Private public class SaslUtil { public static final String SASL_DEFAULT_REALM = "default"; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/SecureBulkLoadUtil.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/SecureBulkLoadUtil.java index 2fde925..30959a0 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/SecureBulkLoadUtil.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/SecureBulkLoadUtil.java @@ -18,9 +18,9 @@ */ package org.apache.hadoop.hbase.security; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; @InterfaceAudience.Private diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlClient.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlClient.java index 4500573..d0eb40d 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlClient.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlClient.java @@ -189,39 +189,27 @@ public class AccessControlClient { * @throws Throwable */ public static List getUserPermissions(Configuration conf, String tableRegex) - throws Throwable { - try (Connection connection = ConnectionFactory.createConnection(conf)) { - return getUserPermissions(connection, tableRegex); - } - } - - /** - * List all the userPermissions matching the given pattern. - * @param connection - * @param tableRegex The regular expression string to match against - * @return - returns an array of UserPermissions - * @throws Throwable - */ - public static List getUserPermissions(Connection connection, String tableRegex) - throws Throwable { + throws Throwable { List permList = new ArrayList(); // TODO: Make it so caller passes in a Connection rather than have us do this expensive // setup each time. This class only used in test and shell at moment though. 
- try (Table table = connection.getTable(ACL_TABLE_NAME)) { - try (Admin admin = connection.getAdmin()) { - CoprocessorRpcChannel service = table.coprocessorService(HConstants.EMPTY_START_ROW); - BlockingInterface protocol = + try (Connection connection = ConnectionFactory.createConnection(conf)) { + try (Table table = connection.getTable(ACL_TABLE_NAME)) { + try (Admin admin = connection.getAdmin()) { + CoprocessorRpcChannel service = table.coprocessorService(HConstants.EMPTY_START_ROW); + BlockingInterface protocol = AccessControlProtos.AccessControlService.newBlockingStub(service); - HTableDescriptor[] htds = null; - if (tableRegex == null || tableRegex.isEmpty()) { - permList = ProtobufUtil.getUserPermissions(protocol); - } else if (tableRegex.charAt(0) == '@') { - String namespace = tableRegex.substring(1); - permList = ProtobufUtil.getUserPermissions(protocol, Bytes.toBytes(namespace)); - } else { - htds = admin.listTables(Pattern.compile(tableRegex), true); - for (HTableDescriptor hd : htds) { - permList.addAll(ProtobufUtil.getUserPermissions(protocol, hd.getTableName())); + HTableDescriptor[] htds = null; + if (tableRegex == null || tableRegex.isEmpty()) { + permList = ProtobufUtil.getUserPermissions(protocol); + } else if (tableRegex.charAt(0) == '@') { + String namespace = tableRegex.substring(1); + permList = ProtobufUtil.getUserPermissions(protocol, Bytes.toBytes(namespace)); + } else { + htds = admin.listTables(Pattern.compile(tableRegex), true); + for (HTableDescriptor hd : htds) { + permList.addAll(ProtobufUtil.getUserPermissions(protocol, hd.getTableName())); + } } } } diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/Permission.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/Permission.java index f4538a6..7bf5304 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/Permission.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/Permission.java @@ -18,7 +18,12 @@ package org.apache.hadoop.hbase.security.access; -import com.google.common.collect.Maps; +import java.io.DataInput; +import java.io.DataOutput; +import java.io.IOException; +import java.util.Arrays; +import java.util.Map; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; @@ -26,11 +31,7 @@ import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.io.VersionedWritable; -import java.io.DataInput; -import java.io.DataOutput; -import java.io.IOException; -import java.util.Arrays; -import java.util.Map; +import com.google.common.collect.Maps; /** * Base permissions instance representing the ability to perform a given set diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/TablePermission.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/TablePermission.java index 1451c1a..e4758b0 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/TablePermission.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/TablePermission.java @@ -18,17 +18,17 @@ package org.apache.hadoop.hbase.security.access; +import java.io.DataInput; +import java.io.DataOutput; +import java.io.IOException; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import 
org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; -import java.io.DataInput; -import java.io.DataOutput; -import java.io.IOException; - /** * Represents an authorization for access for the given actions, optionally * restricted to the given column family or column qualifier, over the diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/UserPermission.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/UserPermission.java index f4e87f5..7313989 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/UserPermission.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/UserPermission.java @@ -18,16 +18,16 @@ package org.apache.hadoop.hbase.security.access; +import java.io.DataInput; +import java.io.DataOutput; +import java.io.IOException; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; -import java.io.DataInput; -import java.io.DataOutput; -import java.io.IOException; - /** * Represents an authorization for access over the given table, column family * plus qualifier, for the given user. diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenIdentifier.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenIdentifier.java index 0fb6969..604a21a 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenIdentifier.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenIdentifier.java @@ -18,16 +18,17 @@ package org.apache.hadoop.hbase.security.token; -import com.google.protobuf.ByteString; +import java.io.DataInput; +import java.io.DataOutput; +import java.io.IOException; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos; import org.apache.hadoop.io.Text; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.security.token.TokenIdentifier; -import java.io.DataInput; -import java.io.DataOutput; -import java.io.IOException; +import com.google.protobuf.ByteString; /** * Represents the identity information stored in an HBase authentication token. 
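As a usage note on the AccessControlClient change above, which keeps the Configuration-based getUserPermissions entry point that opens its own Connection internally: a minimal illustrative driver is sketched below; the ".*" table regex and the println loop are assumptions, not part of the patch.

import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.UserPermission;

public class ListPermissions {
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    // ".*" matches every user table; a regex starting with '@' (for example "@default")
    // is treated as a namespace instead, mirroring the branch in getUserPermissions().
    List<UserPermission> permissions = AccessControlClient.getUserPermissions(conf, ".*");
    for (UserPermission permission : permissions) {
      System.out.println(permission);
    }
  }
}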
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSelector.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSelector.java index bc6e678..2ce2919 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSelector.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/AuthenticationTokenSelector.java @@ -18,6 +18,8 @@ package org.apache.hadoop.hbase.security.token; +import java.util.Collection; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; @@ -26,8 +28,6 @@ import org.apache.hadoop.security.token.Token; import org.apache.hadoop.security.token.TokenIdentifier; import org.apache.hadoop.security.token.TokenSelector; -import java.util.Collection; - @InterfaceAudience.Private public class AuthenticationTokenSelector implements TokenSelector { diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java index 3c71215..03e657a 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java @@ -25,10 +25,10 @@ import java.security.PrivilegedExceptionAction; import com.google.protobuf.ServiceException; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/InvalidLabelException.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/InvalidLabelException.java index 81e665e..d11c167 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/InvalidLabelException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/InvalidLabelException.java @@ -17,9 +17,9 @@ */ package org.apache.hadoop.hbase.security.visibility; +import org.apache.hadoop.hbase.DoNotRetryIOException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.DoNotRetryIOException; @InterfaceAudience.Public @InterfaceStability.Evolving diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/LabelAlreadyExistsException.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/LabelAlreadyExistsException.java index bda9321..3fbf937 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/LabelAlreadyExistsException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/LabelAlreadyExistsException.java @@ -17,9 +17,9 @@ */ package org.apache.hadoop.hbase.security.visibility; +import org.apache.hadoop.hbase.DoNotRetryIOException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import 
org.apache.hadoop.hbase.DoNotRetryIOException; @InterfaceAudience.Public @InterfaceStability.Evolving diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityClient.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityClient.java index 5ef8fad..fef7d14 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityClient.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityClient.java @@ -25,8 +25,6 @@ import java.util.regex.Pattern; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.client.Table; -import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.client.Connection; @@ -44,6 +42,7 @@ import org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos.Visibil import org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos.VisibilityLabelsRequest; import org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos.VisibilityLabelsResponse; import org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos.VisibilityLabelsService; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import com.google.protobuf.ServiceException; @@ -57,7 +56,7 @@ public class VisibilityClient { /** * Utility method for adding label to the system. - * + * * @param conf * @param label * @return VisibilityLabelsResponse @@ -70,7 +69,7 @@ public class VisibilityClient { /** * Utility method for adding labels to the system. - * + * * @param conf * @param labels * @return VisibilityLabelsResponse @@ -82,10 +81,10 @@ public class VisibilityClient { // setup each time. This class only used in test and shell at moment though. try (Connection connection = ConnectionFactory.createConnection(conf)) { try (Table table = connection.getTable(LABELS_TABLE_NAME)) { - Batch.Call callable = + Batch.Call callable = new Batch.Call() { ServerRpcController controller = new ServerRpcController(); - BlockingRpcCallback rpcCallback = + BlockingRpcCallback rpcCallback = new BlockingRpcCallback(); public VisibilityLabelsResponse call(VisibilityLabelsService service) @@ -139,10 +138,10 @@ public class VisibilityClient { // setup each time. This class only used in test and shell at moment though. try (Connection connection = ConnectionFactory.createConnection(conf)) { try (Table table = connection.getTable(LABELS_TABLE_NAME)) { - Batch.Call callable = + Batch.Call callable = new Batch.Call() { ServerRpcController controller = new ServerRpcController(); - BlockingRpcCallback rpcCallback = + BlockingRpcCallback rpcCallback = new BlockingRpcCallback(); public GetAuthsResponse call(VisibilityLabelsService service) throws IOException { @@ -235,10 +234,10 @@ public class VisibilityClient { // setup each time. This class only used in test and shell at moment though. 
try (Connection connection = ConnectionFactory.createConnection(conf)) { try (Table table = connection.getTable(LABELS_TABLE_NAME)) { - Batch.Call callable = + Batch.Call callable = new Batch.Call() { ServerRpcController controller = new ServerRpcController(); - BlockingRpcCallback rpcCallback = + BlockingRpcCallback rpcCallback = new BlockingRpcCallback(); public VisibilityLabelsResponse call(VisibilityLabelsService service) throws IOException { @@ -269,4 +268,4 @@ public class VisibilityClient { } } } -} +} \ No newline at end of file diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityConstants.java hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityConstants.java index 89a94a9..570c203 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityConstants.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityConstants.java @@ -17,9 +17,9 @@ */ package org.apache.hadoop.hbase.security.visibility; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; @InterfaceAudience.Private diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ClientSnapshotDescriptionUtils.java hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ClientSnapshotDescriptionUtils.java index d439c8b..59ba837 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ClientSnapshotDescriptionUtils.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/ClientSnapshotDescriptionUtils.java @@ -19,8 +19,8 @@ package org.apache.hadoop.hbase.snapshot; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; import org.apache.hadoop.hbase.util.Bytes; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/HBaseSnapshotException.java hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/HBaseSnapshotException.java index 4a28461..5d03eab 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/HBaseSnapshotException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/HBaseSnapshotException.java @@ -17,10 +17,10 @@ */ package org.apache.hadoop.hbase.snapshot; +import org.apache.hadoop.hbase.DoNotRetryIOException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; -import org.apache.hadoop.hbase.DoNotRetryIOException; /** * General exception base class for when a snapshot fails diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/TablePartiallyOpenException.java hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/TablePartiallyOpenException.java index abeb7af..b27ff65 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/TablePartiallyOpenException.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/snapshot/TablePartiallyOpenException.java @@ -17,13 +17,13 @@ */ package org.apache.hadoop.hbase.snapshot; +import java.io.IOException; + +import org.apache.hadoop.hbase.TableName; import 
org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.util.Bytes; -import java.io.IOException; - /** * Thrown if a table should be online/offline but is partially open */ diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/util/Writables.java hbase-client/src/main/java/org/apache/hadoop/hbase/util/Writables.java index 15af6c9..e04d789 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/util/Writables.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/util/Writables.java @@ -18,10 +18,6 @@ */ package org.apache.hadoop.hbase.util; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.io.DataInputBuffer; -import org.apache.hadoop.io.Writable; - import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.DataInputStream; @@ -30,6 +26,10 @@ import java.io.IOException; import java.util.ArrayList; import java.util.List; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.io.DataInputBuffer; +import org.apache.hadoop.io.Writable; + /** * Utility class with methods for manipulating Writable objects */ diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java index f0d6ba2..1e04948 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java @@ -18,17 +18,8 @@ */ package org.apache.hadoop.hbase.zookeeper; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.HBaseConfiguration; -import org.apache.hadoop.hbase.util.Strings; -import org.apache.hadoop.net.DNS; -import org.apache.hadoop.util.StringUtils; -import org.apache.zookeeper.server.ServerConfig; -import org.apache.zookeeper.server.ZooKeeperServerMain; -import org.apache.zookeeper.server.quorum.QuorumPeerConfig; -import org.apache.zookeeper.server.quorum.QuorumPeerMain; +import static org.apache.hadoop.hbase.HConstants.DEFAULT_ZK_SESSION_TIMEOUT; +import static org.apache.hadoop.hbase.HConstants.ZK_SESSION_TIMEOUT; import java.io.File; import java.io.IOException; @@ -42,11 +33,18 @@ import java.util.List; import java.util.Map.Entry; import java.util.Properties; -import static org.apache.hadoop.hbase.HConstants.DEFAULT_ZK_SESSION_TIMEOUT; -import static org.apache.hadoop.hbase.HConstants.ZK_SESSION_TIMEOUT; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.HBaseInterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.HBaseInterfaceAudience; +import org.apache.hadoop.hbase.util.Strings; +import org.apache.hadoop.net.DNS; +import org.apache.hadoop.util.StringUtils; +import org.apache.zookeeper.server.ServerConfig; +import org.apache.zookeeper.server.ZooKeeperServerMain; +import org.apache.zookeeper.server.quorum.QuorumPeerConfig; +import org.apache.zookeeper.server.quorum.QuorumPeerMain; /** * HBase's version of ZooKeeper's QuorumPeer. 
When HBase is set to manage diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/MasterAddressTracker.java hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/MasterAddressTracker.java index 6ffb813..1a538be 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/MasterAddressTracker.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/MasterAddressTracker.java @@ -17,10 +17,13 @@ */ package org.apache.hadoop.hbase.zookeeper; -import org.apache.hadoop.hbase.classification.InterfaceAudience; +import java.io.IOException; +import java.io.InterruptedIOException; + import org.apache.hadoop.hbase.Abortable; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; @@ -28,8 +31,6 @@ import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.data.Stat; -import java.io.IOException; -import java.io.InterruptedIOException; import com.google.protobuf.InvalidProtocolBufferException; /** @@ -130,8 +131,8 @@ public class MasterAddressTracker extends ZooKeeperNodeTracker { * @param zkw ZooKeeperWatcher to use * @return ServerName stored in the the master address znode or null if no * znode present. - * @throws KeeperException - * @throws IOException + * @throws KeeperException + * @throws IOException */ public static ServerName getMasterAddress(final ZooKeeperWatcher zkw) throws KeeperException, IOException { diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/MetaTableLocator.java hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/MetaTableLocator.java index 8e532e5..e4c1e0b 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/MetaTableLocator.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/MetaTableLocator.java @@ -17,19 +17,27 @@ */ package org.apache.hadoop.hbase.zookeeper; -import com.google.common.base.Stopwatch; -import com.google.protobuf.InvalidProtocolBufferException; +import java.io.EOFException; +import java.io.IOException; +import java.net.ConnectException; +import java.net.NoRouteToHostException; +import java.net.SocketException; +import java.net.SocketTimeoutException; +import java.rmi.UnknownHostException; +import java.util.ArrayList; +import java.util.List; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.HConnection; import org.apache.hadoop.hbase.client.RetriesExhaustedException; import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.ipc.FailedServerException; import org.apache.hadoop.hbase.ipc.ServerNotRunningYetException; import org.apache.hadoop.hbase.master.RegionState; @@ -45,18 +53,8 @@ import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.ipc.RemoteException; import 
org.apache.zookeeper.KeeperException; -import java.io.EOFException; -import java.io.IOException; -import java.net.ConnectException; -import java.net.NoRouteToHostException; -import java.net.SocketException; -import java.net.SocketTimeoutException; -import java.rmi.UnknownHostException; - -import java.util.List; -import java.util.ArrayList; - -import javax.annotation.Nullable; +import com.google.common.base.Stopwatch; +import com.google.protobuf.InvalidProtocolBufferException; /** * Utility class to perform operation (get/wait for/verify/set/delete) on znode in ZooKeeper @@ -127,7 +125,6 @@ public class MetaTableLocator { * @param zkw zookeeper connection to use * @return server name or null if we failed to get the data. */ - @Nullable public ServerName getMetaRegionLocation(final ZooKeeperWatcher zkw) { try { RegionState state = getMetaRegionState(zkw); diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKAssign.java hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKAssign.java deleted file mode 100644 index 297e96e..0000000 --- hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKAssign.java +++ /dev/null @@ -1,1057 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.zookeeper; - -import java.util.List; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.RegionTransition; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.zookeeper.AsyncCallback; -import org.apache.zookeeper.KeeperException; -import org.apache.zookeeper.KeeperException.Code; -import org.apache.zookeeper.data.Stat; - -// We should not be importing this Type here, nor a RegionTransition, etc. This class should be -// about zk and bytes only. - -/** - * Utility class for doing region assignment in ZooKeeper. This class extends - * stuff done in {@link ZKUtil} to cover specific assignment operations. - *

<p> - * Contains only static methods and constants. - * <p> - * Used by both the Master and RegionServer. - * <p> - * All valid transitions outlined below: - * <p> - * MASTER - * <ol> - * <li> - * Master creates an unassigned node as OFFLINE. - * - Cluster startup and table enabling. - * </li> - * <li> - * Master forces an existing unassigned node to OFFLINE. - * - RegionServer failure. - * - Allows transitions from all states to OFFLINE. - * </li> - * <li> - * Master deletes an unassigned node that was in a OPENED state. - * - Normal region transitions. Besides cluster startup, no other deletions - * of unassigned nodes is allowed. - * </li> - * <li> - * Master deletes all unassigned nodes regardless of state. - * - Cluster startup before any assignment happens. - * </li> - * </ol> - * <p> - * REGIONSERVER - * <ol> - * <li> - * RegionServer creates an unassigned node as CLOSING. - * - All region closes will do this in response to a CLOSE RPC from Master. - * - A node can never be transitioned to CLOSING, only created. - * </li> - * <li> - * RegionServer transitions an unassigned node from CLOSING to CLOSED. - * - Normal region closes. CAS operation. - * </li> - * <li> - * RegionServer transitions an unassigned node from OFFLINE to OPENING. - * - All region opens will do this in response to an OPEN RPC from the Master. - * - Normal region opens. CAS operation. - * </li> - * <li> - * RegionServer transitions an unassigned node from OPENING to OPENED. - * - Normal region opens. CAS operation. - * </li> - * </ol>
    - */ -@InterfaceAudience.Private -public class ZKAssign { - private static final Log LOG = LogFactory.getLog(ZKAssign.class); - - /** - * Gets the full path node name for the unassigned node for the specified - * region. - * @param zkw zk reference - * @param regionName region name - * @return full path node name - */ - public static String getNodeName(ZooKeeperWatcher zkw, String regionName) { - return ZKUtil.joinZNode(zkw.assignmentZNode, regionName); - } - - /** - * Gets the region name from the full path node name of an unassigned node. - * @param path full zk path - * @return region name - */ - public static String getRegionName(ZooKeeperWatcher zkw, String path) { - return path.substring(zkw.assignmentZNode.length()+1); - } - - // Master methods - - /** - * Creates a new unassigned node in the OFFLINE state for the specified region. - * - *

<p>Does not transition nodes from other states. If a node already exists - * for this region, a {@link org.apache.zookeeper.KeeperException.NodeExistsException} - * will be thrown. - * - * <p>Sets a watcher on the unassigned region node if the method is successful. - * - * <p>
    This method should only be used during cluster startup and the enabling - * of a table. - * - * @param zkw zk reference - * @param region region to be created as offline - * @param serverName server transition will happen on - * @throws KeeperException if unexpected zookeeper exception - * @throws KeeperException.NodeExistsException if node already exists - */ - public static void createNodeOffline(ZooKeeperWatcher zkw, HRegionInfo region, - ServerName serverName) - throws KeeperException, KeeperException.NodeExistsException { - createNodeOffline(zkw, region, serverName, EventType.M_ZK_REGION_OFFLINE); - } - - public static void createNodeOffline(ZooKeeperWatcher zkw, HRegionInfo region, - ServerName serverName, final EventType event) - throws KeeperException, KeeperException.NodeExistsException { - LOG.debug(zkw.prefix("Creating unassigned node " + - region.getEncodedName() + " in OFFLINE state")); - RegionTransition rt = - RegionTransition.createRegionTransition(event, region.getRegionName(), serverName); - String node = getNodeName(zkw, region.getEncodedName()); - ZKUtil.createAndWatch(zkw, node, rt.toByteArray()); - } - - /** - * Creates an unassigned node in the OFFLINE state for the specified region. - *

<p> - * Runs asynchronously. Depends on no pre-existing znode. - * - * <p>
    Sets a watcher on the unassigned region node. - * - * @param zkw zk reference - * @param region region to be created as offline - * @param serverName server transition will happen on - * @param cb - * @param ctx - * @throws KeeperException if unexpected zookeeper exception - * @throws KeeperException.NodeExistsException if node already exists - */ - public static void asyncCreateNodeOffline(ZooKeeperWatcher zkw, - HRegionInfo region, ServerName serverName, - final AsyncCallback.StringCallback cb, final Object ctx) - throws KeeperException { - LOG.debug(zkw.prefix("Async create of unassigned node " + - region.getEncodedName() + " with OFFLINE state")); - RegionTransition rt = - RegionTransition.createRegionTransition( - EventType.M_ZK_REGION_OFFLINE, region.getRegionName(), serverName); - String node = getNodeName(zkw, region.getEncodedName()); - ZKUtil.asyncCreate(zkw, node, rt.toByteArray(), cb, ctx); - } - - /** - * Creates or force updates an unassigned node to the OFFLINE state for the - * specified region. - *

<p> - * Attempts to create the node but if it exists will force it to transition to - * and OFFLINE state. - * - * <p>Sets a watcher on the unassigned region node if the method is - * successful. - * - * <p>
    This method should be used when assigning a region. - * - * @param zkw zk reference - * @param region region to be created as offline - * @param serverName server transition will happen on - * @return the version of the znode created in OFFLINE state, -1 if - * unsuccessful. - * @throws KeeperException if unexpected zookeeper exception - * @throws KeeperException.NodeExistsException if node already exists - */ - public static int createOrForceNodeOffline(ZooKeeperWatcher zkw, - HRegionInfo region, ServerName serverName) throws KeeperException { - LOG.debug(zkw.prefix("Creating (or updating) unassigned node " + - region.getEncodedName() + " with OFFLINE state")); - RegionTransition rt = RegionTransition.createRegionTransition(EventType.M_ZK_REGION_OFFLINE, - region.getRegionName(), serverName, HConstants.EMPTY_BYTE_ARRAY); - byte [] data = rt.toByteArray(); - String node = getNodeName(zkw, region.getEncodedName()); - zkw.sync(node); - int version = ZKUtil.checkExists(zkw, node); - if (version == -1) { - return ZKUtil.createAndWatch(zkw, node, data); - } else { - boolean setData = false; - try { - setData = ZKUtil.setData(zkw, node, data, version); - // Setdata throws KeeperException which aborts the Master. So we are - // catching it here. - // If just before setting the znode to OFFLINE if the RS has made any - // change to the - // znode state then we need to return -1. - } catch (KeeperException kpe) { - LOG.info("Version mismatch while setting the node to OFFLINE state."); - return -1; - } - if (!setData) { - return -1; - } else { - // We successfully forced to OFFLINE, reset watch and handle if - // the state changed in between our set and the watch - byte [] bytes = ZKAssign.getData(zkw, region.getEncodedName()); - rt = getRegionTransition(bytes); - if (rt.getEventType() != EventType.M_ZK_REGION_OFFLINE) { - // state changed, need to process - return -1; - } - } - } - return version + 1; - } - - /** - * Deletes an existing unassigned node that is in the OPENED state for the - * specified region. - * - *

    If a node does not already exist for this region, a - * {@link org.apache.zookeeper.KeeperException.NoNodeException} will be thrown. - * - *

    No watcher is set whether this succeeds or not. - * - *

    Returns false if the node was not in the proper state but did exist. - * - *

    This method is used during normal region transitions when a region - * finishes successfully opening. This is the Master acknowledging completion - * of the specified regions transition. - * - * @param zkw zk reference - * @param encodedRegionName opened region to be deleted from zk - * @param sn the expected region transition target server name - * @throws KeeperException if unexpected zookeeper exception - * @throws KeeperException.NoNodeException if node does not exist - */ - public static boolean deleteOpenedNode(ZooKeeperWatcher zkw, - String encodedRegionName, ServerName sn) - throws KeeperException, KeeperException.NoNodeException { - return deleteNode(zkw, encodedRegionName, - EventType.RS_ZK_REGION_OPENED, sn); - } - - /** - * Deletes an existing unassigned node that is in the OFFLINE state for the - * specified region. - * - *

    If a node does not already exist for this region, a - * {@link org.apache.zookeeper.KeeperException.NoNodeException} will be thrown. - * - *

    No watcher is set whether this succeeds or not. - * - *

    Returns false if the node was not in the proper state but did exist. - * - *

    This method is used during master failover when the regions on an RS - * that has died are all set to OFFLINE before being processed. - * - * @param zkw zk reference - * @param encodedRegionName closed region to be deleted from zk - * @param sn the expected region transition target server name - * @throws KeeperException if unexpected zookeeper exception - * @throws KeeperException.NoNodeException if node does not exist - */ - public static boolean deleteOfflineNode(ZooKeeperWatcher zkw, - String encodedRegionName, ServerName sn) - throws KeeperException, KeeperException.NoNodeException { - return deleteNode(zkw, encodedRegionName, - EventType.M_ZK_REGION_OFFLINE, sn); - } - - /** - * Deletes an existing unassigned node that is in the CLOSED state for the - * specified region. - * - *

    If a node does not already exist for this region, a - * {@link org.apache.zookeeper.KeeperException.NoNodeException} will be thrown. - * - *

    No watcher is set whether this succeeds or not. - * - *

    Returns false if the node was not in the proper state but did exist. - * - *

    This method is used during table disables when a region finishes - * successfully closing. This is the Master acknowledging completion - * of the specified regions transition to being closed. - * - * @param zkw zk reference - * @param encodedRegionName closed region to be deleted from zk - * @param sn the expected region transition target server name - * @throws KeeperException if unexpected zookeeper exception - * @throws KeeperException.NoNodeException if node does not exist - */ - public static boolean deleteClosedNode(ZooKeeperWatcher zkw, - String encodedRegionName, ServerName sn) - throws KeeperException, KeeperException.NoNodeException { - return deleteNode(zkw, encodedRegionName, - EventType.RS_ZK_REGION_CLOSED, sn); - } - - /** - * Deletes an existing unassigned node that is in the CLOSING state for the - * specified region. - * - *

    If a node does not already exist for this region, a - * {@link org.apache.zookeeper.KeeperException.NoNodeException} will be thrown. - * - *

    No watcher is set whether this succeeds or not. - * - *

    Returns false if the node was not in the proper state but did exist. - * - *

    This method is used during table disables when a region finishes - * successfully closing. This is the Master acknowledging completion - * of the specified regions transition to being closed. - * - * @param zkw zk reference - * @param region closing region to be deleted from zk - * @param sn the expected region transition target server name - * @throws KeeperException if unexpected zookeeper exception - * @throws KeeperException.NoNodeException if node does not exist - */ - public static boolean deleteClosingNode(ZooKeeperWatcher zkw, - HRegionInfo region, ServerName sn) - throws KeeperException, KeeperException.NoNodeException { - String encodedRegionName = region.getEncodedName(); - return deleteNode(zkw, encodedRegionName, - EventType.M_ZK_REGION_CLOSING, sn); - } - - /** - * Deletes an existing unassigned node that is in the specified state for the - * specified region. - * - *

    If a node does not already exist for this region, a - * {@link org.apache.zookeeper.KeeperException.NoNodeException} will be thrown. - * - *

    No watcher is set whether this succeeds or not. - * - *

    Returns false if the node was not in the proper state but did exist. - * - *

    This method is used when a region finishes opening/closing. - * The Master acknowledges completion - * of the specified regions transition to being closed/opened. - * - * @param zkw zk reference - * @param encodedRegionName region to be deleted from zk - * @param expectedState state region must be in for delete to complete - * @param sn the expected region transition target server name - * @throws KeeperException if unexpected zookeeper exception - * @throws KeeperException.NoNodeException if node does not exist - */ - public static boolean deleteNode(ZooKeeperWatcher zkw, String encodedRegionName, - EventType expectedState, ServerName sn) - throws KeeperException, KeeperException.NoNodeException { - return deleteNode(zkw, encodedRegionName, expectedState, sn, -1); - } - - /** - * Deletes an existing unassigned node that is in the specified state for the - * specified region. - * - *

    If a node does not already exist for this region, a - * {@link org.apache.zookeeper.KeeperException.NoNodeException} will be thrown. - * - *

    No watcher is set whether this succeeds or not. - * - *

    Returns false if the node was not in the proper state but did exist. - * - *

    This method is used when a region finishes opening/closing. - * The Master acknowledges completion - * of the specified regions transition to being closed/opened. - * - * @param zkw zk reference - * @param encodedRegionName region to be deleted from zk - * @param expectedState state region must be in for delete to complete - * @param expectedVersion of the znode that is to be deleted. - * If expectedVersion need not be compared while deleting the znode - * pass -1 - * @throws KeeperException if unexpected zookeeper exception - * @throws KeeperException.NoNodeException if node does not exist - */ - public static boolean deleteNode(ZooKeeperWatcher zkw, String encodedRegionName, - EventType expectedState, int expectedVersion) - throws KeeperException, KeeperException.NoNodeException { - return deleteNode(zkw, encodedRegionName, expectedState, null, expectedVersion); - } - - /** - * Deletes an existing unassigned node that is in the specified state for the - * specified region. - * - *

    If a node does not already exist for this region, a - * {@link org.apache.zookeeper.KeeperException.NoNodeException} will be thrown. - * - *

    No watcher is set whether this succeeds or not. - * - *

    Returns false if the node was not in the proper state but did exist. - * - *

    This method is used when a region finishes opening/closing. - * The Master acknowledges completion - * of the specified regions transition to being closed/opened. - * - * @param zkw zk reference - * @param encodedRegionName region to be deleted from zk - * @param expectedState state region must be in for delete to complete - * @param serverName the expected region transition target server name - * @param expectedVersion of the znode that is to be deleted. - * If expectedVersion need not be compared while deleting the znode - * pass -1 - * @throws KeeperException if unexpected zookeeper exception - * @throws KeeperException.NoNodeException if node does not exist - */ - public static boolean deleteNode(ZooKeeperWatcher zkw, String encodedRegionName, - EventType expectedState, ServerName serverName, int expectedVersion) - throws KeeperException, KeeperException.NoNodeException { - if (LOG.isTraceEnabled()) { - LOG.trace(zkw.prefix("Deleting existing unassigned " + - "node " + encodedRegionName + " in expected state " + expectedState)); - } - String node = getNodeName(zkw, encodedRegionName); - zkw.sync(node); - Stat stat = new Stat(); - byte [] bytes = ZKUtil.getDataNoWatch(zkw, node, stat); - if (bytes == null) { - // If it came back null, node does not exist. - throw KeeperException.create(Code.NONODE); - } - RegionTransition rt = getRegionTransition(bytes); - EventType et = rt.getEventType(); - if (!et.equals(expectedState)) { - LOG.warn(zkw.prefix("Attempting to delete unassigned node " + encodedRegionName + " in " + - expectedState + " state but node is in " + et + " state")); - return false; - } - // Verify the server transition happens on is not changed - if (serverName != null && !rt.getServerName().equals(serverName)) { - LOG.warn(zkw.prefix("Attempting to delete unassigned node " + encodedRegionName - + " with target " + serverName + " but node has " + rt.getServerName())); - return false; - } - if (expectedVersion != -1 - && stat.getVersion() != expectedVersion) { - LOG.warn("The node " + encodedRegionName + " we are trying to delete is not" + - " the expected one. Got a version mismatch"); - return false; - } - if(!ZKUtil.deleteNode(zkw, node, stat.getVersion())) { - LOG.warn(zkw.prefix("Attempting to delete " + - "unassigned node " + encodedRegionName + " in " + expectedState + - " state but after verifying state, we got a version mismatch")); - return false; - } - LOG.debug(zkw.prefix("Deleted unassigned node " + - encodedRegionName + " in expected state " + expectedState)); - return true; - } - - /** - * Deletes all unassigned nodes regardless of their state. - * - *

    No watchers are set. - * - *

    This method is used by the Master during cluster startup to clear out - * any existing state from other cluster runs. - * - * @param zkw zk reference - * @throws KeeperException if unexpected zookeeper exception - */ - public static void deleteAllNodes(ZooKeeperWatcher zkw) - throws KeeperException { - LOG.debug(zkw.prefix("Deleting any existing unassigned nodes")); - ZKUtil.deleteChildrenRecursively(zkw, zkw.assignmentZNode); - } - - /** - * Creates a new unassigned node in the CLOSING state for the specified - * region. - * - *

    Does not transition nodes from any states. If a node already exists - * for this region, a {@link org.apache.zookeeper.KeeperException.NodeExistsException} - * will be thrown. - * - *

    If creation is successful, returns the version number of the CLOSING - * node created. - * - *

    Set a watch. - * - *

    This method should only be used by a Master when initiating a - * close of a region before sending a close request to the region server. - * - * @param zkw zk reference - * @param region region to be created as closing - * @param serverName server transition will happen on - * @return version of node after transition, -1 if unsuccessful transition - * @throws KeeperException if unexpected zookeeper exception - * @throws KeeperException.NodeExistsException if node already exists - */ - public static int createNodeClosing(ZooKeeperWatcher zkw, HRegionInfo region, - ServerName serverName) - throws KeeperException, KeeperException.NodeExistsException { - LOG.debug(zkw.prefix("Creating unassigned node " + - region.getEncodedName() + " in a CLOSING state")); - RegionTransition rt = RegionTransition.createRegionTransition(EventType.M_ZK_REGION_CLOSING, - region.getRegionName(), serverName, HConstants.EMPTY_BYTE_ARRAY); - String node = getNodeName(zkw, region.getEncodedName()); - return ZKUtil.createAndWatch(zkw, node, rt.toByteArray()); - } - - // RegionServer methods - - /** - * Transitions an existing unassigned node for the specified region which is - * currently in the CLOSING state to be in the CLOSED state. - * - *

    Does not transition nodes from other states. If for some reason the - * node could not be transitioned, the method returns -1. If the transition - * is successful, the version of the node after transition is returned. - * - *

    This method can fail and return -1 for three different reasons:
    • Unassigned node for this region does not exist
    • Unassigned node for this region is not in CLOSING state
    • After verifying CLOSING state, update fails because of wrong version
      (someone else already transitioned the node)

    Does not set any watches. - * - *

    This method should only be used by a RegionServer when initiating a - * close of a region after receiving a CLOSE RPC from the Master. - * - * @param zkw zk reference - * @param region region to be transitioned to closed - * @param serverName server transition happens on - * @return version of node after transition, -1 if unsuccessful transition - * @throws KeeperException if unexpected zookeeper exception - */ - public static int transitionNodeClosed(ZooKeeperWatcher zkw, - HRegionInfo region, ServerName serverName, int expectedVersion) - throws KeeperException { - return transitionNode(zkw, region, serverName, - EventType.M_ZK_REGION_CLOSING, - EventType.RS_ZK_REGION_CLOSED, expectedVersion); - } - - /** - * Transitions an existing unassigned node for the specified region which is - * currently in the OFFLINE state to be in the OPENING state. - * - *

    Does not transition nodes from other states. If for some reason the - * node could not be transitioned, the method returns -1. If the transition - * is successful, the version of the node written as OPENING is returned. - * - *

    This method can fail and return -1 for three different reasons:
    • Unassigned node for this region does not exist
    • Unassigned node for this region is not in OFFLINE state
    • After verifying OFFLINE state, update fails because of wrong version
      (someone else already transitioned the node)

    Does not set any watches. - * - *

    This method should only be used by a RegionServer when initiating an - * open of a region after receiving an OPEN RPC from the Master. - * - * @param zkw zk reference - * @param region region to be transitioned to opening - * @param serverName server transition happens on - * @return version of node after transition, -1 if unsuccessful transition - * @throws KeeperException if unexpected zookeeper exception - */ - public static int transitionNodeOpening(ZooKeeperWatcher zkw, - HRegionInfo region, ServerName serverName) - throws KeeperException { - return transitionNodeOpening(zkw, region, serverName, - EventType.M_ZK_REGION_OFFLINE); - } - - public static int transitionNodeOpening(ZooKeeperWatcher zkw, - HRegionInfo region, ServerName serverName, final EventType beginState) - throws KeeperException { - return transitionNode(zkw, region, serverName, beginState, - EventType.RS_ZK_REGION_OPENING, -1); - } - - /** - * Confirm an existing unassigned node for the specified region which is - * currently in the OPENING state to be still in the OPENING state on - * the specified server. - * - *

    If for some reason the check fails, the method returns -1. Otherwise, - * the version of the node (same as the expected version) is returned. - * - *

    This method can fail and return -1 for three different reasons:
    • Unassigned node for this region does not exist
    • Unassigned node for this region is not in OPENING state
    • After verifying OPENING state, the server name or the version of the
      node doesn't match

    Does not set any watches. - * - *

    This method should only be used by a RegionServer when initiating an - * open of a region after receiving an OPEN RPC from the Master. - * - * @param zkw zk reference - * @param region region to be transitioned to opening - * @param serverName server transition happens on - * @return version of node after transition, -1 if unsuccessful transition - * @throws KeeperException if unexpected zookeeper exception - */ - public static int confirmNodeOpening(ZooKeeperWatcher zkw, - HRegionInfo region, ServerName serverName, int expectedVersion) - throws KeeperException { - - String encoded = region.getEncodedName(); - if(LOG.isDebugEnabled()) { - LOG.debug(zkw.prefix("Attempting to retransition opening state of node " + - HRegionInfo.prettyPrint(encoded))); - } - - String node = getNodeName(zkw, encoded); - zkw.sync(node); - - // Read existing data of the node - Stat stat = new Stat(); - byte [] existingBytes = ZKUtil.getDataNoWatch(zkw, node, stat); - if (existingBytes == null) { - // Node no longer exists. Return -1. It means unsuccessful transition. - return -1; - } - RegionTransition rt = getRegionTransition(existingBytes); - - // Verify it is the expected version - if (expectedVersion != -1 && stat.getVersion() != expectedVersion) { - LOG.warn(zkw.prefix("Attempt to retransition the opening state of the " + - "unassigned node for " + encoded + " failed, " + - "the node existed but was version " + stat.getVersion() + - " not the expected version " + expectedVersion)); - return -1; - } - - // Verify it is in expected state - EventType et = rt.getEventType(); - if (!et.equals(EventType.RS_ZK_REGION_OPENING)) { - String existingServer = (rt.getServerName() == null) - ? "" : rt.getServerName().toString(); - LOG.warn(zkw.prefix("Attempt to retransition the opening state of the unassigned node for " - + encoded + " failed, the node existed but was in the state " + et + - " set by the server " + existingServer)); - return -1; - } - - return expectedVersion; - } - - /** - * Transitions an existing unassigned node for the specified region which is - * currently in the OPENING state to be in the OPENED state. - * - *

    Does not transition nodes from other states. If for some reason the - * node could not be transitioned, the method returns -1. If the transition - * is successful, the version of the node after transition is returned. - * - *

    This method can fail and return -1 for three different reasons:
    • Unassigned node for this region does not exist
    • Unassigned node for this region is not in OPENING state
    • After verifying OPENING state, update fails because of wrong version
      (this should never actually happen since an RS only does this transition
      following a transition to OPENING; if two RS are conflicting, one would
      fail the original transition to OPENING and not this transition)

    Does not set any watches. - * - *

    This method should only be used by a RegionServer when completing the - * open of a region. - * - * @param zkw zk reference - * @param region region to be transitioned to opened - * @param serverName server transition happens on - * @return version of node after transition, -1 if unsuccessful transition - * @throws KeeperException if unexpected zookeeper exception - */ - public static int transitionNodeOpened(ZooKeeperWatcher zkw, - HRegionInfo region, ServerName serverName, int expectedVersion) - throws KeeperException { - return transitionNode(zkw, region, serverName, - EventType.RS_ZK_REGION_OPENING, - EventType.RS_ZK_REGION_OPENED, expectedVersion); - } - - /** - * - * @param zkw zk reference - * @param region region to be closed - * @param expectedVersion expected version of the znode - * @return true if the znode exists, has the right version and the right state. False otherwise. - * @throws KeeperException - */ - public static boolean checkClosingState(ZooKeeperWatcher zkw, HRegionInfo region, - int expectedVersion) throws KeeperException { - - final String encoded = getNodeName(zkw, region.getEncodedName()); - zkw.sync(encoded); - - // Read existing data of the node - Stat stat = new Stat(); - byte[] existingBytes = ZKUtil.getDataNoWatch(zkw, encoded, stat); - - if (existingBytes == null) { - LOG.warn(zkw.prefix("Attempt to check the " + - "closing node for " + encoded + - ". The node does not exist")); - return false; - } - - if (expectedVersion != -1 && stat.getVersion() != expectedVersion) { - LOG.warn(zkw.prefix("Attempt to check the " + - "closing node for " + encoded + - ". The node existed but was version " + stat.getVersion() + - " not the expected version " + expectedVersion)); - return false; - } - - RegionTransition rt = getRegionTransition(existingBytes); - - if (!EventType.M_ZK_REGION_CLOSING.equals(rt.getEventType())) { - LOG.warn(zkw.prefix("Attempt to check the " + - "closing node for " + encoded + - ". The node existed but was in an unexpected state: " + rt.getEventType())); - return false; - } - - return true; - } - - /** - * Method that actually performs unassigned node transitions. - * - *

    Attempts to transition the unassigned node for the specified region - * from the expected state to the state in the specified transition data. - * - *

    Method first reads existing data and verifies it is in the expected - * state. If the node does not exist or the node is not in the expected - * state, the method returns -1. If the transition is successful, the - * version number of the node following the transition is returned. - * - *

    If the read state is what is expected, it attempts to write the new - * state and data into the node. When doing this, it includes the expected - * version (determined when the existing state was verified) to ensure that - * only one transition is successful. If there is a version mismatch, the - * method returns -1. - * - *

    If the write is successful, no watch is set and the method returns true. - * - * @param zkw zk reference - * @param region region to be transitioned to opened - * @param serverName server transition happens on - * @param endState state to transition node to if all checks pass - * @param beginState state the node must currently be in to do transition - * @param expectedVersion expected version of data before modification, or -1 - * @return version of node after transition, -1 if unsuccessful transition - * @throws KeeperException if unexpected zookeeper exception - */ - public static int transitionNode(ZooKeeperWatcher zkw, HRegionInfo region, - ServerName serverName, EventType beginState, EventType endState, - int expectedVersion) - throws KeeperException { - return transitionNode(zkw, region, serverName, beginState, endState, expectedVersion, null); - } - - - public static int transitionNode(ZooKeeperWatcher zkw, HRegionInfo region, - ServerName serverName, EventType beginState, EventType endState, - int expectedVersion, final byte [] payload) - throws KeeperException { - String encoded = region.getEncodedName(); - if(LOG.isDebugEnabled()) { - LOG.debug(zkw.prefix("Transitioning " + HRegionInfo.prettyPrint(encoded) + - " from " + beginState.toString() + " to " + endState.toString())); - } - - String node = getNodeName(zkw, encoded); - zkw.sync(node); - - // Read existing data of the node - Stat stat = new Stat(); - byte [] existingBytes = ZKUtil.getDataNoWatch(zkw, node, stat); - if (existingBytes == null) { - // Node no longer exists. Return -1. It means unsuccessful transition. - return -1; - } - - // Verify it is the expected version - if (expectedVersion != -1 && stat.getVersion() != expectedVersion) { - LOG.warn(zkw.prefix("Attempt to transition the " + - "unassigned node for " + encoded + - " from " + beginState + " to " + endState + " failed, " + - "the node existed but was version " + stat.getVersion() + - " not the expected version " + expectedVersion)); - return -1; - } - - if (beginState.equals(EventType.M_ZK_REGION_OFFLINE) - && endState.equals(EventType.RS_ZK_REGION_OPENING) - && expectedVersion == -1 && stat.getVersion() != 0) { - // the below check ensures that double assignment doesnot happen. - // When the node is created for the first time then the expected version - // that is passed will be -1 and the version in znode will be 0. - // In all other cases the version in znode will be > 0. - LOG.warn(zkw.prefix("Attempt to transition the " + "unassigned node for " - + encoded + " from " + beginState + " to " + endState + " failed, " - + "the node existed but was version " + stat.getVersion() - + " not the expected version " + expectedVersion)); - return -1; - } - - RegionTransition rt = getRegionTransition(existingBytes); - - // Verify the server transition happens on is not changed - if (!rt.getServerName().equals(serverName)) { - LOG.warn(zkw.prefix("Attempt to transition the " + - "unassigned node for " + encoded + - " from " + beginState + " to " + endState + " failed, " + - "the server that tried to transition was " + serverName + - " not the expected " + rt.getServerName())); - return -1; - } - - // Verify it is in expected state - EventType et = rt.getEventType(); - if (!et.equals(beginState)) { - String existingServer = (rt.getServerName() == null) - ? 
"" : rt.getServerName().toString(); - LOG.warn(zkw.prefix("Attempt to transition the unassigned node for " + encoded - + " from " + beginState + " to " + endState + " failed, the node existed but" - + " was in the state " + et + " set by the server " + existingServer)); - return -1; - } - - // Write new data, ensuring data has not changed since we last read it - try { - rt = RegionTransition.createRegionTransition( - endState, region.getRegionName(), serverName, payload); - if(!ZKUtil.setData(zkw, node, rt.toByteArray(), stat.getVersion())) { - LOG.warn(zkw.prefix("Attempt to transition the " + - "unassigned node for " + encoded + - " from " + beginState + " to " + endState + " failed, " + - "the node existed and was in the expected state but then when " + - "setting data we got a version mismatch")); - return -1; - } - if(LOG.isDebugEnabled()) { - LOG.debug(zkw.prefix("Transitioned node " + encoded + - " from " + beginState + " to " + endState)); - } - return stat.getVersion() + 1; - } catch (KeeperException.NoNodeException nne) { - LOG.warn(zkw.prefix("Attempt to transition the " + - "unassigned node for " + encoded + - " from " + beginState + " to " + endState + " failed, " + - "the node existed and was in the expected state but then when " + - "setting data it no longer existed")); - return -1; - } - } - - private static RegionTransition getRegionTransition(final byte [] bytes) throws KeeperException { - try { - return RegionTransition.parseFrom(bytes); - } catch (DeserializationException e) { - // Convert to a zk exception for now. Otherwise have to change API - throw ZKUtil.convert(e); - } - } - - /** - * Gets the current data in the unassigned node for the specified region name - * or fully-qualified path. - * - *

    Returns null if the region does not currently have a node. - * - *

    Sets a watch on the node if the node exists. - * - * @param zkw zk reference - * @param pathOrRegionName fully-specified path or region name - * @return znode content - * @throws KeeperException if unexpected zookeeper exception - */ - public static byte [] getData(ZooKeeperWatcher zkw, - String pathOrRegionName) - throws KeeperException { - String node = getPath(zkw, pathOrRegionName); - return ZKUtil.getDataAndWatch(zkw, node); - } - - /** - * Gets the current data in the unassigned node for the specified region name - * or fully-qualified path. - * - *

    Returns null if the region does not currently have a node. - * - *

    Sets a watch on the node if the node exists. - * - * @param zkw zk reference - * @param pathOrRegionName fully-specified path or region name - * @param stat object to populate the version. - * @return znode content - * @throws KeeperException if unexpected zookeeper exception - */ - public static byte [] getDataAndWatch(ZooKeeperWatcher zkw, - String pathOrRegionName, Stat stat) - throws KeeperException { - String node = getPath(zkw, pathOrRegionName); - return ZKUtil.getDataAndWatch(zkw, node, stat); - } - - /** - * Gets the current data in the unassigned node for the specified region name - * or fully-qualified path. - * - *

    Returns null if the region does not currently have a node. - * - *

    Does not set a watch. - * - * @param zkw zk reference - * @param pathOrRegionName fully-specified path or region name - * @param stat object to store node info into on getData call - * @return znode content - * @throws KeeperException if unexpected zookeeper exception - */ - public static byte [] getDataNoWatch(ZooKeeperWatcher zkw, - String pathOrRegionName, Stat stat) - throws KeeperException { - String node = getPath(zkw, pathOrRegionName); - return ZKUtil.getDataNoWatch(zkw, node, stat); - } - - /** - * @param zkw - * @param pathOrRegionName - * @return Path to znode - */ - public static String getPath(final ZooKeeperWatcher zkw, final String pathOrRegionName) { - return pathOrRegionName.startsWith("/")? pathOrRegionName : getNodeName(zkw, pathOrRegionName); - } - - /** - * Get the version of the specified znode - * @param zkw zk reference - * @param region region's info - * @return the version of the znode, -1 if it doesn't exist - * @throws KeeperException - */ - public static int getVersion(ZooKeeperWatcher zkw, HRegionInfo region) - throws KeeperException { - String znode = getNodeName(zkw, region.getEncodedName()); - return ZKUtil.checkExists(zkw, znode); - } - - /** - * Delete the assignment node regardless of its current state. - *

    - * Fail silent even if the node does not exist at all. - * @param watcher - * @param regionInfo - * @throws KeeperException - */ - public static void deleteNodeFailSilent(ZooKeeperWatcher watcher, - HRegionInfo regionInfo) - throws KeeperException { - String node = getNodeName(watcher, regionInfo.getEncodedName()); - ZKUtil.deleteNodeFailSilent(watcher, node); - } - - /** - * Blocks until there are no node in regions in transition. - *

    - * Used in testing only. - * @param zkw zk reference - * @throws KeeperException - * @throws InterruptedException - */ - public static void blockUntilNoRIT(ZooKeeperWatcher zkw) - throws KeeperException, InterruptedException { - while (ZKUtil.nodeHasChildren(zkw, zkw.assignmentZNode)) { - List znodes = - ZKUtil.listChildrenAndWatchForNewChildren(zkw, zkw.assignmentZNode); - if (znodes != null && !znodes.isEmpty()) { - LOG.debug("Waiting on RIT: " + znodes); - } - Thread.sleep(100); - } - } - - /** - * Blocks until there is at least one node in regions in transition. - *

    - * Used in testing only. - * @param zkw zk reference - * @throws KeeperException - * @throws InterruptedException - */ - public static void blockUntilRIT(ZooKeeperWatcher zkw) - throws KeeperException, InterruptedException { - while (!ZKUtil.nodeHasChildren(zkw, zkw.assignmentZNode)) { - List znodes = - ZKUtil.listChildrenAndWatchForNewChildren(zkw, zkw.assignmentZNode); - if (znodes == null || znodes.isEmpty()) { - LOG.debug("No RIT in ZK"); - } - Thread.sleep(100); - } - } - - /** - * Presume bytes are serialized unassigned data structure - * @param znodeBytes - * @return String of the deserialized znode bytes. - */ - static String toString(final byte[] znodeBytes) { - // This method should not exist. Used by ZKUtil stringifying RegionTransition. Have the - // method in here so RegionTransition does not leak into ZKUtil. - try { - RegionTransition rt = RegionTransition.parseFrom(znodeBytes); - return rt.toString(); - } catch (DeserializationException e) { - return ""; - } - } -} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKClusterId.java hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKClusterId.java index f0c19e3..b603ab2 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKClusterId.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKClusterId.java @@ -21,9 +21,9 @@ package org.apache.hadoop.hbase.zookeeper; import java.util.UUID; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Abortable; import org.apache.hadoop.hbase.ClusterId; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.zookeeper.KeeperException; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java index e4aedc4..3dc9aa6 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java @@ -27,9 +27,9 @@ import java.util.Properties; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * Utility methods for reading, and building the ZooKeeper configuration. 
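For orientation, the ZKAssign class removed above implemented the region-assignment handshake as versioned znode transitions: the Master creates the unassigned node in OFFLINE state, the RegionServer advances it OFFLINE -> OPENING -> OPENED with conditional writes against the znode version, and the Master acknowledges by deleting the OPENED node. The sketch below strings the deleted public methods together in that order. It is illustrative only, using just the signatures visible in the removed file, with error handling elided; it is not how AssignmentManager itself wired these calls.

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.zookeeper.ZKAssign;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;
import org.apache.zookeeper.KeeperException;

public class ZkAssignFlowSketch {
  static void openRegion(ZooKeeperWatcher zkw, HRegionInfo region, ServerName sn)
      throws KeeperException {
    // Master side: create (or force) the unassigned znode into OFFLINE state.
    if (ZKAssign.createOrForceNodeOffline(zkw, region, sn) == -1) {
      return;  // another actor touched the node between the write and the re-read
    }
    // RegionServer side: OFFLINE -> OPENING. The conditional setData against the
    // znode version makes a concurrent writer lose and the call return -1.
    int openingVersion = ZKAssign.transitionNodeOpening(zkw, region, sn);
    if (openingVersion == -1) {
      return;
    }
    // RegionServer side: OPENING -> OPENED, guarded by the version seen above.
    if (ZKAssign.transitionNodeOpened(zkw, region, sn, openingVersion) == -1) {
      return;
    }
    // Master side: acknowledge completion by deleting the OPENED node.
    ZKAssign.deleteOpenedNode(zkw, region.getEncodedName(), sn);
  }
}

The expected-version argument is what prevents double assignment: only the caller whose conditional setData succeeds owns the transition, and every other caller gets -1 back and must re-read the node.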
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKLeaderManager.java hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKLeaderManager.java index 792ed78..495c2bc 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKLeaderManager.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKLeaderManager.java @@ -22,8 +22,8 @@ import java.util.concurrent.atomic.AtomicBoolean; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Stoppable; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; import org.apache.zookeeper.KeeperException; diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTableStateClientSideReader.java hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTableStateClientSideReader.java deleted file mode 100644 index e1c4a4f..0000000 --- hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTableStateClientSideReader.java +++ /dev/null @@ -1,168 +0,0 @@ -/** - * Copyright The Apache Software Foundation - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.zookeeper; - -import com.google.protobuf.InvalidProtocolBufferException; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.protobuf.ProtobufUtil; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; -import org.apache.zookeeper.KeeperException; - -import java.util.HashSet; -import java.util.List; -import java.util.Set; - -/** - * Non-instantiable class that provides helper functions to learn - * about HBase table state for code running on client side (hence, not having - * access to consensus context). - * - * Doesn't cache any table state, just goes directly to ZooKeeper. - * TODO: decouple this class from ZooKeeper. - */ -@InterfaceAudience.Private -public class ZKTableStateClientSideReader { - - private ZKTableStateClientSideReader() {} - - /** - * Go to zookeeper and see if state of table is {@code ZooKeeperProtos.Table.State#DISABLED}. - * This method does not use cache. - * This method is for clients other than AssignmentManager - * @param zkw ZooKeeperWatcher instance to use - * @param tableName table we're checking - * @return True if table is enabled. 
- * @throws KeeperException - */ - public static boolean isDisabledTable(final ZooKeeperWatcher zkw, - final TableName tableName) - throws KeeperException, InterruptedException { - ZooKeeperProtos.Table.State state = getTableState(zkw, tableName); - return isTableState(ZooKeeperProtos.Table.State.DISABLED, state); - } - - /** - * Go to zookeeper and see if state of table is {@code ZooKeeperProtos.Table.State#ENABLED}. - * This method does not use cache. - * This method is for clients other than AssignmentManager - * @param zkw ZooKeeperWatcher instance to use - * @param tableName table we're checking - * @return True if table is enabled. - * @throws KeeperException - */ - public static boolean isEnabledTable(final ZooKeeperWatcher zkw, - final TableName tableName) - throws KeeperException, InterruptedException { - return getTableState(zkw, tableName) == ZooKeeperProtos.Table.State.ENABLED; - } - - /** - * Go to zookeeper and see if state of table is {@code ZooKeeperProtos.Table.State#DISABLING} - * of {@code ZooKeeperProtos.Table.State#DISABLED}. - * This method does not use cache. - * This method is for clients other than AssignmentManager. - * @param zkw ZooKeeperWatcher instance to use - * @param tableName table we're checking - * @return True if table is enabled. - * @throws KeeperException - */ - public static boolean isDisablingOrDisabledTable(final ZooKeeperWatcher zkw, - final TableName tableName) - throws KeeperException, InterruptedException { - ZooKeeperProtos.Table.State state = getTableState(zkw, tableName); - return isTableState(ZooKeeperProtos.Table.State.DISABLING, state) || - isTableState(ZooKeeperProtos.Table.State.DISABLED, state); - } - - /** - * Gets a list of all the tables set as disabled in zookeeper. - * @return Set of disabled tables, empty Set if none - * @throws KeeperException - */ - public static Set getDisabledTables(ZooKeeperWatcher zkw) - throws KeeperException, InterruptedException { - Set disabledTables = new HashSet(); - List children = - ZKUtil.listChildrenNoWatch(zkw, zkw.tableZNode); - for (String child: children) { - TableName tableName = - TableName.valueOf(child); - ZooKeeperProtos.Table.State state = getTableState(zkw, tableName); - if (state == ZooKeeperProtos.Table.State.DISABLED) disabledTables.add(tableName); - } - return disabledTables; - } - - /** - * Gets a list of all the tables set as disabled in zookeeper. - * @return Set of disabled tables, empty Set if none - * @throws KeeperException - */ - public static Set getDisabledOrDisablingTables(ZooKeeperWatcher zkw) - throws KeeperException, InterruptedException { - Set disabledTables = new HashSet(); - List children = - ZKUtil.listChildrenNoWatch(zkw, zkw.tableZNode); - for (String child: children) { - TableName tableName = - TableName.valueOf(child); - ZooKeeperProtos.Table.State state = getTableState(zkw, tableName); - if (state == ZooKeeperProtos.Table.State.DISABLED || - state == ZooKeeperProtos.Table.State.DISABLING) - disabledTables.add(tableName); - } - return disabledTables; - } - - static boolean isTableState(final ZooKeeperProtos.Table.State expectedState, - final ZooKeeperProtos.Table.State currentState) { - return currentState != null && currentState.equals(expectedState); - } - - /** - * @param zkw ZooKeeperWatcher instance to use - * @param tableName table we're checking - * @return Null or {@link ZooKeeperProtos.Table.State} found in znode. 
- * @throws KeeperException - */ - static ZooKeeperProtos.Table.State getTableState(final ZooKeeperWatcher zkw, - final TableName tableName) - throws KeeperException, InterruptedException { - String znode = ZKUtil.joinZNode(zkw.tableZNode, tableName.getNameAsString()); - byte [] data = ZKUtil.getData(zkw, znode); - if (data == null || data.length <= 0) return null; - try { - ProtobufUtil.expectPBMagicPrefix(data); - ZooKeeperProtos.Table.Builder builder = ZooKeeperProtos.Table.newBuilder(); - int magicLen = ProtobufUtil.lengthOfPBMagic(); - ZooKeeperProtos.Table t = builder.mergeFrom(data, magicLen, data.length - magicLen).build(); - return t.getState(); - } catch (InvalidProtocolBufferException e) { - KeeperException ke = new KeeperException.DataInconsistencyException(); - ke.initCause(e); - throw ke; - } catch (DeserializationException e) { - throw ZKUtil.convert(e); - } - } -} diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java index d63a206..64f75c4 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java @@ -35,18 +35,18 @@ import java.util.Properties; import javax.security.auth.login.AppConfigurationEntry; import javax.security.auth.login.AppConfigurationEntry.LoginModuleControlFlag; -import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.commons.lang.StringUtils; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionStoreSequenceIds; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Threads; import org.apache.hadoop.hbase.zookeeper.ZKUtil.ZKUtilOp.CreateAndFailSilent; @@ -61,9 +61,11 @@ import org.apache.zookeeper.KeeperException.NoNodeException; import org.apache.zookeeper.Op; import org.apache.zookeeper.Watcher; import org.apache.zookeeper.ZooDefs.Ids; +import org.apache.zookeeper.ZooDefs.Perms; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.client.ZooKeeperSaslClient; import org.apache.zookeeper.data.ACL; +import org.apache.zookeeper.data.Id; import org.apache.zookeeper.data.Stat; import org.apache.zookeeper.proto.CreateRequest; import org.apache.zookeeper.proto.DeleteRequest; @@ -263,10 +265,6 @@ public class ZKUtil { private final String keytabFile; private final String principal; - public JaasConfiguration(String loginContextName, String principal) { - this(loginContextName, principal, null, true); - } - public JaasConfiguration(String loginContextName, String principal, String keytabFile) { this(loginContextName, principal, keytabFile, keytabFile == null || keytabFile.length() == 0); } @@ -805,6 +803,7 @@ public class ZKUtil { * @throws KeeperException if unexpected zookeeper exception * @deprecated Unused */ + @Deprecated public static List getChildDataAndWatchForNewChildren( ZooKeeperWatcher zkw, String baseNode) 
throws KeeperException { List nodes = @@ -837,6 +836,7 @@ public class ZKUtil { * @throws KeeperException.BadVersionException if version mismatch * @deprecated Unused */ + @Deprecated public static void updateExistingNodeData(ZooKeeperWatcher zkw, String znode, byte [] data, int expectedVersion) throws KeeperException { @@ -952,7 +952,16 @@ public class ZKUtil { } private static ArrayList createACL(ZooKeeperWatcher zkw, String node) { + if (!node.startsWith(zkw.baseZNode)) { + return Ids.OPEN_ACL_UNSAFE; + } if (isSecureZooKeeper(zkw.getConfiguration())) { + String superUser = zkw.getConfiguration().get("hbase.superuser"); + ArrayList acls = new ArrayList(); + // add permission to hbase supper user + if (superUser != null) { + acls.add(new ACL(Perms.ALL, new Id("auth", superUser))); + } // Certain znodes are accessed directly by the client, // so they must be readable by non-authenticated clients if ((node.equals(zkw.baseZNode) == true) || @@ -961,11 +970,13 @@ public class ZKUtil { (node.equals(zkw.clusterIdZNode) == true) || (node.equals(zkw.rsZNode) == true) || (node.equals(zkw.backupMasterAddressesZNode) == true) || - (node.startsWith(zkw.assignmentZNode) == true) || (node.startsWith(zkw.tableZNode) == true)) { - return ZooKeeperWatcher.CREATOR_ALL_AND_WORLD_READABLE; + acls.addAll(Ids.CREATOR_ALL_ACL); + acls.addAll(Ids.READ_ACL_UNSAFE); + } else { + acls.addAll(Ids.CREATOR_ALL_ACL); } - return Ids.CREATOR_ALL_ACL; + return acls; } else { return Ids.OPEN_ACL_UNSAFE; } @@ -1783,8 +1794,6 @@ public class ZKUtil { " byte(s) of data from znode " + znode + (watcherSet? " and set watcher; ": "; data=") + (data == null? "null": data.length == 0? "empty": ( - znode.startsWith(zkw.assignmentZNode)? - ZKAssign.toString(data): // We should not be doing this reaching into another class znode.startsWith(zkw.metaServerZNode)? getServerNameOrEmptyString(data): znode.startsWith(zkw.backupMasterAddressesZNode)? @@ -1842,41 +1851,6 @@ public class ZKUtil { } } - - public static byte[] blockUntilAvailable( - final ZooKeeperWatcher zkw, final String znode, final long timeout) - throws InterruptedException { - if (timeout < 0) throw new IllegalArgumentException(); - if (zkw == null) throw new IllegalArgumentException(); - if (znode == null) throw new IllegalArgumentException(); - - byte[] data = null; - boolean finished = false; - final long endTime = System.currentTimeMillis() + timeout; - while (!finished) { - try { - data = ZKUtil.getData(zkw, znode); - } catch(KeeperException e) { - if (e instanceof KeeperException.SessionExpiredException - || e instanceof KeeperException.AuthFailedException) { - // non-recoverable errors so stop here - throw new InterruptedException("interrupted due to " + e); - } - LOG.warn("Unexpected exception handling blockUntilAvailable", e); - } - - if (data == null && (System.currentTimeMillis() + - HConstants.SOCKET_RETRY_WAIT_MS < endTime)) { - Thread.sleep(HConstants.SOCKET_RETRY_WAIT_MS); - } else { - finished = true; - } - } - - return data; - } - - /** * Convert a {@link DeserializationException} to a more palatable {@link KeeperException}. * Used when can't let a {@link DeserializationException} out w/o changing public API. 
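The createACL change in the ZKUtil hunk above reduces to three rules: znodes outside the HBase base znode keep an open ACL, secure clusters additionally grant full permission to the hbase.superuser principal on every HBase znode, and only the znodes that unauthenticated clients must read directly remain world-readable (the assignment znode drops off that list, along with the old ZooKeeperWatcher.CREATOR_ALL_AND_WORLD_READABLE constant). A condensed, standalone restatement of that decision logic, with simplified parameters standing in for the real ZooKeeperWatcher fields and configuration lookups:

import java.util.ArrayList;
import java.util.List;

import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooDefs.Perms;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

public class ZkAclSketch {
  // node: znode being created; baseZNode: HBase root znode; superUser: value of
  // "hbase.superuser" (may be null); worldReadable: znodes clients read directly.
  static List<ACL> aclFor(String node, String baseZNode, boolean secureZooKeeper,
      String superUser, List<String> worldReadable) {
    if (!node.startsWith(baseZNode) || !secureZooKeeper) {
      return Ids.OPEN_ACL_UNSAFE;               // outside HBase, or unsecured cluster
    }
    ArrayList<ACL> acls = new ArrayList<ACL>();
    if (superUser != null) {
      acls.add(new ACL(Perms.ALL, new Id("auth", superUser)));  // superuser: full access
    }
    acls.addAll(Ids.CREATOR_ALL_ACL);           // creator always keeps full access
    if (worldReadable.contains(node)) {
      acls.addAll(Ids.READ_ACL_UNSAFE);         // client-facing znodes stay world-readable
    }
    return acls;
  }
}

This is a sketch of the rule set, not the patched method itself; the actual code keys the world-readable case off explicit equals/startsWith checks against the watcher's znode fields rather than a list lookup.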
diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperNodeTracker.java hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperNodeTracker.java index b621200..1ed1e3f 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperNodeTracker.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperNodeTracker.java @@ -20,8 +20,8 @@ package org.apache.hadoop.hbase.zookeeper; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Abortable; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.zookeeper.KeeperException; /** diff --git hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java index af879d7e..f287a0e 100644 --- hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java +++ hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZooKeeperWatcher.java @@ -21,17 +21,18 @@ package org.apache.hadoop.hbase.zookeeper; import java.io.Closeable; import java.io.IOException; import java.util.ArrayList; +import java.util.Arrays; import java.util.List; import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.CountDownLatch; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Abortable; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.ZooKeeperConnectionException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.Watcher; @@ -55,6 +56,7 @@ public class ZooKeeperWatcher implements Watcher, Abortable, Closeable { // Identifier for this watcher (for logging only). It is made of the prefix // passed on construction and the zookeeper sessionid. + private String prefix; private String identifier; // zookeeper quorum @@ -92,9 +94,8 @@ public class ZooKeeperWatcher implements Watcher, Abortable, Closeable { public String backupMasterAddressesZNode; // znode containing the current cluster state public String clusterStateZNode; - // znode used for region transitioning and assignment - public String assignmentZNode; // znode used for table disabling/enabling + @Deprecated public String tableZNode; // znode containing the unique cluster ID public String clusterIdZNode; @@ -110,13 +111,6 @@ public class ZooKeeperWatcher implements Watcher, Abortable, Closeable { public static String namespaceZNode = "namespace"; - // Certain ZooKeeper nodes need to be world-readable - public static final ArrayList CREATOR_ALL_AND_WORLD_READABLE = - new ArrayList() { { - add(new ACL(ZooDefs.Perms.READ,ZooDefs.Ids.ANYONE_ID_UNSAFE)); - add(new ACL(ZooDefs.Perms.ALL,ZooDefs.Ids.AUTH_IDS)); - }}; - private final Configuration conf; private final Exception constructorCaller; @@ -156,9 +150,10 @@ public class ZooKeeperWatcher implements Watcher, Abortable, Closeable { this.constructorCaller = e; } this.quorum = ZKConfig.getZKQuorumServersString(conf); + this.prefix = identifier; // Identifier will get the sessionid appended later below down when we // handle the syncconnect event. 
- this.identifier = identifier; + this.identifier = identifier + "0x0"; this.abortable = abortable; setNodeNames(conf); this.recoverableZooKeeper = ZKUtil.connect(conf, quorum, this, identifier); @@ -171,9 +166,6 @@ public class ZooKeeperWatcher implements Watcher, Abortable, Closeable { try { // Create all the necessary "directories" of znodes ZKUtil.createWithParents(this, baseZNode); - if (conf.getBoolean("hbase.assignment.usezk", true)) { - ZKUtil.createAndFailSilent(this, assignmentZNode); - } ZKUtil.createAndFailSilent(this, rsZNode); ZKUtil.createAndFailSilent(this, drainingZNode); ZKUtil.createAndFailSilent(this, tableZNode); @@ -220,8 +212,6 @@ public class ZooKeeperWatcher implements Watcher, Abortable, Closeable { conf.get("zookeeper.znode.backup.masters", "backup-masters")); clusterStateZNode = ZKUtil.joinZNode(baseZNode, conf.get("zookeeper.znode.state", "running")); - assignmentZNode = ZKUtil.joinZNode(baseZNode, - conf.get("zookeeper.znode.unassigned", "region-in-transition")); tableZNode = ZKUtil.joinZNode(baseZNode, conf.get("zookeeper.znode.tableEnableDisable", "table")); clusterIdZNode = ZKUtil.joinZNode(baseZNode, @@ -389,7 +379,7 @@ public class ZooKeeperWatcher implements Watcher, Abortable, Closeable { this.constructorCaller); throw new NullPointerException("ZK is null"); } - this.identifier = this.identifier + "-0x" + + this.identifier = this.prefix + "-0x" + Long.toHexString(this.recoverableZooKeeper.getSessionId()); // Update our identifier. Otherwise ignore. LOG.debug(this.identifier + " connected"); diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/TestHColumnDescriptor.java hbase-client/src/test/java/org/apache/hadoop/hbase/TestHColumnDescriptor.java index 8e23f97..976876cf 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/TestHColumnDescriptor.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/TestHColumnDescriptor.java @@ -25,18 +25,23 @@ import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.compress.Compression.Algorithm; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; import org.apache.hadoop.hbase.regionserver.BloomType; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.BuilderStyleTest; -import org.junit.experimental.categories.Category; import org.junit.Test; +import org.junit.experimental.categories.Category; /** Tests the HColumnDescriptor with appropriate arguments */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestHColumnDescriptor { @Test public void testPb() throws DeserializationException { HColumnDescriptor hcd = new HColumnDescriptor( - HTableDescriptor.META_TABLEDESC.getColumnFamilies()[0]); + new HColumnDescriptor(HConstants.CATALOG_FAMILY) + .setInMemory(true) + .setScope(HConstants.REPLICATION_SCOPE_LOCAL) + .setBloomFilterType(BloomType.NONE) + .setCacheDataInL1(true)); final int v = 123; hcd.setBlocksize(v); hcd.setTimeToLive(v); diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/TestHTableDescriptor.java hbase-client/src/test/java/org/apache/hadoop/hbase/TestHTableDescriptor.java index 8dc141b..43d9411 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/TestHTableDescriptor.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/TestHTableDescriptor.java @@ -29,6 +29,7 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import 
org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.exceptions.DeserializationException; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.BuilderStyleTest; import org.apache.hadoop.hbase.util.Bytes; @@ -38,13 +39,13 @@ import org.junit.experimental.categories.Category; /** * Test setting values in the descriptor */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestHTableDescriptor { final static Log LOG = LogFactory.getLog(TestHTableDescriptor.class); @Test public void testPb() throws DeserializationException, IOException { - HTableDescriptor htd = new HTableDescriptor(HTableDescriptor.META_TABLEDESC); + HTableDescriptor htd = new HTableDescriptor(TableName.META_TABLE_NAME); final int v = 123; htd.setMaxFileSize(v); htd.setDurability(Durability.ASYNC_WAL); diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/TestRegionLocations.java hbase-client/src/test/java/org/apache/hadoop/hbase/TestRegionLocations.java index 1c27f45..7331b4d 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/TestRegionLocations.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/TestRegionLocations.java @@ -23,11 +23,12 @@ import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertNull; import static org.junit.Assert.assertTrue; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) public class TestRegionLocations { ServerName sn0 = ServerName.valueOf("host0", 10, 10); diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java index 97c3c37..88a95fb 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAsyncProcess.java @@ -20,19 +20,39 @@ package org.apache.hadoop.hbase.client; +import java.io.IOException; +import java.io.InterruptedIOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.TreeSet; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.RejectedExecutionException; +import java.util.concurrent.SynchronousQueue; +import java.util.concurrent.ThreadFactory; +import java.util.concurrent.ThreadPoolExecutor; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicLong; + import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; -import org.apache.hadoop.hbase.RegionLocations; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; -import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.AsyncProcess.AsyncRequestFuture; import 
org.apache.hadoop.hbase.client.coprocessor.Batch; import org.apache.hadoop.hbase.client.coprocessor.Batch.Callback; import org.apache.hadoop.hbase.ipc.RpcControllerFactory; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Threads; import org.junit.Assert; @@ -43,26 +63,7 @@ import org.junit.experimental.categories.Category; import org.junit.rules.Timeout; import org.mockito.Mockito; -import java.io.IOException; -import java.io.InterruptedIOException; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.TreeSet; -import java.util.concurrent.ExecutorService; -import java.util.concurrent.RejectedExecutionException; -import java.util.concurrent.SynchronousQueue; -import java.util.concurrent.ThreadFactory; -import java.util.concurrent.ThreadPoolExecutor; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicLong; - -@Category(MediumTests.class) +@Category({ClientTests.class, MediumTests.class}) public class TestAsyncProcess { private static final TableName DUMMY_TABLE = TableName.valueOf("DUMMY_TABLE"); @@ -189,7 +190,7 @@ public class TestAsyncProcess { } }); - return new RpcRetryingCaller(100, 10, 9) { + return new RpcRetryingCallerImpl(100, 10, 9) { @Override public MultiResponse callWithoutRetries(RetryingCallable callable, int callTimeout) @@ -207,7 +208,7 @@ public class TestAsyncProcess { } } - static class CallerWithFailure extends RpcRetryingCaller{ + static class CallerWithFailure extends RpcRetryingCallerImpl{ public CallerWithFailure() { super(100, 100, 9); @@ -293,7 +294,7 @@ public class TestAsyncProcess { replicaCalls.incrementAndGet(); } - return new RpcRetryingCaller(100, 10, 9) { + return new RpcRetryingCallerImpl(100, 10, 9) { @Override public MultiResponse callWithoutRetries(RetryingCallable callable, int callTimeout) throws IOException, RuntimeException { @@ -710,9 +711,7 @@ public class TestAsyncProcess { HTable ht = new HTable(); MyAsyncProcess ap = new MyAsyncProcess(createHConnection(), conf, true); ht.ap = ap; - // This is deprecated method. Using it here only because the new HTable above is a bit of a - // perversion skirting a bunch of setup. Fix the HTable test-only constructor to do more. 
- ht.setAutoFlush(false, true); + ht.setAutoFlushTo(false); ht.setWriteBufferSize(0); Put p = createPut(1, false); diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAttributes.java hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAttributes.java index e7250dd..6656a83 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAttributes.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestAttributes.java @@ -21,13 +21,14 @@ package org.apache.hadoop.hbase.client; import java.util.Arrays; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) public class TestAttributes { private static final byte [] ROW = new byte [] {'r'}; @Test diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientExponentialBackoff.java hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientExponentialBackoff.java new file mode 100644 index 0000000..88e409d --- /dev/null +++ hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientExponentialBackoff.java @@ -0,0 +1,110 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.client; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.client.backoff.ExponentialClientBackoffPolicy; +import org.apache.hadoop.hbase.client.backoff.ServerStatistics; +import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; +import org.apache.hadoop.hbase.util.Bytes; +import org.junit.Test; +import org.mockito.Mockito; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; + +public class TestClientExponentialBackoff { + + ServerName server = Mockito.mock(ServerName.class); + byte[] regionname = Bytes.toBytes("region"); + + @Test + public void testNulls() { + Configuration conf = new Configuration(false); + ExponentialClientBackoffPolicy backoff = new ExponentialClientBackoffPolicy(conf); + assertEquals(0, backoff.getBackoffTime(null, null, null)); + + // server name doesn't matter to calculation, but check it now anyways + assertEquals(0, backoff.getBackoffTime(server, null, null)); + assertEquals(0, backoff.getBackoffTime(server, regionname, null)); + + // check when no stats for the region yet + ServerStatistics stats = new ServerStatistics(); + assertEquals(0, backoff.getBackoffTime(server, regionname, stats)); + } + + @Test + public void testMaxLoad() { + Configuration conf = new Configuration(false); + ExponentialClientBackoffPolicy backoff = new ExponentialClientBackoffPolicy(conf); + + ServerStatistics stats = new ServerStatistics(); + update(stats, 100); + assertEquals(ExponentialClientBackoffPolicy.DEFAULT_MAX_BACKOFF, backoff.getBackoffTime(server, + regionname, stats)); + + // another policy with a different max timeout + long max = 100; + conf.setLong(ExponentialClientBackoffPolicy.MAX_BACKOFF_KEY, max); + ExponentialClientBackoffPolicy backoffShortTimeout = new ExponentialClientBackoffPolicy(conf); + assertEquals(max, backoffShortTimeout.getBackoffTime(server, regionname, stats)); + + // test beyond 100 still doesn't exceed the max + update(stats, 101); + assertEquals(ExponentialClientBackoffPolicy.DEFAULT_MAX_BACKOFF, backoff.getBackoffTime(server, + regionname, stats)); + assertEquals(max, backoffShortTimeout.getBackoffTime(server, regionname, stats)); + + // and that when we are below 100, its less than the max timeout + update(stats, 99); + assertTrue(backoff.getBackoffTime(server, + regionname, stats) < ExponentialClientBackoffPolicy.DEFAULT_MAX_BACKOFF); + assertTrue(backoffShortTimeout.getBackoffTime(server, regionname, stats) < max); + } + + /** + * Make sure that we get results in the order that we expect - backoff for a load of 1 should + * less than backoff for 10, which should be less than that for 50. 
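Note: the ordering property described just above can be exercised outside the test harness as well. The following is a rough sketch (not part of the patch) that drives the new ExponentialClientBackoffPolicy with increasing memstore loads, using only the API surface visible in this change; the server hostname, port, and startcode are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.backoff.ExponentialClientBackoffPolicy;
    import org.apache.hadoop.hbase.client.backoff.ServerStatistics;
    import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BackoffOrderingSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration(false);
        // Widen the cap so the differences between load levels are visible.
        conf.setLong(ExponentialClientBackoffPolicy.MAX_BACKOFF_KEY, Integer.MAX_VALUE);
        ExponentialClientBackoffPolicy policy = new ExponentialClientBackoffPolicy(conf);
        ServerName server = ServerName.valueOf("rs.example.org", 16020, 1L); // placeholder server
        byte[] region = Bytes.toBytes("region");
        ServerStatistics stats = new ServerStatistics();
        for (int load : new int[] {1, 10, 50}) {
          // Record the reported memstore load for the region, as the test's update() helper does.
          stats.update(region, ClientProtos.RegionLoadStats.newBuilder()
              .setMemstoreLoad(load).build());
          System.out.println("load " + load + " -> backoff "
              + policy.getBackoffTime(server, region, stats) + " ms");
        }
      }
    }

Backoff should grow with each reported load, mirroring the assertion in testResultOrdering.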
+ */ + @Test + public void testResultOrdering() { + Configuration conf = new Configuration(false); + // make the max timeout really high so we get differentiation between load factors + conf.setLong(ExponentialClientBackoffPolicy.MAX_BACKOFF_KEY, Integer.MAX_VALUE); + ExponentialClientBackoffPolicy backoff = new ExponentialClientBackoffPolicy(conf); + + ServerStatistics stats = new ServerStatistics(); + long previous = backoff.getBackoffTime(server, regionname, stats); + for (int i = 1; i <= 100; i++) { + update(stats, i); + long next = backoff.getBackoffTime(server, regionname, stats); + assertTrue( + "Previous backoff time" + previous + " >= " + next + ", the next backoff time for " + + "load " + i, previous < next); + previous = next; + } + } + + private void update(ServerStatistics stats, int load) { + ClientProtos.RegionLoadStats stat = ClientProtos.RegionLoadStats.newBuilder() + .setMemstoreLoad + (load).build(); + stats.update(regionname, stat); + } +} \ No newline at end of file diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java index 9582405..da643fc 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestClientNoCluster.java @@ -33,7 +33,6 @@ import java.util.concurrent.Executors; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; -import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.commons.lang.NotImplementedException; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; @@ -43,11 +42,10 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; -import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.RegionTooBusyException; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.protobuf.generated.CellProtos; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; @@ -71,6 +69,9 @@ import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanResponse; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionSpecifier.RegionSpecifierType; import org.apache.hadoop.hbase.regionserver.RegionServerStoppedException; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.util.Threads; @@ -91,7 +92,7 @@ import com.google.protobuf.ServiceException; * Test client behavior w/o setting up a cluster. * Mock up cluster emissions. 
*/ -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) public class TestClientNoCluster extends Configured implements Tool { private static final Log LOG = LogFactory.getLog(TestClientNoCluster.class); private Configuration conf; @@ -129,12 +130,6 @@ public class TestClientNoCluster extends Configured implements Tool { } @Override - public boolean isTableOnlineState(TableName tableName, boolean enabled) - throws IOException { - return enabled; - } - - @Override public int getCurrentNrHRS() throws IOException { return 1; } @@ -175,7 +170,7 @@ public class TestClientNoCluster extends Configured implements Tool { * @throws IOException */ @Test - public void testRocTimeout() throws IOException { + public void testRpcTimeout() throws IOException { Configuration localConfig = HBaseConfiguration.create(this.conf); // This override mocks up our exists/get call to throw a RegionServerStoppedException. localConfig.set("hbase.client.connection.impl", RpcTimeoutConnection.class.getName()); @@ -209,7 +204,9 @@ public class TestClientNoCluster extends Configured implements Tool { public void testDoNotRetryMetaScanner() throws IOException { this.conf.set("hbase.client.connection.impl", RegionServerStoppedOnScannerOpenConnection.class.getName()); - MetaScanner.metaScan(this.conf, null); + try (Connection connection = ConnectionFactory.createConnection(conf)) { + MetaScanner.metaScan(connection, null); + } } @Test diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestDeleteTimeStamp.java hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestDeleteTimeStamp.java index 62b4972..e3582c1 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestDeleteTimeStamp.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestDeleteTimeStamp.java @@ -16,13 +16,14 @@ import java.util.Map.Entry; import java.util.NavigableMap; import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) public class TestDeleteTimeStamp { private static final byte[] ROW = Bytes.toBytes("testRow"); private static final byte[] FAMILY = Bytes.toBytes("testFamily"); diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestFastFailWithoutTestUtil.java hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestFastFailWithoutTestUtil.java index 7cb0be6..e82e59d 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestFastFailWithoutTestUtil.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestFastFailWithoutTestUtil.java @@ -46,12 +46,13 @@ import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.exceptions.ConnectionClosingException; import org.apache.hadoop.hbase.exceptions.PreemptiveFastFailException; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.ipc.RemoteException; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category({ SmallTests.class }) +@Category({ SmallTests.class, ClientTests.class }) public class TestFastFailWithoutTestUtil { private static final Log LOG = 
LogFactory.getLog(TestFastFailWithoutTestUtil.class); @@ -340,55 +341,55 @@ public class TestFastFailWithoutTestUtil { } /*** - * This test tries to create a thread interleaving of the 2 threads trying to do a + * This test tries to create a thread interleaving of the 2 threads trying to do a * Retrying operation using a {@link PreemptiveFastFailInterceptor}. The goal here is to make sure * that the second thread will be attempting the operation while the first thread is in the - * process of making an attempt after it has marked the server in fast fail. - * + * process of making an attempt after it has marked the server in fast fail. + * * The thread execution is as follows : * The PreemptiveFastFailInterceptor is extended in this test to achieve a good interleaving * behavior without using any thread sleeps. - * + * * Privileged Thread 1 NonPrivileged Thread 2 - * - * Retry 0 : intercept - * + * + * Retry 0 : intercept + * * Retry 0 : handleFailure * latches[0].countdown * latches2[0].await * latches[0].await * intercept : Retry 0 - * + * * handleFailure : Retry 0 - * + * * updateFailureinfo : Retry 0 * latches2[0].countdown - * + * * Retry 0 : updateFailureInfo - * + * * Retry 1 : intercept - * + * * Retry 1 : handleFailure * latches[1].countdown * latches2[1].await - * + * * latches[1].await * intercept : Retry 1 * (throws PFFE) * handleFailure : Retry 1 - * + * * updateFailureinfo : Retry 1 * latches2[1].countdown * Retry 1 : updateFailureInfo - * - * + * + * * See getInterceptor() for more details on the interceptor implementation to make sure this * thread interleaving is achieved. - * + * * We need 2 sets of latches of size MAX_RETRIES. We use an AtomicInteger done to make sure that * we short circuit the Thread 1 after we hit the PFFE on Thread 2 - * - * + * + * * @throws InterruptedException * @throws ExecutionException */ @@ -469,7 +470,7 @@ public class TestFastFailWithoutTestUtil { } ExecutorService executor = Executors.newCachedThreadPool(); - + /** * Some timeouts to make the test execution resonable. */ @@ -477,7 +478,7 @@ public class TestFastFailWithoutTestUtil { final int RETRIES = 3; final int CLEANUP_TIMEOUT = 10000; final long FAST_FAIL_THRESHOLD = PAUSE_TIME / 1; - + /** * The latches necessary to make the thread interleaving possible. 
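For context on the preemptive fast-fail behavior this test exercises, the client-side knobs touched elsewhere in this patch (see the HConstants hunk later in this diff) can be set roughly as follows. This is an illustrative sketch only: the constant names and the 60000 ms default come from the HConstants hunk, while the interceptor's fully-qualified class name is inferred from the test's package and is not verified against the patch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;

    public class FastFailConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Roughly: how long failures to a server accumulate before the client
        // switches to fast-fail mode for that server (default taken from HConstants).
        conf.setLong(HConstants.HBASE_CLIENT_FAST_FAIL_THREASHOLD_MS,
            HConstants.HBASE_CLIENT_FAST_FAIL_THREASHOLD_MS_DEFAULT);
        // Which RetryingCallerInterceptor implementation to install; class name assumed.
        conf.set(HConstants.HBASE_CLIENT_FAST_FAIL_INTERCEPTOR_IMPL,
            "org.apache.hadoop.hbase.client.PreemptiveFastFailInterceptor");
      }
    }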
*/ @@ -563,7 +564,7 @@ public class TestFastFailWithoutTestUtil { public RpcRetryingCaller getRpcRetryingCaller(int pauseTime, int retries, RetryingCallerInterceptor interceptor) { - return new RpcRetryingCaller(pauseTime, retries, interceptor, 9) { + return new RpcRetryingCallerImpl(pauseTime, retries, interceptor, 9) { @Override public Void callWithRetries(RetryingCallable callable, int callTimeout) throws IOException, RuntimeException { @@ -597,12 +598,12 @@ public class TestFastFailWithoutTestUtil { protected HRegionLocation getLocation() { return new HRegionLocation(null, serverName); } - + @Override public void throwable(Throwable t, boolean retrying) { // Do nothing } - + @Override public long sleep(long pause, int tries) { return ConnectionUtils.getPauseTime(pause, tries + 1); diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java index 335103c..23e538c 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestGet.java @@ -34,13 +34,14 @@ import java.util.Set; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.filter.Filter; import org.apache.hadoop.hbase.filter.FilterList; import org.apache.hadoop.hbase.filter.KeyOnlyFilter; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Base64; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Assert; @@ -48,7 +49,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; // TODO: cover more test cases -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) public class TestGet { private static final byte [] ROW = new byte [] {'r'}; diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestIncrement.java hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestIncrement.java index 8a2c447..4b9f113 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestIncrement.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestIncrement.java @@ -22,11 +22,12 @@ import static org.junit.Assert.assertEquals; import java.util.Map; import java.util.NavigableMap; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) public class TestIncrement { @Test public void test() { diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestOperation.java hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestOperation.java index 1e81f28..96c4190d 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestOperation.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestOperation.java @@ -20,13 +20,6 @@ package org.apache.hadoop.hbase.client; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNotNull; -import 
org.apache.hadoop.hbase.Cell; -import org.apache.hadoop.hbase.CellUtil; -import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; -import org.junit.Assert; -import org.junit.Test; import java.io.IOException; import java.nio.ByteBuffer; @@ -35,6 +28,10 @@ import java.util.HashMap; import java.util.List; import java.util.Map; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.CellUtil; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.filter.BinaryComparator; import org.apache.hadoop.hbase.filter.ColumnCountGetFilter; import org.apache.hadoop.hbase.filter.ColumnPaginationFilter; @@ -54,22 +51,26 @@ import org.apache.hadoop.hbase.filter.PageFilter; import org.apache.hadoop.hbase.filter.PrefixFilter; import org.apache.hadoop.hbase.filter.QualifierFilter; import org.apache.hadoop.hbase.filter.RowFilter; -import org.apache.hadoop.hbase.filter.SingleColumnValueFilter; import org.apache.hadoop.hbase.filter.SingleColumnValueExcludeFilter; +import org.apache.hadoop.hbase.filter.SingleColumnValueFilter; import org.apache.hadoop.hbase.filter.SkipFilter; import org.apache.hadoop.hbase.filter.TimestampsFilter; import org.apache.hadoop.hbase.filter.ValueFilter; import org.apache.hadoop.hbase.filter.WhileMatchFilter; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.BuilderStyleTest; import org.apache.hadoop.hbase.util.Bytes; import org.codehaus.jackson.map.ObjectMapper; +import org.junit.Assert; +import org.junit.Test; import org.junit.experimental.categories.Category; /** * Run tests that use the functionality of the Operation superclass for * Puts, Gets, Deletes, Scans, and MultiPuts. 
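The recategorization in the hunk just below repeats a pattern applied throughout this patch: every test keeps its size category and gains a component category from org.apache.hadoop.hbase.testclassification. A minimal sketch of the pattern (the class and test names here are made up):

    import org.apache.hadoop.hbase.testclassification.ClientTests;
    import org.apache.hadoop.hbase.testclassification.SmallTests;
    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Dual categorization: a component category (ClientTests) plus a size category (SmallTests).
    @Category({ClientTests.class, SmallTests.class})
    public class ExampleCategorizedTest {
      @Test
      public void testSomething() {
        // Test body omitted; the point is the dual @Category annotation above.
      }
    }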
*/ -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) public class TestOperation { private static byte [] ROW = Bytes.toBytes("testRow"); private static byte [] FAMILY = Bytes.toBytes("testFamily"); diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestPutDotHas.java hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestPutDotHas.java index 0fec0eb..c269e62 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestPutDotHas.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestPutDotHas.java @@ -17,6 +17,7 @@ */ package org.apache.hadoop.hbase.client; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Assert; @@ -24,7 +25,7 @@ import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) /** * Addresses HBASE-6047 * We test put.has call with all of its polymorphic magic diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestScan.java hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestScan.java index d843723..129914f 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestScan.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestScan.java @@ -25,17 +25,18 @@ import java.io.IOException; import java.util.Arrays; import java.util.Set; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; import org.apache.hadoop.hbase.security.visibility.Authorizations; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; // TODO: cover more test cases -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) public class TestScan { @Test public void testAttributesSerialization() throws IOException { diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromAdmin.java hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromAdmin.java index 30060b2..78d718e 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromAdmin.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromAdmin.java @@ -27,13 +27,14 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsSnapshotDoneRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsSnapshotDoneResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SnapshotRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SnapshotResponse; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import 
org.junit.experimental.categories.Category; import org.mockito.Mockito; @@ -43,7 +44,7 @@ import com.google.protobuf.RpcController; /** * Test snapshot logic from the client */ -@Category(SmallTests.class) +@Category({SmallTests.class, ClientTests.class}) public class TestSnapshotFromAdmin { private static final Log LOG = LogFactory.getLog(TestSnapshotFromAdmin.class); diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/ipc/TestIPCUtil.java hbase-client/src/test/java/org/apache/hadoop/hbase/ipc/TestIPCUtil.java index 308b8e2..3eab225 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/ipc/TestIPCUtil.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/ipc/TestIPCUtil.java @@ -33,10 +33,11 @@ import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.codec.Codec; import org.apache.hadoop.hbase.codec.KeyValueCodec; import org.apache.hadoop.hbase.io.SizedCellScanner; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.ClassSize; import org.apache.hadoop.io.compress.CompressionCodec; @@ -47,7 +48,7 @@ import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) public class TestIPCUtil { public static final Log LOG = LogFactory.getLog(IPCUtil.class); diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/ipc/TestPayloadCarryingRpcController.java hbase-client/src/test/java/org/apache/hadoop/hbase/ipc/TestPayloadCarryingRpcController.java index b506b88..e6d6f43 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/ipc/TestPayloadCarryingRpcController.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/ipc/TestPayloadCarryingRpcController.java @@ -28,13 +28,14 @@ import java.util.List; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScannable; import org.apache.hadoop.hbase.CellScanner; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) public class TestPayloadCarryingRpcController { @Test public void testListOfCellScannerables() throws IOException { diff --git hbase-client/src/test/java/org/apache/hadoop/hbase/security/TestEncryptionUtil.java hbase-client/src/test/java/org/apache/hadoop/hbase/security/TestEncryptionUtil.java index e5e7b78..ed6f49b 100644 --- hbase-client/src/test/java/org/apache/hadoop/hbase/security/TestEncryptionUtil.java +++ hbase-client/src/test/java/org/apache/hadoop/hbase/security/TestEncryptionUtil.java @@ -17,7 +17,9 @@ */ package org.apache.hadoop.hbase.security; -import static org.junit.Assert.*; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.fail; import java.security.Key; import java.security.KeyException; @@ -27,15 +29,15 @@ import javax.crypto.spec.SecretKeySpec; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; -import 
org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.crypto.KeyProviderForTesting; import org.apache.hadoop.hbase.io.crypto.aes.AES; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; - import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) public class TestEncryptionUtil { @Test diff --git hbase-common/pom.xml hbase-common/pom.xml index 8373d89..3ae4565 100644 --- hbase-common/pom.xml +++ hbase-common/pom.xml @@ -23,7 +23,7 @@ hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. @@ -41,13 +41,32 @@ - - org.apache.maven.plugins - maven-site-plugin - - true - - + + maven-compiler-plugin + + + default-compile + + ${java.default.compiler} + true + + + + default-testCompile + + ${java.default.compiler} + true + + + + + + org.apache.maven.plugins + maven-site-plugin + + true + + maven-assembly-plugin @@ -183,6 +202,10 @@ org.apache.hbase + hbase-protocol + + + org.apache.hbase hbase-annotations test-jar test diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparator.java hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparator.java index 3ad717b..d760aa2 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparator.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparator.java @@ -252,7 +252,7 @@ public class CellComparator implements Comparator, Serializable { * Returns a hash code that is always the same for two Cells having a matching * equals(..) result. Currently does not guard against nulls, but it could if * necessary. Note : Ignore mvcc while calculating the hashcode - * + * * @param cell * @return hashCode */ @@ -481,4 +481,4 @@ public class CellComparator implements Comparator, Serializable { } return minimumMidpointArray; } -} +} \ No newline at end of file diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/CellScanner.java hbase-common/src/main/java/org/apache/hadoop/hbase/CellScanner.java index 3b5cdb9..f337122 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/CellScanner.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/CellScanner.java @@ -22,7 +22,6 @@ import java.io.IOException; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; /** * An interface for iterating through a sequence of cells. 
Similar to Java's Iterator, but without diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java index 0e5dfcd..fefe626 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java @@ -26,9 +26,9 @@ import java.util.List; import java.util.Map.Entry; import java.util.NavigableMap; +import org.apache.hadoop.hbase.KeyValue.Type; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.KeyValue.Type; import org.apache.hadoop.hbase.io.HeapSize; import org.apache.hadoop.hbase.util.ByteBufferUtils; import org.apache.hadoop.hbase.util.ByteRange; @@ -41,6 +41,11 @@ import org.apache.hadoop.hbase.util.Bytes; @InterfaceStability.Evolving public final class CellUtil { + /** + * Private constructor to keep this class from being instantiated. + */ + private CellUtil(){} + /******************* ByteRange *******************************/ public static ByteRange fillRowRange(Cell cell, ByteRange range) { @@ -187,7 +192,8 @@ public final class CellUtil { } public static Cell createCell(final byte[] row, final byte[] family, final byte[] qualifier, - final long timestamp, final byte type, final byte[] value, byte[] tags, final long memstoreTS) { + final long timestamp, final byte type, final byte[] value, byte[] tags, + final long memstoreTS) { KeyValue keyValue = new KeyValue(row, family, qualifier, timestamp, KeyValue.Type.codeToType(type), value, tags); keyValue.setSequenceId(memstoreTS); @@ -241,7 +247,8 @@ public final class CellUtil { * @param cellScannerables * @return CellScanner interface over cellIterables */ - public static CellScanner createCellScanner(final List cellScannerables) { + public static CellScanner createCellScanner( + final List cellScannerables) { return new CellScanner() { private final Iterator iterator = cellScannerables.iterator(); private CellScanner cellScanner = null; @@ -544,7 +551,7 @@ public final class CellUtil { /** * This is an estimate of the heap space occupied by a cell. When the cell is of type * {@link HeapSize} we call {@link HeapSize#heapSize()} so cell can give a correct value. In other - * cases we just consider the byte occupied by the cell components ie. row, CF, qualifier, + * cases we just consider the bytes occupied by the cell components ie. row, CF, qualifier, * timestamp, type, value and tags. 
* @param cell * @return estimate of the heap space @@ -847,4 +854,33 @@ public final class CellUtil { } return commonPrefix; } + + /** Returns a string representation of the cell */ + public static String toString(Cell cell, boolean verbose) { + if (cell == null) { + return ""; + } + StringBuilder builder = new StringBuilder(); + String keyStr = getCellKeyAsString(cell); + + String tag = null; + String value = null; + if (verbose) { + // TODO: pretty print tags as well + tag = Bytes.toStringBinary(cell.getTagsArray(), cell.getTagsOffset(), cell.getTagsLength()); + value = Bytes.toStringBinary(cell.getValueArray(), cell.getValueOffset(), + cell.getValueLength()); + } + + builder + .append(keyStr); + if (tag != null && !tag.isEmpty()) { + builder.append("/").append(tag); + } + if (value != null) { + builder.append("/").append(value); + } + + return builder.toString(); + } } diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/Chore.java hbase-common/src/main/java/org/apache/hadoop/hbase/Chore.java index c2c7964..42d9d37 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/Chore.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/Chore.java @@ -38,7 +38,7 @@ import org.apache.hadoop.hbase.util.Sleeper; public abstract class Chore extends HasThread { private final Log LOG = LogFactory.getLog(this.getClass()); private final Sleeper sleeper; - protected final Stoppable stopper; + private final Stoppable stopper; /** * @param p Period at which we should run. Will be adjusted appropriately @@ -65,6 +65,13 @@ public abstract class Chore extends HasThread { } /** + * @return the sleep period in milliseconds + */ + public final int getPeriod() { + return sleeper.getPeriod(); + } + + /** * @see java.lang.Thread#run() */ @Override @@ -139,4 +146,12 @@ public abstract class Chore extends HasThread { */ protected void cleanup() { } + + protected Stoppable getStopper() { + return stopper; + } + + protected Sleeper getSleeper() { + return sleeper; + } } diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/CompoundConfiguration.java hbase-common/src/main/java/org/apache/hadoop/hbase/CompoundConfiguration.java index 6b2c8b2..0eda1e5 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/CompoundConfiguration.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/CompoundConfiguration.java @@ -20,7 +20,7 @@ package org.apache.hadoop.hbase; import java.io.DataOutput; -import java.io.IOException; +import java.io.IOException; import java.io.OutputStream; import java.util.ArrayList; import java.util.HashMap; @@ -29,9 +29,8 @@ import java.util.List; import java.util.Map; import org.apache.commons.collections.iterators.UnmodifiableIterator; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.io.ImmutableBytesWritable; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; /** @@ -73,11 +72,11 @@ public class CompoundConfiguration extends Configuration { int size(); } - protected List configs + private final List configs = new ArrayList(); static class ImmutableConfWrapper implements ImmutableConfigMap { - Configuration c; + private final Configuration c; ImmutableConfWrapper(Configuration conf) { c = conf; @@ -149,27 +148,27 @@ public class CompoundConfiguration extends Configuration { } /** - * Add ImmutableBytesWritable map to config list. This map is generally + * Add Bytes map to config list. 
This map is generally * created by HTableDescriptor or HColumnDescriptor, but can be abstractly * used. The added configuration overrides the previous ones if there are * name collisions. * * @param map - * ImmutableBytesWritable map + * Bytes map * @return this, for builder pattern */ - public CompoundConfiguration addWritableMap( - final Map map) { + public CompoundConfiguration addBytesMap( + final Map map) { freezeMutableConf(); // put new map at the front of the list (top priority) this.configs.add(0, new ImmutableConfigMap() { - Map m = map; + private final Map m = map; @Override public Iterator> iterator() { Map ret = new HashMap(); - for (Map.Entry entry : map.entrySet()) { + for (Map.Entry entry : map.entrySet()) { String key = Bytes.toString(entry.getKey().get()); String val = entry.getValue() == null ? null : Bytes.toString(entry.getValue().get()); ret.put(key, val); @@ -179,11 +178,11 @@ public class CompoundConfiguration extends Configuration { @Override public String get(String key) { - ImmutableBytesWritable ibw = new ImmutableBytesWritable(Bytes + Bytes ibw = new Bytes(Bytes .toBytes(key)); if (!m.containsKey(ibw)) return null; - ImmutableBytesWritable value = m.get(ibw); + Bytes value = m.get(ibw); if (value == null || value.get() == null) return null; return Bytes.toString(value.get()); @@ -225,7 +224,7 @@ public class CompoundConfiguration extends Configuration { // put new map at the front of the list (top priority) this.configs.add(0, new ImmutableConfigMap() { - Map m = map; + private final Map m = map; @Override public Iterator> iterator() { diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java index 7779399..53e9392 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java @@ -24,9 +24,9 @@ import java.util.Map.Entry; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.io.util.HeapMemorySizeUtil; import org.apache.hadoop.hbase.util.VersionInfo; @@ -42,6 +42,7 @@ public class HBaseConfiguration extends Configuration { /** * Instantinating HBaseConfiguration() is deprecated. Please use * HBaseConfiguration#create() to construct a plain Configuration + * @deprecated Please use create() instead. */ @Deprecated public HBaseConfiguration() { @@ -55,6 +56,7 @@ public class HBaseConfiguration extends Configuration { /** * Instantiating HBaseConfiguration() is deprecated. Please use * HBaseConfiguration#create(conf) to construct a plain Configuration + * @deprecated Please user create(conf) instead. */ @Deprecated public HBaseConfiguration(final Configuration c) { @@ -167,8 +169,9 @@ public class HBaseConfiguration extends Configuration { * Get the password from the Configuration instance using the * getPassword method if it exists. If not, then fall back to the * general get method for configuration elements. 
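The lookup order described above (provider-backed getPassword first, then the plain configuration value) would be used roughly as below. The call shape is inferred from the javadoc parameters in this hunk rather than from the method signature itself, and the alias string is made up, so treat it as a sketch.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class PasswordLookupSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // "ssl.keypass.alias" is a hypothetical alias; the third argument is the fallback
        // returned when neither the credential provider nor the configuration has the alias.
        String password = HBaseConfiguration.getPassword(conf, "ssl.keypass.alias", "changeit");
        System.out.println(password);
      }
    }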
- * @param conf configuration instance for accessing the passwords - * @param alias the name of the password element + * + * @param conf configuration instance for accessing the passwords + * @param alias the name of the password element * @param defPass the default password * @return String password or default password * @throws IOException @@ -183,8 +186,7 @@ public class HBaseConfiguration extends Configuration { LOG.debug(String.format("Config option \"%s\" was found through" + " the Configuration getPassword method.", alias)); passwd = new String(p); - } - else { + } else { LOG.debug(String.format( "Config option \"%s\" was not found. Using provided default value", alias)); @@ -195,7 +197,7 @@ public class HBaseConfiguration extends Configuration { //provider API doesn't exist yet LOG.debug(String.format( "Credential.getPassword method is not available." + - " Falling back to configuration.")); + " Falling back to configuration.")); passwd = conf.get(alias, defPass); } catch (SecurityException e) { throw new IOException(e.getMessage(), e); @@ -209,7 +211,8 @@ public class HBaseConfiguration extends Configuration { return passwd; } - /** For debugging. Dump configurations to system output as xml format. + /** + * For debugging. Dump configurations to system output as xml format. * Master and RS configurations can also be dumped using * http services. e.g. "curl http://master:16010/dump" */ diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseIOException.java hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseIOException.java index e789b7e..9c3367e 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseIOException.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseIOException.java @@ -17,11 +17,11 @@ */ package org.apache.hadoop.hbase; +import java.io.IOException; + import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import java.io.IOException; - /** * All hbase specific IOExceptions should be subclasses of HBaseIOException */ diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseInterfaceAudience.java hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseInterfaceAudience.java index 4ec84e9..2e58913 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseInterfaceAudience.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseInterfaceAudience.java @@ -25,7 +25,13 @@ import org.apache.hadoop.hbase.classification.InterfaceStability; */ @InterfaceAudience.Public @InterfaceStability.Evolving -public class HBaseInterfaceAudience { +public final class HBaseInterfaceAudience { + + /** + * Can't create this class. + */ + private HBaseInterfaceAudience(){} + public static final String COPROC = "Coprocesssor"; public static final String REPLICATION = "Replication"; public static final String PHOENIX = "Phoenix"; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java index 7700812..33b71ad 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java @@ -151,7 +151,9 @@ public final class HConstants { /** Parameter name for the master type being backup (waits for primary to go inactive). 
*/ public static final String MASTER_TYPE_BACKUP = "hbase.master.backup"; - /** by default every master is a possible primary master unless the conf explicitly overrides it */ + /** + * by default every master is a possible primary master unless the conf explicitly overrides it + */ public static final boolean DEFAULT_MASTER_TYPE_BACKUP = false; /** Name of ZooKeeper quorum configuration parameter. */ @@ -180,8 +182,11 @@ public final class HConstants { /** Default client port that the zookeeper listens on */ public static final int DEFAULT_ZOOKEPER_CLIENT_PORT = 2181; - /** Parameter name for the wait time for the recoverable zookeeper */ - public static final String ZOOKEEPER_RECOVERABLE_WAITTIME = "hbase.zookeeper.recoverable.waittime"; + /** + * Parameter name for the wait time for the recoverable zookeeper + */ + public static final String ZOOKEEPER_RECOVERABLE_WAITTIME = + "hbase.zookeeper.recoverable.waittime"; /** Default wait time for the recoverable zookeeper */ public static final long DEFAULT_ZOOKEPER_RECOVERABLE_WAITIME = 10000; @@ -381,7 +386,10 @@ public final class HConstants { // should go down. - /** The hbase:meta table's name. */ + /** + * The hbase:meta table's name. + * + */ @Deprecated // for compat from 0.94 -> 0.96. public static final byte[] META_TABLE_NAME = TableName.META_TABLE_NAME.getName(); @@ -684,37 +692,37 @@ public final class HConstants { public static final int DEFAULT_HBASE_CLIENT_SCANNER_CACHING = 100; /** - * Parameter name for number of versions, kept by meta table. + * Parameter name for number of rows that will be fetched when calling next on + * a scanner if it is not served from memory. Higher caching values will + * enable faster scanners but will eat up more memory and some calls of next + * may take longer and longer times when the cache is empty. */ - public static String HBASE_META_VERSIONS = "hbase.meta.versions"; + public static final String HBASE_META_SCANNER_CACHING = "hbase.meta.scanner.caching"; /** - * Default value of {@link #HBASE_META_VERSIONS}. + * Default value of {@link #HBASE_META_SCANNER_CACHING}. */ - public static int DEFAULT_HBASE_META_VERSIONS = 10; + public static final int DEFAULT_HBASE_META_SCANNER_CACHING = 100; /** * Parameter name for number of versions, kept by meta table. */ - public static String HBASE_META_BLOCK_SIZE = "hbase.meta.blocksize"; + public static final String HBASE_META_VERSIONS = "hbase.meta.versions"; /** - * Default value of {@link #HBASE_META_BLOCK_SIZE}. + * Default value of {@link #HBASE_META_VERSIONS}. */ - public static int DEFAULT_HBASE_META_BLOCK_SIZE = 8 * 1024; + public static final int DEFAULT_HBASE_META_VERSIONS = 3; /** - * Parameter name for number of rows that will be fetched when calling next on - * a scanner if it is not served from memory. Higher caching values will - * enable faster scanners but will eat up more memory and some calls of next - * may take longer and longer times when the cache is empty. + * Parameter name for number of versions, kept by meta table. */ - public static final String HBASE_META_SCANNER_CACHING = "hbase.meta.scanner.caching"; + public static final String HBASE_META_BLOCK_SIZE = "hbase.meta.blocksize"; /** - * Default value of {@link #HBASE_META_SCANNER_CACHING}. + * Default value of {@link #HBASE_META_BLOCK_SIZE}. 
*/ - public static final int DEFAULT_HBASE_META_SCANNER_CACHING = 100; + public static final int DEFAULT_HBASE_META_BLOCK_SIZE = 8 * 1024; /** * Parameter name for unique identifier for this {@link org.apache.hadoop.conf.Configuration} @@ -849,9 +857,9 @@ public final class HConstants { /** Conf key that enables unflushed WAL edits directly being replayed to region servers */ public static final String DISTRIBUTED_LOG_REPLAY_KEY = "hbase.master.distributed.log.replay"; /** - * Default 'distributed log replay' as true since hbase 1.1 (HBASE-12577) + * Default 'distributed log replay' as true since hbase 0.99.0 */ - public static final boolean DEFAULT_DISTRIBUTED_LOG_REPLAY_CONFIG = false; + public static final boolean DEFAULT_DISTRIBUTED_LOG_REPLAY_CONFIG = true; public static final String DISALLOW_WRITES_IN_RECOVERING = "hbase.regionserver.disallow.writes.when.recovering"; public static final boolean DEFAULT_DISALLOW_WRITES_IN_RECOVERING_CONFIG = false; @@ -1078,7 +1086,7 @@ public final class HConstants { public static final String HBASE_CLIENT_FAST_FAIL_THREASHOLD_MS = "hbase.client.fastfail.threshold"; - + public static final long HBASE_CLIENT_FAST_FAIL_THREASHOLD_MS_DEFAULT = 60000; @@ -1089,7 +1097,13 @@ public final class HConstants { 600000; public static final String HBASE_CLIENT_FAST_FAIL_INTERCEPTOR_IMPL = - "hbase.client.fast.fail.interceptor.impl"; + "hbase.client.fast.fail.interceptor.impl"; + + /** Config key for if the server should send backpressure and if the client should listen to + * that backpressure from the server */ + public static final String ENABLE_CLIENT_BACKPRESSURE = "hbase.client.backpressure.enabled"; + public static final boolean DEFAULT_ENABLE_CLIENT_BACKPRESSURE = false; + private HConstants() { // Can't be instantiated with this ctor. diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java index b4b5755..8566a88 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java @@ -750,65 +750,6 @@ public class KeyValue implements Cell, HeapSize, Cloneable, SettableSequenceId, } /** - * Create a KeyValue that is smaller than all other possible KeyValues - * for the given row. That is any (valid) KeyValue on 'row' would sort - * _after_ the result. - * - * @param row - row key (arbitrary byte array) - * @return First possible KeyValue on passed row - * @deprecated Since 0.99.2. Use {@link KeyValueUtil#createFirstOnRow(byte [])} instead - */ - @Deprecated - public static KeyValue createFirstOnRow(final byte [] row) { - return KeyValueUtil.createFirstOnRow(row, HConstants.LATEST_TIMESTAMP); - } - - /** - * Create a KeyValue for the specified row, family and qualifier that would be - * smaller than all other possible KeyValues that have the same row,family,qualifier. - * Used for seeking. - * @param row - row key (arbitrary byte array) - * @param family - family name - * @param qualifier - column qualifier - * @return First possible key on passed row, and column. - * @deprecated Since 0.99.2. 
Use {@link KeyValueUtil#createFirstOnRow(byte[], byte[], byte[])} - * instead - */ - @Deprecated - public static KeyValue createFirstOnRow(final byte [] row, final byte [] family, - final byte [] qualifier) { - return KeyValueUtil.createFirstOnRow(row, family, qualifier); - } - - /** - * Create a KeyValue for the specified row, family and qualifier that would be - * smaller than all other possible KeyValues that have the same row, - * family, qualifier. - * Used for seeking. - * @param row row key - * @param roffset row offset - * @param rlength row length - * @param family family name - * @param foffset family offset - * @param flength family length - * @param qualifier column qualifier - * @param qoffset qualifier offset - * @param qlength qualifier length - * @return First possible key on passed Row, Family, Qualifier. - * @deprecated Since 0.99.2. Use {@link KeyValueUtil#createFirstOnRow(byte[], int, int, - * byte[], int, int, byte[], int, int)} instead - */ - @Deprecated - public static KeyValue createFirstOnRow(final byte [] row, - final int roffset, final int rlength, final byte [] family, - final int foffset, final int flength, final byte [] qualifier, - final int qoffset, final int qlength) { - return new KeyValue(row, roffset, rlength, family, - foffset, flength, qualifier, qoffset, qlength, - HConstants.LATEST_TIMESTAMP, Type.Maximum, null, 0, 0); - } - - /** * Create an empty byte[] representing a KeyValue * All lengths are preset and can be filled in later. * @param rlength diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java index 9e969e7..dde15bc 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java @@ -24,8 +24,8 @@ import java.nio.ByteBuffer; import java.util.ArrayList; import java.util.List; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.KeyValue.Type; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.io.util.StreamUtils; import org.apache.hadoop.hbase.util.ByteBufferUtils; import org.apache.hadoop.hbase.util.Bytes; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/NamespaceDescriptor.java hbase-common/src/main/java/org/apache/hadoop/hbase/NamespaceDescriptor.java index 4f0e296..e1ceace 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/NamespaceDescriptor.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/NamespaceDescriptor.java @@ -18,10 +18,6 @@ package org.apache.hadoop.hbase; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.util.Bytes; - import java.util.Collections; import java.util.Comparator; import java.util.HashSet; @@ -30,6 +26,10 @@ import java.util.Set; import java.util.TreeMap; import java.util.TreeSet; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.util.Bytes; + /** * Namespace POJO class. Used to represent and define namespaces. 
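As a quick illustration of the namespace POJO described above (a sketch only; the builder calls shown are the standard NamespaceDescriptor API, not something introduced by this patch, and the namespace name and property key are placeholders):

    import org.apache.hadoop.hbase.NamespaceDescriptor;

    public class NamespaceSketch {
      public static void main(String[] args) {
        // Build a namespace definition with one configuration property attached.
        NamespaceDescriptor ns = NamespaceDescriptor.create("my_ns")
            .addConfiguration("example.property", "value") // illustrative key/value
            .build();
        System.out.println(ns.getName() + " " + ns.getConfiguration());
      }
    }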
* diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/ServerName.java hbase-common/src/main/java/org/apache/hadoop/hbase/ServerName.java new file mode 100644 index 0000000..f6f89b4 --- /dev/null +++ hbase-common/src/main/java/org/apache/hadoop/hbase/ServerName.java @@ -0,0 +1,402 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase; + +import java.io.Serializable; +import java.util.ArrayList; +import java.util.List; +import java.util.regex.Pattern; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.exceptions.DeserializationException; +import org.apache.hadoop.hbase.protobuf.ProtobufMagic; +import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; +import org.apache.hadoop.hbase.util.Addressing; +import org.apache.hadoop.hbase.util.Bytes; + +import com.google.common.net.InetAddresses; +import com.google.protobuf.InvalidProtocolBufferException; + +/** + * Instance of an HBase ServerName. + * A server name is used uniquely identifying a server instance in a cluster and is made + * of the combination of hostname, port, and startcode. The startcode distingushes restarted + * servers on same hostname and port (startcode is usually timestamp of server startup). The + * {@link #toString()} format of ServerName is safe to use in the filesystem and as znode name + * up in ZooKeeper. Its format is: + * <hostname> '{@link #SERVERNAME_SEPARATOR}' <port> '{@link #SERVERNAME_SEPARATOR}' <startcode>. + * For example, if hostname is www.example.org, port is 1234, + * and the startcode for the regionserver is 1212121212, then + * the {@link #toString()} would be www.example.org,1234,1212121212. + * + *

    You can obtain a versioned serialized form of this class by calling + * {@link #getVersionedBytes()}. To deserialize, call {@link #parseVersionedServerName(byte[])} + * + *

    Immutable. + */ +@InterfaceAudience.Public +@InterfaceStability.Evolving +public class ServerName implements Comparable, Serializable { + private static final long serialVersionUID = 1367463982557264981L; + + /** + * Version for this class. + * Its a short rather than a byte so I can for sure distinguish between this + * version of this class and the version previous to this which did not have + * a version. + */ + private static final short VERSION = 0; + static final byte [] VERSION_BYTES = Bytes.toBytes(VERSION); + + /** + * What to use if no startcode supplied. + */ + public static final int NON_STARTCODE = -1; + + /** + * This character is used as separator between server hostname, port and + * startcode. + */ + public static final String SERVERNAME_SEPARATOR = ","; + + public static final Pattern SERVERNAME_PATTERN = + Pattern.compile("[^" + SERVERNAME_SEPARATOR + "]+" + + SERVERNAME_SEPARATOR + Addressing.VALID_PORT_REGEX + + SERVERNAME_SEPARATOR + Addressing.VALID_PORT_REGEX + "$"); + + /** + * What to use if server name is unknown. + */ + public static final String UNKNOWN_SERVERNAME = "#unknown#"; + + private final String servername; + private final String hostnameOnly; + private final int port; + private final long startcode; + + /** + * Cached versioned bytes of this ServerName instance. + * @see #getVersionedBytes() + */ + private byte [] bytes; + public static final List EMPTY_SERVER_LIST = new ArrayList(0); + + private ServerName(final String hostname, final int port, final long startcode) { + // Drop the domain is there is one; no need of it in a local cluster. With it, we get long + // unwieldy names. + this.hostnameOnly = hostname; + this.port = port; + this.startcode = startcode; + this.servername = getServerName(this.hostnameOnly, port, startcode); + } + + /** + * @param hostname + * @return hostname minus the domain, if there is one (will do pass-through on ip addresses) + */ + static String getHostNameMinusDomain(final String hostname) { + if (InetAddresses.isInetAddress(hostname)) return hostname; + String [] parts = hostname.split("\\."); + if (parts == null || parts.length == 0) return hostname; + return parts[0]; + } + + private ServerName(final String serverName) { + this(parseHostname(serverName), parsePort(serverName), + parseStartcode(serverName)); + } + + private ServerName(final String hostAndPort, final long startCode) { + this(Addressing.parseHostname(hostAndPort), + Addressing.parsePort(hostAndPort), startCode); + } + + public static String parseHostname(final String serverName) { + if (serverName == null || serverName.length() <= 0) { + throw new IllegalArgumentException("Passed hostname is null or empty"); + } + if (!Character.isLetterOrDigit(serverName.charAt(0))) { + throw new IllegalArgumentException("Bad passed hostname, serverName=" + serverName); + } + int index = serverName.indexOf(SERVERNAME_SEPARATOR); + return serverName.substring(0, index); + } + + public static int parsePort(final String serverName) { + String [] split = serverName.split(SERVERNAME_SEPARATOR); + return Integer.parseInt(split[1]); + } + + public static long parseStartcode(final String serverName) { + int index = serverName.lastIndexOf(SERVERNAME_SEPARATOR); + return Long.parseLong(serverName.substring(index + 1)); + } + + /** + * Retrieve an instance of ServerName. + * Callers should use the equals method to compare returned instances, though we may return + * a shared immutable object as an internal optimization. 
+ */ + public static ServerName valueOf(final String hostname, final int port, final long startcode) { + return new ServerName(hostname, port, startcode); + } + + /** + * Retrieve an instance of ServerName. + * Callers should use the equals method to compare returned instances, though we may return + * a shared immutable object as an internal optimization. + */ + public static ServerName valueOf(final String serverName) { + return new ServerName(serverName); + } + + /** + * Retrieve an instance of ServerName. + * Callers should use the equals method to compare returned instances, though we may return + * a shared immutable object as an internal optimization. + */ + public static ServerName valueOf(final String hostAndPort, final long startCode) { + return new ServerName(hostAndPort, startCode); + } + + @Override + public String toString() { + return getServerName(); + } + + /** + * @return Return a SHORT version of {@link ServerName#toString()}, one that has the host only, + * minus the domain, and the port only -- no start code; the String is for us internally mostly + * tying threads to their server. Not for external use. It is lossy and will not work in + * in compares, etc. + */ + public String toShortString() { + return Addressing.createHostAndPortStr(getHostNameMinusDomain(this.hostnameOnly), this.port); + } + + /** + * @return {@link #getServerName()} as bytes with a short-sized prefix with + * the ServerName#VERSION of this class. + */ + public synchronized byte [] getVersionedBytes() { + if (this.bytes == null) { + this.bytes = Bytes.add(VERSION_BYTES, Bytes.toBytes(getServerName())); + } + return this.bytes; + } + + public String getServerName() { + return servername; + } + + public String getHostname() { + return hostnameOnly; + } + + public int getPort() { + return port; + } + + public long getStartcode() { + return startcode; + } + + /** + * For internal use only. 
+ * @param hostName + * @param port + * @param startcode + * @return Server name made of the concatenation of hostname, port and + * startcode formatted as <hostname> ',' <port> ',' <startcode> + */ + static String getServerName(String hostName, int port, long startcode) { + final StringBuilder name = new StringBuilder(hostName.length() + 1 + 5 + 1 + 13); + name.append(hostName); + name.append(SERVERNAME_SEPARATOR); + name.append(port); + name.append(SERVERNAME_SEPARATOR); + name.append(startcode); + return name.toString(); + } + + /** + * @param hostAndPort String in form of <hostname> ':' <port> + * @param startcode + * @return Server name made of the concatenation of hostname, port and + * startcode formatted as <hostname> ',' <port> ',' <startcode> + */ + public static String getServerName(final String hostAndPort, + final long startcode) { + int index = hostAndPort.indexOf(":"); + if (index <= 0) throw new IllegalArgumentException("Expected ':' "); + return getServerName(hostAndPort.substring(0, index), + Integer.parseInt(hostAndPort.substring(index + 1)), startcode); + } + + /** + * @return Hostname and port formatted as described at + * {@link Addressing#createHostAndPortStr(String, int)} + */ + public String getHostAndPort() { + return Addressing.createHostAndPortStr(this.hostnameOnly, this.port); + } + + /** + * @param serverName ServerName in form specified by {@link #getServerName()} + * @return The server start code parsed from servername + */ + public static long getServerStartcodeFromServerName(final String serverName) { + int index = serverName.lastIndexOf(SERVERNAME_SEPARATOR); + return Long.parseLong(serverName.substring(index + 1)); + } + + /** + * Utility method to excise the start code from a server name + * @param inServerName full server name + * @return server name less its start code + */ + public static String getServerNameLessStartCode(String inServerName) { + if (inServerName != null && inServerName.length() > 0) { + int index = inServerName.lastIndexOf(SERVERNAME_SEPARATOR); + if (index > 0) { + return inServerName.substring(0, index); + } + } + return inServerName; + } + + @Override + public int compareTo(ServerName other) { + int compare = this.getHostname().compareToIgnoreCase(other.getHostname()); + if (compare != 0) return compare; + compare = this.getPort() - other.getPort(); + if (compare != 0) return compare; + return (int)(this.getStartcode() - other.getStartcode()); + } + + @Override + public int hashCode() { + return getServerName().hashCode(); + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null) return false; + if (!(o instanceof ServerName)) return false; + return this.compareTo((ServerName)o) == 0; + } + + /** + * @param left + * @param right + * @return True if other has same hostname and port. + */ + public static boolean isSameHostnameAndPort(final ServerName left, + final ServerName right) { + if (left == null) return false; + if (right == null) return false; + return left.getHostname().equals(right.getHostname()) && + left.getPort() == right.getPort(); + } + + /** + * Use this method instantiating a {@link ServerName} from bytes + * gotten from a call to {@link #getVersionedBytes()}. Will take care of the + * case where bytes were written by an earlier version of hbase. + * @param versionedBytes Pass bytes gotten from a call to {@link #getVersionedBytes()} + * @return A ServerName instance. 
+ * @see #getVersionedBytes()
+ */
+ public static ServerName parseVersionedServerName(final byte [] versionedBytes) {
+ // Version is a short.
+ short version = Bytes.toShort(versionedBytes);
+ if (version == VERSION) {
+ int length = versionedBytes.length - Bytes.SIZEOF_SHORT;
+ return valueOf(Bytes.toString(versionedBytes, Bytes.SIZEOF_SHORT, length));
+ }
+ // Presume the bytes were written with an old version of hbase and that the
+ // bytes are actually a String of the form "'<hostname>' ':' '<port>'".
+ return valueOf(Bytes.toString(versionedBytes), NON_STARTCODE);
+ }
+
+ /**
+ * @param str Either an instance of {@link ServerName#toString()} or a
+ * "'<hostname>' ':' '<port>'".
+ * @return A ServerName instance.
+ */
+ public static ServerName parseServerName(final String str) {
+ return SERVERNAME_PATTERN.matcher(str).matches()? valueOf(str) :
+ valueOf(str, NON_STARTCODE);
+ }
+
+
+ /**
+ * @return true if the String follows the pattern of {@link ServerName#toString()}, false
+ * otherwise.
+ */
+ public static boolean isFullServerName(final String str){
+ if (str == null || str.isEmpty()) return false;
+ return SERVERNAME_PATTERN.matcher(str).matches();
+ }
+
+ /**
+ * Get a ServerName from the passed in data bytes.
+ * @param data Data with a serialized server name in it; can handle the old style
+ * servername where servername was host and port. Works too with data that
+ * begins w/ the pb 'PBUF' magic and that is then followed by a protobuf that
+ * has a serialized {@link ServerName} in it.
+ * @return Returns null if data is null else converts passed data
+ * to a ServerName instance.
+ * @throws DeserializationException
+ */
+ public static ServerName parseFrom(final byte [] data) throws DeserializationException {
+ if (data == null || data.length <= 0) return null;
+ if (ProtobufMagic.isPBMagicPrefix(data)) {
+ int prefixLen = ProtobufMagic.lengthOfPBMagic();
+ try {
+ ZooKeeperProtos.Master rss =
+ ZooKeeperProtos.Master.PARSER.parseFrom(data, prefixLen, data.length - prefixLen);
+ org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName sn = rss.getMaster();
+ return valueOf(sn.getHostName(), sn.getPort(), sn.getStartCode());
+ } catch (InvalidProtocolBufferException e) {
+ // A failed parse of the znode is pretty catastrophic. Rather than loop
+ // retrying hoping the bad bytes will change, and rather than change
+ // the signature on this method to add an IOE which will send ripples all
+ // over the code base, throw a DeserializationException. This should "never" happen.
+ // Fail fast if it does.
+ throw new DeserializationException(e);
+ }
+ }
+ // The str returned could be old style -- pre hbase-1502 -- which was
+ // hostname and port separated by a colon rather than hostname, port and
+ // startcode delimited by a ','.
+ String str = Bytes.toString(data);
+ int index = str.indexOf(ServerName.SERVERNAME_SEPARATOR);
+ if (index != -1) {
+ // Presume it's a ServerName serialized with versioned bytes.
+ return ServerName.parseVersionedServerName(data);
+ }
+ // Presume it's a hostname:port format.
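// Usage sketch (editor's illustration, not part of the patch): how the ServerName
// factories and the versioned-bytes round trip above fit together. The hostname,
// port and startcode values are made-up examples.
ServerName sn = ServerName.valueOf("host1.example.org", 16020, 1234567890L);
String asString = sn.getServerName();        // "host1.example.org,16020,1234567890"
byte[] versioned = sn.getVersionedBytes();   // short VERSION prefix + UTF-8 bytes of the name
ServerName roundTripped = ServerName.parseVersionedServerName(versioned);
assert sn.equals(roundTripped);
// Old-style "host:port" input is still accepted; the startcode comes back as NON_STARTCODE.
ServerName legacy = ServerName.parseServerName("host1.example.org:16020");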
+ String hostname = Addressing.parseHostname(str); + int port = Addressing.parsePort(str); + return valueOf(hostname, port, -1L); + } +} diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java index 802319e..c560a43 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/TableName.java @@ -18,16 +18,16 @@ package org.apache.hadoop.hbase; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.KeyValue.KVComparator; -import org.apache.hadoop.hbase.util.Bytes; - import java.nio.ByteBuffer; import java.util.Arrays; import java.util.Set; import java.util.concurrent.CopyOnWriteArraySet; +import org.apache.hadoop.hbase.KeyValue.KVComparator; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.util.Bytes; + /** * Immutable POJO class for representing a table name. * Which is of the form: diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/VersionAnnotation.java hbase-common/src/main/java/org/apache/hadoop/hbase/VersionAnnotation.java index f3137ae..f93d7f2 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/VersionAnnotation.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/VersionAnnotation.java @@ -17,7 +17,10 @@ */ package org.apache.hadoop.hbase; -import java.lang.annotation.*; +import java.lang.annotation.ElementType; +import java.lang.annotation.Retention; +import java.lang.annotation.RetentionPolicy; +import java.lang.annotation.Target; import org.apache.hadoop.hbase.classification.InterfaceAudience; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseDecoder.java hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseDecoder.java index 3776c08..51801a8 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseDecoder.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseDecoder.java @@ -23,8 +23,8 @@ import java.io.InputStream; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * TODO javadoc diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseEncoder.java hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseEncoder.java index 6f2231c..7a96abe 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseEncoder.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/codec/BaseEncoder.java @@ -20,8 +20,8 @@ package org.apache.hadoop.hbase.codec; import java.io.IOException; import java.io.OutputStream; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * TODO javadoc diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java index 77cf80a..9d03d89 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodec.java @@ -22,10 +22,10 @@ import java.io.InputStream; import 
java.io.OutputStream; import org.apache.commons.io.IOUtils; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseInterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; /** diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodecWithTags.java hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodecWithTags.java index b4efaf8..a614026 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodecWithTags.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CellCodecWithTags.java @@ -22,10 +22,10 @@ import java.io.InputStream; import java.io.OutputStream; import org.apache.commons.io.IOUtils; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseInterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; /** diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/codec/Codec.java hbase-common/src/main/java/org/apache/hadoop/hbase/codec/Codec.java index 4c8aad1..de44ec6 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/codec/Codec.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/codec/Codec.java @@ -20,17 +20,18 @@ package org.apache.hadoop.hbase.codec; import java.io.InputStream; import java.io.OutputStream; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.HBaseInterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.io.CellOutputStream; /** * Encoder/Decoder for Cell. * *

    Like {@link org.apache.hadoop.hbase.io.encoding.DataBlockEncoder} - * only Cell-based rather than KeyValue version 1 based and without presuming - * an hfile context. Intent is an Interface that will work for hfile and rpc. + * only Cell-based rather than KeyValue version 1 based + * and without presuming an hfile context. Intent is an Interface that will work for hfile and + * rpc. */ @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, HBaseInterfaceAudience.PHOENIX}) public interface Codec { diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CodecException.java hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CodecException.java index 440ae78..8ebe25a 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CodecException.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/codec/CodecException.java @@ -18,8 +18,8 @@ package org.apache.hadoop.hbase.codec; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HBaseIOException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * Thrown when problems in the codec whether setup or context. diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodec.java hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodec.java index cfe9742..f41d6b0 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodec.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodec.java @@ -21,11 +21,11 @@ import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HBaseInterfaceAudience; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * Codec that does KeyValue version 1 serialization. diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodecWithTags.java hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodecWithTags.java index 02158f4..664fcac 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodecWithTags.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/codec/KeyValueCodecWithTags.java @@ -21,11 +21,11 @@ import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HBaseInterfaceAudience; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * Codec that does KeyValue version 1 serialization with serializing tags also. diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/exceptions/DeserializationException.java hbase-common/src/main/java/org/apache/hadoop/hbase/exceptions/DeserializationException.java new file mode 100644 index 0000000..0ce0219 --- /dev/null +++ hbase-common/src/main/java/org/apache/hadoop/hbase/exceptions/DeserializationException.java @@ -0,0 +1,43 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.exceptions; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; + +/** + * Failed deserialization. + */ +@InterfaceAudience.Private +@SuppressWarnings("serial") +public class DeserializationException extends HBaseException { + public DeserializationException() { + super(); + } + + public DeserializationException(final String message) { + super(message); + } + + public DeserializationException(final String message, final Throwable t) { + super(message, t); + } + + public DeserializationException(final Throwable t) { + super(t); + } +} diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/exceptions/HBaseException.java hbase-common/src/main/java/org/apache/hadoop/hbase/exceptions/HBaseException.java new file mode 100644 index 0000000..fe0d7d7 --- /dev/null +++ hbase-common/src/main/java/org/apache/hadoop/hbase/exceptions/HBaseException.java @@ -0,0 +1,44 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.exceptions; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; + +/** + * Base checked exception in HBase. 
+ * @see HBASE-5796 + */ +@SuppressWarnings("serial") +@InterfaceAudience.Private +public class HBaseException extends Exception { + public HBaseException() { + super(); + } + + public HBaseException(final String message) { + super(message); + } + + public HBaseException(final String message, final Throwable t) { + super(message, t); + } + + public HBaseException(final Throwable t) { + super(t); + } +} diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/CellOutputStream.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/CellOutputStream.java index cdd74dd..34f1bf7 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/CellOutputStream.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/CellOutputStream.java @@ -20,9 +20,9 @@ package org.apache.hadoop.hbase.io; import java.io.IOException; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.Cell; /** * Accepts a stream of Cells. This can be used to build a block of cells during compactions diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java index d74a5d6..f658210 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java @@ -18,9 +18,9 @@ package org.apache.hadoop.hbase.io; -import java.io.IOException; import java.io.DataInput; import java.io.DataOutput; +import java.io.IOException; import java.util.Arrays; import java.util.List; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/LimitInputStream.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/LimitInputStream.java index 1497fcb..68e3ad4 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/LimitInputStream.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/LimitInputStream.java @@ -19,12 +19,13 @@ package org.apache.hadoop.hbase.io; +import static com.google.common.base.Preconditions.checkArgument; +import static com.google.common.base.Preconditions.checkNotNull; + import java.io.FilterInputStream; import java.io.IOException; import java.io.InputStream; -import static com.google.common.base.Preconditions.checkArgument; -import static com.google.common.base.Preconditions.checkNotNull; import org.apache.hadoop.hbase.classification.InterfaceAudience; /** diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/SizedCellScanner.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/SizedCellScanner.java index 9fc7033..ed272ef 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/SizedCellScanner.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/SizedCellScanner.java @@ -16,11 +16,10 @@ * limitations under the License. */ package org.apache.hadoop.hbase.io; +import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.CellScanner; - /** * A CellScanner that knows its size in memory in bytes. 
* Used playing the CellScanner into an in-memory buffer; knowing the size ahead of time saves diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/TagCompressionContext.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/TagCompressionContext.java index 537437d..26e7b50 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/TagCompressionContext.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/TagCompressionContext.java @@ -25,8 +25,8 @@ import java.lang.reflect.Constructor; import java.lang.reflect.InvocationTargetException; import java.nio.ByteBuffer; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Tag; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.io.util.Dictionary; import org.apache.hadoop.hbase.io.util.StreamUtils; import org.apache.hadoop.hbase.util.ByteBufferUtils; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java index 8a349db..edb4dfa 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java @@ -25,10 +25,10 @@ import java.io.OutputStream; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configurable; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.io.IOUtils; import org.apache.hadoop.io.compress.CodecPool; import org.apache.hadoop.io.compress.CompressionCodec; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/CipherProvider.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/CipherProvider.java index 6deb365..5a475cc 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/CipherProvider.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/CipherProvider.java @@ -16,9 +16,9 @@ */ package org.apache.hadoop.hbase.io.crypto; +import org.apache.hadoop.conf.Configurable; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.conf.Configurable; /** * An CipherProvider contributes support for various cryptographic diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Context.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Context.java index 31cca0e..1e2881e 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Context.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Context.java @@ -18,11 +18,11 @@ package org.apache.hadoop.hbase.io.crypto; import java.security.Key; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configurable; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.util.MD5Hash; import 
com.google.common.base.Preconditions; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/DefaultCipherProvider.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/DefaultCipherProvider.java index 961fbae..4f2aebe 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/DefaultCipherProvider.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/DefaultCipherProvider.java @@ -16,10 +16,10 @@ */ package org.apache.hadoop.hbase.io.crypto; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.io.crypto.aes.AES; /** diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryption.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryption.java index 9c20f3b..3420d0a 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryption.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryption.java @@ -35,11 +35,11 @@ import javax.crypto.spec.SecretKeySpec; import org.apache.commons.io.IOUtils; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.util.ReflectionUtils; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/KeyStoreKeyProvider.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/KeyStoreKeyProvider.java index 0e5f36e..62167d6 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/KeyStoreKeyProvider.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/KeyStoreKeyProvider.java @@ -19,8 +19,8 @@ package org.apache.hadoop.hbase.io.crypto; import java.io.BufferedInputStream; import java.io.File; import java.io.FileInputStream; -import java.io.InputStream; import java.io.IOException; +import java.io.InputStream; import java.net.URI; import java.net.URISyntaxException; import java.net.URLDecoder; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CompressionState.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CompressionState.java index 48ae788..5ac5d2e 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CompressionState.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CompressionState.java @@ -18,8 +18,8 @@ package org.apache.hadoop.hbase.io.encoding; import java.nio.ByteBuffer; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.ByteBufferUtils; /** diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java index 1c757f5..6b87c77 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java @@ -21,12 +21,12 @@ import java.io.DataOutputStream; import java.io.IOException; import java.nio.ByteBuffer; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.KeyValueUtil; import org.apache.hadoop.hbase.KeyValue.KVComparator; +import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.ByteBufferUtils; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.io.WritableUtils; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java index 0e85380..872c22c 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java @@ -21,9 +21,9 @@ import java.io.DataOutputStream; import java.io.IOException; import java.nio.ByteBuffer; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValue.KVComparator; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.io.hfile.HFileContext; /** diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java index 14048e4..4182dc4 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java @@ -21,12 +21,12 @@ import java.io.DataOutputStream; import java.io.IOException; import java.nio.ByteBuffer; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.KeyValueUtil; import org.apache.hadoop.hbase.KeyValue.KVComparator; +import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.ByteBufferUtils; import org.apache.hadoop.hbase.util.Bytes; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java index d83c0b7..1f7f8fa 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java @@ -26,10 +26,10 @@ import java.nio.ByteBuffer; import java.util.Iterator; import org.apache.commons.lang.NotImplementedException; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import 
org.apache.hadoop.hbase.io.compress.Compression.Algorithm; import org.apache.hadoop.hbase.io.hfile.HFileContext; import org.apache.hadoop.hbase.util.ByteBufferUtils; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodingState.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodingState.java index 3adc548..a333a15 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodingState.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodingState.java @@ -18,8 +18,8 @@ */ package org.apache.hadoop.hbase.io.encoding; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * Keeps track of the encoding state. diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java index 5e28479..a6f43d0 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java @@ -21,12 +21,12 @@ import java.io.DataOutputStream; import java.io.IOException; import java.nio.ByteBuffer; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.KeyValueUtil; import org.apache.hadoop.hbase.KeyValue.KVComparator; +import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.ByteBufferUtils; import org.apache.hadoop.hbase.util.Bytes; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java index 8c17102..0286eca 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java @@ -21,12 +21,12 @@ import java.io.DataOutputStream; import java.io.IOException; import java.nio.ByteBuffer; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValue.KVComparator; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.ByteBufferUtils; import org.apache.hadoop.hbase.util.Bytes; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java index 83fe701..ce8b71a 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContext.java @@ -16,8 +16,8 @@ * limitations under the License. 
*/ package org.apache.hadoop.hbase.io.hfile; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.io.HeapSize; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.crypto.Encryption; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java index 9a4234a..5c5d75f 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileContextBuilder.java @@ -17,8 +17,8 @@ */ package org.apache.hadoop.hbase.io.hfile; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.io.compress.Compression.Algorithm; import org.apache.hadoop.hbase.io.crypto.Encryption; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/HeapMemorySizeUtil.java hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/HeapMemorySizeUtil.java index f464db9..250a984 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/HeapMemorySizeUtil.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/HeapMemorySizeUtil.java @@ -22,9 +22,9 @@ import java.lang.management.MemoryUsage; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.classification.InterfaceAudience; @InterfaceAudience.Private public class HeapMemorySizeUtil { diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/security/UserProvider.java hbase-common/src/main/java/org/apache/hadoop/hbase/security/UserProvider.java index 66df645..eced0ff 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/security/UserProvider.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/security/UserProvider.java @@ -19,9 +19,9 @@ package org.apache.hadoop.hbase.security; import java.io.IOException; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.BaseConfigurable; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.util.ReflectionUtils; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/types/PBType.java hbase-common/src/main/java/org/apache/hadoop/hbase/types/PBType.java index 89109c3..3d545f6 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/types/PBType.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/types/PBType.java @@ -25,8 +25,6 @@ import org.apache.hadoop.hbase.util.PositionedByteRange; import com.google.protobuf.CodedInputStream; import com.google.protobuf.CodedOutputStream; import com.google.protobuf.Message; -import org.apache.hadoop.hbase.util.Order; -import org.apache.hadoop.hbase.util.PositionedByteRange; /** * A base-class for {@link DataType} implementations backed by protobuf. 
See diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawBytesFixedLength.java hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawBytesFixedLength.java index 334b42f..bfd6416 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawBytesFixedLength.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawBytesFixedLength.java @@ -24,8 +24,9 @@ import org.apache.hadoop.hbase.util.PositionedByteRange; /** * An {@code DataType} that encodes fixed-length values encoded using - * {@link org.apache.hadoop.hbase.util.Bytes#putBytes(byte[], int, byte[], int, int)}. - * Intended to make it easier to transition away from direct use of + * {@link org.apache.hadoop.hbase.util.Bytes#putBytes( + * byte[], int, byte[], int, int)}. Intended to make it + * easier to transition away from direct use of * {@link org.apache.hadoop.hbase.util.Bytes}. * @see org.apache.hadoop.hbase.util.Bytes#putBytes(byte[], int, byte[], int, int) * @see RawBytes diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawBytesTerminated.java hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawBytesTerminated.java index 54a4c63..8bc4c20 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawBytesTerminated.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawBytesTerminated.java @@ -25,9 +25,8 @@ import org.apache.hadoop.hbase.util.PositionedByteRange; /** * An {@code DataType} that encodes variable-length values encoded using * {@link org.apache.hadoop.hbase.util.Bytes#putBytes(byte[], int, byte[], int, int)}. - * Includes a termination marker following the raw {@code byte[]} value. Intended to - * make it easier to transition away from direct use of - * {@link org.apache.hadoop.hbase.io.ImmutableBytesWritable}. + * Includes a termination marker following the raw {@code byte[]} value. Intended to make it easier + * to transition away from direct use of {@link org.apache.hadoop.hbase.util.Bytes}. * @see org.apache.hadoop.hbase.util.Bytes#putBytes(byte[], int, byte[], int, int) * @see RawBytes * @see OrderedBlob diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringTerminated.java hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringTerminated.java index c96a0b9..4d89d5b 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringTerminated.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/types/RawStringTerminated.java @@ -24,9 +24,9 @@ import org.apache.hadoop.hbase.util.Order; /** * An {@code DataType} that encodes variable-length values encoded using * {@link org.apache.hadoop.hbase.util.Bytes#toBytes(String)}. - * Includes a termination marker following the raw {@code byte[]} value. - * Intended to make it easier to transition away from direct use of - * {@link org.apache.hadoop.hbase.util.Bytes}. + * Includes a termination marker following the + * raw {@code byte[]} value. Intended to make it easier to transition + * away from direct use of {@link org.apache.hadoop.hbase.util.Bytes}. 
* @see org.apache.hadoop.hbase.util.Bytes#toBytes(String) * @see org.apache.hadoop.hbase.util.Bytes#toString(byte[], int, int) * @see RawString diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/Addressing.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/Addressing.java index a7c929f..fce0d40 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/Addressing.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/Addressing.java @@ -83,6 +83,34 @@ public class Addressing { } public static InetAddress getIpAddress() throws SocketException { + return getIpAddress(new AddressSelectionCondition() { + @Override + public boolean isAcceptableAddress(InetAddress addr) { + return addr instanceof Inet4Address || addr instanceof Inet6Address; + } + }); + } + + public static InetAddress getIp4Address() throws SocketException { + return getIpAddress(new AddressSelectionCondition() { + @Override + public boolean isAcceptableAddress(InetAddress addr) { + return addr instanceof Inet4Address; + } + }); + } + + public static InetAddress getIp6Address() throws SocketException { + return getIpAddress(new AddressSelectionCondition() { + @Override + public boolean isAcceptableAddress(InetAddress addr) { + return addr instanceof Inet6Address; + } + }); + } + + private static InetAddress getIpAddress(AddressSelectionCondition condition) throws + SocketException { // Before we connect somewhere, we cannot be sure about what we'd be bound to; however, // we only connect when the message where client ID is, is long constructed. Thus, // just use whichever IP address we can find. @@ -94,7 +122,7 @@ public class Addressing { while (addresses.hasMoreElements()) { InetAddress addr = addresses.nextElement(); if (addr.isLoopbackAddress()) continue; - if (addr instanceof Inet4Address || addr instanceof Inet6Address) { + if (condition.isAcceptableAddress(addr)) { return addr; } } @@ -123,4 +151,16 @@ public class Addressing { } return local; } + + /** + * Interface for AddressSelectionCondition to check if address is acceptable + */ + public interface AddressSelectionCondition{ + /** + * Condition on which to accept inet address + * @param address to check + * @return true to accept this address + */ + public boolean isAcceptableAddress(InetAddress address); + } } diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/AtomicUtils.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/AtomicUtils.java new file mode 100644 index 0000000..35391ee --- /dev/null +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/AtomicUtils.java @@ -0,0 +1,63 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.util; + +import java.util.concurrent.atomic.AtomicLong; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; + +/** + * Utilities related to atomic operations. + */ +@InterfaceAudience.Private +public class AtomicUtils { + /** + * Updates a AtomicLong which is supposed to maintain the minimum values. This method is not + * synchronized but is thread-safe. + */ + public static void updateMin(AtomicLong min, long value) { + while (true) { + long cur = min.get(); + if (value >= cur) { + break; + } + + if (min.compareAndSet(cur, value)) { + break; + } + } + } + + /** + * Updates a AtomicLong which is supposed to maintain the maximum values. This method is not + * synchronized but is thread-safe. + */ + public static void updateMax(AtomicLong max, long value) { + while (true) { + long cur = max.get(); + if (value <= cur) { + break; + } + + if (max.compareAndSet(cur, value)) { + break; + } + } + } + +} diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/Base64.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/Base64.java index 6677520..d1f4f20 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/Base64.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/Base64.java @@ -19,11 +19,6 @@ package org.apache.hadoop.hbase.util; -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; - import java.io.BufferedInputStream; import java.io.BufferedOutputStream; import java.io.ByteArrayInputStream; @@ -43,6 +38,11 @@ import java.io.UnsupportedEncodingException; import java.util.zip.GZIPInputStream; import java.util.zip.GZIPOutputStream; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; + /** * Encodes and decodes to and from Base64 notation. * diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java index a342a48..97c2c36 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java @@ -39,6 +39,8 @@ import java.util.Comparator; import java.util.Iterator; import java.util.List; +import com.google.protobuf.ByteString; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; @@ -58,11 +60,14 @@ import org.apache.hadoop.hbase.util.Bytes.LexicographicalComparerHolder.UnsafeCo /** * Utility class that handles byte arrays, conversions to/from other types, * comparisons, hash code generation, manufacturing keys for HashMaps or - * HashSets, etc. + * HashSets, and can be used as key in maps or trees. 
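// Usage sketch (editor's illustration, not part of the patch) for the Addressing and
// AtomicUtils helpers added above: getIp4Address()/getIp6Address() filter interfaces
// through the new AddressSelectionCondition, and updateMin()/updateMax() are lock-free
// compareAndSet loops that are safe to call from many threads.
// (Needs java.net.InetAddress, java.net.SocketException, java.util.concurrent.atomic.AtomicLong.)
try {
  InetAddress v4 = Addressing.getIp4Address();   // first non-loopback IPv4 address, if any
} catch (SocketException e) {
  // no suitable interface found
}
AtomicLong minSeen = new AtomicLong(Long.MAX_VALUE);
AtomicLong maxSeen = new AtomicLong(0L);
for (long sample : new long[] { 42L, 7L, 99L }) {
  AtomicUtils.updateMin(minSeen, sample);
  AtomicUtils.updateMax(maxSeen, sample);
}
// minSeen.get() == 7, maxSeen.get() == 99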
*/ @InterfaceAudience.Public @InterfaceStability.Stable -public class Bytes { +@edu.umd.cs.findbugs.annotations.SuppressWarnings( + value="EQ_CHECK_FOR_OPERAND_NOT_COMPATIBLE_WITH_THIS", + justification="It has been like this forever") +public class Bytes implements Comparable { //HConstants.UTF8_ENCODING should be updated if this changed /** When we encode strings, we always specify UTF8 encoding */ private static final String UTF8_ENCODING = "UTF-8"; @@ -125,7 +130,7 @@ public class Bytes { // SizeOf which uses java.lang.instrument says 24 bytes. (3 longs?) public static final int ESTIMATED_HEAP_TAX = 16; - + /** * Returns length of the byte array, returning 0 if the array is null. * Useful for calculating sizes. @@ -136,6 +141,190 @@ public class Bytes { return b == null ? 0 : b.length; } + private byte[] bytes; + private int offset; + private int length; + + /** + * Create a zero-size sequence. + */ + public Bytes() { + super(); + } + + /** + * Create a Bytes using the byte array as the initial value. + * @param bytes This array becomes the backing storage for the object. + */ + public Bytes(byte[] bytes) { + this(bytes, 0, bytes.length); + } + + /** + * Set the new Bytes to the contents of the passed + * ibw. + * @param ibw the value to set this Bytes to. + */ + public Bytes(final Bytes ibw) { + this(ibw.get(), ibw.getOffset(), ibw.getLength()); + } + + /** + * Set the value to a given byte range + * @param bytes the new byte range to set to + * @param offset the offset in newData to start at + * @param length the number of bytes in the range + */ + public Bytes(final byte[] bytes, final int offset, + final int length) { + this.bytes = bytes; + this.offset = offset; + this.length = length; + } + + /** + * Copy bytes from ByteString instance. + * @param byteString copy from + */ + public Bytes(final ByteString byteString) { + this(byteString.toByteArray()); + } + + /** + * Get the data from the Bytes. + * @return The data is only valid between offset and offset+length. + */ + public byte [] get() { + if (this.bytes == null) { + throw new IllegalStateException("Uninitialiized. Null constructor " + + "called w/o accompaying readFields invocation"); + } + return this.bytes; + } + + /** + * @param b Use passed bytes as backing array for this instance. + */ + public void set(final byte [] b) { + set(b, 0, b.length); + } + + /** + * @param b Use passed bytes as backing array for this instance. + * @param offset + * @param length + */ + public void set(final byte [] b, final int offset, final int length) { + this.bytes = b; + this.offset = offset; + this.length = length; + } + + /** + * @return the number of valid bytes in the buffer + * @deprecated use {@link #getLength()} instead + */ + @Deprecated + public int getSize() { + if (this.bytes == null) { + throw new IllegalStateException("Uninitialiized. Null constructor " + + "called w/o accompaying readFields invocation"); + } + return this.length; + } + + /** + * @return the number of valid bytes in the buffer + */ + public int getLength() { + if (this.bytes == null) { + throw new IllegalStateException("Uninitialiized. 
Null constructor " + + "called w/o accompaying readFields invocation"); + } + return this.length; + } + + /** + * @return offset + */ + public int getOffset(){ + return this.offset; + } + + public ByteString toByteString() { + return ByteString.copyFrom(this.bytes, this.offset, this.length); + } + + @Override + public int hashCode() { + return Bytes.hashCode(bytes, offset, length); + } + + /** + * Define the sort order of the Bytes. + * @param that The other bytes writable + * @return Positive if left is bigger than right, 0 if they are equal, and + * negative if left is smaller than right. + */ + public int compareTo(Bytes that) { + return BYTES_RAWCOMPARATOR.compare( + this.bytes, this.offset, this.length, + that.bytes, that.offset, that.length); + } + + /** + * Compares the bytes in this object to the specified byte array + * @param that + * @return Positive if left is bigger than right, 0 if they are equal, and + * negative if left is smaller than right. + */ + public int compareTo(final byte [] that) { + return BYTES_RAWCOMPARATOR.compare( + this.bytes, this.offset, this.length, + that, 0, that.length); + } + + /** + * @see Object#equals(Object) + */ + @Override + public boolean equals(Object right_obj) { + if (right_obj instanceof byte []) { + return compareTo((byte [])right_obj) == 0; + } + if (right_obj instanceof Bytes) { + return compareTo((Bytes)right_obj) == 0; + } + return false; + } + + /** + * @see Object#toString() + */ + @Override + public String toString() { + return Bytes.toString(bytes, offset, length); + } + + /** + * @param array List of byte []. + * @return Array of byte []. + */ + public static byte [][] toArray(final List array) { + // List#toArray doesn't work on lists of byte []. + byte[][] results = new byte[array.size()][]; + for (int i = 0; i < array.size(); i++) { + results[i] = array.get(i); + } + return results; + } + + /** + * Returns a copy of the bytes referred to by this writable + */ + public byte[] copyBytes() { + return Arrays.copyOfRange(bytes, offset, offset+length); + } /** * Byte array comparator class. */ @@ -369,6 +558,25 @@ public class Bytes { * * @param b Presumed UTF-8 encoded byte array. * @param off offset into array + * @return String made from b or null + */ + public static String toString(final byte [] b, int off) { + if (b == null) { + return null; + } + int len = b.length - off; + if (len <= 0) { + return ""; + } + return new String(b, off, len, UTF8_CHARSET); + } + + /** + * This method will convert utf8 encoded bytes into a string. If + * the given byte array is null, this method will return null. + * + * @param b Presumed UTF-8 encoded byte array. + * @param off offset into array * @param len length of utf-8 sequence * @return String made from b or null */ @@ -1508,8 +1716,8 @@ public class Bytes { /** * @param b bytes to hash * @return Runs {@link WritableComparator#hashBytes(byte[], int)} on the - * passed in array. This method is what {@link org.apache.hadoop.io.Text} and - * {@link org.apache.hadoop.hbase.io.ImmutableBytesWritable} use calculating hash code. + * passed in array. This method is what {@link org.apache.hadoop.io.Text} + * use calculating hash code. */ public static int hashCode(final byte [] b) { return hashCode(b, b.length); @@ -1519,8 +1727,8 @@ public class Bytes { * @param b value * @param length length of the value * @return Runs {@link WritableComparator#hashBytes(byte[], int)} on the - * passed in array. 
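// Usage sketch (editor's illustration, not part of the patch) for the new instance side
// of Bytes added above: a lightweight (byte[], offset, length) wrapper that is comparable
// and usable as a map/tree key, plus the new toString(byte[], int) overload.
byte[] backing = Bytes.toBytes("row-0001");
Bytes wrapped = new Bytes(backing, 0, backing.length);
Bytes other = new Bytes(Bytes.toBytes("row-0001"));
assert wrapped.compareTo(other) == 0 && wrapped.equals(other);
assert wrapped.equals(backing);            // equals() also accepts a raw byte[]
byte[] copy = wrapped.copyBytes();         // copy of the [offset, offset + length) range
String tail = Bytes.toString(backing, 4);  // decode from an offset to the end -> "0001"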
This method is what {@link org.apache.hadoop.io.Text} and - * {@link org.apache.hadoop.hbase.io.ImmutableBytesWritable} use calculating hash code. + * passed in array. This method is what {@link org.apache.hadoop.io.Text} + * use calculating hash code. */ public static int hashCode(final byte [] b, final int length) { return WritableComparator.hashBytes(b, length); @@ -1701,8 +1909,19 @@ public class Bytes { diffBI = diffBI.add(BigInteger.ONE); } final BigInteger splitsBI = BigInteger.valueOf(num + 1); + //when diffBI < splitBI, use an additional byte to increase diffBI if(diffBI.compareTo(splitsBI) < 0) { - return null; + byte[] aPaddedAdditional = new byte[aPadded.length+1]; + byte[] bPaddedAdditional = new byte[bPadded.length+1]; + for (int i = 0; i < aPadded.length; i++){ + aPaddedAdditional[i] = aPadded[i]; + } + for (int j = 0; j < bPadded.length; j++){ + bPaddedAdditional[j] = bPadded[j]; + } + aPaddedAdditional[aPadded.length] = 0; + bPaddedAdditional[bPadded.length] = 0; + return iterateOnSplits(aPaddedAdditional, bPaddedAdditional, inclusive, num); } final BigInteger intervalBI; try { @@ -2250,7 +2469,7 @@ public class Bytes { } return result; } - + /** * Convert a byte array into a hex string * @param b diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java index ed86919..414832d 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/ChecksumFactory.java @@ -20,9 +20,8 @@ package org.apache.hadoop.hbase.util; import java.io.IOException; -import java.lang.ClassNotFoundException; -import java.util.zip.Checksum; import java.lang.reflect.Constructor; +import java.util.zip.Checksum; import org.apache.hadoop.hbase.classification.InterfaceAudience; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/ConcatenatedLists.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/ConcatenatedLists.java index 8a3f6c5..0f00132 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/ConcatenatedLists.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/ConcatenatedLists.java @@ -21,7 +21,6 @@ package org.apache.hadoop.hbase.util; import java.lang.reflect.Array; import java.util.ArrayList; import java.util.Collection; -import java.util.Iterator; import java.util.List; import java.util.NoSuchElementException; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/ConcurrentIndex.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/ConcurrentIndex.java index 5a889f8..3b4a1f1 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/ConcurrentIndex.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/ConcurrentIndex.java @@ -20,8 +20,6 @@ package org.apache.hadoop.hbase.util; -import com.google.common.base.Supplier; - import java.util.Comparator; import java.util.Set; import java.util.concurrent.ConcurrentHashMap; @@ -31,6 +29,8 @@ import java.util.concurrent.ConcurrentSkipListSet; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; +import com.google.common.base.Supplier; + /** * A simple concurrent map of sets. 
This is similar in concept to * {@link com.google.common.collect.Multiset}, with the following exceptions: diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/CoprocessorClassLoader.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/CoprocessorClassLoader.java index 7b837af..a3aabf7 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/CoprocessorClassLoader.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/CoprocessorClassLoader.java @@ -35,10 +35,10 @@ import java.util.regex.Pattern; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.io.IOUtils; import com.google.common.base.Preconditions; diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java index e434558..81e4483 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/DynamicClassLoader.java @@ -25,11 +25,11 @@ import java.util.HashMap; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * This is a class loader that can load classes dynamically from new diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/FastLongHistogram.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/FastLongHistogram.java new file mode 100644 index 0000000..623cbdb --- /dev/null +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/FastLongHistogram.java @@ -0,0 +1,233 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.util; + +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicLong; +import java.util.concurrent.atomic.AtomicLongArray; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; + +/** + * FastLongHistogram is a thread-safe class that estimate distribution of data and computes the + * quantiles. 
+ */ +@InterfaceAudience.Public +@InterfaceStability.Evolving +public class FastLongHistogram { + /** + * Bins is a class containing a list of buckets(or bins) for estimation histogram of some data. + */ + private static class Bins { + private final AtomicLongArray counts; + // inclusive + private final long binsMin; + // exclusive + private final long binsMax; + private final long bins10XMax; + private final AtomicLong min = new AtomicLong(Long.MAX_VALUE); + private final AtomicLong max = new AtomicLong(0L); + // set to true when any of data has been inserted to the Bins. It is set after the counts are + // updated. + private final AtomicBoolean hasData = new AtomicBoolean(false); + + /** + * The constructor for creating a Bins without any prior data. + */ + public Bins() { + this.counts = new AtomicLongArray(4); + this.binsMin = 0L; + this.binsMax = Long.MAX_VALUE; + this.bins10XMax = Long.MAX_VALUE; + } + + /** + * The constructor for creating a Bins with last Bins. + * @param last the last Bins instance. + * @param quantiles the quantiles for creating the bins of the histogram. + */ + public Bins(Bins last, int numOfBins, double minQ, double maxQ) { + long[] values = last.getQuantiles(new double[] { minQ, maxQ }); + long wd = values[1] - values[0] + 1; + // expand minQ and maxQ in two ends back assuming uniform distribution + this.binsMin = Math.max(0L, (long) (values[0] - wd * minQ)); + long binsMax = (long) (values[1] + wd * (1 - maxQ)) + 1; + // make sure each of bins is at least of width 1 + this.binsMax = Math.max(binsMax, this.binsMin + numOfBins); + this.bins10XMax = Math.max((long) (values[1] + (binsMax - 1) * 9), this.binsMax + 1); + + this.counts = new AtomicLongArray(numOfBins + 3); + } + + /** + * Adds a value to the histogram. + */ + public void add(long value, long count) { + AtomicUtils.updateMin(min, value); + AtomicUtils.updateMax(max, value); + + if (value < this.binsMin) { + this.counts.addAndGet(0, count); + } else if (value > this.bins10XMax) { + this.counts.addAndGet(this.counts.length() - 1, count); + } else if (value >= this.binsMax) { + this.counts.addAndGet(this.counts.length() - 2, count); + } else { + // compute the position + int pos = + 1 + (int) ((value - this.binsMin) * (this.counts.length() - 3) / (this.binsMax - this.binsMin)); + this.counts.addAndGet(pos, count); + } + + // hasData needs to be updated as last + this.hasData.set(true); + } + + /** + * Computes the quantiles give the ratios. + * @param smooth set to true to have a prior on the distribution. Used for recreating the bins. + */ + public long[] getQuantiles(double[] quantiles) { + if (!this.hasData.get()) { + // No data yet. + return new long[quantiles.length]; + } + + // Make a snapshot of lowerCounter, higherCounter and bins.counts to counts. + // This is not synchronized, but since the counter are accumulating, the result is a good + // estimation of a snapshot. 
+ long[] counts = new long[this.counts.length()]; + long total = 0L; + for (int i = 0; i < this.counts.length(); i++) { + counts[i] = this.counts.get(i); + total += counts[i]; + } + + int rIndex = 0; + double qCount = total * quantiles[0]; + long cum = 0L; + + long[] res = new long[quantiles.length]; + countsLoop: for (int i = 0; i < counts.length; i++) { + // mn and mx define a value range + long mn, mx; + if (i == 0) { + mn = this.min.get(); + mx = this.binsMin; + } else if (i == counts.length - 1) { + mn = this.bins10XMax; + mx = this.max.get(); + } else if (i == counts.length - 2) { + mn = this.binsMax; + mx = this.bins10XMax; + } else { + mn = this.binsMin + (i - 1) * (this.binsMax - this.binsMin) / (this.counts.length() - 3); + mx = this.binsMin + i * (this.binsMax - this.binsMin) / (this.counts.length() - 3); + } + + if (mx < this.min.get()) { + continue; + } + if (mn > this.max.get()) { + break; + } + mn = Math.max(mn, this.min.get()); + mx = Math.min(mx, this.max.get()); + + // lastCum/cum are the cumulative counts corresponding to mn/mx + double lastCum = cum; + cum += counts[i]; + + // fill the results while qCount is within the current range. + while (qCount <= cum) { + if (cum == lastCum) { + res[rIndex] = mn; + } else { + res[rIndex] = (long) ((qCount - lastCum) * (mx - mn) / (cum - lastCum) + mn); + } + + // move to next quantile + rIndex++; + if (rIndex >= quantiles.length) { + break countsLoop; + } + qCount = total * quantiles[rIndex]; + } + } + // In case quantiles contains values >= 100% + for (; rIndex < quantiles.length; rIndex++) { + res[rIndex] = this.max.get(); + } + + return res; + } + } + + // The bins counting values. It is replaced with a new one when reset() is called. + private volatile Bins bins = new Bins(); + // The number of bins to use when creating a new Bins from the last Bins. + private final int numOfBins; + + /** + * Constructor. + * @param numOfBins the number of bins for the histogram. A larger value results in more precise + * results but with lower efficiency, and vice versa. + */ + public FastLongHistogram(int numOfBins) { + this.numOfBins = numOfBins; + } + + /** + * Constructor setting the bins assuming a uniform distribution within a range. + * @param numOfBins the number of bins for the histogram. A larger value results in more precise + * results but with lower efficiency, and vice versa. + * @param min lower bound of the region, inclusive. + * @param max upper bound of the region, inclusive. + */ + public FastLongHistogram(int numOfBins, long min, long max) { + this(numOfBins); + Bins bins = new Bins(); + bins.add(min, 1); + bins.add(max, 1); + this.bins = new Bins(bins, numOfBins, 0.01, 0.99); + } + + /** + * Adds a value to the histogram. + */ + public void add(long value, long count) { + this.bins.add(value, count); + } + + /** + * Computes the quantiles for the given ratios. + */ + public long[] getQuantiles(double[] quantiles) { + return this.bins.getQuantiles(quantiles); + } + + /** + * Resets the histogram for new counting. 
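The getQuantiles() implementation above accumulates bucket counts and linearly interpolates within the bucket that contains each requested quantile. A simplified, single-threaded sketch of that estimation idea, with fixed-width buckets and hypothetical names (not the HBase class itself):

// Simplified bucket-based quantile estimation with linear interpolation
// inside the matching bucket.
public class QuantileSketch {
  static long quantile(long[] bucketCounts, long bucketWidth, long minValue, double q) {
    long total = 0L;
    for (long c : bucketCounts) {
      total += c;
    }
    double target = total * q;   // rank we are looking for
    long cum = 0L;
    for (int i = 0; i < bucketCounts.length; i++) {
      long lastCum = cum;
      cum += bucketCounts[i];
      if (target <= cum) {
        long lo = minValue + i * bucketWidth;   // bucket lower bound
        long hi = lo + bucketWidth;             // bucket upper bound
        if (cum == lastCum) {
          return lo;
        }
        // interpolate the position of the target rank inside this bucket
        return (long) (lo + (target - lastCum) * (hi - lo) / (cum - lastCum));
      }
    }
    return minValue + bucketCounts.length * bucketWidth; // q >= 1.0
  }

  public static void main(String[] args) {
    long[] counts = {5, 10, 20, 5};                    // 40 samples in 4 buckets of width 25
    System.out.println(quantile(counts, 25, 0, 0.5));  // median falls in the third bucket
  }
}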
+ */ + public void reset() { + if (this.bins.hasData.get()) { + this.bins = new Bins(this.bins, numOfBins, 0.01, 0.99); + } + } +} diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/HasThread.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/HasThread.java index 4457fe0..1738a49 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/HasThread.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/HasThread.java @@ -17,10 +17,10 @@ */ package org.apache.hadoop.hbase.util; -import org.apache.hadoop.hbase.classification.InterfaceAudience; - import java.lang.Thread.UncaughtExceptionHandler; +import org.apache.hadoop.hbase.classification.InterfaceAudience; + /** * Abstract class which contains a Thread and delegates the common Thread * methods to that instance. diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java index a7d5843..82cf5c4 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java @@ -19,9 +19,9 @@ package org.apache.hadoop.hbase.util; +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.conf.Configuration; /** * This class represents a common API for hashing functions. diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/PrettyPrinter.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/PrettyPrinter.java index 2b59967..8c8f618 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/PrettyPrinter.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/PrettyPrinter.java @@ -19,8 +19,8 @@ package org.apache.hadoop.hbase.util; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.classification.InterfaceAudience; @InterfaceAudience.Private public class PrettyPrinter { diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/Sleeper.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/Sleeper.java index 071250b..4822b0e 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/Sleeper.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/Sleeper.java @@ -20,8 +20,8 @@ package org.apache.hadoop.hbase.util; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Stoppable; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * Sleeper for current thread. 
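Going by the public API added above (the int constructor, add(long, long), getQuantiles(double[]) and reset()), using FastLongHistogram might look like the following; the latency numbers and quantile choices are only an example.

import org.apache.hadoop.hbase.util.FastLongHistogram;

public class LatencyQuantilesExample {
  public static void main(String[] args) {
    // 100 bins; more bins means finer quantile estimates at higher cost.
    FastLongHistogram histogram = new FastLongHistogram(100);

    // Record some observed latencies in milliseconds (count of 1 per sample).
    long[] samples = {3, 5, 7, 9, 12, 15, 40, 90, 250};
    for (long latencyMs : samples) {
      histogram.add(latencyMs, 1);
    }

    // Estimate the median, 95th and 99th percentiles.
    long[] q = histogram.getQuantiles(new double[] {0.50, 0.95, 0.99});
    System.out.println("p50=" + q[0] + " p95=" + q[1] + " p99=" + q[2]);

    // Start a new measurement window; the previous counts seed the new bin layout.
    histogram.reset();
  }
}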
@@ -114,4 +114,11 @@ public class Sleeper { triggerWake = false; } } + + /** + * @return the sleep period in milliseconds + */ + public final int getPeriod() { + return period; + } } diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java index aadad2e..d02c5e9 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/VersionInfo.java @@ -18,13 +18,13 @@ package org.apache.hadoop.hbase.util; -import org.apache.commons.logging.LogFactory; import java.io.PrintWriter; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.VersionAnnotation; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.hbase.VersionAnnotation; -import org.apache.commons.logging.Log; /** * This class finds the package info for hbase and the VersionAnnotation diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/test/LoadTestKVGenerator.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/test/LoadTestKVGenerator.java index 068fbf5..9e9b507 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/test/LoadTestKVGenerator.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/test/LoadTestKVGenerator.java @@ -18,6 +18,8 @@ package org.apache.hadoop.hbase.util.test; import java.util.Random; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.MD5Hash; @@ -32,6 +34,9 @@ import org.apache.hadoop.hbase.util.MD5Hash; @InterfaceAudience.Private public class LoadTestKVGenerator { + private static final Log LOG = LogFactory.getLog(LoadTestKVGenerator.class); + private static int logLimit = 10; + /** A random number generator for determining value size */ private Random randomForValueSize = new Random(); @@ -56,7 +61,13 @@ public class LoadTestKVGenerator { */ public static boolean verify(byte[] value, byte[]... 
seedStrings) { byte[] expectedData = getValueForRowColumn(value.length, seedStrings); - return Bytes.equals(expectedData, value); + boolean equals = Bytes.equals(expectedData, value); + if (!equals && LOG.isDebugEnabled() && logLimit > 0) { + LOG.debug("verify failed, expected value: " + Bytes.toStringBinary(expectedData) + + " actual value: "+ Bytes.toStringBinary(value)); + logLimit--; // this is not thread safe, but at worst we will have more logging + } + return equals; } /** diff --git hbase-common/src/main/java/org/apache/hadoop/hbase/util/test/RedundantKVGenerator.java hbase-common/src/main/java/org/apache/hadoop/hbase/util/test/RedundantKVGenerator.java index 859c07b..52bc4e0 100644 --- hbase-common/src/main/java/org/apache/hadoop/hbase/util/test/RedundantKVGenerator.java +++ hbase-common/src/main/java/org/apache/hadoop/hbase/util/test/RedundantKVGenerator.java @@ -24,9 +24,9 @@ import java.util.List; import java.util.Map; import java.util.Random; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.Tag; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.ByteBufferUtils; import org.apache.hadoop.io.WritableUtils; diff --git hbase-common/src/main/resources/hbase-default.xml hbase-common/src/main/resources/hbase-default.xml index 9730560..5edf452 100644 --- hbase-common/src/main/resources/hbase-default.xml +++ hbase-common/src/main/resources/hbase-default.xml @@ -187,7 +187,7 @@ possible configurations would overwhelm and obscure the important. A value of 0 means a single queue shared between all the handlers. A value of 1 means that each handler has its own queue. - + hbase.ipc.server.callqueue.read.ratio 0 Split the call queues into read and write queues. @@ -300,14 +300,6 @@ possible configurations would overwhelm and obscure the important. blocked due to memstore limiting. - hbase.regionserver.global.memstore.size - 0.4 - Maximum size of all memstores in a region server before new updates are blocked - and flushes are forced. Defaults to 40% of heap (0.4). Updates are blocked and region level - flushes are forced until size of all memstores in a region server hits - hbase.regionserver.global.memstore.lowerLimit. - - hbase.regionserver.optionalcacheflushinterval 3600000 @@ -337,8 +329,8 @@ possible configurations would overwhelm and obscure the important. org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy A split policy determines when a region should be split. The various other split policies that - are available currently are ConstantSizeRegionSplitPolicy, DisabledRegionSplitPolicy, - DelimitedKeyPrefixRegionSplitPolicy, KeyPrefixRegionSplitPolicy etc. + are available currently are ConstantSizeRegionSplitPolicy, DisabledRegionSplitPolicy, + DelimitedKeyPrefixRegionSplitPolicy, KeyPrefixRegionSplitPolicy etc. @@ -596,6 +588,19 @@ possible configurations would overwhelm and obscure the important. every hbase.server.thread.wakefrequency. + hbase.hregion.percolumnfamilyflush.size.lower.bound + 16777216 + + If FlushLargeStoresPolicy is used, then every time that we hit the + total memstore limit, we find out all the column families whose memstores + exceed this value, and only flush them, while retaining the others whose + memstores are lower than this limit. If none of the families have their + memstore size more than this, all the memstores will be flushed + (just as usual). 
This value should be less than half of the total memstore + threshold (hbase.hregion.memstore.flush.size). + + + hbase.hregion.preclose.flush.size 5242880 @@ -633,77 +638,124 @@ possible configurations would overwhelm and obscure the important. hbase.hregion.max.filesize 10737418240 - Maximum HStoreFile size. If any one of a column families' HStoreFiles has - grown to exceed this value, the hosting HRegion is split in two. + Maximum HFile size. If the sum of the sizes of a region's HFiles has grown to exceed this + value, the region is split in two. hbase.hregion.majorcompaction 604800000 - The time (in miliseconds) between 'major' compactions of all - HStoreFiles in a region. Default: Set to 7 days. Major compactions tend to - happen exactly when you need them least so enable them such that they run at - off-peak for your deploy; or, since this setting is on a periodicity that is - unlikely to match your loading, run the compactions via an external - invocation out of a cron job or some such. + Time between major compactions, expressed in milliseconds. Set to 0 to disable + time-based automatic major compactions. User-requested and size-based major compactions will + still run. This value is multiplied by hbase.hregion.majorcompaction.jitter to cause + compaction to start at a somewhat-random time during a given window of time. The default value + is 7 days, expressed in milliseconds. If major compactions are causing disruption in your + environment, you can configure them to run at off-peak times for your deployment, or disable + time-based major compactions by setting this parameter to 0, and run major compactions in a + cron job or by another external mechanism. hbase.hregion.majorcompaction.jitter 0.50 - Jitter outer bound for major compactions. - On each regionserver, we multiply the hbase.region.majorcompaction - interval by some random fraction that is inside the bounds of this - maximum. We then add this + or - product to when the next - major compaction is to run. The idea is that major compaction - does happen on every regionserver at exactly the same time. The - smaller this number, the closer the compactions come together. + A multiplier applied to hbase.hregion.majorcompaction to cause compaction to occur + a given amount of time either side of hbase.hregion.majorcompaction. The smaller the number, + the closer the compactions will happen to the hbase.hregion.majorcompaction + interval. hbase.hstore.compactionThreshold 3 - - If more than this number of HStoreFiles in any one HStore - (one HStoreFile is written per flush of memstore) then a compaction - is run to rewrite all HStoreFiles files as one. Larger numbers - put off compaction but when it runs, it takes longer to complete. + If more than this number of StoreFiles exist in any one Store + (one StoreFile is written per flush of MemStore), a compaction is run to rewrite all + StoreFiles into a single StoreFile. Larger values delay compaction, but when compaction does + occur, it takes longer to complete. hbase.hstore.flusher.count 2 - - The number of flush threads. With less threads, the memstore flushes will be queued. With - more threads, the flush will be executed in parallel, increasing the hdfs load. This can - lead as well to more compactions. - + The number of flush threads. With fewer threads, the MemStore flushes will be + queued. With more threads, the flushes will be executed in parallel, increasing the load on + HDFS, and potentially causing more compactions. 
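The new hbase.hregion.percolumnfamilyflush.size.lower.bound property described above drives a selective flush: when the region hits its memstore limit, only the column families whose memstores exceed the bound are flushed, and if none exceed it, everything is flushed as before. A compact sketch of that selection rule (hypothetical types, not the actual flush policy class):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the column-family selection described for
// hbase.hregion.percolumnfamilyflush.size.lower.bound.
public class SelectiveFlushSketch {
  static List<String> familiesToFlush(Map<String, Long> memstoreSizes, long lowerBound) {
    List<String> selected = new ArrayList<>();
    for (Map.Entry<String, Long> e : memstoreSizes.entrySet()) {
      if (e.getValue() > lowerBound) {
        selected.add(e.getKey());
      }
    }
    // If no family is individually above the bound, fall back to flushing all of them.
    return selected.isEmpty() ? new ArrayList<>(memstoreSizes.keySet()) : selected;
  }

  public static void main(String[] args) {
    Map<String, Long> sizes = new HashMap<>();
    sizes.put("cf1", 64L * 1024 * 1024);  // 64 MB, above the 16 MB default bound
    sizes.put("cf2", 2L * 1024 * 1024);   // 2 MB, below the bound
    System.out.println(familiesToFlush(sizes, 16L * 1024 * 1024)); // [cf1]
  }
}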
hbase.hstore.blockingStoreFiles 10 - - If more than this number of StoreFiles in any one Store - (one StoreFile is written per flush of MemStore) then updates are - blocked for this HRegion until a compaction is completed, or - until hbase.hstore.blockingWaitTime has been exceeded. + If more than this number of StoreFiles exist in any one Store (one StoreFile + is written per flush of MemStore), updates are blocked for this region until a compaction is + completed, or until hbase.hstore.blockingWaitTime has been exceeded. hbase.hstore.blockingWaitTime 90000 - - The time an HRegion will block updates for after hitting the StoreFile - limit defined by hbase.hstore.blockingStoreFiles. - After this time has elapsed, the HRegion will stop blocking updates even - if a compaction has not been completed. + The time for which a region will block updates after reaching the StoreFile limit + defined by hbase.hstore.blockingStoreFiles. After this time has elapsed, the region will stop + blocking updates even if a compaction has not been completed. + + + hbase.hstore.compaction.min + 3 + The minimum number of StoreFiles which must be eligible for compaction before + compaction can run. The goal of tuning hbase.hstore.compaction.min is to avoid ending up with + too many tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction + each time you have two StoreFiles in a Store, and this is probably not appropriate. If you + set this value too high, all the other values will need to be adjusted accordingly. For most + cases, the default value is appropriate. In previous versions of HBase, the parameter + hbase.hstore.compaction.min was named hbase.hstore.compactionThreshold. hbase.hstore.compaction.max 10 - Max number of HStoreFiles to compact per 'minor' compaction. + The maximum number of StoreFiles which will be selected for a single minor + compaction, regardless of the number of eligible StoreFiles. Effectively, the value of + hbase.hstore.compaction.max controls the length of time it takes a single compaction to + complete. Setting it larger means that more StoreFiles are included in a compaction. For most + cases, the default value is appropriate. - hbase.hstore.compaction.kv.max - 10 - How many KeyValues to read and then write in a batch when flushing - or compacting. Do less if big KeyValues and problems with OOME. - Do more if wide, small rows. + hbase.hstore.compaction.min.size + 134217728 + A StoreFile smaller than this size will always be eligible for minor compaction. + HFiles this size or larger are evaluated by hbase.hstore.compaction.ratio to determine if + they are eligible. Because this limit represents the "automatic include"limit for all + StoreFiles smaller than this value, this value may need to be reduced in write-heavy + environments where many StoreFiles in the 1-2 MB range are being flushed, because every + StoreFile will be targeted for compaction and the resulting StoreFiles may still be under the + minimum size and require further compaction. If this parameter is lowered, the ratio check is + triggered more quickly. This addressed some issues seen in earlier versions of HBase but + changing this parameter is no longer necessary in most situations. Default: 128 MB expressed + in bytes. + + + hbase.hstore.compaction.max.size + 9223372036854775807 + A StoreFile larger than this size will be excluded from compaction. The effect of + raising hbase.hstore.compaction.max.size is fewer, larger StoreFiles that do not get + compacted often. 
If you feel that compaction is happening too often without much benefit, you + can try raising this value. Default: the value of LONG.MAX_VALUE, expressed in bytes. + + + hbase.hstore.compaction.ratio + 1.2F + For minor compaction, this ratio is used to determine whether a given StoreFile + which is larger than hbase.hstore.compaction.min.size is eligible for compaction. Its + effect is to limit compaction of large StoreFiles. The value of hbase.hstore.compaction.ratio + is expressed as a floating-point decimal. A large ratio, such as 10, will produce a single + giant StoreFile. Conversely, a low value, such as .25, will produce behavior similar to the + BigTable compaction algorithm, producing four StoreFiles. A moderate value of between 1.0 and + 1.4 is recommended. When tuning this value, you are balancing write costs with read costs. + Raising the value (to something like 1.4) will have more write costs, because you will + compact larger StoreFiles. However, during reads, HBase will need to seek through fewer + StoreFiles to accomplish the read. Consider this approach if you cannot take advantage of + Bloom filters. Otherwise, you can lower this value to something like 1.0 to reduce the + background cost of writes, and use Bloom filters to control the number of StoreFiles touched + during reads. For most cases, the default value is appropriate. + + + hbase.hstore.compaction.ratio.offpeak + 5.0F + Allows you to set a different (by default, more aggressive) ratio for determining + whether larger StoreFiles are included in compactions during off-peak hours. Works in the + same way as hbase.hstore.compaction.ratio. Only applies if hbase.offpeak.start.hour and + hbase.offpeak.end.hour are also enabled. hbase.hstore.time.to.purge.deletes @@ -715,6 +767,36 @@ possible configurations would overwhelm and obscure the important. + hbase.offpeak.start.hour + -1 + The start of off-peak hours, expressed as an integer between 0 and 23, inclusive. + Set to -1 to disable off-peak. + + + hbase.offpeak.end.hour + -1 + The end of off-peak hours, expressed as an integer between 0 and 23, inclusive. Set + to -1 to disable off-peak. + + + hbase.regionserver.thread.compaction.throttle + 2684354560 + There are two different thread pools for compactions, one for large compactions and + the other for small compactions. This helps to keep compaction of lean tables (such as + hbase:meta) fast. If a compaction is larger than this threshold, it + goes into the large compaction pool. In most cases, the default value is appropriate. Default: + 2 x hbase.hstore.compaction.max x hbase.hregion.memstore.flush.size (which defaults to 128MB). + The value field assumes that the value of hbase.hregion.memstore.flush.size is unchanged from + the default. + + + hbase.hstore.compaction.kv.max + 10 + The maximum number of KeyValues to read and then write in a batch when flushing or + compacting. Set this lower if you have big KeyValues and problems with Out Of Memory + Exceptions Set this higher if you have wide, small rows. + + hbase.storescanner.parallel.seek.enable false @@ -731,7 +813,7 @@ possible configurations would overwhelm and obscure the important. hfile.block.cache.size 0.4 Percentage of maximum heap (-Xmx setting) to allocate to block cache - used by HFile/StoreFile. Default of 0.4 means allocate 40%. + used by a StoreFile. Default of 0.4 means allocate 40%. Set to 0 to disable but it's not recommended; you need at least enough cache to hold the storefile indices. 
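Read together, hbase.hstore.compaction.min.size, hbase.hstore.compaction.max.size and hbase.hstore.compaction.ratio describe a per-file eligibility rule for minor compactions. The sketch below is a simplified reading of those descriptions, not the real selection algorithm; sumOfOtherSizes stands in for the aggregate size the ratio is applied against.

// Simplified eligibility check implied by the three compaction settings above.
public class CompactionEligibilitySketch {
  static boolean eligibleForMinorCompaction(long fileSize,
                                            long sumOfOtherSizes,
                                            long minSize,      // hbase.hstore.compaction.min.size
                                            long maxSize,      // hbase.hstore.compaction.max.size
                                            double ratio) {    // hbase.hstore.compaction.ratio
    if (fileSize > maxSize) {
      return false;               // too large, excluded from minor compactions
    }
    if (fileSize < minSize) {
      return true;                // "automatic include" below the minimum size
    }
    // Otherwise the file must be small relative to what it would be compacted with.
    return fileSize <= ratio * sumOfOtherSizes;
  }

  public static void main(String[] args) {
    long mb = 1024L * 1024L;
    System.out.println(eligibleForMinorCompaction(100 * mb, 300 * mb,
        128 * mb, Long.MAX_VALUE, 1.2)); // true: under the 128 MB auto-include limit
    System.out.println(eligibleForMinorCompaction(900 * mb, 300 * mb,
        128 * mb, Long.MAX_VALUE, 1.2)); // false: 900 MB > 1.2 * 300 MB
  }
}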
@@ -748,6 +830,33 @@ possible configurations would overwhelm and obscure the important. index block in a multi-level block index grows to this size, the block is written out and a new block is started. + + hbase.bucketcache.ioengine + + Where to store the contents of the bucketcache. One of: onheap, + offheap, or file. If a file, set it to file:PATH_TO_FILE. See https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html for more information. + + + + hbase.bucketcache.combinedcache.enabled + true + Whether or not the bucketcache is used in league with the LRU + on-heap block cache. In this mode, indices and blooms are kept in the LRU + blockcache and the data blocks are kept in the bucketcache. + + + hbase.bucketcache.size + 65536 + The size of the buckets for the bucketcache if you only use a single size. + Defaults to the default blocksize, which is 64 * 1024. + + + hbase.bucketcache.sizes + + A comma-separated list of sizes for buckets for the bucketcache + if you use multiple sizes. Should be a list of block sizes in order from smallest + to largest. The sizes you use will depend on your data access patterns. + hfile.format.version 3 @@ -869,6 +978,13 @@ possible configurations would overwhelm and obscure the important. authentication, and will abort the connection. + hbase.display.keys + true + When this is set to true the webUI and such will display all start/end keys + as part of the table details, region names, etc. When this is set to false, + the keys are hidden. + + hbase.coprocessor.region.classes A comma-separated list of Coprocessors that are loaded by @@ -1273,6 +1389,20 @@ possible configurations would overwhelm and obscure the important. + hbase.region.replica.replication.enabled + false + + Whether asynchronous WAL replication to the secondary region replicas is enabled or not. + If this is enabled, a replication peer named "region_replica_replication" will be created + which will tail the logs and replicate the mutatations to region replicas for tables that + have region replication > 1. If this is enabled once, disabling this replication also + requires disabling the replication peer using shell or ReplicationAdmin java class. + Replication to secondary region replicas works over standard inter-cluster replication. + So replication, if disabled explicitly, also has to be enabled by setting "hbase.replication" + to true for this feature to work. + + + hbase.http.filter.initializers org.apache.hadoop.hbase.http.lib.StaticUserWebFilter diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/ClassFinder.java hbase-common/src/test/java/org/apache/hadoop/hbase/ClassFinder.java index c63f78d..d46537c 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/ClassFinder.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/ClassFinder.java @@ -144,7 +144,7 @@ public class ClassFinder { resourcePath = isJar ? 
matcher.group(1) : resourcePath; if (null == this.resourcePathFilter || this.resourcePathFilter.isCandidatePath(resourcePath, isJar)) { - LOG.debug("Will look for classes in " + resourcePath); + LOG.debug("Looking in " + resourcePath + "; isJar=" + isJar); if (isJar) { jars.add(resourcePath); } else { @@ -223,7 +223,7 @@ public class ClassFinder { boolean proceedOnExceptions) throws ClassNotFoundException, LinkageError { Set> classes = new HashSet>(); if (!baseDirectory.exists()) { - LOG.warn("Failed to find " + baseDirectory.getAbsolutePath()); + LOG.warn(baseDirectory.getAbsolutePath() + " does not exist"); return classes; } diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/ClassTestFinder.java hbase-common/src/test/java/org/apache/hadoop/hbase/ClassTestFinder.java index 18368a4..5d4d941 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/ClassTestFinder.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/ClassTestFinder.java @@ -22,8 +22,6 @@ import java.lang.reflect.Method; import java.lang.reflect.Modifier; import java.util.regex.Pattern; -import org.apache.hadoop.hbase.ClassFinder.ClassFilter; -import org.apache.hadoop.hbase.ClassFinder.FileNameFilter; import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.runners.Suite; diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseCommonTestingUtility.java hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseCommonTestingUtility.java index 18ca35c..3cae4d2 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseCommonTestingUtility.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/HBaseCommonTestingUtility.java @@ -25,10 +25,10 @@ import java.util.UUID; import org.apache.commons.io.FileUtils; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; /** * Common helpers for testing HBase that do not depend on specific server/etc. things. 
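The hbase.region.replica.replication.enabled description above notes that hbase.replication must also be true for the feature to work. Set programmatically rather than in hbase-site.xml, that would look roughly like this; the replication peer named "region_replica_replication" is created by HBase itself once the flag is on.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RegionReplicaReplicationConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Both switches are needed for async WAL replication to secondary replicas.
    conf.setBoolean("hbase.replication", true);
    conf.setBoolean("hbase.region.replica.replication.enabled", true);
    // Tables must also be created or altered with region replication > 1 to benefit.
    System.out.println(conf.getBoolean("hbase.region.replica.replication.enabled", false));
  }
}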
diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java hbase-common/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java index e56dea8..539aea3 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java @@ -19,11 +19,12 @@ package org.apache.hadoop.hbase; +import java.util.ArrayList; +import java.util.List; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import java.util.*; - /** * Utility class to check the resources: * - log them before and after each test method diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/ResourceCheckerJUnitListener.java hbase-common/src/test/java/org/apache/hadoop/hbase/ResourceCheckerJUnitListener.java index 26f63f8..6264a5e 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/ResourceCheckerJUnitListener.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/ResourceCheckerJUnitListener.java @@ -29,9 +29,8 @@ import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import org.apache.hadoop.hbase.ResourceChecker.Phase; -import org.junit.runner.notification.RunListener; - import org.apache.hadoop.hbase.util.JVM; +import org.junit.runner.notification.RunListener; /** * Listen to the test progress and check the usage of: diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellComparator.java hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellComparator.java index 0dc64ec..d6a2f72 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellComparator.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellComparator.java @@ -21,11 +21,12 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; import org.apache.hadoop.hbase.KeyValue.Type; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestCellComparator { byte[] row1 = Bytes.toBytes("row1"); @@ -130,4 +131,4 @@ public class TestCellComparator { assertTrue(CellComparator.compare(mid, right, true) <= 0); assertEquals(1, (int)mid.getQualifierLength()); } -} +} \ No newline at end of file diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellUtil.java hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellUtil.java index fe32680..fea517f 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellUtil.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/TestCellUtil.java @@ -27,13 +27,14 @@ import java.util.NavigableMap; import java.util.TreeMap; import org.apache.hadoop.hbase.KeyValue.Type; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestCellUtil { /** * CellScannable used in test. 
Returns a {@link TestCellScanner} @@ -394,6 +395,40 @@ public class TestCellUtil { HConstants.EMPTY_BYTE_ARRAY); cellToString = CellUtil.getCellKeyAsString(cell); assertEquals(kv.toString(), cellToString); + + } + @Test + public void testToString1() { + String row = "test.row"; + String family = "test.family"; + String qualifier = "test.qualifier"; + long timestamp = 42; + Type type = Type.Put; + String value = "test.value"; + long seqId = 1042; + + Cell cell = CellUtil.createCell(Bytes.toBytes(row), Bytes.toBytes(family), + Bytes.toBytes(qualifier), timestamp, type.getCode(), Bytes.toBytes(value), seqId); + + String nonVerbose = CellUtil.toString(cell, false); + String verbose = CellUtil.toString(cell, true); + + System.out.println("nonVerbose=" + nonVerbose); + System.out.println("verbose=" + verbose); + + Assert.assertEquals( + String.format("%s/%s:%s/%d/%s/vlen=%s/seqid=%s", + row, family, qualifier, timestamp, type.toString(), + Bytes.toBytes(value).length, seqId), + nonVerbose); + + Assert.assertEquals( + String.format("%s/%s:%s/%d/%s/vlen=%s/seqid=%s/%s", + row, family, qualifier, timestamp, type.toString(), Bytes.toBytes(value).length, + seqId, value), + verbose); + + // TODO: test with tags } -} +} \ No newline at end of file diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/TestClassFinder.java hbase-common/src/test/java/org/apache/hadoop/hbase/TestClassFinder.java index 1ee5ce0..0b83d05 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/TestClassFinder.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/TestClassFinder.java @@ -42,6 +42,7 @@ import java.util.jar.Manifest; import javax.tools.JavaCompiler; import javax.tools.ToolProvider; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -51,7 +52,7 @@ import org.junit.experimental.categories.Category; import org.junit.rules.TestName; import org.mortbay.log.Log; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestClassFinder { @Rule public TestName name = new TestName(); private static final HBaseCommonTestingUtility testUtil = new HBaseCommonTestingUtility(); diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/TestCompoundConfiguration.java hbase-common/src/test/java/org/apache/hadoop/hbase/TestCompoundConfiguration.java index a84b9fb..57409b6 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/TestCompoundConfiguration.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/TestCompoundConfiguration.java @@ -25,13 +25,13 @@ import java.util.Map; import junit.framework.TestCase; import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.io.ImmutableBytesWritable; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestCompoundConfiguration extends TestCase { private Configuration baseConf; private int baseConfSize; @@ -115,23 +115,23 @@ public class TestCompoundConfiguration extends TestCase { assertEquals(baseConfSize + 1, cnt); } - private ImmutableBytesWritable strToIbw(String s) { - return new ImmutableBytesWritable(Bytes.toBytes(s)); + private Bytes strToIb(String s) { + return new 
Bytes(Bytes.toBytes(s)); } @Test public void testWithIbwMap() { - Map map = - new HashMap(); - map.put(strToIbw("B"), strToIbw("2b")); - map.put(strToIbw("C"), strToIbw("33")); - map.put(strToIbw("D"), strToIbw("4")); + Map map = + new HashMap(); + map.put(strToIb("B"), strToIb("2b")); + map.put(strToIb("C"), strToIb("33")); + map.put(strToIb("D"), strToIb("4")); // unlike config, note that IBW Maps can accept null values - map.put(strToIbw("G"), null); + map.put(strToIb("G"), null); CompoundConfiguration compoundConf = new CompoundConfiguration() .add(baseConf) - .addWritableMap(map); + .addBytesMap(map); assertEquals("1", compoundConf.get("A")); assertEquals("2b", compoundConf.get("B")); assertEquals(33, compoundConf.getInt("C", 0)); @@ -156,7 +156,7 @@ public class TestCompoundConfiguration extends TestCase { conf2.set("D", "not4"); assertEquals("modification", conf2.get("X")); assertEquals("not4", conf2.get("D")); - conf2.addWritableMap(map); + conf2.addBytesMap(map); assertEquals("4", conf2.get("D")); // map overrides } diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/TestHBaseConfiguration.java hbase-common/src/test/java/org/apache/hadoop/hbase/TestHBaseConfiguration.java index b739f36..99e4a33 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/TestHBaseConfiguration.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/TestHBaseConfiguration.java @@ -29,11 +29,12 @@ import java.util.List; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestHBaseConfiguration { private static final Log LOG = LogFactory.getLog(TestHBaseConfiguration.class); diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/Waiter.java hbase-common/src/test/java/org/apache/hadoop/hbase/Waiter.java index 468333d..3453baf 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/Waiter.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/Waiter.java @@ -25,8 +25,8 @@ import junit.framework.Assert; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.classification.InterfaceAudience; /** * A class that provides a standard waitFor pattern diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodec.java hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodec.java index 54ee7a6..922de6f 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodec.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodec.java @@ -17,7 +17,9 @@ */ package org.apache.hadoop.hbase.codec; -import static org.junit.Assert.*; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; @@ -28,6 +30,7 @@ import java.io.IOException; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellComparator; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.MiscTests; import 
org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; @@ -36,7 +39,7 @@ import org.junit.experimental.categories.Category; import com.google.common.io.CountingInputStream; import com.google.common.io.CountingOutputStream; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestCellCodec { @Test diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodecWithTags.java hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodecWithTags.java index 2a1569b..30f2f00 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodecWithTags.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestCellCodecWithTags.java @@ -32,8 +32,9 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellComparator; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Tag; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -41,7 +42,7 @@ import org.junit.experimental.categories.Category; import com.google.common.io.CountingInputStream; import com.google.common.io.CountingOutputStream; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestCellCodecWithTags { @Test diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodec.java hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodec.java index 6c18dc0..e3366fe 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodec.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodec.java @@ -28,6 +28,7 @@ import java.io.DataOutputStream; import java.io.IOException; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; @@ -36,7 +37,7 @@ import org.junit.experimental.categories.Category; import com.google.common.io.CountingInputStream; import com.google.common.io.CountingOutputStream; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestKeyValueCodec { @Test public void testEmptyWorks() throws IOException { diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodecWithTags.java hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodecWithTags.java index c217cfa..007647a 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodecWithTags.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/codec/TestKeyValueCodecWithTags.java @@ -32,8 +32,9 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellComparator; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Tag; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -41,7 
+42,7 @@ import org.junit.experimental.categories.Category; import com.google.common.io.CountingInputStream; import com.google.common.io.CountingOutputStream; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestKeyValueCodecWithTags { @Test diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/io/TestByteBufferInputStream.java hbase-common/src/test/java/org/apache/hadoop/hbase/io/TestByteBufferInputStream.java index abd588d..30fb71e 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/io/TestByteBufferInputStream.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/io/TestByteBufferInputStream.java @@ -24,12 +24,13 @@ import java.io.DataInputStream; import java.io.DataOutputStream; import java.nio.ByteBuffer; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({ IOTests.class, SmallTests.class }) public class TestByteBufferInputStream { @Test diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/io/TestTagCompressionContext.java hbase-common/src/test/java/org/apache/hadoop/hbase/io/TestTagCompressionContext.java index ea1e380..841c468 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/io/TestTagCompressionContext.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/io/TestTagCompressionContext.java @@ -27,14 +27,15 @@ import java.util.ArrayList; import java.util.List; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.io.util.LRUDictionary; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestTagCompressionContext { private static final byte[] ROW = Bytes.toBytes("r1"); diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/KeyProviderForTesting.java hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/KeyProviderForTesting.java index 9b45d09..781924b 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/KeyProviderForTesting.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/KeyProviderForTesting.java @@ -17,6 +17,7 @@ package org.apache.hadoop.hbase.io.crypto; import java.security.Key; + import javax.crypto.spec.SecretKeySpec; /** diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestCipherProvider.java hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestCipherProvider.java index 126d7f6..fdb9448 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestCipherProvider.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestCipherProvider.java @@ -16,7 +16,9 @@ */ package org.apache.hadoop.hbase.io.crypto; -import static org.junit.Assert.*; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertTrue; import java.io.IOException; import java.io.InputStream; @@ -27,13 +29,13 @@ import java.util.Arrays; import org.apache.hadoop.conf.Configuration; import 
org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.crypto.aes.AES; - +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestCipherProvider { public static class MyCipherProvider implements CipherProvider { diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestEncryption.java hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestEncryption.java index d9e51c1..d36333e 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestEncryption.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestEncryption.java @@ -16,7 +16,8 @@ */ package org.apache.hadoop.hbase.io.crypto; -import static org.junit.Assert.*; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; @@ -28,12 +29,13 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestEncryption { private static final Log LOG = LogFactory.getLog(TestEncryption.class); diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyProvider.java hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyProvider.java index 9c98272..dab03f2 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyProvider.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyProvider.java @@ -25,13 +25,13 @@ import java.security.Key; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.crypto.aes.AES; - +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestKeyProvider { @Test diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyStoreKeyProvider.java hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyStoreKeyProvider.java index 5b193a4..ddd5d45 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyStoreKeyProvider.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestKeyStoreKeyProvider.java @@ -32,12 +32,13 @@ import javax.crypto.spec.SecretKeySpec; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.HBaseCommonTestingUtility; +import org.apache.hadoop.hbase.testclassification.MiscTests; import 
org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestKeyStoreKeyProvider { static final Log LOG = LogFactory.getLog(TestKeyStoreKeyProvider.class); diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/aes/TestAES.java hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/aes/TestAES.java index 65260ea..ea8879b 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/aes/TestAES.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/aes/TestAES.java @@ -16,7 +16,9 @@ */ package org.apache.hadoop.hbase.io.crypto.aes; -import static org.junit.Assert.*; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.fail; import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; @@ -34,17 +36,17 @@ import javax.crypto.spec.SecretKeySpec; import org.apache.commons.io.IOUtils; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.crypto.Cipher; import org.apache.hadoop.hbase.io.crypto.DefaultCipherProvider; import org.apache.hadoop.hbase.io.crypto.Encryption; import org.apache.hadoop.hbase.io.crypto.Encryptor; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; - import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestAES { // Validation for AES in CTR mode with a 128 bit key diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/io/util/TestLRUDictionary.java hbase-common/src/test/java/org/apache/hadoop/hbase/io/util/TestLRUDictionary.java index 6d16ec2..9569ba8 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/io/util/TestLRUDictionary.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/io/util/TestLRUDictionary.java @@ -18,13 +18,16 @@ package org.apache.hadoop.hbase.io.util; -import static org.junit.Assert.*; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; import java.math.BigInteger; import java.util.Arrays; import java.util.Random; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Before; @@ -34,7 +37,7 @@ import org.junit.experimental.categories.Category; /** * Tests LRUDictionary */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestLRUDictionary { LRUDictionary testee; diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestFixedLengthWrapper.java hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestFixedLengthWrapper.java index 15c51b1..b259429 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestFixedLengthWrapper.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestFixedLengthWrapper.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.types; import static org.junit.Assert.assertEquals; import static 
org.junit.Assert.assertTrue; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Order; @@ -28,7 +29,7 @@ import org.apache.hadoop.hbase.util.SimplePositionedMutableByteRange; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestFixedLengthWrapper { static final byte[][] VALUES = new byte[][] { diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestOrderedBlob.java hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestOrderedBlob.java index ad6c611..c796fea 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestOrderedBlob.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestOrderedBlob.java @@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.types; import static org.junit.Assert.assertEquals; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.PositionedByteRange; @@ -26,7 +27,7 @@ import org.apache.hadoop.hbase.util.SimplePositionedMutableByteRange; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestOrderedBlob { static final byte[][] VALUES = new byte[][] { diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestOrderedBlobVar.java hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestOrderedBlobVar.java index 10a53f2..d9c40e5 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestOrderedBlobVar.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestOrderedBlobVar.java @@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.types; import static org.junit.Assert.assertEquals; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.PositionedByteRange; @@ -26,7 +27,7 @@ import org.apache.hadoop.hbase.util.SimplePositionedMutableByteRange; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestOrderedBlobVar { static final byte[][] VALUES = new byte[][] { diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestOrderedString.java hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestOrderedString.java index aca3b59..6e9e9d0 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestOrderedString.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestOrderedString.java @@ -19,13 +19,14 @@ package org.apache.hadoop.hbase.types; import static org.junit.Assert.assertEquals; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.PositionedByteRange; import org.apache.hadoop.hbase.util.SimplePositionedMutableByteRange; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestOrderedString { static final String[] VALUES = diff --git 
hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestRawString.java hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestRawString.java index 43dd2db..90f7e21 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestRawString.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestRawString.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.types; import static org.junit.Assert.assertArrayEquals; import static org.junit.Assert.assertEquals; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Order; @@ -28,7 +29,7 @@ import org.apache.hadoop.hbase.util.SimplePositionedMutableByteRange; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestRawString { static final String[] VALUES = new String[] { diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestStruct.java hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestStruct.java index 71b4cd1..8dc239b 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestStruct.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestStruct.java @@ -25,6 +25,7 @@ import java.util.Arrays; import java.util.Collection; import java.util.Comparator; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Order; @@ -43,7 +44,7 @@ import org.junit.runners.Parameterized.Parameters; * custom data type extension for an application POJO. 
*/ @RunWith(Parameterized.class) -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestStruct { private Struct generic; diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestStructNullExtension.java hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestStructNullExtension.java index b2d50d0..e87438d 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestStructNullExtension.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestStructNullExtension.java @@ -24,13 +24,14 @@ import static org.junit.Assert.assertNull; import java.math.BigDecimal; import java.util.Arrays; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.PositionedByteRange; import org.apache.hadoop.hbase.util.SimplePositionedMutableByteRange; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestStructNullExtension { /** diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestTerminatedWrapper.java hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestTerminatedWrapper.java index 5a2d11c..e36a141 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestTerminatedWrapper.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestTerminatedWrapper.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.types; import static org.junit.Assert.assertArrayEquals; import static org.junit.Assert.assertEquals; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Order; @@ -28,7 +29,7 @@ import org.apache.hadoop.hbase.util.SimplePositionedMutableByteRange; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestTerminatedWrapper { static final String[] VALUES_STRINGS = new String[] { diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestUnion2.java hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestUnion2.java index c9cf4f4..932be95 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestUnion2.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestUnion2.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.types; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Order; import org.apache.hadoop.hbase.util.PositionedByteRange; @@ -27,7 +28,7 @@ import org.apache.hadoop.hbase.util.SimplePositionedMutableByteRange; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestUnion2 { /** diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java index 145b2f7..09ef707 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBase64.java @@ -24,13 +24,15 @@ import java.util.Map; 
import java.util.TreeMap; import junit.framework.TestCase; + +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.experimental.categories.Category; /** * Test order preservation characteristics of ordered Base64 dialect */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestBase64 extends TestCase { // Note: uris is sorted. We need to prove that the ordered Base64 // preserves that ordering diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteRangeWithKVSerialization.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteRangeWithKVSerialization.java index 833436c..a6b7cc5 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteRangeWithKVSerialization.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestByteRangeWithKVSerialization.java @@ -21,13 +21,14 @@ import java.util.ArrayList; import java.util.List; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Tag; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestByteRangeWithKVSerialization { static void writeCell(PositionedByteRange pbr, KeyValue kv) throws Exception { diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java index 6d2c1ae..d948a2b 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestBytes.java @@ -29,12 +29,13 @@ import java.util.Random; import junit.framework.TestCase; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Assert; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestBytes extends TestCase { public void testNullHashCode() { byte [] b = null; @@ -101,9 +102,10 @@ public class TestBytes extends TestCase { } assertTrue("Returned split should have 3 parts but has " + parts.length, parts.length == 3); - // If split more than once, this should fail + // If split more than once, use additional byte to split parts = Bytes.split(low, high, 2); - assertTrue("Returned split but should have failed", parts == null); + assertTrue("Split with an additional byte", parts != null); + assertEquals(parts.length, low.length + 1); // Split 0 times should throw IAE try { diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestConcatenatedLists.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestConcatenatedLists.java index 54638d6..fd4baf5 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestConcatenatedLists.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestConcatenatedLists.java @@ -18,19 +18,23 @@ */ package org.apache.hadoop.hbase.util; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.fail; + import java.util.ArrayList; import 
java.util.Arrays; import java.util.Iterator; import java.util.List; import java.util.NoSuchElementException; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -import static org.junit.Assert.*; - -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestConcatenatedLists { @Test public void testUnsupportedOps() { diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorClassLoader.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorClassLoader.java index f4b2002..daba459 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorClassLoader.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorClassLoader.java @@ -30,6 +30,7 @@ import java.io.FileOutputStream; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseCommonTestingUtility; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.io.IOUtils; import org.junit.Test; @@ -38,7 +39,7 @@ import org.junit.experimental.categories.Category; /** * Test TestCoprocessorClassLoader. More tests are in TestClassLoading */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestCoprocessorClassLoader { private static final HBaseCommonTestingUtility TEST_UTIL = new HBaseCommonTestingUtility(); diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestCounter.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestCounter.java index d70f5df..1c25ee3 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestCounter.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestCounter.java @@ -19,11 +19,12 @@ package org.apache.hadoop.hbase.util; import java.util.concurrent.CountDownLatch; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestCounter { private static final int[] THREAD_COUNTS = {1, 10, 100}; private static final int DATA_COUNT = 1000000; diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestDrainBarrier.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestDrainBarrier.java index 7a8aa2b..4542cbd 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestDrainBarrier.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestDrainBarrier.java @@ -18,13 +18,16 @@ */ package org.apache.hadoop.hbase.util; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.fail; + +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -import static org.junit.Assert.*; - -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestDrainBarrier { @Test diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestDynamicClassLoader.java 
hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestDynamicClassLoader.java index 2f26f4b..9269f2f 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestDynamicClassLoader.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestDynamicClassLoader.java @@ -27,6 +27,7 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseCommonTestingUtility; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -34,7 +35,7 @@ import org.junit.experimental.categories.Category; /** * Test TestDynamicClassLoader */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestDynamicClassLoader { private static final Log LOG = LogFactory.getLog(TestDynamicClassLoader.class); diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestEnvironmentEdgeManager.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestEnvironmentEdgeManager.java index 9850190..3c7a8dd 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestEnvironmentEdgeManager.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestEnvironmentEdgeManager.java @@ -18,20 +18,20 @@ */ package org.apache.hadoop.hbase.util; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.junit.Test; -import org.junit.experimental.categories.Category; - -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.when; -import static org.mockito.Mockito.verify; - import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertTrue; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.junit.Test; +import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestEnvironmentEdgeManager { @Test diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestFastLongHistogram.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestFastLongHistogram.java new file mode 100644 index 0000000..f5848f3 --- /dev/null +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestFastLongHistogram.java @@ -0,0 +1,100 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.util; + +import java.util.Arrays; +import java.util.Random; + +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.junit.Assert; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +/** + * Testcases for FastLongHistogram. + */ +@Category({MiscTests.class, SmallTests.class}) +public class TestFastLongHistogram { + + private static void doTestUniform(FastLongHistogram hist) { + long[] VALUES = { 0, 10, 20, 30, 40, 50 }; + double[] qs = new double[VALUES.length]; + for (int i = 0; i < qs.length; i++) { + qs[i] = (double) VALUES[i] / VALUES[VALUES.length - 1]; + } + + for (int i = 0; i < 10; i++) { + for (long v : VALUES) { + hist.add(v, 1); + } + long[] vals = hist.getQuantiles(qs); + System.out.println(Arrays.toString(vals)); + for (int j = 0; j < qs.length; j++) { + Assert.assertTrue(j + "-th element org: " + VALUES[j] + ", act: " + vals[j], + Math.abs(vals[j] - VALUES[j]) <= 10); + } + hist.reset(); + } + } + + @Test + public void testUniform() { + FastLongHistogram hist = new FastLongHistogram(100, 0, 50); + doTestUniform(hist); + } + + @Test + public void testAdaptionOfChange() { + // assumes the uniform distribution + FastLongHistogram hist = new FastLongHistogram(100, 0, 100); + + Random rand = new Random(); + + for (int n = 0; n < 10; n++) { + for (int i = 0; i < 900; i++) { + hist.add(rand.nextInt(100), 1); + } + + // add 10% outliers, this breaks the assumption, hope bin10xMax works + for (int i = 0; i < 100; i++) { + hist.add(1000 + rand.nextInt(100), 1); + } + + long[] vals = hist.getQuantiles(new double[] { 0.25, 0.75, 0.95 }); + System.out.println(Arrays.toString(vals)); + if (n == 0) { + Assert.assertTrue("Out of possible value", vals[0] >= 0 && vals[0] <= 50); + Assert.assertTrue("Out of possible value", vals[1] >= 50 && vals[1] <= 100); + Assert.assertTrue("Out of possible value", vals[2] >= 900 && vals[2] <= 1100); + } + + hist.reset(); + } + } + + @Test + public void testSameValues() { + FastLongHistogram hist = new FastLongHistogram(100); + + hist.add(50, 100); + + hist.reset(); + doTestUniform(hist); + } +} diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestKeyLocker.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestKeyLocker.java index c7eb755..9bb8a04 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestKeyLocker.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestKeyLocker.java @@ -18,14 +18,15 @@ package org.apache.hadoop.hbase.util; +import java.util.concurrent.locks.ReentrantLock; + +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -import java.util.concurrent.locks.ReentrantLock; - -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestKeyLocker { @Test public void testLocker(){ diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestLoadTestKVGenerator.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestLoadTestKVGenerator.java index 39f91ea..120f2b6 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestLoadTestKVGenerator.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestLoadTestKVGenerator.java @@ -23,13 +23,13 @@ import java.util.HashSet; import java.util.Random; import 
java.util.Set; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.test.LoadTestKVGenerator; import org.junit.Test; import org.junit.experimental.categories.Category; -import org.apache.hadoop.hbase.testclassification.SmallTests; - -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestLoadTestKVGenerator { private static final int MIN_LEN = 10; diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestOrder.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestOrder.java index 9adf95d..8029e44 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestOrder.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestOrder.java @@ -24,11 +24,12 @@ import static org.junit.Assert.assertArrayEquals; import java.util.Arrays; import java.util.Collections; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestOrder { byte[][] VALS = { Bytes.toBytes("foo"), Bytes.toBytes("bar"), Bytes.toBytes("baz") }; diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestOrderedBytes.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestOrderedBytes.java index 37a30e1..7e7c3aa 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestOrderedBytes.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestOrderedBytes.java @@ -25,11 +25,12 @@ import java.math.BigDecimal; import java.util.Arrays; import java.util.Collections; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestOrderedBytes { // integer constants for testing Numeric code paths diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestShowProperties.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestShowProperties.java index 3291963..2f16ee8 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestShowProperties.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestShowProperties.java @@ -18,20 +18,21 @@ package org.apache.hadoop.hbase.util; +import java.util.Properties; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -import java.util.Properties; - /** * This test is there to dump the properties. It allows to detect possible env issues when * executing the tests on various environment. 
*/ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestShowProperties { private static final Log LOG = LogFactory.getLog(TestShowProperties.class); diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestSimpleMutableByteRange.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestSimpleMutableByteRange.java index a024afc..88d4829 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestSimpleMutableByteRange.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestSimpleMutableByteRange.java @@ -17,12 +17,13 @@ */ package org.apache.hadoop.hbase.util; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestSimpleMutableByteRange { @Test diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestSimplePositionedMutableByteRange.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestSimplePositionedMutableByteRange.java index 4d9f680..ecc8c60 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestSimplePositionedMutableByteRange.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestSimplePositionedMutableByteRange.java @@ -19,12 +19,13 @@ package org.apache.hadoop.hbase.util; import java.nio.ByteBuffer; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestSimplePositionedMutableByteRange { @Test public void testPosition() { diff --git hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestThreads.java hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestThreads.java index f5005a2..a628e98 100644 --- hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestThreads.java +++ hbase-common/src/test/java/org/apache/hadoop/hbase/util/TestThreads.java @@ -20,15 +20,16 @@ package org.apache.hadoop.hbase.util; import static org.junit.Assert.assertTrue; +import java.util.concurrent.atomic.AtomicBoolean; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -import java.util.concurrent.atomic.AtomicBoolean; - -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestThreads { private static final Log LOG = LogFactory.getLog(TestThreads.class); diff --git hbase-examples/pom.xml hbase-examples/pom.xml index 96cc6f3..a149f52 100644 --- hbase-examples/pom.xml +++ hbase-examples/pom.xml @@ -23,7 +23,7 @@ hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. 
hbase-examples @@ -31,13 +31,34 @@ Examples of HBase usage - - org.apache.maven.plugins - maven-site-plugin - - true - - + + maven-compiler-plugin + + + default-compile + + ${java.default.compiler} + true + false + + + + default-testCompile + + ${java.default.compiler} + true + false + + + + + + org.apache.maven.plugins + maven-site-plugin + + true + + maven-assembly-plugin diff --git hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java new file mode 100644 index 0000000..9da79ac --- /dev/null +++ hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java @@ -0,0 +1,290 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.thrift; + +import sun.misc.BASE64Encoder; + +import java.io.UnsupportedEncodingException; +import java.nio.ByteBuffer; +import java.nio.charset.CharacterCodingException; +import java.nio.charset.Charset; +import java.nio.charset.CharsetDecoder; +import java.security.PrivilegedExceptionAction; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.SortedMap; +import java.util.TreeMap; + +import javax.security.auth.Subject; +import javax.security.auth.login.AppConfigurationEntry; +import javax.security.auth.login.Configuration; +import javax.security.auth.login.LoginContext; + +import org.apache.hadoop.hbase.thrift.generated.AlreadyExists; +import org.apache.hadoop.hbase.thrift.generated.ColumnDescriptor; +import org.apache.hadoop.hbase.thrift.generated.Hbase; +import org.apache.hadoop.hbase.thrift.generated.TCell; +import org.apache.hadoop.hbase.thrift.generated.TRowResult; +import org.apache.thrift.protocol.TBinaryProtocol; +import org.apache.thrift.protocol.TProtocol; +import org.apache.thrift.transport.THttpClient; +import org.apache.thrift.transport.TSocket; +import org.apache.thrift.transport.TTransport; +import org.ietf.jgss.GSSContext; +import org.ietf.jgss.GSSCredential; +import org.ietf.jgss.GSSException; +import org.ietf.jgss.GSSManager; +import org.ietf.jgss.GSSName; +import org.ietf.jgss.Oid; + +/** + * See the instructions under hbase-examples/README.txt + */ +public class HttpDoAsClient { + + static protected int port; + static protected String host; + CharsetDecoder decoder = null; + private static boolean secure = false; + + public static void main(String[] args) throws Exception { + + if (args.length < 2 || args.length > 3) { + + System.out.println("Invalid arguments!"); + System.out.println("Usage: DemoClient host port [secure=false]"); + + System.exit(-1); + } + + port = Integer.parseInt(args[1]); + host = args[0]; + if (args.length > 2) { + secure = 
Boolean.parseBoolean(args[2]); + } + + final HttpDoAsClient client = new HttpDoAsClient(); + Subject.doAs(getSubject(), + new PrivilegedExceptionAction() { + @Override + public Void run() throws Exception { + client.run(); + return null; + } + }); + } + + HttpDoAsClient() { + decoder = Charset.forName("UTF-8").newDecoder(); + } + + // Helper to translate byte[]'s to UTF8 strings + private String utf8(byte[] buf) { + try { + return decoder.decode(ByteBuffer.wrap(buf)).toString(); + } catch (CharacterCodingException e) { + return "[INVALID UTF-8]"; + } + } + + // Helper to translate strings to UTF8 bytes + private byte[] bytes(String s) { + try { + return s.getBytes("UTF-8"); + } catch (UnsupportedEncodingException e) { + e.printStackTrace(); + return null; + } + } + + private void run() throws Exception { + TTransport transport = new TSocket(host, port); + + transport.open(); + String url = "http://" + host + ":" + port; + THttpClient httpClient = new THttpClient(url); + httpClient.open(); + TProtocol protocol = new TBinaryProtocol(httpClient); + Hbase.Client client = new Hbase.Client(protocol); + + byte[] t = bytes("demo_table"); + + // + // Scan all tables, look for the demo table and delete it. + // + System.out.println("scanning tables..."); + for (ByteBuffer name : refresh(client, httpClient).getTableNames()) { + System.out.println(" found: " + utf8(name.array())); + if (utf8(name.array()).equals(utf8(t))) { + if (client.isTableEnabled(name)) { + System.out.println(" disabling table: " + utf8(name.array())); + refresh(client, httpClient).disableTable(name); + } + System.out.println(" deleting table: " + utf8(name.array())); + refresh(client, httpClient).deleteTable(name); + } + } + + + + // + // Create the demo table with two column families, entry: and unused: + // + ArrayList columns = new ArrayList(); + ColumnDescriptor col; + col = new ColumnDescriptor(); + col.name = ByteBuffer.wrap(bytes("entry:")); + col.timeToLive = Integer.MAX_VALUE; + col.maxVersions = 10; + columns.add(col); + col = new ColumnDescriptor(); + col.name = ByteBuffer.wrap(bytes("unused:")); + col.timeToLive = Integer.MAX_VALUE; + columns.add(col); + + System.out.println("creating table: " + utf8(t)); + try { + + refresh(client, httpClient).createTable(ByteBuffer.wrap(t), columns); + } catch (AlreadyExists ae) { + System.out.println("WARN: " + ae.message); + } + + System.out.println("column families in " + utf8(t) + ": "); + Map columnMap = refresh(client, httpClient) + .getColumnDescriptors(ByteBuffer.wrap(t)); + for (ColumnDescriptor col2 : columnMap.values()) { + System.out.println(" column: " + utf8(col2.name.array()) + ", maxVer: " + Integer.toString(col2.maxVersions)); + } + + transport.close(); + httpClient.close(); + } + + private Hbase.Client refresh(Hbase.Client client, THttpClient httpClient) { + if(secure) { + httpClient.setCustomHeader("doAs", "hbase"); + try { + httpClient.setCustomHeader("Authorization", generateTicket()); + } catch (GSSException e) { + e.printStackTrace(); + } + } + return client; + } + + private String generateTicket() throws GSSException { + final GSSManager manager = GSSManager.getInstance(); + // Oid for kerberos principal name + Oid krb5PrincipalOid = new Oid("1.2.840.113554.1.2.2.1"); + Oid KERB_V5_OID = new Oid("1.2.840.113554.1.2.2"); + final GSSName clientName = manager.createName("hbase/node-1.internal@INTERNAL", + krb5PrincipalOid); + final GSSCredential clientCred = manager.createCredential(clientName, + 8 * 3600, + KERB_V5_OID, + GSSCredential.INITIATE_ONLY); + + 
final GSSName serverName = manager.createName("hbase/node-1.internal@INTERNAL", krb5PrincipalOid); + + final GSSContext context = manager.createContext(serverName, + KERB_V5_OID, + clientCred, + GSSContext.DEFAULT_LIFETIME); + context.requestMutualAuth(true); + context.requestConf(false); + context.requestInteg(true); + + final byte[] outToken = context.initSecContext(new byte[0], 0, 0); + StringBuffer outputBuffer = new StringBuffer(); + outputBuffer.append("Negotiate "); + outputBuffer.append(new BASE64Encoder().encode(outToken).replace("\n", "")); + System.out.print("Ticket is: " + outputBuffer); + return outputBuffer.toString(); + } + + private void printVersions(ByteBuffer row, List versions) { + StringBuilder rowStr = new StringBuilder(); + for (TCell cell : versions) { + rowStr.append(utf8(cell.value.array())); + rowStr.append("; "); + } + System.out.println("row: " + utf8(row.array()) + ", values: " + rowStr); + } + + private void printRow(TRowResult rowResult) { + // copy values into a TreeMap to get them in sorted order + + TreeMap sorted = new TreeMap(); + for (Map.Entry column : rowResult.columns.entrySet()) { + sorted.put(utf8(column.getKey().array()), column.getValue()); + } + + StringBuilder rowStr = new StringBuilder(); + for (SortedMap.Entry entry : sorted.entrySet()) { + rowStr.append(entry.getKey()); + rowStr.append(" => "); + rowStr.append(utf8(entry.getValue().value.array())); + rowStr.append("; "); + } + System.out.println("row: " + utf8(rowResult.row.array()) + ", cols: " + rowStr); + } + + private void printRow(List rows) { + for (TRowResult rowResult : rows) { + printRow(rowResult); + } + } + + static Subject getSubject() throws Exception { + if (!secure) return new Subject(); + /* + * To authenticate the DemoClient, kinit should be invoked ahead. + * Here we try to get the Kerberos credential from the ticket cache. 
+ */ + LoginContext context = new LoginContext("", new Subject(), null, + new Configuration() { + @Override + public AppConfigurationEntry[] getAppConfigurationEntry(String name) { + Map options = new HashMap(); + options.put("useKeyTab", "false"); + options.put("storeKey", "false"); + options.put("doNotPrompt", "true"); + options.put("useTicketCache", "true"); + options.put("renewTGT", "true"); + options.put("refreshKrb5Config", "true"); + options.put("isInitiator", "true"); + String ticketCache = System.getenv("KRB5CCNAME"); + if (ticketCache != null) { + options.put("ticketCache", ticketCache); + } + options.put("debug", "true"); + + return new AppConfigurationEntry[]{ + new AppConfigurationEntry("com.sun.security.auth.module.Krb5LoginModule", + AppConfigurationEntry.LoginModuleControlFlag.REQUIRED, + options)}; + } + }); + context.login(); + return context.getSubject(); + } +} diff --git hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestBulkDeleteProtocol.java hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestBulkDeleteProtocol.java index 84b72cfae..87e655e 100644 --- hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestBulkDeleteProtocol.java +++ hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestBulkDeleteProtocol.java @@ -31,6 +31,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; @@ -55,7 +56,7 @@ import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.util.Bytes; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestBulkDeleteProtocol { private static final byte[] FAMILY1 = Bytes.toBytes("cf1"); private static final byte[] FAMILY2 = Bytes.toBytes("cf2"); diff --git hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestRowCountEndpoint.java hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestRowCountEndpoint.java index a350271..ddc5847 100644 --- hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestRowCountEndpoint.java +++ hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestRowCountEndpoint.java @@ -20,7 +20,6 @@ package org.apache.hadoop.hbase.coprocessor.example; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; @@ -30,6 +29,8 @@ import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.coprocessor.example.generated.ExampleProtos; import org.apache.hadoop.hbase.ipc.BlockingRpcCallback; import org.apache.hadoop.hbase.ipc.ServerRpcController; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.experimental.categories.Category; @@ -43,7 +44,7 @@ import static junit.framework.Assert.*; * 
Test case demonstrating client interactions with the {@link RowCountEndpoint} * sample coprocessor Service implementation. */ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestRowCountEndpoint { private static final TableName TEST_TABLE = TableName.valueOf("testrowcounter"); private static final byte[] TEST_FAMILY = Bytes.toBytes("f"); diff --git hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestZooKeeperScanPolicyObserver.java hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestZooKeeperScanPolicyObserver.java index 8237324..7691586 100644 --- hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestZooKeeperScanPolicyObserver.java +++ hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestZooKeeperScanPolicyObserver.java @@ -27,13 +27,14 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.zookeeper.ZKUtil; @@ -41,7 +42,7 @@ import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.zookeeper.ZooKeeper; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestZooKeeperScanPolicyObserver { private static final Log LOG = LogFactory.getLog(TestZooKeeperScanPolicyObserver.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-examples/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMapReduceExamples.java hbase-examples/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMapReduceExamples.java index 7c1690d..1f10cb9 100644 --- hbase-examples/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMapReduceExamples.java +++ hbase-examples/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMapReduceExamples.java @@ -19,6 +19,7 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; @@ -43,7 +44,7 @@ import java.io.PrintStream; import static org.junit.Assert.*; import static org.mockito.Mockito.*; -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestMapReduceExamples { private static HBaseTestingUtility util = new HBaseTestingUtility(); diff --git hbase-examples/src/test/java/org/apache/hadoop/hbase/types/TestPBCell.java hbase-examples/src/test/java/org/apache/hadoop/hbase/types/TestPBCell.java index 952a319..a548b8a 100644 --- 
hbase-examples/src/test/java/org/apache/hadoop/hbase/types/TestPBCell.java +++ hbase-examples/src/test/java/org/apache/hadoop/hbase/types/TestPBCell.java @@ -23,13 +23,17 @@ import static org.junit.Assert.assertTrue; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellComparator; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.CellProtos; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.PositionedByteRange; import org.apache.hadoop.hbase.util.SimplePositionedByteRange; import org.junit.Test; +import org.junit.experimental.categories.Category; +@Category({SmallTests.class, MiscTests.class}) public class TestPBCell { private static final PBCell CODEC = new PBCell(); diff --git hbase-hadoop-compat/pom.xml hbase-hadoop-compat/pom.xml index 72fb5e9..0c3c2bf 100644 --- hbase-hadoop-compat/pom.xml +++ hbase-hadoop-compat/pom.xml @@ -23,7 +23,7 @@ hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. @@ -35,7 +35,28 @@ - + + + maven-compiler-plugin + + + default-compile + + ${java.default.compiler} + true + false + + + + default-testCompile + + ${java.default.compiler} + true + false + + + + org.apache.maven.plugins maven-site-plugin diff --git hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.java hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.java index c64cc88..b27696c 100644 --- hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.java +++ hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.java @@ -246,6 +246,12 @@ public interface MetricsRegionServerSource extends BaseSource { String MAJOR_COMPACTED_CELLS_SIZE_DESC = "The total amount of data processed during major compactions, in bytes"; + String HEDGED_READS = "hedgedReads"; + String HEDGED_READS_DESC = "The number of times we started a hedged read"; + String HEDGED_READ_WINS = "hedgedReadWins"; + String HEDGED_READ_WINS_DESC = + "The number of times we started a hedged read and a hedged read won"; + String BLOCKED_REQUESTS_COUNT = "blockedRequestCount"; String BLOCKED_REQUESTS_COUNT_DESC = "The number of blocked requests because of memstore size is " + "larger than blockingMemStoreSize"; diff --git hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapper.java hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapper.java index dea2440..0f62dc6 100644 --- hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapper.java +++ hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapper.java @@ -248,6 +248,16 @@ public interface MetricsRegionServerWrapper { long getMajorCompactedCellsSize(); /** + * @return Count of hedged read operations + */ + public long getHedgedReadOps(); + + /** + * @return Count of times a hedged read beat out the primary read. 
+ */ + public long getHedgedReadWins(); + + /** * @return Count of requests blocked because the memstore size is larger than blockingMemStoreSize */ public long getBlockedRequestsCount(); diff --git hbase-hadoop2-compat/pom.xml hbase-hadoop2-compat/pom.xml index a37c5cf..73a32f3 100644 --- hbase-hadoop2-compat/pom.xml +++ hbase-hadoop2-compat/pom.xml @@ -21,7 +21,7 @@ limitations under the License. hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. @@ -34,18 +34,39 @@ limitations under the License. - - org.apache.maven.plugins - maven-site-plugin - - true - - - - - org.apache.maven.plugins - maven-source-plugin - + + maven-compiler-plugin + + + default-compile + + ${java.default.compiler} + true + false + + + + default-testCompile + + ${java.default.compiler} + true + false + + + + + + org.apache.maven.plugins + maven-site-plugin + + true + + + + + org.apache.maven.plugins + maven-source-plugin + maven-assembly-plugin diff --git hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java index cb12aa1..4cd83382 100644 --- hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java +++ hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSourceImpl.java @@ -233,6 +233,10 @@ public class MetricsRegionServerSourceImpl .addCounter(Interns.info(MAJOR_COMPACTED_CELLS_SIZE, MAJOR_COMPACTED_CELLS_SIZE_DESC), rsWrap.getMajorCompactedCellsSize()) + .addCounter(Interns.info(HEDGED_READS, HEDGED_READS_DESC), rsWrap.getHedgedReadOps()) + .addCounter(Interns.info(HEDGED_READ_WINS, HEDGED_READ_WINS_DESC), + rsWrap.getHedgedReadWins()) + .addCounter(Interns.info(BLOCKED_REQUESTS_COUNT, BLOCKED_REQUESTS_COUNT_DESC), rsWrap.getBlockedRequestsCount()) diff --git hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/master/TestMetricsMasterSourceImpl.java hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/master/TestMetricsMasterSourceImpl.java index 4cdd606..0a784eb 100644 --- hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/master/TestMetricsMasterSourceImpl.java +++ hbase-hadoop2-compat/src/test/java/org/apache/hadoop/hbase/master/TestMetricsMasterSourceImpl.java @@ -19,9 +19,6 @@ package org.apache.hadoop.hbase.master; import org.apache.hadoop.hbase.CompatibilitySingletonFactory; -import org.apache.hadoop.hbase.master.MetricsMasterSource; -import org.apache.hadoop.hbase.master.MetricsMasterSourceFactory; -import org.apache.hadoop.hbase.master.MetricsMasterSourceImpl; import org.junit.Test; import static org.junit.Assert.assertSame; diff --git hbase-it/pom.xml hbase-it/pom.xml index 678a9f4..95b47ce 100644 --- hbase-it/pom.xml +++ hbase-it/pom.xml @@ -23,7 +23,7 @@ hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. 
@@ -117,6 +117,25 @@ + + maven-compiler-plugin + + + default-compile + + ${java.default.compiler} + true + + + + default-testCompile + + ${java.default.compiler} + true + + + + org.apache.maven.plugins diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/ClusterManager.java hbase-it/src/test/java/org/apache/hadoop/hbase/ClusterManager.java index dd96e43..2d46279 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/ClusterManager.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/ClusterManager.java @@ -61,38 +61,38 @@ interface ClusterManager extends Configurable { /** * Start the service on the given host */ - void start(ServiceType service, String hostname) throws IOException; + void start(ServiceType service, String hostname, int port) throws IOException; /** * Stop the service on the given host */ - void stop(ServiceType service, String hostname) throws IOException; + void stop(ServiceType service, String hostname, int port) throws IOException; /** - * Restarts the service on the given host + * Restart the service on the given host */ - void restart(ServiceType service, String hostname) throws IOException; + void restart(ServiceType service, String hostname, int port) throws IOException; /** * Kills the service running on the given host */ - void kill(ServiceType service, String hostname) throws IOException; + void kill(ServiceType service, String hostname, int port) throws IOException; /** * Suspends the service running on the given host */ - void suspend(ServiceType service, String hostname) throws IOException; + void suspend(ServiceType service, String hostname, int port) throws IOException; /** * Resumes the services running on the given host */ - void resume(ServiceType service, String hostname) throws IOException; + void resume(ServiceType service, String hostname, int port) throws IOException; /** * Returns whether the service is running on the remote host. This only checks whether the * service still has a pid. 
*/ - boolean isRunning(ServiceType service, String hostname) throws IOException; + boolean isRunning(ServiceType service, String hostname, int port) throws IOException; /* TODO: further API ideas: * diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java index 6bc4143..4a3a64a 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/DistributedHBaseCluster.java @@ -19,8 +19,12 @@ package org.apache.hadoop.hbase; import java.io.IOException; import java.util.ArrayList; +import java.util.Comparator; import java.util.HashMap; +import java.util.HashSet; import java.util.List; +import java.util.Set; +import java.util.TreeSet; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; @@ -105,21 +109,25 @@ public class DistributedHBaseCluster extends HBaseCluster { } @Override - public void startRegionServer(String hostname) throws IOException { + public void startRegionServer(String hostname, int port) throws IOException { LOG.info("Starting RS on: " + hostname); - clusterManager.start(ServiceType.HBASE_REGIONSERVER, hostname); + clusterManager.start(ServiceType.HBASE_REGIONSERVER, hostname, port); } @Override public void killRegionServer(ServerName serverName) throws IOException { LOG.info("Aborting RS: " + serverName.getServerName()); - clusterManager.kill(ServiceType.HBASE_REGIONSERVER, serverName.getHostname()); + clusterManager.kill(ServiceType.HBASE_REGIONSERVER, + serverName.getHostname(), + serverName.getPort()); } @Override public void stopRegionServer(ServerName serverName) throws IOException { LOG.info("Stopping RS: " + serverName.getServerName()); - clusterManager.stop(ServiceType.HBASE_REGIONSERVER, serverName.getHostname()); + clusterManager.stop(ServiceType.HBASE_REGIONSERVER, + serverName.getHostname(), + serverName.getPort()); } @Override @@ -133,7 +141,7 @@ public class DistributedHBaseCluster extends HBaseCluster { long start = System.currentTimeMillis(); while ((System.currentTimeMillis() - start) < timeout) { - if (!clusterManager.isRunning(service, serverName.getHostname())) { + if (!clusterManager.isRunning(service, serverName.getHostname(), serverName.getPort())) { return; } Threads.sleep(1000); @@ -148,21 +156,21 @@ public class DistributedHBaseCluster extends HBaseCluster { } @Override - public void startMaster(String hostname) throws IOException { - LOG.info("Starting Master on: " + hostname); - clusterManager.start(ServiceType.HBASE_MASTER, hostname); + public void startMaster(String hostname, int port) throws IOException { + LOG.info("Starting Master on: " + hostname + ":" + port); + clusterManager.start(ServiceType.HBASE_MASTER, hostname, port); } @Override public void killMaster(ServerName serverName) throws IOException { LOG.info("Aborting Master: " + serverName.getServerName()); - clusterManager.kill(ServiceType.HBASE_MASTER, serverName.getHostname()); + clusterManager.kill(ServiceType.HBASE_MASTER, serverName.getHostname(), serverName.getPort()); } @Override public void stopMaster(ServerName serverName) throws IOException { LOG.info("Stopping Master: " + serverName.getServerName()); - clusterManager.stop(ServiceType.HBASE_MASTER, serverName.getHostname()); + clusterManager.stop(ServiceType.HBASE_MASTER, serverName.getHostname(), serverName.getPort()); } @Override @@ -207,13 +215,13 @@ public class DistributedHBaseCluster 
extends HBaseCluster { @Override public void waitUntilShutDown() { - //Simply wait for a few seconds for now (after issuing serverManager.kill + // Simply wait for a few seconds for now (after issuing serverManager.kill throw new RuntimeException("Not implemented yet"); } @Override public void shutdown() throws IOException { - //not sure we want this + // not sure we want this throw new RuntimeException("Not implemented yet"); } @@ -241,30 +249,35 @@ public class DistributedHBaseCluster extends HBaseCluster { protected boolean restoreMasters(ClusterStatus initial, ClusterStatus current) { List deferred = new ArrayList(); //check whether current master has changed - if (!ServerName.isSameHostnameAndPort(initial.getMaster(), current.getMaster())) { - LOG.info("Restoring cluster - Initial active master : " + initial.getMaster().getHostname() - + " has changed to : " + current.getMaster().getHostname()); + final ServerName initMaster = initial.getMaster(); + if (!ServerName.isSameHostnameAndPort(initMaster, current.getMaster())) { + LOG.info("Restoring cluster - Initial active master : " + + initMaster.getHostAndPort() + + " has changed to : " + + current.getMaster().getHostAndPort()); // If initial master is stopped, start it, before restoring the state. // It will come up as a backup master, if there is already an active master. try { - if (!clusterManager.isRunning(ServiceType.HBASE_MASTER, initial.getMaster().getHostname())) { - LOG.info("Restoring cluster - starting initial active master at:" + initial.getMaster().getHostname()); - startMaster(initial.getMaster().getHostname()); + if (!clusterManager.isRunning(ServiceType.HBASE_MASTER, + initMaster.getHostname(), initMaster.getPort())) { + LOG.info("Restoring cluster - starting initial active master at:" + + initMaster.getHostAndPort()); + startMaster(initMaster.getHostname(), initMaster.getPort()); } - //master has changed, we would like to undo this. - //1. Kill the current backups - //2. Stop current master - //3. Start backup masters + // master has changed, we would like to undo this. + // 1. Kill the current backups + // 2. Stop current master + // 3. Start backup masters for (ServerName currentBackup : current.getBackupMasters()) { - if (!ServerName.isSameHostnameAndPort(currentBackup, initial.getMaster())) { + if (!ServerName.isSameHostnameAndPort(currentBackup, initMaster)) { LOG.info("Restoring cluster - stopping backup master: " + currentBackup); stopMaster(currentBackup); } } LOG.info("Restoring cluster - stopping active master: " + current.getMaster()); stopMaster(current.getMaster()); - waitForActiveAndReadyMaster(); //wait so that active master takes over + waitForActiveAndReadyMaster(); // wait so that active master takes over } catch (IOException ex) { // if we fail to start the initial active master, we do not want to continue stopping // backup masters. 
Just keep what we have now @@ -275,9 +288,12 @@ public class DistributedHBaseCluster extends HBaseCluster { for (ServerName backup : initial.getBackupMasters()) { try { //these are not started in backup mode, but we should already have an active master - if(!clusterManager.isRunning(ServiceType.HBASE_MASTER, backup.getHostname())) { - LOG.info("Restoring cluster - starting initial backup master: " + backup.getHostname()); - startMaster(backup.getHostname()); + if (!clusterManager.isRunning(ServiceType.HBASE_MASTER, + backup.getHostname(), + backup.getPort())) { + LOG.info("Restoring cluster - starting initial backup master: " + + backup.getHostAndPort()); + startMaster(backup.getHostname(), backup.getPort()); } } catch (IOException ex) { deferred.add(ex); @@ -285,32 +301,34 @@ public class DistributedHBaseCluster extends HBaseCluster { } } else { //current master has not changed, match up backup masters - HashMap initialBackups = new HashMap(); - HashMap currentBackups = new HashMap(); + Set toStart = new TreeSet(new ServerNameIgnoreStartCodeComparator()); + Set toKill = new TreeSet(new ServerNameIgnoreStartCodeComparator()); + toStart.addAll(initial.getBackupMasters()); + toKill.addAll(current.getBackupMasters()); - for (ServerName server : initial.getBackupMasters()) { - initialBackups.put(server.getHostname(), server); - } for (ServerName server : current.getBackupMasters()) { - currentBackups.put(server.getHostname(), server); + toStart.remove(server); + } + for (ServerName server: initial.getBackupMasters()) { + toKill.remove(server); } - for (String hostname : Sets.difference(initialBackups.keySet(), currentBackups.keySet())) { + for (ServerName sn:toStart) { try { - if(!clusterManager.isRunning(ServiceType.HBASE_MASTER, hostname)) { - LOG.info("Restoring cluster - starting initial backup master: " + hostname); - startMaster(hostname); + if(!clusterManager.isRunning(ServiceType.HBASE_MASTER, sn.getHostname(), sn.getPort())) { + LOG.info("Restoring cluster - starting initial backup master: " + sn.getHostAndPort()); + startMaster(sn.getHostname(), sn.getPort()); } } catch (IOException ex) { deferred.add(ex); } } - for (String hostname : Sets.difference(currentBackups.keySet(), initialBackups.keySet())) { + for (ServerName sn:toKill) { try { - if(clusterManager.isRunning(ServiceType.HBASE_MASTER, hostname)) { - LOG.info("Restoring cluster - stopping backup master: " + hostname); - stopMaster(currentBackups.get(hostname)); + if(clusterManager.isRunning(ServiceType.HBASE_MASTER, sn.getHostname(), sn.getPort())) { + LOG.info("Restoring cluster - stopping backup master: " + sn.getHostAndPort()); + stopMaster(sn); } } catch (IOException ex) { deferred.add(ex); @@ -318,7 +336,8 @@ public class DistributedHBaseCluster extends HBaseCluster { } } if (!deferred.isEmpty()) { - LOG.warn("Restoring cluster - restoring region servers reported " + deferred.size() + " errors:"); + LOG.warn("Restoring cluster - restoring region servers reported " + + deferred.size() + " errors:"); for (int i=0; i initialServers = new HashMap(); - HashMap currentServers = new HashMap(); - for (ServerName server : initial.getServers()) { - initialServers.put(server.getHostname(), server); + private static class ServerNameIgnoreStartCodeComparator implements Comparator { + @Override + public int compare(ServerName o1, ServerName o2) { + int compare = o1.getHostname().compareToIgnoreCase(o2.getHostname()); + if (compare != 0) return compare; + compare = o1.getPort() - o2.getPort(); + if (compare != 0) return compare; + 
return 0; } + } + + protected boolean restoreRegionServers(ClusterStatus initial, ClusterStatus current) { + Set toStart = new TreeSet(new ServerNameIgnoreStartCodeComparator()); + Set toKill = new TreeSet(new ServerNameIgnoreStartCodeComparator()); + toStart.addAll(initial.getBackupMasters()); + toKill.addAll(current.getBackupMasters()); + for (ServerName server : current.getServers()) { - currentServers.put(server.getHostname(), server); + toStart.remove(server); + } + for (ServerName server: initial.getServers()) { + toKill.remove(server); } List deferred = new ArrayList(); - for (String hostname : Sets.difference(initialServers.keySet(), currentServers.keySet())) { + + for(ServerName sn:toStart) { try { - if(!clusterManager.isRunning(ServiceType.HBASE_REGIONSERVER, hostname)) { - LOG.info("Restoring cluster - starting initial region server: " + hostname); - startRegionServer(hostname); + if (!clusterManager.isRunning(ServiceType.HBASE_REGIONSERVER, + sn.getHostname(), + sn.getPort())) { + LOG.info("Restoring cluster - starting initial region server: " + sn.getHostAndPort()); + startRegionServer(sn.getHostname(), sn.getPort()); } } catch (IOException ex) { deferred.add(ex); } } - for (String hostname : Sets.difference(currentServers.keySet(), initialServers.keySet())) { + for(ServerName sn:toKill) { try { - if(clusterManager.isRunning(ServiceType.HBASE_REGIONSERVER, hostname)) { - LOG.info("Restoring cluster - stopping initial region server: " + hostname); - stopRegionServer(currentServers.get(hostname)); + if (clusterManager.isRunning(ServiceType.HBASE_REGIONSERVER, + sn.getHostname(), + sn.getPort())) { + LOG.info("Restoring cluster - stopping initial region server: " + sn.getHostAndPort()); + stopRegionServer(sn); } } catch (IOException ex) { deferred.add(ex); } } if (!deferred.isEmpty()) { - LOG.warn("Restoring cluster - restoring region servers reported " + deferred.size() + " errors:"); + LOG.warn("Restoring cluster - restoring region servers reported " + + deferred.size() + " errors:"); for (int i=0; i 0; } @Override - public void kill(ServiceType service, String hostname) throws IOException { + public void kill(ServiceType service, String hostname, int port) throws IOException { signal(service, SIGKILL, hostname); } @Override - public void suspend(ServiceType service, String hostname) throws IOException { + public void suspend(ServiceType service, String hostname, int port) throws IOException { signal(service, SIGSTOP, hostname); } @Override - public void resume(ServiceType service, String hostname) throws IOException { + public void resume(ServiceType service, String hostname, int port) throws IOException { signal(service, SIGCONT, hostname); } } diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestAcidGuarantees.java hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestAcidGuarantees.java new file mode 100644 index 0000000..41ea388 --- /dev/null +++ hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestAcidGuarantees.java @@ -0,0 +1,120 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase; + +import com.google.common.collect.Lists; +import com.google.common.collect.Sets; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.client.*; +import org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy; +import org.apache.hadoop.hbase.testclassification.IntegrationTests; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.util.StringUtils; +import org.apache.hadoop.util.ToolRunner; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +import java.io.IOException; +import java.util.List; +import java.util.Random; +import java.util.Set; +import java.util.concurrent.atomic.AtomicLong; + +/** + * This Integration Test verifies acid guarantees across column families by frequently writing + * values to rows with multiple column families and concurrently reading entire rows that expect all + * column families. + */ +@Category(IntegrationTests.class) +public class IntegrationTestAcidGuarantees extends IntegrationTestBase { + private static final int SERVER_COUNT = 1; // number of slaves for the smallest cluster + + // The unit test version. + TestAcidGuarantees tag; + + @Override + public int runTestFromCommandLine() throws Exception { + Configuration c = getConf(); + int millis = c.getInt("millis", 5000); + int numWriters = c.getInt("numWriters", 50); + int numGetters = c.getInt("numGetters", 2); + int numScanners = c.getInt("numScanners", 2); + int numUniqueRows = c.getInt("numUniqueRows", 3); + tag.runTestAtomicity(millis, numWriters, numGetters, numScanners, numUniqueRows, true); + return 0; + } + + @Override + public void setUpCluster() throws Exception { + // Set small flush size for minicluster so we exercise reseeking scanners + util = getTestingUtil(getConf()); + util.initializeCluster(SERVER_COUNT); + conf = getConf(); + conf.set(HConstants.HREGION_MEMSTORE_FLUSH_SIZE, String.valueOf(128*1024)); + // prevent aggressive region split + conf.set(HConstants.HBASE_REGION_SPLIT_POLICY_KEY, + ConstantSizeRegionSplitPolicy.class.getName()); + this.setConf(util.getConfiguration()); + + // replace the HBaseTestingUtility in the unit test with the integration test's + // IntegrationTestingUtility + tag = new TestAcidGuarantees(); + tag.setHBaseTestingUtil(util); + } + + @Override + public TableName getTablename() { + return TestAcidGuarantees.TABLE_NAME; + } + + @Override + protected Set getColumnFamilies() { + return Sets.newHashSet(String.valueOf(TestAcidGuarantees.FAMILY_A), + String.valueOf(TestAcidGuarantees.FAMILY_B), + String.valueOf(TestAcidGuarantees.FAMILY_C)); + } + + // ***** Actual integration tests + + @Test + public void testGetAtomicity() throws Exception { + tag.runTestAtomicity(20000, 5, 5, 0, 3); + } + + @Test + public void testScanAtomicity() throws Exception { + tag.runTestAtomicity(20000, 5, 0, 5, 3); + } + + @Test + public void testMixedAtomicity() throws Exception { + tag.runTestAtomicity(20000, 5, 2, 2, 3); + } + + + // **** Command line hook + + public static void main(String[] args) throws Exception { + Configuration conf = 
HBaseConfiguration.create(); + IntegrationTestingUtility.setUseDistributedCluster(conf); + int ret = ToolRunner.run(conf, new IntegrationTestAcidGuarantees(), args); + System.exit(ret); + } +} + + diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngest.java hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngest.java index 0479945..c0c54b7 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngest.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngest.java @@ -29,6 +29,7 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.LoadTestTool; +import org.apache.hadoop.hbase.util.Threads; import org.apache.hadoop.util.ToolRunner; import org.junit.Assert; import org.junit.Test; @@ -170,7 +171,13 @@ public class IntegrationTestIngest extends IntegrationTestBase { , startKey, numKeys)); if (0 != ret) { String errorMsg = "Verification failed with error code " + ret; - LOG.error(errorMsg); + LOG.error(errorMsg + " Rerunning verification after 1 minute for debugging"); + Threads.sleep(1000 * 60); + ret = loadTool.run(getArgsForLoadTestTool("-read", String.format("100:%d", readThreads) + , startKey, numKeys)); + if (0 != ret) { + LOG.error("Rerun of Verification failed with error code " + ret); + } Assert.fail(errorMsg); } startKey += numKeys; diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngestWithEncryption.java hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngestWithEncryption.java index dbaf5b8..ff8ed19 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngestWithEncryption.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngestWithEncryption.java @@ -26,21 +26,21 @@ import org.apache.hadoop.hbase.io.crypto.KeyProviderForTesting; import org.apache.hadoop.hbase.io.hfile.HFile; import org.apache.hadoop.hbase.io.hfile.HFileReaderV3; import org.apache.hadoop.hbase.io.hfile.HFileWriterV3; -import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.wal.WAL.Reader; import org.apache.hadoop.hbase.wal.WALProvider.Writer; import org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogReader; import org.apache.hadoop.hbase.regionserver.wal.SecureProtobufLogWriter; +import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.util.EncryptionTest; import org.apache.hadoop.util.ToolRunner; import org.apache.log4j.Level; import org.apache.log4j.Logger; - import org.junit.Before; import org.junit.experimental.categories.Category; @Category(IntegrationTests.class) public class IntegrationTestIngestWithEncryption extends IntegrationTestIngest { + boolean initialized = false; static { diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/Action.java hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/Action.java index f54f7dc..ebc83ff 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/Action.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/Action.java @@ -114,7 +114,7 @@ public class Action { protected void startMaster(ServerName server) throws IOException { LOG.info("Starting master:" + server.getHostname()); - cluster.startMaster(server.getHostname()); + cluster.startMaster(server.getHostname(), server.getPort()); 
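A minimal sketch of how a caller now drives the widened ClusterManager API, which addresses a service by hostname and port; ClusterManager, ServiceType and ServerName are the types shown in this patch, while the helper restartRegionServer below is illustrative only:

  // Illustrative sketch, not part of this patch: restart a region server
  // addressed by host and port via the new (service, hostname, port) methods.
  static void restartRegionServer(ClusterManager clusterManager, ServerName sn)
      throws IOException {
    if (clusterManager.isRunning(ServiceType.HBASE_REGIONSERVER,
        sn.getHostname(), sn.getPort())) {
      clusterManager.stop(ServiceType.HBASE_REGIONSERVER,
          sn.getHostname(), sn.getPort());
    }
    clusterManager.start(ServiceType.HBASE_REGIONSERVER,
        sn.getHostname(), sn.getPort());
  }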
cluster.waitForActiveAndReadyMaster(startMasterTimeout); LOG.info("Started master: " + server); } @@ -129,8 +129,8 @@ public class Action { protected void startRs(ServerName server) throws IOException { LOG.info("Starting region server:" + server.getHostname()); - cluster.startRegionServer(server.getHostname()); - cluster.waitForRegionServerToStart(server.getHostname(), startRsTimeout); + cluster.startRegionServer(server.getHostname(), server.getPort()); + cluster.waitForRegionServerToStart(server.getHostname(), server.getPort(), startRsTimeout); LOG.info("Started region server:" + server + ". Reported num of rs:" + cluster.getClusterStatus().getServersSize()); } diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/BatchRestartRsAction.java hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/BatchRestartRsAction.java index edfd9c4..b6a5b50 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/BatchRestartRsAction.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/BatchRestartRsAction.java @@ -57,11 +57,11 @@ public class BatchRestartRsAction extends RestartActionBaseAction { for (ServerName server : selectedServers) { LOG.info("Starting region server:" + server.getHostname()); - cluster.startRegionServer(server.getHostname()); + cluster.startRegionServer(server.getHostname(), server.getPort()); } for (ServerName server : selectedServers) { - cluster.waitForRegionServerToStart(server.getHostname(), PolicyBasedChaosMonkey.TIMEOUT); + cluster.waitForRegionServerToStart(server.getHostname(), server.getPort(), PolicyBasedChaosMonkey.TIMEOUT); } LOG.info("Started " + selectedServers.size() +" region servers. Reported num of rs:" + cluster.getClusterStatus().getServersSize()); diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestBulkLoad.java hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestBulkLoad.java index 0ad65c3..2d686c5 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestBulkLoad.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/mapreduce/IntegrationTestBulkLoad.java @@ -18,17 +18,10 @@ */ package org.apache.hadoop.hbase.mapreduce; -import java.io.DataInput; -import java.io.DataOutput; -import java.io.IOException; -import java.util.ArrayList; -import java.util.List; -import java.util.Map; -import java.util.Random; -import java.util.Set; -import java.util.concurrent.atomic.AtomicLong; +import static org.junit.Assert.assertEquals; import com.google.common.base.Joiner; + import org.apache.commons.cli.CommandLine; import org.apache.commons.lang.RandomStringUtils; import org.apache.commons.logging.Log; @@ -42,20 +35,23 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.IntegrationTestBase; import org.apache.hadoop.hbase.IntegrationTestingUtility; -import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.Consistency; -import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.client.RegionLocator; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Scan; +import org.apache.hadoop.hbase.client.Table; import 
org.apache.hadoop.hbase.coprocessor.BaseRegionObserver; import org.apache.hadoop.hbase.coprocessor.ObserverContext; import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.regionserver.InternalScanner; import org.apache.hadoop.hbase.regionserver.RegionScanner; +import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.RegionSplitter; @@ -79,7 +75,15 @@ import org.apache.hadoop.util.ToolRunner; import org.junit.Test; import org.junit.experimental.categories.Category; -import static org.junit.Assert.assertEquals; +import java.io.DataInput; +import java.io.DataOutput; +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Random; +import java.util.Set; +import java.util.concurrent.atomic.AtomicLong; /** * Test Bulk Load and MR on a distributed cluster. @@ -199,6 +203,9 @@ public class IntegrationTestBulkLoad extends IntegrationTestBase { HTableDescriptor desc = admin.getTableDescriptor(t); desc.addCoprocessor(SlowMeCoproScanOperations.class.getName()); HBaseTestingUtility.modifyTableSync(admin, desc); + //sleep for sometime. Hope is that the regions are closed/opened before + //the sleep returns. TODO: do this better + Thread.sleep(30000); } @Test @@ -247,7 +254,6 @@ public class IntegrationTestBulkLoad extends IntegrationTestBase { EnvironmentEdgeManager.currentTime(); Configuration conf = new Configuration(util.getConfiguration()); Path p = util.getDataTestDirOnTestFS(getTablename() + "-" + iteration); - HTable table = new HTable(conf, getTablename()); conf.setBoolean("mapreduce.map.speculative", false); conf.setBoolean("mapreduce.reduce.speculative", false); @@ -273,18 +279,23 @@ public class IntegrationTestBulkLoad extends IntegrationTestBase { // Set where to place the hfiles. FileOutputFormat.setOutputPath(job, p); + try (Connection conn = ConnectionFactory.createConnection(conf); + Admin admin = conn.getAdmin(); + Table table = conn.getTable(getTablename()); + RegionLocator regionLocator = conn.getRegionLocator(getTablename())) { + + // Configure the partitioner and other things needed for HFileOutputFormat. + HFileOutputFormat2.configureIncrementalLoad(job, table.getTableDescriptor(), regionLocator); - // Configure the partitioner and other things needed for HFileOutputFormat. - HFileOutputFormat2.configureIncrementalLoad(job, table, table); + // Run the job making sure it works. + assertEquals(true, job.waitForCompletion(true)); - // Run the job making sure it works. - assertEquals(true, job.waitForCompletion(true)); - - // Create a new loader. - LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf); + // Create a new loader. + LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf); - // Load the HFiles in. - loader.doBulkLoad(p, table); + // Load the HFiles in. + loader.doBulkLoad(p, admin, table, regionLocator); + } // Delete the files. 
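Condensed, the Connection-based bulk load flow that the hunk above switches to looks roughly like the sketch below; conf, job, the output Path p and the TableName tn are assumed to be prepared as in IntegrationTestBulkLoad, and only calls that appear in this patch are used:

  // Rough sketch of the new bulk load flow (assumptions noted above).
  try (Connection conn = ConnectionFactory.createConnection(conf);
       Admin admin = conn.getAdmin();
       Table table = conn.getTable(tn);
       RegionLocator regionLocator = conn.getRegionLocator(tn)) {
    // Partitioner/reducer setup derived from the table's region boundaries.
    HFileOutputFormat2.configureIncrementalLoad(job, table.getTableDescriptor(), regionLocator);
    if (!job.waitForCompletion(true)) {
      throw new IOException("HFile-producing job failed");
    }
    // Move the generated HFiles into the serving regions.
    new LoadIncrementalHFiles(conf).doBulkLoad(p, admin, table, regionLocator);
  }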
util.getTestFileSystem().delete(p, true); diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java index f116a66..259de65 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java @@ -48,8 +48,8 @@ import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.IntegrationTestBase; import org.apache.hadoop.hbase.IntegrationTestingUtility; -import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.fs.HFileSystem; +import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.MasterNotRunningException; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; @@ -340,7 +340,7 @@ public class IntegrationTestBigLinkedList extends IntegrationTestBase { byte[] id; long count = 0; int i; - Table table; + HTable table; long numNodes; long wrap; int width; @@ -1219,4 +1219,4 @@ public class IntegrationTestBigLinkedList extends IntegrationTestBase { int ret = ToolRunner.run(conf, new IntegrationTestBigLinkedList(), args); System.exit(ret); } -} +} \ No newline at end of file diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedListWithVisibility.java hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedListWithVisibility.java index e702805..dc517a5 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedListWithVisibility.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedListWithVisibility.java @@ -35,7 +35,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.IntegrationTestingUtility; -import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.chaos.factories.MonkeyFactory; import org.apache.hadoop.hbase.client.Admin; @@ -59,6 +58,7 @@ import org.apache.hadoop.hbase.security.visibility.Authorizations; import org.apache.hadoop.hbase.security.visibility.CellVisibility; import org.apache.hadoop.hbase.security.visibility.VisibilityClient; import org.apache.hadoop.hbase.security.visibility.VisibilityController; +import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.util.AbstractHBaseTool; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.io.BytesWritable; @@ -184,7 +184,7 @@ public class IntegrationTestBigLinkedListWithVisibility extends IntegrationTestB @Override protected void instantiateHTable(Configuration conf) throws IOException { for (int i = 0; i < DEFAULT_TABLES_COUNT; i++) { - Table table = new HTable(conf, getTableName(i)); + HTable table = new HTable(conf, getTableName(i)); table.setAutoFlushTo(true); //table.setWriteBufferSize(4 * 1024 * 1024); this.tables[i] = table; diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestLoadAndVerify.java hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestLoadAndVerify.java index 8a7e9f1..60f20a5 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestLoadAndVerify.java +++ 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestLoadAndVerify.java @@ -20,7 +20,12 @@ package org.apache.hadoop.hbase.test; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; -import com.google.common.collect.Sets; +import java.io.IOException; +import java.util.Random; +import java.util.Set; +import java.util.UUID; +import java.util.regex.Matcher; +import java.util.regex.Pattern; import org.apache.commons.cli.CommandLine; import org.apache.commons.logging.Log; @@ -44,7 +49,6 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.ScannerCallable; -import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.mapreduce.NMapInputFormat; import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil; @@ -64,12 +68,7 @@ import org.apache.hadoop.util.ToolRunner; import org.junit.Test; import org.junit.experimental.categories.Category; -import java.io.IOException; -import java.util.Random; -import java.util.Set; -import java.util.UUID; -import java.util.regex.Matcher; -import java.util.regex.Pattern; +import com.google.common.collect.Sets; /** * A large test which loads a lot of data that has internal references, and @@ -165,7 +164,7 @@ public void cleanUpCluster() throws Exception { extends Mapper { protected long recordsToWrite; - protected Table table; + protected HTable table; protected Configuration conf; protected int numBackReferencesPerRow; protected String shortTaskId; diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestTimeBoundedMultiGetRequestsWithRegionReplicas.java hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestTimeBoundedMultiGetRequestsWithRegionReplicas.java index b03a586..575febe 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestTimeBoundedMultiGetRequestsWithRegionReplicas.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestTimeBoundedMultiGetRequestsWithRegionReplicas.java @@ -25,6 +25,7 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.IntegrationTestingUtility; import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.util.LoadTestTool; +import org.apache.hadoop.hbase.util.MultiThreadedReader; import org.apache.hadoop.util.ToolRunner; import org.junit.experimental.categories.Category; diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestTimeBoundedRequestsWithRegionReplicas.java hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestTimeBoundedRequestsWithRegionReplicas.java index 9995124..b8e939f 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestTimeBoundedRequestsWithRegionReplicas.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestTimeBoundedRequestsWithRegionReplicas.java @@ -19,7 +19,9 @@ package org.apache.hadoop.hbase.test; import java.io.IOException; +import java.util.HashSet; import java.util.List; +import java.util.Set; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.ScheduledFuture; @@ -31,12 +33,15 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; +import 
org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.IntegrationTestIngest; import org.apache.hadoop.hbase.IntegrationTestingUtility; +import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.chaos.factories.MonkeyFactory; import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.ClusterConnection; import org.apache.hadoop.hbase.client.Consistency; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Result; @@ -163,7 +168,7 @@ public class IntegrationTestTimeBoundedRequestsWithRegionReplicas extends Integr long refreshTime = conf.getLong(StorefileRefresherChore.REGIONSERVER_STOREFILE_REFRESH_PERIOD, 0); if (refreshTime > 0 && refreshTime <= 10000) { LOG.info("Sleeping " + refreshTime + "ms to ensure that the data is replicated"); - Threads.sleep(refreshTime); + Threads.sleep(refreshTime*3); } else { LOG.info("Reopening the table"); admin.disableTable(getTablename()); @@ -337,6 +342,15 @@ public class IntegrationTestTimeBoundedRequestsWithRegionReplicas extends Integr if (elapsedNano > timeoutNano) { timedOutReads.incrementAndGet(); numReadFailures.addAndGet(1); // fail the test + for (Result r : results) { + LOG.error("FAILED FOR " + r); + RegionLocations rl = ((ClusterConnection)connection). + locateRegion(tableName, r.getRow(), true, true); + HRegionLocation locations[] = rl.getRegionLocations(); + for (HRegionLocation h : locations) { + LOG.error("LOCATION " + h); + } + } } } } diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestWithCellVisibilityLoadAndVerify.java hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestWithCellVisibilityLoadAndVerify.java index ea9b228..96743c8 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestWithCellVisibilityLoadAndVerify.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestWithCellVisibilityLoadAndVerify.java @@ -32,7 +32,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.IntegrationTestingUtility; -import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.Put; @@ -49,6 +48,7 @@ import org.apache.hadoop.hbase.security.visibility.Authorizations; import org.apache.hadoop.hbase.security.visibility.CellVisibility; import org.apache.hadoop.hbase.security.visibility.VisibilityClient; import org.apache.hadoop.hbase.security.visibility.VisibilityController; +import org.apache.hadoop.hbase.testclassification.IntegrationTests; import org.apache.hadoop.hbase.util.AbstractHBaseTool; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.io.BytesWritable; diff --git hbase-it/src/test/java/org/apache/hadoop/hbase/trace/IntegrationTestSendTraceRequests.java hbase-it/src/test/java/org/apache/hadoop/hbase/trace/IntegrationTestSendTraceRequests.java index c1dc5ea..b1cf57e 100644 --- hbase-it/src/test/java/org/apache/hadoop/hbase/trace/IntegrationTestSendTraceRequests.java +++ hbase-it/src/test/java/org/apache/hadoop/hbase/trace/IntegrationTestSendTraceRequests.java @@ -234,7 +234,7 @@ public class IntegrationTestSendTraceRequests extends AbstractHBaseTool { private LinkedBlockingQueue insertData() 
throws IOException, InterruptedException { LinkedBlockingQueue rowKeys = new LinkedBlockingQueue(25000); - Table ht = new HTable(util.getConfiguration(), this.tableName); + HTable ht = new HTable(util.getConfiguration(), this.tableName); byte[] value = new byte[300]; for (int x = 0; x < 5000; x++) { TraceScope traceScope = Trace.startSpan("insertData", Sampler.ALWAYS); diff --git hbase-prefix-tree/pom.xml hbase-prefix-tree/pom.xml index b965b37..a97de92 100644 --- hbase-prefix-tree/pom.xml +++ hbase-prefix-tree/pom.xml @@ -23,7 +23,7 @@ hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. @@ -33,18 +33,37 @@ - - org.apache.maven.plugins - maven-site-plugin - - true - - - - - org.apache.maven.plugins - maven-source-plugin - + + maven-compiler-plugin + + + default-compile + + ${java.default.compiler} + true + + + + default-testCompile + + ${java.default.compiler} + true + + + + + + org.apache.maven.plugins + maven-site-plugin + + true + + + + + org.apache.maven.plugins + maven-source-plugin + maven-assembly-plugin diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/keyvalue/TestKeyValueTool.java hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/keyvalue/TestKeyValueTool.java index 787d1de..9e27942 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/keyvalue/TestKeyValueTool.java +++ hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/keyvalue/TestKeyValueTool.java @@ -25,6 +25,7 @@ import java.util.List; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueTestUtil; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.codec.prefixtree.row.TestRowData; import org.apache.hadoop.hbase.codec.prefixtree.row.data.TestRowDataRandomKeyValuesWithTags; @@ -36,13 +37,13 @@ import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) @RunWith(Parameterized.class) public class TestKeyValueTool { @Parameters public static Collection parameters() { - return new TestRowData.InMemory().getAllAsObjectArray(); + return TestRowData.InMemory.getAllAsObjectArray(); } private TestRowData rows; diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/blockmeta/TestBlockMeta.java hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/blockmeta/TestBlockMeta.java index e555b76..6bf14bf 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/blockmeta/TestBlockMeta.java +++ hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/blockmeta/TestBlockMeta.java @@ -23,13 +23,14 @@ import java.io.IOException; import java.nio.ByteBuffer; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeBlockMeta; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestBlockMeta { static int BLOCK_START = 123; diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/builder/TestTokenizer.java 
hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/builder/TestTokenizer.java index 15825dc..77cc5d3 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/builder/TestTokenizer.java +++ hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/builder/TestTokenizer.java @@ -21,6 +21,7 @@ package org.apache.hadoop.hbase.codec.prefixtree.builder; import java.util.Collection; import java.util.List; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.codec.prefixtree.encode.tokenize.Tokenizer; import org.apache.hadoop.hbase.codec.prefixtree.encode.tokenize.TokenizerNode; @@ -34,7 +35,7 @@ import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; -@Category(SmallTests.class) +@Category({MiscTests.class,SmallTests.class}) @RunWith(Parameterized.class) public class TestTokenizer { diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/builder/TestTreeDepth.java hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/builder/TestTreeDepth.java index 100a637..87fcf07 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/builder/TestTreeDepth.java +++ hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/builder/TestTreeDepth.java @@ -21,6 +21,7 @@ package org.apache.hadoop.hbase.codec.prefixtree.builder; import java.util.List; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.codec.prefixtree.encode.tokenize.Tokenizer; import org.apache.hadoop.hbase.util.SimpleMutableByteRange; @@ -31,7 +32,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.Lists; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestTreeDepth { @Test diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/column/TestColumnBuilder.java hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/column/TestColumnBuilder.java index 6853438..c33a953 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/column/TestColumnBuilder.java +++ hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/column/TestColumnBuilder.java @@ -23,6 +23,7 @@ import java.io.IOException; import java.util.Collection; import java.util.List; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeBlockMeta; import org.apache.hadoop.hbase.codec.prefixtree.decode.column.ColumnReader; @@ -43,7 +44,7 @@ import org.junit.runners.Parameterized.Parameters; import com.google.common.collect.Lists; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) @RunWith(Parameterized.class) public class TestColumnBuilder { diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestPrefixTreeSearcher.java hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestPrefixTreeSearcher.java index 5ac3150..98513da 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestPrefixTreeSearcher.java +++ 
hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestPrefixTreeSearcher.java @@ -29,6 +29,7 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellComparator; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.codec.prefixtree.decode.DecoderFactory; import org.apache.hadoop.hbase.codec.prefixtree.encode.PrefixTreeEncoder; @@ -43,7 +44,7 @@ import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) @RunWith(Parameterized.class) public class TestPrefixTreeSearcher { @@ -51,7 +52,7 @@ public class TestPrefixTreeSearcher { @Parameters public static Collection parameters() { - return new TestRowData.InMemory().getAllAsObjectArray(); + return TestRowData.InMemory.getAllAsObjectArray(); } protected TestRowData rows; diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowData.java hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowData.java index 2eb897f..4bf60e0 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowData.java +++ hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowData.java @@ -57,7 +57,7 @@ public interface TestRowData { void individualSearcherAssertions(CellSearcher searcher); - class InMemory { + static class InMemory { /* * The following are different styles of data that the codec may encounter. Having these small diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowEncoder.java hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowEncoder.java index 8e515af..ec11551 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowEncoder.java +++ hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowEncoder.java @@ -28,6 +28,7 @@ import java.util.List; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeBlockMeta; import org.apache.hadoop.hbase.codec.prefixtree.decode.PrefixTreeArraySearcher; @@ -43,7 +44,7 @@ import org.junit.runners.Parameterized.Parameters; import com.google.common.collect.Lists; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) @RunWith(Parameterized.class) public class TestRowEncoder { diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/timestamp/TestTimestampEncoder.java hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/timestamp/TestTimestampEncoder.java index 2362d19..65cbcc9 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/timestamp/TestTimestampEncoder.java +++ hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/timestamp/TestTimestampEncoder.java @@ -21,6 +21,7 @@ package org.apache.hadoop.hbase.codec.prefixtree.timestamp; import java.io.IOException; import java.util.Collection; +import 
org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeBlockMeta; import org.apache.hadoop.hbase.codec.prefixtree.decode.timestamp.TimestampDecoder; @@ -32,7 +33,7 @@ import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) @RunWith(Parameterized.class) public class TestTimestampEncoder { diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/bytes/TestByteRange.java hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/bytes/TestByteRange.java index 3c9a6d3..028d604 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/bytes/TestByteRange.java +++ hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/bytes/TestByteRange.java @@ -20,13 +20,14 @@ package org.apache.hadoop.hbase.util.bytes; import junit.framework.Assert; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.ByteRange; import org.apache.hadoop.hbase.util.SimpleMutableByteRange; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestByteRange { @Test diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestFIntTool.java hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestFIntTool.java index 5851301..4d12335 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestFIntTool.java +++ hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestFIntTool.java @@ -21,6 +21,7 @@ package org.apache.hadoop.hbase.util.vint; import java.io.ByteArrayOutputStream; import java.io.IOException; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Assert; import org.junit.Test; @@ -28,7 +29,7 @@ import org.junit.experimental.categories.Category; /********************** tests *************************/ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestFIntTool { @Test public void testLeadingZeros() { diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestVIntTool.java hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestVIntTool.java index 2e2a549..b9cb372 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestVIntTool.java +++ hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestVIntTool.java @@ -23,12 +23,13 @@ import java.io.ByteArrayOutputStream; import java.io.IOException; import java.util.Random; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestVIntTool { @Test diff --git hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestVLongTool.java hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestVLongTool.java index 289b727..ed637f6 100644 --- hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestVLongTool.java +++ 
hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/util/vint/TestVLongTool.java @@ -22,13 +22,14 @@ import java.io.ByteArrayInputStream; import java.io.IOException; import java.util.Random; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.number.RandomNumberUtils; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestVLongTool { @Test diff --git hbase-protocol/pom.xml hbase-protocol/pom.xml index 50d14bc..7787c52 100644 --- hbase-protocol/pom.xml +++ hbase-protocol/pom.xml @@ -23,7 +23,7 @@ hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. @@ -176,6 +176,7 @@ MapReduce.proto Master.proto MultiRowMutation.proto + Quota.proto RegionServerStatus.proto RowProcessor.proto RPC.proto diff --git hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufMagic.java hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufMagic.java new file mode 100644 index 0000000..17bee5e --- /dev/null +++ hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufMagic.java @@ -0,0 +1,90 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.protobuf; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; + +/** + * Protobufs utility. + */ +@edu.umd.cs.findbugs.annotations.SuppressWarnings(value="DP_CREATE_CLASSLOADER_INSIDE_DO_PRIVILEGED", + justification="None. Address sometime.") +@InterfaceAudience.Private +public class ProtobufMagic { + + private ProtobufMagic() { + } + + /** + * Magic we put ahead of a serialized protobuf message. + * For example, all znode content is protobuf messages with the below magic + * for preamble. + */ + public static final byte [] PB_MAGIC = new byte [] {'P', 'B', 'U', 'F'}; + + /** + * @param bytes Bytes to check. + * @return True if passed bytes has {@link #PB_MAGIC} for a prefix. 
+ */ + public static boolean isPBMagicPrefix(final byte [] bytes) { + if (bytes == null) return false; + return isPBMagicPrefix(bytes, 0, bytes.length); + } + + /* + * Copied from Bytes.java to here + * hbase-common now depends on hbase-protocol + * Referencing Bytes.java directly would create circular dependency + */ + private static int compareTo(byte[] buffer1, int offset1, int length1, + byte[] buffer2, int offset2, int length2) { + // Short circuit equal case + if (buffer1 == buffer2 && + offset1 == offset2 && + length1 == length2) { + return 0; + } + // Bring WritableComparator code local + int end1 = offset1 + length1; + int end2 = offset2 + length2; + for (int i = offset1, j = offset2; i < end1 && j < end2; i++, j++) { + int a = (buffer1[i] & 0xff); + int b = (buffer2[j] & 0xff); + if (a != b) { + return a - b; + } + } + return length1 - length2; + } + + /** + * @param bytes Bytes to check. + * @return True if passed bytes has {@link #PB_MAGIC} for a prefix. + */ + public static boolean isPBMagicPrefix(final byte [] bytes, int offset, int len) { + if (bytes == null || len < PB_MAGIC.length) return false; + return compareTo(PB_MAGIC, 0, PB_MAGIC.length, bytes, offset, PB_MAGIC.length) == 0; + } + + /** + * @return Length of {@link #PB_MAGIC} + */ + public static int lengthOfPBMagic() { + return PB_MAGIC.length; + } +} diff --git hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java index c36662e..ab86e1e 100644 --- hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java +++ hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ClientProtos.java @@ -26210,6 +26210,482 @@ public final class ClientProtos { // @@protoc_insertion_point(class_scope:RegionAction) } + public interface RegionLoadStatsOrBuilder + extends com.google.protobuf.MessageOrBuilder { + + // optional int32 memstoreLoad = 1 [default = 0]; + /** + * optional int32 memstoreLoad = 1 [default = 0]; + * + *

+     * <pre>
+     * percent load on the memstore. Guaranteed to be positive, between 0 and 100
+     * </pre>
    + */ + boolean hasMemstoreLoad(); + /** + * optional int32 memstoreLoad = 1 [default = 0]; + * + *
+     * <pre>
+     * percent load on the memstore. Guaranteed to be positive, between 0 and 100
+     * </pre>
    + */ + int getMemstoreLoad(); + } + /** + * Protobuf type {@code RegionLoadStats} + * + *
+   * <pre>
+   * Statistics about the current load on the region
+   * </pre>
    + */ + public static final class RegionLoadStats extends + com.google.protobuf.GeneratedMessage + implements RegionLoadStatsOrBuilder { + // Use RegionLoadStats.newBuilder() to construct. + private RegionLoadStats(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private RegionLoadStats(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final RegionLoadStats defaultInstance; + public static RegionLoadStats getDefaultInstance() { + return defaultInstance; + } + + public RegionLoadStats getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private RegionLoadStats( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + int mutable_bitField0_ = 0; + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + case 8: { + bitField0_ |= 0x00000001; + memstoreLoad_ = input.readInt32(); + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.ClientProtos.internal_static_RegionLoadStats_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.ClientProtos.internal_static_RegionLoadStats_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.class, org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.Builder.class); + } + + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public RegionLoadStats parsePartialFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return new RegionLoadStats(input, extensionRegistry); + } + }; + + @java.lang.Override + public com.google.protobuf.Parser getParserForType() { + return PARSER; + } + + private int bitField0_; + // optional int32 memstoreLoad = 1 [default = 0]; + public static final int MEMSTORELOAD_FIELD_NUMBER = 1; + private int memstoreLoad_; + /** + * optional int32 memstoreLoad = 1 [default = 0]; + * + *
    +     * percent load on the memstore. Guaranteed to be positive, between 0 and 100
    +     * 
    + */ + public boolean hasMemstoreLoad() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional int32 memstoreLoad = 1 [default = 0]; + * + *
    +     * percent load on the memstore. Guaranteed to be positive, between 0 and 100
    +     * 
    + */ + public int getMemstoreLoad() { + return memstoreLoad_; + } + + private void initFields() { + memstoreLoad_ = 0; + } + private byte memoizedIsInitialized = -1; + public final boolean isInitialized() { + byte isInitialized = memoizedIsInitialized; + if (isInitialized != -1) return isInitialized == 1; + + memoizedIsInitialized = 1; + return true; + } + + public void writeTo(com.google.protobuf.CodedOutputStream output) + throws java.io.IOException { + getSerializedSize(); + if (((bitField0_ & 0x00000001) == 0x00000001)) { + output.writeInt32(1, memstoreLoad_); + } + getUnknownFields().writeTo(output); + } + + private int memoizedSerializedSize = -1; + public int getSerializedSize() { + int size = memoizedSerializedSize; + if (size != -1) return size; + + size = 0; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + size += com.google.protobuf.CodedOutputStream + .computeInt32Size(1, memstoreLoad_); + } + size += getUnknownFields().getSerializedSize(); + memoizedSerializedSize = size; + return size; + } + + private static final long serialVersionUID = 0L; + @java.lang.Override + protected java.lang.Object writeReplace() + throws java.io.ObjectStreamException { + return super.writeReplace(); + } + + @java.lang.Override + public boolean equals(final java.lang.Object obj) { + if (obj == this) { + return true; + } + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats)) { + return super.equals(obj); + } + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats other = (org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats) obj; + + boolean result = true; + result = result && (hasMemstoreLoad() == other.hasMemstoreLoad()); + if (hasMemstoreLoad()) { + result = result && (getMemstoreLoad() + == other.getMemstoreLoad()); + } + result = result && + getUnknownFields().equals(other.getUnknownFields()); + return result; + } + + private int memoizedHashCode = 0; + @java.lang.Override + public int hashCode() { + if (memoizedHashCode != 0) { + return memoizedHashCode; + } + int hash = 41; + hash = (19 * hash) + getDescriptorForType().hashCode(); + if (hasMemstoreLoad()) { + hash = (37 * hash) + MEMSTORELOAD_FIELD_NUMBER; + hash = (53 * hash) + getMemstoreLoad(); + } + hash = (29 * hash) + getUnknownFields().hashCode(); + memoizedHashCode = hash; + return hash; + } + + public static org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats parseFrom( + com.google.protobuf.ByteString data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats parseFrom( + com.google.protobuf.ByteString data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats parseFrom(byte[] data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats parseFrom( + byte[] data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats parseFrom(java.io.InputStream 
input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats parseFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats parseDelimitedFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats parseDelimitedFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats parseFrom( + com.google.protobuf.CodedInputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats parseFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + + public static Builder newBuilder() { return Builder.create(); } + public Builder newBuilderForType() { return newBuilder(); } + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats prototype) { + return newBuilder().mergeFrom(prototype); + } + public Builder toBuilder() { return newBuilder(this); } + + @java.lang.Override + protected Builder newBuilderForType( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + Builder builder = new Builder(parent); + return builder; + } + /** + * Protobuf type {@code RegionLoadStats} + * + *
    +     *
    +     * Statistics about the current load on the region
    +     * 
    + */ + public static final class Builder extends + com.google.protobuf.GeneratedMessage.Builder + implements org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStatsOrBuilder { + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.ClientProtos.internal_static_RegionLoadStats_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.ClientProtos.internal_static_RegionLoadStats_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.class, org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.Builder.class); + } + + // Construct using org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.newBuilder() + private Builder() { + maybeForceBuilderInitialization(); + } + + private Builder( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + super(parent); + maybeForceBuilderInitialization(); + } + private void maybeForceBuilderInitialization() { + if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + } + } + private static Builder create() { + return new Builder(); + } + + public Builder clear() { + super.clear(); + memstoreLoad_ = 0; + bitField0_ = (bitField0_ & ~0x00000001); + return this; + } + + public Builder clone() { + return create().mergeFrom(buildPartial()); + } + + public com.google.protobuf.Descriptors.Descriptor + getDescriptorForType() { + return org.apache.hadoop.hbase.protobuf.generated.ClientProtos.internal_static_RegionLoadStats_descriptor; + } + + public org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.getDefaultInstance(); + } + + public org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats build() { + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats result = buildPartial(); + if (!result.isInitialized()) { + throw newUninitializedMessageException(result); + } + return result; + } + + public org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats result = new org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats(this); + int from_bitField0_ = bitField0_; + int to_bitField0_ = 0; + if (((from_bitField0_ & 0x00000001) == 0x00000001)) { + to_bitField0_ |= 0x00000001; + } + result.memstoreLoad_ = memstoreLoad_; + result.bitField0_ = to_bitField0_; + onBuilt(); + return result; + } + + public Builder mergeFrom(com.google.protobuf.Message other) { + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats)other); + } else { + super.mergeFrom(other); + return this; + } + } + + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.getDefaultInstance()) return this; + if (other.hasMemstoreLoad()) { + setMemstoreLoad(other.getMemstoreLoad()); + } + this.mergeUnknownFields(other.getUnknownFields()); + return this; + } + + public final boolean isInitialized() { + return true; + } + + 
public Builder mergeFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats parsedMessage = null; + try { + parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats) e.getUnfinishedMessage(); + throw e; + } finally { + if (parsedMessage != null) { + mergeFrom(parsedMessage); + } + } + return this; + } + private int bitField0_; + + // optional int32 memstoreLoad = 1 [default = 0]; + private int memstoreLoad_ ; + /** + * optional int32 memstoreLoad = 1 [default = 0]; + * + *
    +       * percent load on the memstore. Guaranteed to be positive, between 0 and 100
    +       * 
    + */ + public boolean hasMemstoreLoad() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional int32 memstoreLoad = 1 [default = 0]; + * + *
    +       * percent load on the memstore. Guaranteed to be positive, between 0 and 100
    +       * 
    + */ + public int getMemstoreLoad() { + return memstoreLoad_; + } + /** + * optional int32 memstoreLoad = 1 [default = 0]; + * + *
    +       * percent load on the memstore. Guaranteed to be positive, between 0 and 100
    +       * 
    + */ + public Builder setMemstoreLoad(int value) { + bitField0_ |= 0x00000001; + memstoreLoad_ = value; + onChanged(); + return this; + } + /** + * optional int32 memstoreLoad = 1 [default = 0]; + * + *
    +       * percent load on the memstore. Guaranteed to be positive, between 0 and 100
    +       * 
    + */ + public Builder clearMemstoreLoad() { + bitField0_ = (bitField0_ & ~0x00000001); + memstoreLoad_ = 0; + onChanged(); + return this; + } + + // @@protoc_insertion_point(builder_scope:RegionLoadStats) + } + + static { + defaultInstance = new RegionLoadStats(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:RegionLoadStats) + } + public interface ResultOrExceptionOrBuilder extends com.google.protobuf.MessageOrBuilder { @@ -26286,6 +26762,32 @@ public final class ClientProtos { * */ org.apache.hadoop.hbase.protobuf.generated.ClientProtos.CoprocessorServiceResultOrBuilder getServiceResultOrBuilder(); + + // optional .RegionLoadStats loadStats = 5; + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +     * current load on the region
    +     * 
    + */ + boolean hasLoadStats(); + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +     * current load on the region
    +     * 
    + */ + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats getLoadStats(); + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +     * current load on the region
    +     * 
    + */ + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStatsOrBuilder getLoadStatsOrBuilder(); } /** * Protobuf type {@code ResultOrException} @@ -26389,6 +26891,19 @@ public final class ClientProtos { bitField0_ |= 0x00000008; break; } + case 42: { + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.Builder subBuilder = null; + if (((bitField0_ & 0x00000010) == 0x00000010)) { + subBuilder = loadStats_.toBuilder(); + } + loadStats_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(loadStats_); + loadStats_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000010; + break; + } } } } catch (com.google.protobuf.InvalidProtocolBufferException e) { @@ -26533,11 +27048,46 @@ public final class ClientProtos { return serviceResult_; } + // optional .RegionLoadStats loadStats = 5; + public static final int LOADSTATS_FIELD_NUMBER = 5; + private org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats loadStats_; + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +     * current load on the region
    +     * 
    + */ + public boolean hasLoadStats() { + return ((bitField0_ & 0x00000010) == 0x00000010); + } + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +     * current load on the region
    +     * 
    + */ + public org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats getLoadStats() { + return loadStats_; + } + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +     * current load on the region
    +     * 
    + */ + public org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStatsOrBuilder getLoadStatsOrBuilder() { + return loadStats_; + } + private void initFields() { index_ = 0; result_ = org.apache.hadoop.hbase.protobuf.generated.ClientProtos.Result.getDefaultInstance(); exception_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameBytesPair.getDefaultInstance(); serviceResult_ = org.apache.hadoop.hbase.protobuf.generated.ClientProtos.CoprocessorServiceResult.getDefaultInstance(); + loadStats_ = org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.getDefaultInstance(); } private byte memoizedIsInitialized = -1; public final boolean isInitialized() { @@ -26575,6 +27125,9 @@ public final class ClientProtos { if (((bitField0_ & 0x00000008) == 0x00000008)) { output.writeMessage(4, serviceResult_); } + if (((bitField0_ & 0x00000010) == 0x00000010)) { + output.writeMessage(5, loadStats_); + } getUnknownFields().writeTo(output); } @@ -26600,6 +27153,10 @@ public final class ClientProtos { size += com.google.protobuf.CodedOutputStream .computeMessageSize(4, serviceResult_); } + if (((bitField0_ & 0x00000010) == 0x00000010)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(5, loadStats_); + } size += getUnknownFields().getSerializedSize(); memoizedSerializedSize = size; return size; @@ -26643,6 +27200,11 @@ public final class ClientProtos { result = result && getServiceResult() .equals(other.getServiceResult()); } + result = result && (hasLoadStats() == other.hasLoadStats()); + if (hasLoadStats()) { + result = result && getLoadStats() + .equals(other.getLoadStats()); + } result = result && getUnknownFields().equals(other.getUnknownFields()); return result; @@ -26672,6 +27234,10 @@ public final class ClientProtos { hash = (37 * hash) + SERVICE_RESULT_FIELD_NUMBER; hash = (53 * hash) + getServiceResult().hashCode(); } + if (hasLoadStats()) { + hash = (37 * hash) + LOADSTATS_FIELD_NUMBER; + hash = (53 * hash) + getLoadStats().hashCode(); + } hash = (29 * hash) + getUnknownFields().hashCode(); memoizedHashCode = hash; return hash; @@ -26783,6 +27349,7 @@ public final class ClientProtos { getResultFieldBuilder(); getExceptionFieldBuilder(); getServiceResultFieldBuilder(); + getLoadStatsFieldBuilder(); } } private static Builder create() { @@ -26811,6 +27378,12 @@ public final class ClientProtos { serviceResultBuilder_.clear(); } bitField0_ = (bitField0_ & ~0x00000008); + if (loadStatsBuilder_ == null) { + loadStats_ = org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.getDefaultInstance(); + } else { + loadStatsBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000010); return this; } @@ -26867,6 +27440,14 @@ public final class ClientProtos { } else { result.serviceResult_ = serviceResultBuilder_.build(); } + if (((from_bitField0_ & 0x00000010) == 0x00000010)) { + to_bitField0_ |= 0x00000010; + } + if (loadStatsBuilder_ == null) { + result.loadStats_ = loadStats_; + } else { + result.loadStats_ = loadStatsBuilder_.build(); + } result.bitField0_ = to_bitField0_; onBuilt(); return result; @@ -26895,6 +27476,9 @@ public final class ClientProtos { if (other.hasServiceResult()) { mergeServiceResult(other.getServiceResult()); } + if (other.hasLoadStats()) { + mergeLoadStats(other.getLoadStats()); + } this.mergeUnknownFields(other.getUnknownFields()); return this; } @@ -27374,6 +27958,159 @@ public final class ClientProtos { return serviceResultBuilder_; } + // optional .RegionLoadStats loadStats = 5; + private 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats loadStats_ = org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats, org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.Builder, org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStatsOrBuilder> loadStatsBuilder_; + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +       * current load on the region
    +       * 
    + */ + public boolean hasLoadStats() { + return ((bitField0_ & 0x00000010) == 0x00000010); + } + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +       * current load on the region
    +       * 
    + */ + public org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats getLoadStats() { + if (loadStatsBuilder_ == null) { + return loadStats_; + } else { + return loadStatsBuilder_.getMessage(); + } + } + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +       * current load on the region
    +       * 
    + */ + public Builder setLoadStats(org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats value) { + if (loadStatsBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + loadStats_ = value; + onChanged(); + } else { + loadStatsBuilder_.setMessage(value); + } + bitField0_ |= 0x00000010; + return this; + } + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +       * current load on the region
    +       * 
    + */ + public Builder setLoadStats( + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.Builder builderForValue) { + if (loadStatsBuilder_ == null) { + loadStats_ = builderForValue.build(); + onChanged(); + } else { + loadStatsBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000010; + return this; + } + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +       * current load on the region
    +       * 
    + */ + public Builder mergeLoadStats(org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats value) { + if (loadStatsBuilder_ == null) { + if (((bitField0_ & 0x00000010) == 0x00000010) && + loadStats_ != org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.getDefaultInstance()) { + loadStats_ = + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.newBuilder(loadStats_).mergeFrom(value).buildPartial(); + } else { + loadStats_ = value; + } + onChanged(); + } else { + loadStatsBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000010; + return this; + } + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +       * current load on the region
    +       * 
    + */ + public Builder clearLoadStats() { + if (loadStatsBuilder_ == null) { + loadStats_ = org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.getDefaultInstance(); + onChanged(); + } else { + loadStatsBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000010); + return this; + } + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +       * current load on the region
    +       * 
    + */ + public org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.Builder getLoadStatsBuilder() { + bitField0_ |= 0x00000010; + onChanged(); + return getLoadStatsFieldBuilder().getBuilder(); + } + /** + * optional .RegionLoadStats loadStats = 5; + * + *
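Taken together, the generated pieces above amount to a small client/server contract: the server fills in an optional RegionLoadStats (proto shape roughly `optional int32 memstoreLoad = 1 [default = 0]`) on each ResultOrException, and the client reads it back when present. A minimal sketch of that usage, relying only on the generated accessors shown in this hunk; the wrapper class and helper names below are hypothetical, not part of the patch:

    import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats;
    import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ResultOrException;

    public class LoadStatsSketch {
      // Server side: attach the current memstore pressure (a 0-100 percentage)
      // to the per-action result before it goes back in the MultiResponse.
      static ResultOrException.Builder withLoadStats(ResultOrException.Builder result,
          int memstorePercent) {
        RegionLoadStats stats = RegionLoadStats.newBuilder()
            .setMemstoreLoad(memstorePercent)  // field 1, default 0
            .build();
        return result.setLoadStats(stats);
      }

      // Client side: loadStats is optional, so older servers simply never set it;
      // fall back to 0 (the declared default) when the field is absent.
      static int memstoreLoadOf(ResultOrException roe) {
        return roe.hasLoadStats() ? roe.getLoadStats().getMemstoreLoad() : 0;
      }
    }

Because loadStats is an optional field with tag 5, mixed old/new deployments degrade gracefully: an old client keeps the unrecognized field in its unknown-field set, and a new client sees hasLoadStats() == false against an old server.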
    +       * current load on the region
    +       * 
    + */ + public org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStatsOrBuilder getLoadStatsOrBuilder() { + if (loadStatsBuilder_ != null) { + return loadStatsBuilder_.getMessageOrBuilder(); + } else { + return loadStats_; + } + } + /** + * optional .RegionLoadStats loadStats = 5; + * + *
    +       * current load on the region
    +       * 
    + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats, org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.Builder, org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStatsOrBuilder> + getLoadStatsFieldBuilder() { + if (loadStatsBuilder_ == null) { + loadStatsBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats, org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats.Builder, org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStatsOrBuilder>( + loadStats_, + getParentForChildren(), + isClean()); + loadStats_ = null; + } + return loadStatsBuilder_; + } + // @@protoc_insertion_point(builder_scope:ResultOrException) } @@ -31067,6 +31804,11 @@ public final class ClientProtos { com.google.protobuf.GeneratedMessage.FieldAccessorTable internal_static_RegionAction_fieldAccessorTable; private static com.google.protobuf.Descriptors.Descriptor + internal_static_RegionLoadStats_descriptor; + private static + com.google.protobuf.GeneratedMessage.FieldAccessorTable + internal_static_RegionLoadStats_fieldAccessorTable; + private static com.google.protobuf.Descriptors.Descriptor internal_static_ResultOrException_descriptor; private static com.google.protobuf.GeneratedMessage.FieldAccessorTable @@ -31180,31 +31922,33 @@ public final class ClientProtos { "\030\003 \001(\0132\004.Get\022-\n\014service_call\030\004 \001(\0132\027.Cop" + "rocessorServiceCall\"Y\n\014RegionAction\022 \n\006r" + "egion\030\001 \002(\0132\020.RegionSpecifier\022\016\n\006atomic\030" + - "\002 \001(\010\022\027\n\006action\030\003 \003(\0132\007.Action\"\221\001\n\021Resul" + - "tOrException\022\r\n\005index\030\001 \001(\r\022\027\n\006result\030\002 " + - "\001(\0132\007.Result\022!\n\texception\030\003 \001(\0132\016.NameBy" + - "tesPair\0221\n\016service_result\030\004 \001(\0132\031.Coproc" + - "essorServiceResult\"f\n\022RegionActionResult", - "\022-\n\021resultOrException\030\001 \003(\0132\022.ResultOrEx" + - "ception\022!\n\texception\030\002 \001(\0132\016.NameBytesPa" + - "ir\"f\n\014MultiRequest\022#\n\014regionAction\030\001 \003(\013" + - "2\r.RegionAction\022\022\n\nnonceGroup\030\002 \001(\004\022\035\n\tc" + - "ondition\030\003 \001(\0132\n.Condition\"S\n\rMultiRespo" + - "nse\022/\n\022regionActionResult\030\001 \003(\0132\023.Region" + - "ActionResult\022\021\n\tprocessed\030\002 \001(\010*\'\n\013Consi" + - "stency\022\n\n\006STRONG\020\000\022\014\n\010TIMELINE\020\0012\205\003\n\rCli" + - "entService\022 \n\003Get\022\013.GetRequest\032\014.GetResp" + - "onse\022)\n\006Mutate\022\016.MutateRequest\032\017.MutateR", - "esponse\022#\n\004Scan\022\014.ScanRequest\032\r.ScanResp" + - "onse\022>\n\rBulkLoadHFile\022\025.BulkLoadHFileReq" + - "uest\032\026.BulkLoadHFileResponse\022F\n\013ExecServ" + - "ice\022\032.CoprocessorServiceRequest\032\033.Coproc" + - "essorServiceResponse\022R\n\027ExecRegionServer" + - "Service\022\032.CoprocessorServiceRequest\032\033.Co" + - "processorServiceResponse\022&\n\005Multi\022\r.Mult" + - "iRequest\032\016.MultiResponseBB\n*org.apache.h" + - "adoop.hbase.protobuf.generatedB\014ClientPr" + - "otosH\001\210\001\001\240\001\001" + "\002 \001(\010\022\027\n\006action\030\003 \003(\0132\007.Action\"*\n\017Region" + + "LoadStats\022\027\n\014memstoreLoad\030\001 \001(\005:\0010\"\266\001\n\021R" + + "esultOrException\022\r\n\005index\030\001 
\001(\r\022\027\n\006resul" + + "t\030\002 \001(\0132\007.Result\022!\n\texception\030\003 \001(\0132\016.Na" + + "meBytesPair\0221\n\016service_result\030\004 \001(\0132\031.Co", + "processorServiceResult\022#\n\tloadStats\030\005 \001(" + + "\0132\020.RegionLoadStats\"f\n\022RegionActionResul" + + "t\022-\n\021resultOrException\030\001 \003(\0132\022.ResultOrE" + + "xception\022!\n\texception\030\002 \001(\0132\016.NameBytesP" + + "air\"f\n\014MultiRequest\022#\n\014regionAction\030\001 \003(" + + "\0132\r.RegionAction\022\022\n\nnonceGroup\030\002 \001(\004\022\035\n\t" + + "condition\030\003 \001(\0132\n.Condition\"S\n\rMultiResp" + + "onse\022/\n\022regionActionResult\030\001 \003(\0132\023.Regio" + + "nActionResult\022\021\n\tprocessed\030\002 \001(\010*\'\n\013Cons" + + "istency\022\n\n\006STRONG\020\000\022\014\n\010TIMELINE\020\0012\205\003\n\rCl", + "ientService\022 \n\003Get\022\013.GetRequest\032\014.GetRes" + + "ponse\022)\n\006Mutate\022\016.MutateRequest\032\017.Mutate" + + "Response\022#\n\004Scan\022\014.ScanRequest\032\r.ScanRes" + + "ponse\022>\n\rBulkLoadHFile\022\025.BulkLoadHFileRe" + + "quest\032\026.BulkLoadHFileResponse\022F\n\013ExecSer" + + "vice\022\032.CoprocessorServiceRequest\032\033.Copro" + + "cessorServiceResponse\022R\n\027ExecRegionServe" + + "rService\022\032.CoprocessorServiceRequest\032\033.C" + + "oprocessorServiceResponse\022&\n\005Multi\022\r.Mul" + + "tiRequest\032\016.MultiResponseBB\n*org.apache.", + "hadoop.hbase.protobuf.generatedB\014ClientP" + + "rotosH\001\210\001\001\240\001\001" }; com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner = new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() { @@ -31361,26 +32105,32 @@ public final class ClientProtos { com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_RegionAction_descriptor, new java.lang.String[] { "Region", "Atomic", "Action", }); - internal_static_ResultOrException_descriptor = + internal_static_RegionLoadStats_descriptor = getDescriptor().getMessageTypes().get(22); + internal_static_RegionLoadStats_fieldAccessorTable = new + com.google.protobuf.GeneratedMessage.FieldAccessorTable( + internal_static_RegionLoadStats_descriptor, + new java.lang.String[] { "MemstoreLoad", }); + internal_static_ResultOrException_descriptor = + getDescriptor().getMessageTypes().get(23); internal_static_ResultOrException_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_ResultOrException_descriptor, - new java.lang.String[] { "Index", "Result", "Exception", "ServiceResult", }); + new java.lang.String[] { "Index", "Result", "Exception", "ServiceResult", "LoadStats", }); internal_static_RegionActionResult_descriptor = - getDescriptor().getMessageTypes().get(23); + getDescriptor().getMessageTypes().get(24); internal_static_RegionActionResult_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_RegionActionResult_descriptor, new java.lang.String[] { "ResultOrException", "Exception", }); internal_static_MultiRequest_descriptor = - getDescriptor().getMessageTypes().get(24); + getDescriptor().getMessageTypes().get(25); internal_static_MultiRequest_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_MultiRequest_descriptor, new java.lang.String[] { "RegionAction", "NonceGroup", "Condition", }); internal_static_MultiResponse_descriptor = - getDescriptor().getMessageTypes().get(25); + 
getDescriptor().getMessageTypes().get(26); internal_static_MultiResponse_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_MultiResponse_descriptor, diff --git hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java index 1dbce4d..2947f40 100644 --- hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java +++ hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/HBaseProtos.java @@ -139,6 +139,133 @@ public final class HBaseProtos { // @@protoc_insertion_point(enum_scope:CompareType) } + /** + * Protobuf enum {@code TimeUnit} + */ + public enum TimeUnit + implements com.google.protobuf.ProtocolMessageEnum { + /** + * NANOSECONDS = 1; + */ + NANOSECONDS(0, 1), + /** + * MICROSECONDS = 2; + */ + MICROSECONDS(1, 2), + /** + * MILLISECONDS = 3; + */ + MILLISECONDS(2, 3), + /** + * SECONDS = 4; + */ + SECONDS(3, 4), + /** + * MINUTES = 5; + */ + MINUTES(4, 5), + /** + * HOURS = 6; + */ + HOURS(5, 6), + /** + * DAYS = 7; + */ + DAYS(6, 7), + ; + + /** + * NANOSECONDS = 1; + */ + public static final int NANOSECONDS_VALUE = 1; + /** + * MICROSECONDS = 2; + */ + public static final int MICROSECONDS_VALUE = 2; + /** + * MILLISECONDS = 3; + */ + public static final int MILLISECONDS_VALUE = 3; + /** + * SECONDS = 4; + */ + public static final int SECONDS_VALUE = 4; + /** + * MINUTES = 5; + */ + public static final int MINUTES_VALUE = 5; + /** + * HOURS = 6; + */ + public static final int HOURS_VALUE = 6; + /** + * DAYS = 7; + */ + public static final int DAYS_VALUE = 7; + + + public final int getNumber() { return value; } + + public static TimeUnit valueOf(int value) { + switch (value) { + case 1: return NANOSECONDS; + case 2: return MICROSECONDS; + case 3: return MILLISECONDS; + case 4: return SECONDS; + case 5: return MINUTES; + case 6: return HOURS; + case 7: return DAYS; + default: return null; + } + } + + public static com.google.protobuf.Internal.EnumLiteMap + internalGetValueMap() { + return internalValueMap; + } + private static com.google.protobuf.Internal.EnumLiteMap + internalValueMap = + new com.google.protobuf.Internal.EnumLiteMap() { + public TimeUnit findValueByNumber(int number) { + return TimeUnit.valueOf(number); + } + }; + + public final com.google.protobuf.Descriptors.EnumValueDescriptor + getValueDescriptor() { + return getDescriptor().getValues().get(index); + } + public final com.google.protobuf.Descriptors.EnumDescriptor + getDescriptorForType() { + return getDescriptor(); + } + public static final com.google.protobuf.Descriptors.EnumDescriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.getDescriptor().getEnumTypes().get(1); + } + + private static final TimeUnit[] VALUES = values(); + + public static TimeUnit valueOf( + com.google.protobuf.Descriptors.EnumValueDescriptor desc) { + if (desc.getType() != getDescriptor()) { + throw new java.lang.IllegalArgumentException( + "EnumValueDescriptor is not for this type."); + } + return VALUES[desc.getIndex()]; + } + + private final int index; + private final int value; + + private TimeUnit(int index, int value) { + this.index = index; + this.value = value; + } + + // @@protoc_insertion_point(enum_scope:TimeUnit) + } + public interface TableNameOrBuilder extends com.google.protobuf.MessageOrBuilder { @@ -2270,138 +2397,1708 @@ public final class HBaseProtos { 
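The new TimeUnit enum introduced just above mirrors java.util.concurrent.TimeUnit name-for-name (NANOSECONDS through DAYS, wire values 1-7), so a name-based bridge is enough when crossing between the protobuf and JDK types. A small sketch under that assumption; the bridge class and method names are illustrative only and not defined by this patch:

    import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;

    public class TimeUnitBridgeSketch {
      // Protobuf enum -> JDK enum; the constant names match one-to-one.
      static java.util.concurrent.TimeUnit toJdk(HBaseProtos.TimeUnit proto) {
        return java.util.concurrent.TimeUnit.valueOf(proto.name());
      }

      // JDK enum -> protobuf enum, again by name.
      static HBaseProtos.TimeUnit toProto(java.util.concurrent.TimeUnit unit) {
        return HBaseProtos.TimeUnit.valueOf(unit.name());
      }

      public static void main(String[] args) {
        // valueOf(int) resolves a wire value: 4 is SECONDS in the generated enum.
        System.out.println(HBaseProtos.TimeUnit.valueOf(4));           // SECONDS
        System.out.println(toJdk(HBaseProtos.TimeUnit.MILLISECONDS));  // MILLISECONDS
      }
    }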
configuration_.add(builderForValue.build()); onChanged(); } else { - configurationBuilder_.addMessage(builderForValue.build()); + configurationBuilder_.addMessage(builderForValue.build()); + } + return this; + } + /** + * repeated .NameStringPair configuration = 4; + */ + public Builder addConfiguration( + int index, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.Builder builderForValue) { + if (configurationBuilder_ == null) { + ensureConfigurationIsMutable(); + configuration_.add(index, builderForValue.build()); + onChanged(); + } else { + configurationBuilder_.addMessage(index, builderForValue.build()); + } + return this; + } + /** + * repeated .NameStringPair configuration = 4; + */ + public Builder addAllConfiguration( + java.lang.Iterable values) { + if (configurationBuilder_ == null) { + ensureConfigurationIsMutable(); + super.addAll(values, configuration_); + onChanged(); + } else { + configurationBuilder_.addAllMessages(values); + } + return this; + } + /** + * repeated .NameStringPair configuration = 4; + */ + public Builder clearConfiguration() { + if (configurationBuilder_ == null) { + configuration_ = java.util.Collections.emptyList(); + bitField0_ = (bitField0_ & ~0x00000008); + onChanged(); + } else { + configurationBuilder_.clear(); + } + return this; + } + /** + * repeated .NameStringPair configuration = 4; + */ + public Builder removeConfiguration(int index) { + if (configurationBuilder_ == null) { + ensureConfigurationIsMutable(); + configuration_.remove(index); + onChanged(); + } else { + configurationBuilder_.remove(index); + } + return this; + } + /** + * repeated .NameStringPair configuration = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.Builder getConfigurationBuilder( + int index) { + return getConfigurationFieldBuilder().getBuilder(index); + } + /** + * repeated .NameStringPair configuration = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPairOrBuilder getConfigurationOrBuilder( + int index) { + if (configurationBuilder_ == null) { + return configuration_.get(index); } else { + return configurationBuilder_.getMessageOrBuilder(index); + } + } + /** + * repeated .NameStringPair configuration = 4; + */ + public java.util.List + getConfigurationOrBuilderList() { + if (configurationBuilder_ != null) { + return configurationBuilder_.getMessageOrBuilderList(); + } else { + return java.util.Collections.unmodifiableList(configuration_); + } + } + /** + * repeated .NameStringPair configuration = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.Builder addConfigurationBuilder() { + return getConfigurationFieldBuilder().addBuilder( + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.getDefaultInstance()); + } + /** + * repeated .NameStringPair configuration = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.Builder addConfigurationBuilder( + int index) { + return getConfigurationFieldBuilder().addBuilder( + index, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.getDefaultInstance()); + } + /** + * repeated .NameStringPair configuration = 4; + */ + public java.util.List + getConfigurationBuilderList() { + return getConfigurationFieldBuilder().getBuilderList(); + } + private com.google.protobuf.RepeatedFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair, 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPairOrBuilder> + getConfigurationFieldBuilder() { + if (configurationBuilder_ == null) { + configurationBuilder_ = new com.google.protobuf.RepeatedFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPairOrBuilder>( + configuration_, + ((bitField0_ & 0x00000008) == 0x00000008), + getParentForChildren(), + isClean()); + configuration_ = null; + } + return configurationBuilder_; + } + + // @@protoc_insertion_point(builder_scope:TableSchema) + } + + static { + defaultInstance = new TableSchema(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:TableSchema) + } + + public interface TableStateOrBuilder + extends com.google.protobuf.MessageOrBuilder { + + // required .TableState.State state = 1; + /** + * required .TableState.State state = 1; + * + *
    +     * This is the table's state.
    +     * 
    + */ + boolean hasState(); + /** + * required .TableState.State state = 1; + * + *
    +     * This is the table's state.
    +     * 
    + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State getState(); + + // required .TableName table = 2; + /** + * required .TableName table = 2; + */ + boolean hasTable(); + /** + * required .TableName table = 2; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName getTable(); + /** + * required .TableName table = 2; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder getTableOrBuilder(); + + // optional uint64 timestamp = 3; + /** + * optional uint64 timestamp = 3; + */ + boolean hasTimestamp(); + /** + * optional uint64 timestamp = 3; + */ + long getTimestamp(); + } + /** + * Protobuf type {@code TableState} + * + *
    +   ** Denotes state of the table 
    +   * 
    + */ + public static final class TableState extends + com.google.protobuf.GeneratedMessage + implements TableStateOrBuilder { + // Use TableState.newBuilder() to construct. + private TableState(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private TableState(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final TableState defaultInstance; + public static TableState getDefaultInstance() { + return defaultInstance; + } + + public TableState getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private TableState( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + int mutable_bitField0_ = 0; + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + case 8: { + int rawValue = input.readEnum(); + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State value = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State.valueOf(rawValue); + if (value == null) { + unknownFields.mergeVarintField(1, rawValue); + } else { + bitField0_ |= 0x00000001; + state_ = value; + } + break; + } + case 18: { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder subBuilder = null; + if (((bitField0_ & 0x00000002) == 0x00000002)) { + subBuilder = table_.toBuilder(); + } + table_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(table_); + table_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000002; + break; + } + case 24: { + bitField0_ |= 0x00000004; + timestamp_ = input.readUInt64(); + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.internal_static_TableState_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.internal_static_TableState_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.class, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.Builder.class); + } + + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public TableState parsePartialFrom( + com.google.protobuf.CodedInputStream input, + 
com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return new TableState(input, extensionRegistry); + } + }; + + @java.lang.Override + public com.google.protobuf.Parser getParserForType() { + return PARSER; + } + + /** + * Protobuf enum {@code TableState.State} + * + *
    +     * Table's current state
    +     * 
    + */ + public enum State + implements com.google.protobuf.ProtocolMessageEnum { + /** + * ENABLED = 0; + */ + ENABLED(0, 0), + /** + * DISABLED = 1; + */ + DISABLED(1, 1), + /** + * DISABLING = 2; + */ + DISABLING(2, 2), + /** + * ENABLING = 3; + */ + ENABLING(3, 3), + ; + + /** + * ENABLED = 0; + */ + public static final int ENABLED_VALUE = 0; + /** + * DISABLED = 1; + */ + public static final int DISABLED_VALUE = 1; + /** + * DISABLING = 2; + */ + public static final int DISABLING_VALUE = 2; + /** + * ENABLING = 3; + */ + public static final int ENABLING_VALUE = 3; + + + public final int getNumber() { return value; } + + public static State valueOf(int value) { + switch (value) { + case 0: return ENABLED; + case 1: return DISABLED; + case 2: return DISABLING; + case 3: return ENABLING; + default: return null; + } + } + + public static com.google.protobuf.Internal.EnumLiteMap + internalGetValueMap() { + return internalValueMap; + } + private static com.google.protobuf.Internal.EnumLiteMap + internalValueMap = + new com.google.protobuf.Internal.EnumLiteMap() { + public State findValueByNumber(int number) { + return State.valueOf(number); + } + }; + + public final com.google.protobuf.Descriptors.EnumValueDescriptor + getValueDescriptor() { + return getDescriptor().getValues().get(index); + } + public final com.google.protobuf.Descriptors.EnumDescriptor + getDescriptorForType() { + return getDescriptor(); + } + public static final com.google.protobuf.Descriptors.EnumDescriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.getDescriptor().getEnumTypes().get(0); + } + + private static final State[] VALUES = values(); + + public static State valueOf( + com.google.protobuf.Descriptors.EnumValueDescriptor desc) { + if (desc.getType() != getDescriptor()) { + throw new java.lang.IllegalArgumentException( + "EnumValueDescriptor is not for this type."); + } + return VALUES[desc.getIndex()]; + } + + private final int index; + private final int value; + + private State(int index, int value) { + this.index = index; + this.value = value; + } + + // @@protoc_insertion_point(enum_scope:TableState.State) + } + + private int bitField0_; + // required .TableState.State state = 1; + public static final int STATE_FIELD_NUMBER = 1; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State state_; + /** + * required .TableState.State state = 1; + * + *
    +     * This is the table's state.
    +     * 
    + */ + public boolean hasState() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * required .TableState.State state = 1; + * + *
    +     * This is the table's state.
    +     * 
    + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State getState() { + return state_; + } + + // required .TableName table = 2; + public static final int TABLE_FIELD_NUMBER = 2; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName table_; + /** + * required .TableName table = 2; + */ + public boolean hasTable() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * required .TableName table = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName getTable() { + return table_; + } + /** + * required .TableName table = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder getTableOrBuilder() { + return table_; + } + + // optional uint64 timestamp = 3; + public static final int TIMESTAMP_FIELD_NUMBER = 3; + private long timestamp_; + /** + * optional uint64 timestamp = 3; + */ + public boolean hasTimestamp() { + return ((bitField0_ & 0x00000004) == 0x00000004); + } + /** + * optional uint64 timestamp = 3; + */ + public long getTimestamp() { + return timestamp_; + } + + private void initFields() { + state_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State.ENABLED; + table_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance(); + timestamp_ = 0L; + } + private byte memoizedIsInitialized = -1; + public final boolean isInitialized() { + byte isInitialized = memoizedIsInitialized; + if (isInitialized != -1) return isInitialized == 1; + + if (!hasState()) { + memoizedIsInitialized = 0; + return false; + } + if (!hasTable()) { + memoizedIsInitialized = 0; + return false; + } + if (!getTable().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + memoizedIsInitialized = 1; + return true; + } + + public void writeTo(com.google.protobuf.CodedOutputStream output) + throws java.io.IOException { + getSerializedSize(); + if (((bitField0_ & 0x00000001) == 0x00000001)) { + output.writeEnum(1, state_.getNumber()); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + output.writeMessage(2, table_); + } + if (((bitField0_ & 0x00000004) == 0x00000004)) { + output.writeUInt64(3, timestamp_); + } + getUnknownFields().writeTo(output); + } + + private int memoizedSerializedSize = -1; + public int getSerializedSize() { + int size = memoizedSerializedSize; + if (size != -1) return size; + + size = 0; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + size += com.google.protobuf.CodedOutputStream + .computeEnumSize(1, state_.getNumber()); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(2, table_); + } + if (((bitField0_ & 0x00000004) == 0x00000004)) { + size += com.google.protobuf.CodedOutputStream + .computeUInt64Size(3, timestamp_); + } + size += getUnknownFields().getSerializedSize(); + memoizedSerializedSize = size; + return size; + } + + private static final long serialVersionUID = 0L; + @java.lang.Override + protected java.lang.Object writeReplace() + throws java.io.ObjectStreamException { + return super.writeReplace(); + } + + @java.lang.Override + public boolean equals(final java.lang.Object obj) { + if (obj == this) { + return true; + } + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState)) { + return super.equals(obj); + } + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState other = (org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState) obj; + + boolean result = 
true; + result = result && (hasState() == other.hasState()); + if (hasState()) { + result = result && + (getState() == other.getState()); + } + result = result && (hasTable() == other.hasTable()); + if (hasTable()) { + result = result && getTable() + .equals(other.getTable()); + } + result = result && (hasTimestamp() == other.hasTimestamp()); + if (hasTimestamp()) { + result = result && (getTimestamp() + == other.getTimestamp()); + } + result = result && + getUnknownFields().equals(other.getUnknownFields()); + return result; + } + + private int memoizedHashCode = 0; + @java.lang.Override + public int hashCode() { + if (memoizedHashCode != 0) { + return memoizedHashCode; + } + int hash = 41; + hash = (19 * hash) + getDescriptorForType().hashCode(); + if (hasState()) { + hash = (37 * hash) + STATE_FIELD_NUMBER; + hash = (53 * hash) + hashEnum(getState()); + } + if (hasTable()) { + hash = (37 * hash) + TABLE_FIELD_NUMBER; + hash = (53 * hash) + getTable().hashCode(); + } + if (hasTimestamp()) { + hash = (37 * hash) + TIMESTAMP_FIELD_NUMBER; + hash = (53 * hash) + hashLong(getTimestamp()); + } + hash = (29 * hash) + getUnknownFields().hashCode(); + memoizedHashCode = hash; + return hash; + } + + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState parseFrom( + com.google.protobuf.ByteString data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState parseFrom( + com.google.protobuf.ByteString data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState parseFrom(byte[] data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState parseFrom( + byte[] data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState parseFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState parseFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState parseDelimitedFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState parseDelimitedFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState parseFrom( + com.google.protobuf.CodedInputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState parseFrom( + com.google.protobuf.CodedInputStream input, + 
com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + + public static Builder newBuilder() { return Builder.create(); } + public Builder newBuilderForType() { return newBuilder(); } + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState prototype) { + return newBuilder().mergeFrom(prototype); + } + public Builder toBuilder() { return newBuilder(this); } + + @java.lang.Override + protected Builder newBuilderForType( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + Builder builder = new Builder(parent); + return builder; + } + /** + * Protobuf type {@code TableState} + * + *
    +     ** Denotes state of the table 
    +     * 
    + */ + public static final class Builder extends + com.google.protobuf.GeneratedMessage.Builder + implements org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableStateOrBuilder { + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.internal_static_TableState_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.internal_static_TableState_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.class, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.Builder.class); + } + + // Construct using org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.newBuilder() + private Builder() { + maybeForceBuilderInitialization(); + } + + private Builder( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + super(parent); + maybeForceBuilderInitialization(); + } + private void maybeForceBuilderInitialization() { + if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + getTableFieldBuilder(); + } + } + private static Builder create() { + return new Builder(); + } + + public Builder clear() { + super.clear(); + state_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State.ENABLED; + bitField0_ = (bitField0_ & ~0x00000001); + if (tableBuilder_ == null) { + table_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance(); + } else { + tableBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000002); + timestamp_ = 0L; + bitField0_ = (bitField0_ & ~0x00000004); + return this; + } + + public Builder clone() { + return create().mergeFrom(buildPartial()); + } + + public com.google.protobuf.Descriptors.Descriptor + getDescriptorForType() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.internal_static_TableState_descriptor; + } + + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.getDefaultInstance(); + } + + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState build() { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState result = buildPartial(); + if (!result.isInitialized()) { + throw newUninitializedMessageException(result); + } + return result; + } + + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState result = new org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState(this); + int from_bitField0_ = bitField0_; + int to_bitField0_ = 0; + if (((from_bitField0_ & 0x00000001) == 0x00000001)) { + to_bitField0_ |= 0x00000001; + } + result.state_ = state_; + if (((from_bitField0_ & 0x00000002) == 0x00000002)) { + to_bitField0_ |= 0x00000002; + } + if (tableBuilder_ == null) { + result.table_ = table_; + } else { + result.table_ = tableBuilder_.build(); + } + if (((from_bitField0_ & 0x00000004) == 0x00000004)) { + to_bitField0_ |= 0x00000004; + } + result.timestamp_ = timestamp_; + result.bitField0_ = to_bitField0_; + onBuilt(); + return result; + } + + public Builder mergeFrom(com.google.protobuf.Message other) { + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState) { + return 
mergeFrom((org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState)other); + } else { + super.mergeFrom(other); + return this; + } + } + + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.getDefaultInstance()) return this; + if (other.hasState()) { + setState(other.getState()); + } + if (other.hasTable()) { + mergeTable(other.getTable()); + } + if (other.hasTimestamp()) { + setTimestamp(other.getTimestamp()); + } + this.mergeUnknownFields(other.getUnknownFields()); + return this; + } + + public final boolean isInitialized() { + if (!hasState()) { + + return false; + } + if (!hasTable()) { + + return false; + } + if (!getTable().isInitialized()) { + + return false; + } + return true; + } + + public Builder mergeFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState parsedMessage = null; + try { + parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState) e.getUnfinishedMessage(); + throw e; + } finally { + if (parsedMessage != null) { + mergeFrom(parsedMessage); + } + } + return this; + } + private int bitField0_; + + // required .TableState.State state = 1; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State state_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State.ENABLED; + /** + * required .TableState.State state = 1; + * + *
    +       * This is the table's state.
    +       * 
    + */ + public boolean hasState() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * required .TableState.State state = 1; + * + *
    +       * This is the table's state.
    +       * 
    + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State getState() { + return state_; + } + /** + * required .TableState.State state = 1; + * + *
    +       * This is the table's state.
    +       * 
    + */ + public Builder setState(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State value) { + if (value == null) { + throw new NullPointerException(); + } + bitField0_ |= 0x00000001; + state_ = value; + onChanged(); + return this; + } + /** + * required .TableState.State state = 1; + * + *
    +       * This is the table's state.
    +       * 
    + */ + public Builder clearState() { + bitField0_ = (bitField0_ & ~0x00000001); + state_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State.ENABLED; + onChanged(); + return this; + } + + // required .TableName table = 2; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName table_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder> tableBuilder_; + /** + * required .TableName table = 2; + */ + public boolean hasTable() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * required .TableName table = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName getTable() { + if (tableBuilder_ == null) { + return table_; + } else { + return tableBuilder_.getMessage(); + } + } + /** + * required .TableName table = 2; + */ + public Builder setTable(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName value) { + if (tableBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + table_ = value; + onChanged(); + } else { + tableBuilder_.setMessage(value); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * required .TableName table = 2; + */ + public Builder setTable( + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder builderForValue) { + if (tableBuilder_ == null) { + table_ = builderForValue.build(); + onChanged(); + } else { + tableBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * required .TableName table = 2; + */ + public Builder mergeTable(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName value) { + if (tableBuilder_ == null) { + if (((bitField0_ & 0x00000002) == 0x00000002) && + table_ != org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance()) { + table_ = + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.newBuilder(table_).mergeFrom(value).buildPartial(); + } else { + table_ = value; + } + onChanged(); + } else { + tableBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * required .TableName table = 2; + */ + public Builder clearTable() { + if (tableBuilder_ == null) { + table_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance(); + onChanged(); + } else { + tableBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000002); + return this; + } + /** + * required .TableName table = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder getTableBuilder() { + bitField0_ |= 0x00000002; + onChanged(); + return getTableFieldBuilder().getBuilder(); + } + /** + * required .TableName table = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder getTableOrBuilder() { + if (tableBuilder_ != null) { + return tableBuilder_.getMessageOrBuilder(); + } else { + return table_; + } + } + /** + * required .TableName table = 2; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder> + 
getTableFieldBuilder() { + if (tableBuilder_ == null) { + tableBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder>( + table_, + getParentForChildren(), + isClean()); + table_ = null; + } + return tableBuilder_; + } + + // optional uint64 timestamp = 3; + private long timestamp_ ; + /** + * optional uint64 timestamp = 3; + */ + public boolean hasTimestamp() { + return ((bitField0_ & 0x00000004) == 0x00000004); + } + /** + * optional uint64 timestamp = 3; + */ + public long getTimestamp() { + return timestamp_; + } + /** + * optional uint64 timestamp = 3; + */ + public Builder setTimestamp(long value) { + bitField0_ |= 0x00000004; + timestamp_ = value; + onChanged(); + return this; + } + /** + * optional uint64 timestamp = 3; + */ + public Builder clearTimestamp() { + bitField0_ = (bitField0_ & ~0x00000004); + timestamp_ = 0L; + onChanged(); + return this; + } + + // @@protoc_insertion_point(builder_scope:TableState) + } + + static { + defaultInstance = new TableState(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:TableState) + } + + public interface TableDescriptorOrBuilder + extends com.google.protobuf.MessageOrBuilder { + + // required .TableSchema schema = 1; + /** + * required .TableSchema schema = 1; + */ + boolean hasSchema(); + /** + * required .TableSchema schema = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema getSchema(); + /** + * required .TableSchema schema = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchemaOrBuilder getSchemaOrBuilder(); + + // optional .TableState.State state = 2 [default = ENABLED]; + /** + * optional .TableState.State state = 2 [default = ENABLED]; + */ + boolean hasState(); + /** + * optional .TableState.State state = 2 [default = ENABLED]; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State getState(); + } + /** + * Protobuf type {@code TableDescriptor} + * + *
    +   ** On HDFS representation of table state. 
    +   * 
    + */ + public static final class TableDescriptor extends + com.google.protobuf.GeneratedMessage + implements TableDescriptorOrBuilder { + // Use TableDescriptor.newBuilder() to construct. + private TableDescriptor(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private TableDescriptor(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final TableDescriptor defaultInstance; + public static TableDescriptor getDefaultInstance() { + return defaultInstance; + } + + public TableDescriptor getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private TableDescriptor( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + int mutable_bitField0_ = 0; + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + case 10: { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.Builder subBuilder = null; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + subBuilder = schema_.toBuilder(); + } + schema_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(schema_); + schema_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000001; + break; + } + case 16: { + int rawValue = input.readEnum(); + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State value = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State.valueOf(rawValue); + if (value == null) { + unknownFields.mergeVarintField(2, rawValue); + } else { + bitField0_ |= 0x00000002; + state_ = value; + } + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.internal_static_TableDescriptor_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.internal_static_TableDescriptor_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor.class, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor.Builder.class); + } + + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public TableDescriptor parsePartialFrom( + com.google.protobuf.CodedInputStream input, + 
com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return new TableDescriptor(input, extensionRegistry); + } + }; + + @java.lang.Override + public com.google.protobuf.Parser getParserForType() { + return PARSER; + } + + private int bitField0_; + // required .TableSchema schema = 1; + public static final int SCHEMA_FIELD_NUMBER = 1; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema schema_; + /** + * required .TableSchema schema = 1; + */ + public boolean hasSchema() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * required .TableSchema schema = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema getSchema() { + return schema_; + } + /** + * required .TableSchema schema = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchemaOrBuilder getSchemaOrBuilder() { + return schema_; + } + + // optional .TableState.State state = 2 [default = ENABLED]; + public static final int STATE_FIELD_NUMBER = 2; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State state_; + /** + * optional .TableState.State state = 2 [default = ENABLED]; + */ + public boolean hasState() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional .TableState.State state = 2 [default = ENABLED]; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State getState() { + return state_; + } + + private void initFields() { + schema_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.getDefaultInstance(); + state_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State.ENABLED; + } + private byte memoizedIsInitialized = -1; + public final boolean isInitialized() { + byte isInitialized = memoizedIsInitialized; + if (isInitialized != -1) return isInitialized == 1; + + if (!hasSchema()) { + memoizedIsInitialized = 0; + return false; + } + if (!getSchema().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + memoizedIsInitialized = 1; + return true; + } + + public void writeTo(com.google.protobuf.CodedOutputStream output) + throws java.io.IOException { + getSerializedSize(); + if (((bitField0_ & 0x00000001) == 0x00000001)) { + output.writeMessage(1, schema_); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + output.writeEnum(2, state_.getNumber()); + } + getUnknownFields().writeTo(output); + } + + private int memoizedSerializedSize = -1; + public int getSerializedSize() { + int size = memoizedSerializedSize; + if (size != -1) return size; + + size = 0; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(1, schema_); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + size += com.google.protobuf.CodedOutputStream + .computeEnumSize(2, state_.getNumber()); + } + size += getUnknownFields().getSerializedSize(); + memoizedSerializedSize = size; + return size; + } + + private static final long serialVersionUID = 0L; + @java.lang.Override + protected java.lang.Object writeReplace() + throws java.io.ObjectStreamException { + return super.writeReplace(); + } + + @java.lang.Override + public boolean equals(final java.lang.Object obj) { + if (obj == this) { + return true; + } + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor)) { + return super.equals(obj); + } + 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor other = (org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor) obj; + + boolean result = true; + result = result && (hasSchema() == other.hasSchema()); + if (hasSchema()) { + result = result && getSchema() + .equals(other.getSchema()); + } + result = result && (hasState() == other.hasState()); + if (hasState()) { + result = result && + (getState() == other.getState()); + } + result = result && + getUnknownFields().equals(other.getUnknownFields()); + return result; + } + + private int memoizedHashCode = 0; + @java.lang.Override + public int hashCode() { + if (memoizedHashCode != 0) { + return memoizedHashCode; + } + int hash = 41; + hash = (19 * hash) + getDescriptorForType().hashCode(); + if (hasSchema()) { + hash = (37 * hash) + SCHEMA_FIELD_NUMBER; + hash = (53 * hash) + getSchema().hashCode(); + } + if (hasState()) { + hash = (37 * hash) + STATE_FIELD_NUMBER; + hash = (53 * hash) + hashEnum(getState()); + } + hash = (29 * hash) + getUnknownFields().hashCode(); + memoizedHashCode = hash; + return hash; + } + + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor parseFrom( + com.google.protobuf.ByteString data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor parseFrom( + com.google.protobuf.ByteString data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor parseFrom(byte[] data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor parseFrom( + byte[] data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor parseFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor parseFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor parseDelimitedFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor parseDelimitedFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor parseFrom( + com.google.protobuf.CodedInputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor parseFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite 
extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + + public static Builder newBuilder() { return Builder.create(); } + public Builder newBuilderForType() { return newBuilder(); } + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor prototype) { + return newBuilder().mergeFrom(prototype); + } + public Builder toBuilder() { return newBuilder(this); } + + @java.lang.Override + protected Builder newBuilderForType( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + Builder builder = new Builder(parent); + return builder; + } + /** + * Protobuf type {@code TableDescriptor} + * + *
    +     ** On HDFS representation of table state. 
    +     * 
    + */ + public static final class Builder extends + com.google.protobuf.GeneratedMessage.Builder + implements org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptorOrBuilder { + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.internal_static_TableDescriptor_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.internal_static_TableDescriptor_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor.class, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor.Builder.class); + } + + // Construct using org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor.newBuilder() + private Builder() { + maybeForceBuilderInitialization(); + } + + private Builder( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + super(parent); + maybeForceBuilderInitialization(); + } + private void maybeForceBuilderInitialization() { + if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + getSchemaFieldBuilder(); + } + } + private static Builder create() { + return new Builder(); + } + + public Builder clear() { + super.clear(); + if (schemaBuilder_ == null) { + schema_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.getDefaultInstance(); + } else { + schemaBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000001); + state_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State.ENABLED; + bitField0_ = (bitField0_ & ~0x00000002); + return this; + } + + public Builder clone() { + return create().mergeFrom(buildPartial()); + } + + public com.google.protobuf.Descriptors.Descriptor + getDescriptorForType() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.internal_static_TableDescriptor_descriptor; + } + + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor.getDefaultInstance(); + } + + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor build() { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor result = buildPartial(); + if (!result.isInitialized()) { + throw newUninitializedMessageException(result); + } + return result; + } + + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor result = new org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor(this); + int from_bitField0_ = bitField0_; + int to_bitField0_ = 0; + if (((from_bitField0_ & 0x00000001) == 0x00000001)) { + to_bitField0_ |= 0x00000001; + } + if (schemaBuilder_ == null) { + result.schema_ = schema_; + } else { + result.schema_ = schemaBuilder_.build(); + } + if (((from_bitField0_ & 0x00000002) == 0x00000002)) { + to_bitField0_ |= 0x00000002; + } + result.state_ = state_; + result.bitField0_ = to_bitField0_; + onBuilt(); + return result; + } + + public Builder mergeFrom(com.google.protobuf.Message other) { + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor)other); + } else { + 
super.mergeFrom(other); + return this; + } + } + + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor.getDefaultInstance()) return this; + if (other.hasSchema()) { + mergeSchema(other.getSchema()); + } + if (other.hasState()) { + setState(other.getState()); + } + this.mergeUnknownFields(other.getUnknownFields()); + return this; + } + + public final boolean isInitialized() { + if (!hasSchema()) { + + return false; + } + if (!getSchema().isInitialized()) { + + return false; + } + return true; + } + + public Builder mergeFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor parsedMessage = null; + try { + parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor) e.getUnfinishedMessage(); + throw e; + } finally { + if (parsedMessage != null) { + mergeFrom(parsedMessage); + } + } + return this; + } + private int bitField0_; + + // required .TableSchema schema = 1; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema schema_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchemaOrBuilder> schemaBuilder_; + /** + * required .TableSchema schema = 1; + */ + public boolean hasSchema() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * required .TableSchema schema = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema getSchema() { + if (schemaBuilder_ == null) { + return schema_; + } else { + return schemaBuilder_.getMessage(); } - return this; } /** - * repeated .NameStringPair configuration = 4; + * required .TableSchema schema = 1; */ - public Builder addConfiguration( - int index, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.Builder builderForValue) { - if (configurationBuilder_ == null) { - ensureConfigurationIsMutable(); - configuration_.add(index, builderForValue.build()); + public Builder setSchema(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema value) { + if (schemaBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + schema_ = value; onChanged(); } else { - configurationBuilder_.addMessage(index, builderForValue.build()); + schemaBuilder_.setMessage(value); } + bitField0_ |= 0x00000001; return this; } /** - * repeated .NameStringPair configuration = 4; + * required .TableSchema schema = 1; */ - public Builder addAllConfiguration( - java.lang.Iterable values) { - if (configurationBuilder_ == null) { - ensureConfigurationIsMutable(); - super.addAll(values, configuration_); + public Builder setSchema( + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.Builder builderForValue) { + if (schemaBuilder_ == null) { + schema_ = builderForValue.build(); onChanged(); } else { - configurationBuilder_.addAllMessages(values); + schemaBuilder_.setMessage(builderForValue.build()); } + 
bitField0_ |= 0x00000001; return this; } /** - * repeated .NameStringPair configuration = 4; + * required .TableSchema schema = 1; */ - public Builder clearConfiguration() { - if (configurationBuilder_ == null) { - configuration_ = java.util.Collections.emptyList(); - bitField0_ = (bitField0_ & ~0x00000008); + public Builder mergeSchema(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema value) { + if (schemaBuilder_ == null) { + if (((bitField0_ & 0x00000001) == 0x00000001) && + schema_ != org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.getDefaultInstance()) { + schema_ = + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.newBuilder(schema_).mergeFrom(value).buildPartial(); + } else { + schema_ = value; + } onChanged(); } else { - configurationBuilder_.clear(); + schemaBuilder_.mergeFrom(value); } + bitField0_ |= 0x00000001; return this; } /** - * repeated .NameStringPair configuration = 4; + * required .TableSchema schema = 1; */ - public Builder removeConfiguration(int index) { - if (configurationBuilder_ == null) { - ensureConfigurationIsMutable(); - configuration_.remove(index); + public Builder clearSchema() { + if (schemaBuilder_ == null) { + schema_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.getDefaultInstance(); onChanged(); } else { - configurationBuilder_.remove(index); + schemaBuilder_.clear(); } + bitField0_ = (bitField0_ & ~0x00000001); return this; } /** - * repeated .NameStringPair configuration = 4; + * required .TableSchema schema = 1; */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.Builder getConfigurationBuilder( - int index) { - return getConfigurationFieldBuilder().getBuilder(index); + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.Builder getSchemaBuilder() { + bitField0_ |= 0x00000001; + onChanged(); + return getSchemaFieldBuilder().getBuilder(); } /** - * repeated .NameStringPair configuration = 4; + * required .TableSchema schema = 1; */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPairOrBuilder getConfigurationOrBuilder( - int index) { - if (configurationBuilder_ == null) { - return configuration_.get(index); } else { - return configurationBuilder_.getMessageOrBuilder(index); + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchemaOrBuilder getSchemaOrBuilder() { + if (schemaBuilder_ != null) { + return schemaBuilder_.getMessageOrBuilder(); + } else { + return schema_; } } /** - * repeated .NameStringPair configuration = 4; + * required .TableSchema schema = 1; */ - public java.util.List - getConfigurationOrBuilderList() { - if (configurationBuilder_ != null) { - return configurationBuilder_.getMessageOrBuilderList(); - } else { - return java.util.Collections.unmodifiableList(configuration_); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchemaOrBuilder> + getSchemaFieldBuilder() { + if (schemaBuilder_ == null) { + schemaBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchemaOrBuilder>( + schema_, + getParentForChildren(), + isClean()); + schema_ = null; } + return 
schemaBuilder_; } + + // optional .TableState.State state = 2 [default = ENABLED]; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State state_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State.ENABLED; /** - * repeated .NameStringPair configuration = 4; + * optional .TableState.State state = 2 [default = ENABLED]; */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.Builder addConfigurationBuilder() { - return getConfigurationFieldBuilder().addBuilder( - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.getDefaultInstance()); + public boolean hasState() { + return ((bitField0_ & 0x00000002) == 0x00000002); } /** - * repeated .NameStringPair configuration = 4; + * optional .TableState.State state = 2 [default = ENABLED]; */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.Builder addConfigurationBuilder( - int index) { - return getConfigurationFieldBuilder().addBuilder( - index, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.getDefaultInstance()); + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State getState() { + return state_; } /** - * repeated .NameStringPair configuration = 4; + * optional .TableState.State state = 2 [default = ENABLED]; */ - public java.util.List - getConfigurationBuilderList() { - return getConfigurationFieldBuilder().getBuilderList(); - } - private com.google.protobuf.RepeatedFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPairOrBuilder> - getConfigurationFieldBuilder() { - if (configurationBuilder_ == null) { - configurationBuilder_ = new com.google.protobuf.RepeatedFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPairOrBuilder>( - configuration_, - ((bitField0_ & 0x00000008) == 0x00000008), - getParentForChildren(), - isClean()); - configuration_ = null; + public Builder setState(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State value) { + if (value == null) { + throw new NullPointerException(); } - return configurationBuilder_; + bitField0_ |= 0x00000002; + state_ = value; + onChanged(); + return this; + } + /** + * optional .TableState.State state = 2 [default = ENABLED]; + */ + public Builder clearState() { + bitField0_ = (bitField0_ & ~0x00000002); + state_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.State.ENABLED; + onChanged(); + return this; } - // @@protoc_insertion_point(builder_scope:TableSchema) + // @@protoc_insertion_point(builder_scope:TableDescriptor) } static { - defaultInstance = new TableSchema(true); + defaultInstance = new TableDescriptor(true); defaultInstance.initFields(); } - // @@protoc_insertion_point(class_scope:TableSchema) + // @@protoc_insertion_point(class_scope:TableDescriptor) } public interface ColumnFamilySchemaOrBuilder @@ -10426,6 +12123,21 @@ public final class HBaseProtos { * optional int32 version = 5; */ int getVersion(); + + // optional string owner = 6; + /** + * optional string owner = 6; + */ + boolean hasOwner(); + /** + * optional string owner = 6; + */ + java.lang.String getOwner(); + /** + * optional string owner = 6; + */ + 
com.google.protobuf.ByteString + getOwnerBytes(); } /** * Protobuf type {@code SnapshotDescription} @@ -10514,6 +12226,11 @@ public final class HBaseProtos { version_ = input.readInt32(); break; } + case 50: { + bitField0_ |= 0x00000020; + owner_ = input.readBytes(); + break; + } } } } catch (com.google.protobuf.InvalidProtocolBufferException e) { @@ -10791,12 +12508,56 @@ public final class HBaseProtos { return version_; } + // optional string owner = 6; + public static final int OWNER_FIELD_NUMBER = 6; + private java.lang.Object owner_; + /** + * optional string owner = 6; + */ + public boolean hasOwner() { + return ((bitField0_ & 0x00000020) == 0x00000020); + } + /** + * optional string owner = 6; + */ + public java.lang.String getOwner() { + java.lang.Object ref = owner_; + if (ref instanceof java.lang.String) { + return (java.lang.String) ref; + } else { + com.google.protobuf.ByteString bs = + (com.google.protobuf.ByteString) ref; + java.lang.String s = bs.toStringUtf8(); + if (bs.isValidUtf8()) { + owner_ = s; + } + return s; + } + } + /** + * optional string owner = 6; + */ + public com.google.protobuf.ByteString + getOwnerBytes() { + java.lang.Object ref = owner_; + if (ref instanceof java.lang.String) { + com.google.protobuf.ByteString b = + com.google.protobuf.ByteString.copyFromUtf8( + (java.lang.String) ref); + owner_ = b; + return b; + } else { + return (com.google.protobuf.ByteString) ref; + } + } + private void initFields() { name_ = ""; table_ = ""; creationTime_ = 0L; type_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription.Type.FLUSH; version_ = 0; + owner_ = ""; } private byte memoizedIsInitialized = -1; public final boolean isInitialized() { @@ -10829,6 +12590,9 @@ public final class HBaseProtos { if (((bitField0_ & 0x00000010) == 0x00000010)) { output.writeInt32(5, version_); } + if (((bitField0_ & 0x00000020) == 0x00000020)) { + output.writeBytes(6, getOwnerBytes()); + } getUnknownFields().writeTo(output); } @@ -10858,6 +12622,10 @@ public final class HBaseProtos { size += com.google.protobuf.CodedOutputStream .computeInt32Size(5, version_); } + if (((bitField0_ & 0x00000020) == 0x00000020)) { + size += com.google.protobuf.CodedOutputStream + .computeBytesSize(6, getOwnerBytes()); + } size += getUnknownFields().getSerializedSize(); memoizedSerializedSize = size; return size; @@ -10906,6 +12674,11 @@ public final class HBaseProtos { result = result && (getVersion() == other.getVersion()); } + result = result && (hasOwner() == other.hasOwner()); + if (hasOwner()) { + result = result && getOwner() + .equals(other.getOwner()); + } result = result && getUnknownFields().equals(other.getUnknownFields()); return result; @@ -10939,6 +12712,10 @@ public final class HBaseProtos { hash = (37 * hash) + VERSION_FIELD_NUMBER; hash = (53 * hash) + getVersion(); } + if (hasOwner()) { + hash = (37 * hash) + OWNER_FIELD_NUMBER; + hash = (53 * hash) + getOwner().hashCode(); + } hash = (29 * hash) + getUnknownFields().hashCode(); memoizedHashCode = hash; return hash; @@ -11063,6 +12840,8 @@ public final class HBaseProtos { bitField0_ = (bitField0_ & ~0x00000008); version_ = 0; bitField0_ = (bitField0_ & ~0x00000010); + owner_ = ""; + bitField0_ = (bitField0_ & ~0x00000020); return this; } @@ -11111,6 +12890,10 @@ public final class HBaseProtos { to_bitField0_ |= 0x00000010; } result.version_ = version_; + if (((from_bitField0_ & 0x00000020) == 0x00000020)) { + to_bitField0_ |= 0x00000020; + } + result.owner_ = owner_; result.bitField0_ = to_bitField0_; onBuilt(); 
return result; @@ -11146,6 +12929,11 @@ public final class HBaseProtos { if (other.hasVersion()) { setVersion(other.getVersion()); } + if (other.hasOwner()) { + bitField0_ |= 0x00000020; + owner_ = other.owner_; + onChanged(); + } this.mergeUnknownFields(other.getUnknownFields()); return this; } @@ -11451,6 +13239,80 @@ public final class HBaseProtos { return this; } + // optional string owner = 6; + private java.lang.Object owner_ = ""; + /** + * optional string owner = 6; + */ + public boolean hasOwner() { + return ((bitField0_ & 0x00000020) == 0x00000020); + } + /** + * optional string owner = 6; + */ + public java.lang.String getOwner() { + java.lang.Object ref = owner_; + if (!(ref instanceof java.lang.String)) { + java.lang.String s = ((com.google.protobuf.ByteString) ref) + .toStringUtf8(); + owner_ = s; + return s; + } else { + return (java.lang.String) ref; + } + } + /** + * optional string owner = 6; + */ + public com.google.protobuf.ByteString + getOwnerBytes() { + java.lang.Object ref = owner_; + if (ref instanceof String) { + com.google.protobuf.ByteString b = + com.google.protobuf.ByteString.copyFromUtf8( + (java.lang.String) ref); + owner_ = b; + return b; + } else { + return (com.google.protobuf.ByteString) ref; + } + } + /** + * optional string owner = 6; + */ + public Builder setOwner( + java.lang.String value) { + if (value == null) { + throw new NullPointerException(); + } + bitField0_ |= 0x00000020; + owner_ = value; + onChanged(); + return this; + } + /** + * optional string owner = 6; + */ + public Builder clearOwner() { + bitField0_ = (bitField0_ & ~0x00000020); + owner_ = getDefaultInstance().getOwner(); + onChanged(); + return this; + } + /** + * optional string owner = 6; + */ + public Builder setOwnerBytes( + com.google.protobuf.ByteString value) { + if (value == null) { + throw new NullPointerException(); + } + bitField0_ |= 0x00000020; + owner_ = value; + onChanged(); + return this; + } + // @@protoc_insertion_point(builder_scope:SnapshotDescription) } @@ -16207,6 +18069,16 @@ public final class HBaseProtos { com.google.protobuf.GeneratedMessage.FieldAccessorTable internal_static_TableSchema_fieldAccessorTable; private static com.google.protobuf.Descriptors.Descriptor + internal_static_TableState_descriptor; + private static + com.google.protobuf.GeneratedMessage.FieldAccessorTable + internal_static_TableState_fieldAccessorTable; + private static com.google.protobuf.Descriptors.Descriptor + internal_static_TableDescriptor_descriptor; + private static + com.google.protobuf.GeneratedMessage.FieldAccessorTable + internal_static_TableDescriptor_fieldAccessorTable; + private static com.google.protobuf.Descriptors.Descriptor internal_static_ColumnFamilySchema_descriptor; private static com.google.protobuf.GeneratedMessage.FieldAccessorTable @@ -16321,46 +18193,56 @@ public final class HBaseProtos { "Name\022#\n\nattributes\030\002 \003(\0132\017.BytesBytesPai" + "r\022,\n\017column_families\030\003 \003(\0132\023.ColumnFamil" + "ySchema\022&\n\rconfiguration\030\004 \003(\0132\017.NameStr" + - "ingPair\"o\n\022ColumnFamilySchema\022\014\n\004name\030\001 " + - "\002(\014\022#\n\nattributes\030\002 \003(\0132\017.BytesBytesPair" + - "\022&\n\rconfiguration\030\003 \003(\0132\017.NameStringPair" + - "\"\232\001\n\nRegionInfo\022\021\n\tregion_id\030\001 \002(\004\022\036\n\nta", - "ble_name\030\002 \002(\0132\n.TableName\022\021\n\tstart_key\030" + - "\003 \001(\014\022\017\n\007end_key\030\004 \001(\014\022\017\n\007offline\030\005 \001(\010\022" + - 
"\r\n\005split\030\006 \001(\010\022\025\n\nreplica_id\030\007 \001(\005:\0010\"1\n" + - "\014FavoredNodes\022!\n\014favored_node\030\001 \003(\0132\013.Se" + - "rverName\"\225\001\n\017RegionSpecifier\0222\n\004type\030\001 \002" + - "(\0162$.RegionSpecifier.RegionSpecifierType" + - "\022\r\n\005value\030\002 \002(\014\"?\n\023RegionSpecifierType\022\017" + - "\n\013REGION_NAME\020\001\022\027\n\023ENCODED_REGION_NAME\020\002" + - "\"%\n\tTimeRange\022\014\n\004from\030\001 \001(\004\022\n\n\002to\030\002 \001(\004\"" + - "A\n\nServerName\022\021\n\thost_name\030\001 \002(\t\022\014\n\004port", - "\030\002 \001(\r\022\022\n\nstart_code\030\003 \001(\004\"\033\n\013Coprocesso" + - "r\022\014\n\004name\030\001 \002(\t\"-\n\016NameStringPair\022\014\n\004nam" + - "e\030\001 \002(\t\022\r\n\005value\030\002 \002(\t\",\n\rNameBytesPair\022" + - "\014\n\004name\030\001 \002(\t\022\r\n\005value\030\002 \001(\014\"/\n\016BytesByt" + - "esPair\022\r\n\005first\030\001 \002(\014\022\016\n\006second\030\002 \002(\014\",\n" + - "\rNameInt64Pair\022\014\n\004name\030\001 \001(\t\022\r\n\005value\030\002 " + - "\001(\003\"\275\001\n\023SnapshotDescription\022\014\n\004name\030\001 \002(" + - "\t\022\r\n\005table\030\002 \001(\t\022\030\n\rcreation_time\030\003 \001(\003:" + - "\0010\022.\n\004type\030\004 \001(\0162\031.SnapshotDescription.T" + - "ype:\005FLUSH\022\017\n\007version\030\005 \001(\005\".\n\004Type\022\014\n\010D", - "ISABLED\020\000\022\t\n\005FLUSH\020\001\022\r\n\tSKIPFLUSH\020\002\"}\n\024P" + - "rocedureDescription\022\021\n\tsignature\030\001 \002(\t\022\020" + - "\n\010instance\030\002 \001(\t\022\030\n\rcreation_time\030\003 \001(\003:" + - "\0010\022&\n\rconfiguration\030\004 \003(\0132\017.NameStringPa" + - "ir\"\n\n\010EmptyMsg\"\033\n\007LongMsg\022\020\n\010long_msg\030\001 " + - "\002(\003\"\037\n\tDoubleMsg\022\022\n\ndouble_msg\030\001 \002(\001\"\'\n\r" + - "BigDecimalMsg\022\026\n\016bigdecimal_msg\030\001 \002(\014\"5\n" + - "\004UUID\022\026\n\016least_sig_bits\030\001 \002(\004\022\025\n\rmost_si" + - "g_bits\030\002 \002(\004\"K\n\023NamespaceDescriptor\022\014\n\004n" + - "ame\030\001 \002(\014\022&\n\rconfiguration\030\002 \003(\0132\017.NameS", - "tringPair\"$\n\020RegionServerInfo\022\020\n\010infoPor" + - "t\030\001 \001(\005*r\n\013CompareType\022\010\n\004LESS\020\000\022\021\n\rLESS" + - "_OR_EQUAL\020\001\022\t\n\005EQUAL\020\002\022\r\n\tNOT_EQUAL\020\003\022\024\n" + - "\020GREATER_OR_EQUAL\020\004\022\013\n\007GREATER\020\005\022\t\n\005NO_O" + - "P\020\006B>\n*org.apache.hadoop.hbase.protobuf." + - "generatedB\013HBaseProtosH\001\240\001\001" + "ingPair\"\235\001\n\nTableState\022 \n\005state\030\001 \002(\0162\021." 
+ + "TableState.State\022\031\n\005table\030\002 \002(\0132\n.TableN" + + "ame\022\021\n\ttimestamp\030\003 \001(\004\"?\n\005State\022\013\n\007ENABL" + + "ED\020\000\022\014\n\010DISABLED\020\001\022\r\n\tDISABLING\020\002\022\014\n\010ENA", + "BLING\020\003\"Z\n\017TableDescriptor\022\034\n\006schema\030\001 \002" + + "(\0132\014.TableSchema\022)\n\005state\030\002 \001(\0162\021.TableS" + + "tate.State:\007ENABLED\"o\n\022ColumnFamilySchem" + + "a\022\014\n\004name\030\001 \002(\014\022#\n\nattributes\030\002 \003(\0132\017.By" + + "tesBytesPair\022&\n\rconfiguration\030\003 \003(\0132\017.Na" + + "meStringPair\"\232\001\n\nRegionInfo\022\021\n\tregion_id" + + "\030\001 \002(\004\022\036\n\ntable_name\030\002 \002(\0132\n.TableName\022\021" + + "\n\tstart_key\030\003 \001(\014\022\017\n\007end_key\030\004 \001(\014\022\017\n\007of" + + "fline\030\005 \001(\010\022\r\n\005split\030\006 \001(\010\022\025\n\nreplica_id" + + "\030\007 \001(\005:\0010\"1\n\014FavoredNodes\022!\n\014favored_nod", + "e\030\001 \003(\0132\013.ServerName\"\225\001\n\017RegionSpecifier" + + "\0222\n\004type\030\001 \002(\0162$.RegionSpecifier.RegionS" + + "pecifierType\022\r\n\005value\030\002 \002(\014\"?\n\023RegionSpe" + + "cifierType\022\017\n\013REGION_NAME\020\001\022\027\n\023ENCODED_R" + + "EGION_NAME\020\002\"%\n\tTimeRange\022\014\n\004from\030\001 \001(\004\022" + + "\n\n\002to\030\002 \001(\004\"A\n\nServerName\022\021\n\thost_name\030\001" + + " \002(\t\022\014\n\004port\030\002 \001(\r\022\022\n\nstart_code\030\003 \001(\004\"\033" + + "\n\013Coprocessor\022\014\n\004name\030\001 \002(\t\"-\n\016NameStrin" + + "gPair\022\014\n\004name\030\001 \002(\t\022\r\n\005value\030\002 \002(\t\",\n\rNa" + + "meBytesPair\022\014\n\004name\030\001 \002(\t\022\r\n\005value\030\002 \001(\014", + "\"/\n\016BytesBytesPair\022\r\n\005first\030\001 \002(\014\022\016\n\006sec" + + "ond\030\002 \002(\014\",\n\rNameInt64Pair\022\014\n\004name\030\001 \001(\t" + + "\022\r\n\005value\030\002 \001(\003\"\314\001\n\023SnapshotDescription\022" + + "\014\n\004name\030\001 \002(\t\022\r\n\005table\030\002 \001(\t\022\030\n\rcreation" + + "_time\030\003 \001(\003:\0010\022.\n\004type\030\004 \001(\0162\031.SnapshotD" + + "escription.Type:\005FLUSH\022\017\n\007version\030\005 \001(\005\022" + + "\r\n\005owner\030\006 \001(\t\".\n\004Type\022\014\n\010DISABLED\020\000\022\t\n\005" + + "FLUSH\020\001\022\r\n\tSKIPFLUSH\020\002\"}\n\024ProcedureDescr" + + "iption\022\021\n\tsignature\030\001 \002(\t\022\020\n\010instance\030\002 " + + "\001(\t\022\030\n\rcreation_time\030\003 \001(\003:\0010\022&\n\rconfigu", + "ration\030\004 \003(\0132\017.NameStringPair\"\n\n\010EmptyMs" + + "g\"\033\n\007LongMsg\022\020\n\010long_msg\030\001 \002(\003\"\037\n\tDouble" + + "Msg\022\022\n\ndouble_msg\030\001 \002(\001\"\'\n\rBigDecimalMsg" + + "\022\026\n\016bigdecimal_msg\030\001 \002(\014\"5\n\004UUID\022\026\n\016leas" + + "t_sig_bits\030\001 \002(\004\022\025\n\rmost_sig_bits\030\002 \002(\004\"" + + "K\n\023NamespaceDescriptor\022\014\n\004name\030\001 \002(\014\022&\n\r" + + "configuration\030\002 \003(\0132\017.NameStringPair\"$\n\020" + + "RegionServerInfo\022\020\n\010infoPort\030\001 \001(\005*r\n\013Co" + + "mpareType\022\010\n\004LESS\020\000\022\021\n\rLESS_OR_EQUAL\020\001\022\t" + + "\n\005EQUAL\020\002\022\r\n\tNOT_EQUAL\020\003\022\024\n\020GREATER_OR_E", + "QUAL\020\004\022\013\n\007GREATER\020\005\022\t\n\005NO_OP\020\006*n\n\010TimeUn" + + "it\022\017\n\013NANOSECONDS\020\001\022\020\n\014MICROSECONDS\020\002\022\020\n" + + 
"\014MILLISECONDS\020\003\022\013\n\007SECONDS\020\004\022\013\n\007MINUTES\020" + + "\005\022\t\n\005HOURS\020\006\022\010\n\004DAYS\020\007B>\n*org.apache.had" + + "oop.hbase.protobuf.generatedB\013HBaseProto" + + "sH\001\240\001\001" }; com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner = new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() { @@ -16379,122 +18261,134 @@ public final class HBaseProtos { com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_TableSchema_descriptor, new java.lang.String[] { "TableName", "Attributes", "ColumnFamilies", "Configuration", }); - internal_static_ColumnFamilySchema_descriptor = + internal_static_TableState_descriptor = getDescriptor().getMessageTypes().get(2); + internal_static_TableState_fieldAccessorTable = new + com.google.protobuf.GeneratedMessage.FieldAccessorTable( + internal_static_TableState_descriptor, + new java.lang.String[] { "State", "Table", "Timestamp", }); + internal_static_TableDescriptor_descriptor = + getDescriptor().getMessageTypes().get(3); + internal_static_TableDescriptor_fieldAccessorTable = new + com.google.protobuf.GeneratedMessage.FieldAccessorTable( + internal_static_TableDescriptor_descriptor, + new java.lang.String[] { "Schema", "State", }); + internal_static_ColumnFamilySchema_descriptor = + getDescriptor().getMessageTypes().get(4); internal_static_ColumnFamilySchema_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_ColumnFamilySchema_descriptor, new java.lang.String[] { "Name", "Attributes", "Configuration", }); internal_static_RegionInfo_descriptor = - getDescriptor().getMessageTypes().get(3); + getDescriptor().getMessageTypes().get(5); internal_static_RegionInfo_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_RegionInfo_descriptor, new java.lang.String[] { "RegionId", "TableName", "StartKey", "EndKey", "Offline", "Split", "ReplicaId", }); internal_static_FavoredNodes_descriptor = - getDescriptor().getMessageTypes().get(4); + getDescriptor().getMessageTypes().get(6); internal_static_FavoredNodes_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_FavoredNodes_descriptor, new java.lang.String[] { "FavoredNode", }); internal_static_RegionSpecifier_descriptor = - getDescriptor().getMessageTypes().get(5); + getDescriptor().getMessageTypes().get(7); internal_static_RegionSpecifier_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_RegionSpecifier_descriptor, new java.lang.String[] { "Type", "Value", }); internal_static_TimeRange_descriptor = - getDescriptor().getMessageTypes().get(6); + getDescriptor().getMessageTypes().get(8); internal_static_TimeRange_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_TimeRange_descriptor, new java.lang.String[] { "From", "To", }); internal_static_ServerName_descriptor = - getDescriptor().getMessageTypes().get(7); + getDescriptor().getMessageTypes().get(9); internal_static_ServerName_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_ServerName_descriptor, new java.lang.String[] { "HostName", "Port", "StartCode", }); internal_static_Coprocessor_descriptor = - getDescriptor().getMessageTypes().get(8); + getDescriptor().getMessageTypes().get(10); internal_static_Coprocessor_fieldAccessorTable = new 
com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_Coprocessor_descriptor, new java.lang.String[] { "Name", }); internal_static_NameStringPair_descriptor = - getDescriptor().getMessageTypes().get(9); + getDescriptor().getMessageTypes().get(11); internal_static_NameStringPair_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_NameStringPair_descriptor, new java.lang.String[] { "Name", "Value", }); internal_static_NameBytesPair_descriptor = - getDescriptor().getMessageTypes().get(10); + getDescriptor().getMessageTypes().get(12); internal_static_NameBytesPair_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_NameBytesPair_descriptor, new java.lang.String[] { "Name", "Value", }); internal_static_BytesBytesPair_descriptor = - getDescriptor().getMessageTypes().get(11); + getDescriptor().getMessageTypes().get(13); internal_static_BytesBytesPair_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_BytesBytesPair_descriptor, new java.lang.String[] { "First", "Second", }); internal_static_NameInt64Pair_descriptor = - getDescriptor().getMessageTypes().get(12); + getDescriptor().getMessageTypes().get(14); internal_static_NameInt64Pair_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_NameInt64Pair_descriptor, new java.lang.String[] { "Name", "Value", }); internal_static_SnapshotDescription_descriptor = - getDescriptor().getMessageTypes().get(13); + getDescriptor().getMessageTypes().get(15); internal_static_SnapshotDescription_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_SnapshotDescription_descriptor, - new java.lang.String[] { "Name", "Table", "CreationTime", "Type", "Version", }); + new java.lang.String[] { "Name", "Table", "CreationTime", "Type", "Version", "Owner", }); internal_static_ProcedureDescription_descriptor = - getDescriptor().getMessageTypes().get(14); + getDescriptor().getMessageTypes().get(16); internal_static_ProcedureDescription_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_ProcedureDescription_descriptor, new java.lang.String[] { "Signature", "Instance", "CreationTime", "Configuration", }); internal_static_EmptyMsg_descriptor = - getDescriptor().getMessageTypes().get(15); + getDescriptor().getMessageTypes().get(17); internal_static_EmptyMsg_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_EmptyMsg_descriptor, new java.lang.String[] { }); internal_static_LongMsg_descriptor = - getDescriptor().getMessageTypes().get(16); + getDescriptor().getMessageTypes().get(18); internal_static_LongMsg_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_LongMsg_descriptor, new java.lang.String[] { "LongMsg", }); internal_static_DoubleMsg_descriptor = - getDescriptor().getMessageTypes().get(17); + getDescriptor().getMessageTypes().get(19); internal_static_DoubleMsg_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_DoubleMsg_descriptor, new java.lang.String[] { "DoubleMsg", }); internal_static_BigDecimalMsg_descriptor = - getDescriptor().getMessageTypes().get(18); + getDescriptor().getMessageTypes().get(20); internal_static_BigDecimalMsg_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( 
internal_static_BigDecimalMsg_descriptor, new java.lang.String[] { "BigdecimalMsg", }); internal_static_UUID_descriptor = - getDescriptor().getMessageTypes().get(19); + getDescriptor().getMessageTypes().get(21); internal_static_UUID_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_UUID_descriptor, new java.lang.String[] { "LeastSigBits", "MostSigBits", }); internal_static_NamespaceDescriptor_descriptor = - getDescriptor().getMessageTypes().get(20); + getDescriptor().getMessageTypes().get(22); internal_static_NamespaceDescriptor_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_NamespaceDescriptor_descriptor, new java.lang.String[] { "Name", "Configuration", }); internal_static_RegionServerInfo_descriptor = - getDescriptor().getMessageTypes().get(21); + getDescriptor().getMessageTypes().get(23); internal_static_RegionServerInfo_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_RegionServerInfo_descriptor, diff --git hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterProtos.java hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterProtos.java index 6821a81..4f7f954 100644 --- hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterProtos.java +++ hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterProtos.java @@ -37504,28 +37504,42 @@ public final class MasterProtos { // @@protoc_insertion_point(class_scope:GetTableNamesResponse) } - public interface GetClusterStatusRequestOrBuilder + public interface GetTableStateRequestOrBuilder extends com.google.protobuf.MessageOrBuilder { + + // required .TableName table_name = 1; + /** + * required .TableName table_name = 1; + */ + boolean hasTableName(); + /** + * required .TableName table_name = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName getTableName(); + /** + * required .TableName table_name = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder getTableNameOrBuilder(); } /** - * Protobuf type {@code GetClusterStatusRequest} + * Protobuf type {@code GetTableStateRequest} */ - public static final class GetClusterStatusRequest extends + public static final class GetTableStateRequest extends com.google.protobuf.GeneratedMessage - implements GetClusterStatusRequestOrBuilder { - // Use GetClusterStatusRequest.newBuilder() to construct. - private GetClusterStatusRequest(com.google.protobuf.GeneratedMessage.Builder builder) { + implements GetTableStateRequestOrBuilder { + // Use GetTableStateRequest.newBuilder() to construct. 
+ private GetTableStateRequest(com.google.protobuf.GeneratedMessage.Builder builder) { super(builder); this.unknownFields = builder.getUnknownFields(); } - private GetClusterStatusRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + private GetTableStateRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } - private static final GetClusterStatusRequest defaultInstance; - public static GetClusterStatusRequest getDefaultInstance() { + private static final GetTableStateRequest defaultInstance; + public static GetTableStateRequest getDefaultInstance() { return defaultInstance; } - public GetClusterStatusRequest getDefaultInstanceForType() { + public GetTableStateRequest getDefaultInstanceForType() { return defaultInstance; } @@ -37535,11 +37549,12 @@ public final class MasterProtos { getUnknownFields() { return this.unknownFields; } - private GetClusterStatusRequest( + private GetTableStateRequest( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { initFields(); + int mutable_bitField0_ = 0; com.google.protobuf.UnknownFieldSet.Builder unknownFields = com.google.protobuf.UnknownFieldSet.newBuilder(); try { @@ -37557,6 +37572,19 @@ public final class MasterProtos { } break; } + case 10: { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder subBuilder = null; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + subBuilder = tableName_.toBuilder(); + } + tableName_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(tableName_); + tableName_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000001; + break; + } } } } catch (com.google.protobuf.InvalidProtocolBufferException e) { @@ -37571,38 +37599,70 @@ public final class MasterProtos { } public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusRequest_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetTableStateRequest_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusRequest_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetTableStateRequest_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest.Builder.class); } - public static com.google.protobuf.Parser PARSER = - new com.google.protobuf.AbstractParser() { - public GetClusterStatusRequest parsePartialFrom( + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public GetTableStateRequest parsePartialFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { - return new 
GetClusterStatusRequest(input, extensionRegistry); + return new GetTableStateRequest(input, extensionRegistry); } }; @java.lang.Override - public com.google.protobuf.Parser getParserForType() { + public com.google.protobuf.Parser getParserForType() { return PARSER; } + private int bitField0_; + // required .TableName table_name = 1; + public static final int TABLE_NAME_FIELD_NUMBER = 1; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName tableName_; + /** + * required .TableName table_name = 1; + */ + public boolean hasTableName() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * required .TableName table_name = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName getTableName() { + return tableName_; + } + /** + * required .TableName table_name = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder getTableNameOrBuilder() { + return tableName_; + } + private void initFields() { + tableName_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance(); } private byte memoizedIsInitialized = -1; public final boolean isInitialized() { byte isInitialized = memoizedIsInitialized; if (isInitialized != -1) return isInitialized == 1; + if (!hasTableName()) { + memoizedIsInitialized = 0; + return false; + } + if (!getTableName().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } memoizedIsInitialized = 1; return true; } @@ -37610,6 +37670,9 @@ public final class MasterProtos { public void writeTo(com.google.protobuf.CodedOutputStream output) throws java.io.IOException { getSerializedSize(); + if (((bitField0_ & 0x00000001) == 0x00000001)) { + output.writeMessage(1, tableName_); + } getUnknownFields().writeTo(output); } @@ -37619,6 +37682,10 @@ public final class MasterProtos { if (size != -1) return size; size = 0; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(1, tableName_); + } size += getUnknownFields().getSerializedSize(); memoizedSerializedSize = size; return size; @@ -37636,12 +37703,17 @@ public final class MasterProtos { if (obj == this) { return true; } - if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest)) { + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest)) { return super.equals(obj); } - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest) obj; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest) obj; boolean result = true; + result = result && (hasTableName() == other.hasTableName()); + if (hasTableName()) { + result = result && getTableName() + .equals(other.getTableName()); + } result = result && getUnknownFields().equals(other.getUnknownFields()); return result; @@ -37655,58 +37727,62 @@ public final class MasterProtos { } int hash = 41; hash = (19 * hash) + getDescriptorForType().hashCode(); + if (hasTableName()) { + hash = (37 * hash) + TABLE_NAME_FIELD_NUMBER; + hash = (53 * hash) + getTableName().hashCode(); + } hash = (29 * hash) + getUnknownFields().hashCode(); memoizedHashCode = hash; return hash; } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom( + public static 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest parseFrom( com.google.protobuf.ByteString data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest parseFrom( com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom(byte[] data) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest parseFrom(byte[] data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest parseFrom( byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest parseFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest parseFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseDelimitedFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest parseDelimitedFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseDelimitedFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseDelimitedFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest parseDelimitedFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseDelimitedFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest parseFrom( com.google.protobuf.CodedInputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest parseFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { @@ -37715,7 +37791,7 @@ public final class MasterProtos { 
public static Builder newBuilder() { return Builder.create(); } public Builder newBuilderForType() { return newBuilder(); } - public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest prototype) { + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest prototype) { return newBuilder().mergeFrom(prototype); } public Builder toBuilder() { return newBuilder(this); } @@ -37727,24 +37803,24 @@ public final class MasterProtos { return builder; } /** - * Protobuf type {@code GetClusterStatusRequest} + * Protobuf type {@code GetTableStateRequest} */ public static final class Builder extends com.google.protobuf.GeneratedMessage.Builder - implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequestOrBuilder { + implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequestOrBuilder { public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusRequest_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetTableStateRequest_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusRequest_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetTableStateRequest_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest.Builder.class); } - // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.newBuilder() + // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest.newBuilder() private Builder() { maybeForceBuilderInitialization(); } @@ -37756,6 +37832,7 @@ public final class MasterProtos { } private void maybeForceBuilderInitialization() { if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + getTableNameFieldBuilder(); } } private static Builder create() { @@ -37764,6 +37841,12 @@ public final class MasterProtos { public Builder clear() { super.clear(); + if (tableNameBuilder_ == null) { + tableName_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance(); + } else { + tableNameBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000001); return this; } @@ -37773,43 +37856,65 @@ public final class MasterProtos { public com.google.protobuf.Descriptors.Descriptor getDescriptorForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusRequest_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetTableStateRequest_descriptor; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest getDefaultInstanceForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.getDefaultInstance(); + public 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest.getDefaultInstance(); } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest build() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest result = buildPartial(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest build() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest result = buildPartial(); if (!result.isInitialized()) { throw newUninitializedMessageException(result); } return result; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest buildPartial() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest(this); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest(this); + int from_bitField0_ = bitField0_; + int to_bitField0_ = 0; + if (((from_bitField0_ & 0x00000001) == 0x00000001)) { + to_bitField0_ |= 0x00000001; + } + if (tableNameBuilder_ == null) { + result.tableName_ = tableName_; + } else { + result.tableName_ = tableNameBuilder_.build(); + } + result.bitField0_ = to_bitField0_; onBuilt(); return result; } public Builder mergeFrom(com.google.protobuf.Message other) { - if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest) { - return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest)other); + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest)other); } else { super.mergeFrom(other); return this; } } - public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest other) { - if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.getDefaultInstance()) return this; + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest.getDefaultInstance()) return this; + if (other.hasTableName()) { + mergeTableName(other.getTableName()); + } this.mergeUnknownFields(other.getUnknownFields()); return this; } public final boolean isInitialized() { + if (!hasTableName()) { + + return false; + } + if (!getTableName().isInitialized()) { + + return false; + } return true; } @@ -37817,11 +37922,11 @@ public final class MasterProtos { com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parsedMessage = null; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest parsedMessage = null; try { parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); } catch (com.google.protobuf.InvalidProtocolBufferException e) { - parsedMessage = 
(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest) e.getUnfinishedMessage(); + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest) e.getUnfinishedMessage(); throw e; } finally { if (parsedMessage != null) { @@ -37830,106 +37935,224 @@ public final class MasterProtos { } return this; } + private int bitField0_; - // @@protoc_insertion_point(builder_scope:GetClusterStatusRequest) - } - - static { - defaultInstance = new GetClusterStatusRequest(true); - defaultInstance.initFields(); - } - - // @@protoc_insertion_point(class_scope:GetClusterStatusRequest) - } - - public interface GetClusterStatusResponseOrBuilder - extends com.google.protobuf.MessageOrBuilder { - - // required .ClusterStatus cluster_status = 1; - /** - * required .ClusterStatus cluster_status = 1; - */ - boolean hasClusterStatus(); - /** - * required .ClusterStatus cluster_status = 1; - */ - org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus getClusterStatus(); - /** - * required .ClusterStatus cluster_status = 1; - */ - org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatusOrBuilder getClusterStatusOrBuilder(); - } - /** - * Protobuf type {@code GetClusterStatusResponse} - */ - public static final class GetClusterStatusResponse extends - com.google.protobuf.GeneratedMessage - implements GetClusterStatusResponseOrBuilder { - // Use GetClusterStatusResponse.newBuilder() to construct. - private GetClusterStatusResponse(com.google.protobuf.GeneratedMessage.Builder builder) { - super(builder); - this.unknownFields = builder.getUnknownFields(); - } - private GetClusterStatusResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } - - private static final GetClusterStatusResponse defaultInstance; - public static GetClusterStatusResponse getDefaultInstance() { - return defaultInstance; - } - - public GetClusterStatusResponse getDefaultInstanceForType() { - return defaultInstance; - } - - private final com.google.protobuf.UnknownFieldSet unknownFields; - @java.lang.Override - public final com.google.protobuf.UnknownFieldSet - getUnknownFields() { - return this.unknownFields; - } - private GetClusterStatusResponse( - com.google.protobuf.CodedInputStream input, - com.google.protobuf.ExtensionRegistryLite extensionRegistry) - throws com.google.protobuf.InvalidProtocolBufferException { - initFields(); - int mutable_bitField0_ = 0; - com.google.protobuf.UnknownFieldSet.Builder unknownFields = - com.google.protobuf.UnknownFieldSet.newBuilder(); - try { - boolean done = false; - while (!done) { - int tag = input.readTag(); - switch (tag) { - case 0: - done = true; - break; - default: { - if (!parseUnknownField(input, unknownFields, - extensionRegistry, tag)) { - done = true; - } - break; - } - case 10: { - org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.Builder subBuilder = null; - if (((bitField0_ & 0x00000001) == 0x00000001)) { - subBuilder = clusterStatus_.toBuilder(); - } - clusterStatus_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.PARSER, extensionRegistry); - if (subBuilder != null) { - subBuilder.mergeFrom(clusterStatus_); - clusterStatus_ = subBuilder.buildPartial(); - } - bitField0_ |= 0x00000001; - break; - } - } - } - } catch (com.google.protobuf.InvalidProtocolBufferException e) { - throw e.setUnfinishedMessage(this); - } catch (java.io.IOException e) { - throw new 
com.google.protobuf.InvalidProtocolBufferException( - e.getMessage()).setUnfinishedMessage(this); + // required .TableName table_name = 1; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName tableName_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder> tableNameBuilder_; + /** + * required .TableName table_name = 1; + */ + public boolean hasTableName() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * required .TableName table_name = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName getTableName() { + if (tableNameBuilder_ == null) { + return tableName_; + } else { + return tableNameBuilder_.getMessage(); + } + } + /** + * required .TableName table_name = 1; + */ + public Builder setTableName(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName value) { + if (tableNameBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + tableName_ = value; + onChanged(); + } else { + tableNameBuilder_.setMessage(value); + } + bitField0_ |= 0x00000001; + return this; + } + /** + * required .TableName table_name = 1; + */ + public Builder setTableName( + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder builderForValue) { + if (tableNameBuilder_ == null) { + tableName_ = builderForValue.build(); + onChanged(); + } else { + tableNameBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000001; + return this; + } + /** + * required .TableName table_name = 1; + */ + public Builder mergeTableName(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName value) { + if (tableNameBuilder_ == null) { + if (((bitField0_ & 0x00000001) == 0x00000001) && + tableName_ != org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance()) { + tableName_ = + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.newBuilder(tableName_).mergeFrom(value).buildPartial(); + } else { + tableName_ = value; + } + onChanged(); + } else { + tableNameBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000001; + return this; + } + /** + * required .TableName table_name = 1; + */ + public Builder clearTableName() { + if (tableNameBuilder_ == null) { + tableName_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance(); + onChanged(); + } else { + tableNameBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000001); + return this; + } + /** + * required .TableName table_name = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder getTableNameBuilder() { + bitField0_ |= 0x00000001; + onChanged(); + return getTableNameFieldBuilder().getBuilder(); + } + /** + * required .TableName table_name = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder getTableNameOrBuilder() { + if (tableNameBuilder_ != null) { + return tableNameBuilder_.getMessageOrBuilder(); + } else { + return tableName_; + } + } + /** + * required .TableName table_name = 1; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder, 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder> + getTableNameFieldBuilder() { + if (tableNameBuilder_ == null) { + tableNameBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder>( + tableName_, + getParentForChildren(), + isClean()); + tableName_ = null; + } + return tableNameBuilder_; + } + + // @@protoc_insertion_point(builder_scope:GetTableStateRequest) + } + + static { + defaultInstance = new GetTableStateRequest(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:GetTableStateRequest) + } + + public interface GetTableStateResponseOrBuilder + extends com.google.protobuf.MessageOrBuilder { + + // required .TableState table_state = 1; + /** + * required .TableState table_state = 1; + */ + boolean hasTableState(); + /** + * required .TableState table_state = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState getTableState(); + /** + * required .TableState table_state = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableStateOrBuilder getTableStateOrBuilder(); + } + /** + * Protobuf type {@code GetTableStateResponse} + */ + public static final class GetTableStateResponse extends + com.google.protobuf.GeneratedMessage + implements GetTableStateResponseOrBuilder { + // Use GetTableStateResponse.newBuilder() to construct. + private GetTableStateResponse(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private GetTableStateResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final GetTableStateResponse defaultInstance; + public static GetTableStateResponse getDefaultInstance() { + return defaultInstance; + } + + public GetTableStateResponse getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private GetTableStateResponse( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + int mutable_bitField0_ = 0; + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + case 10: { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.Builder subBuilder = null; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + subBuilder = tableState_.toBuilder(); + } + tableState_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(tableState_); + tableState_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000001; + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + 
throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); } finally { this.unknownFields = unknownFields.build(); makeExtensionsImmutable(); @@ -37937,67 +38160,67 @@ public final class MasterProtos { } public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusResponse_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetTableStateResponse_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusResponse_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetTableStateResponse_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.Builder.class); } - public static com.google.protobuf.Parser PARSER = - new com.google.protobuf.AbstractParser() { - public GetClusterStatusResponse parsePartialFrom( + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public GetTableStateResponse parsePartialFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { - return new GetClusterStatusResponse(input, extensionRegistry); + return new GetTableStateResponse(input, extensionRegistry); } }; @java.lang.Override - public com.google.protobuf.Parser getParserForType() { + public com.google.protobuf.Parser getParserForType() { return PARSER; } private int bitField0_; - // required .ClusterStatus cluster_status = 1; - public static final int CLUSTER_STATUS_FIELD_NUMBER = 1; - private org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus clusterStatus_; + // required .TableState table_state = 1; + public static final int TABLE_STATE_FIELD_NUMBER = 1; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState tableState_; /** - * required .ClusterStatus cluster_status = 1; + * required .TableState table_state = 1; */ - public boolean hasClusterStatus() { + public boolean hasTableState() { return ((bitField0_ & 0x00000001) == 0x00000001); } /** - * required .ClusterStatus cluster_status = 1; + * required .TableState table_state = 1; */ - public org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus getClusterStatus() { - return clusterStatus_; + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState getTableState() { + return tableState_; } /** - * required .ClusterStatus cluster_status = 1; + * required .TableState table_state = 1; */ - public org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatusOrBuilder getClusterStatusOrBuilder() { - return clusterStatus_; + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableStateOrBuilder getTableStateOrBuilder() { + return tableState_; } private void initFields() { - clusterStatus_ = 
org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.getDefaultInstance(); + tableState_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.getDefaultInstance(); } private byte memoizedIsInitialized = -1; public final boolean isInitialized() { byte isInitialized = memoizedIsInitialized; if (isInitialized != -1) return isInitialized == 1; - if (!hasClusterStatus()) { + if (!hasTableState()) { memoizedIsInitialized = 0; return false; } - if (!getClusterStatus().isInitialized()) { + if (!getTableState().isInitialized()) { memoizedIsInitialized = 0; return false; } @@ -38009,7 +38232,7 @@ public final class MasterProtos { throws java.io.IOException { getSerializedSize(); if (((bitField0_ & 0x00000001) == 0x00000001)) { - output.writeMessage(1, clusterStatus_); + output.writeMessage(1, tableState_); } getUnknownFields().writeTo(output); } @@ -38022,7 +38245,7 @@ public final class MasterProtos { size = 0; if (((bitField0_ & 0x00000001) == 0x00000001)) { size += com.google.protobuf.CodedOutputStream - .computeMessageSize(1, clusterStatus_); + .computeMessageSize(1, tableState_); } size += getUnknownFields().getSerializedSize(); memoizedSerializedSize = size; @@ -38041,16 +38264,16 @@ public final class MasterProtos { if (obj == this) { return true; } - if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse)) { + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse)) { return super.equals(obj); } - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse) obj; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse) obj; boolean result = true; - result = result && (hasClusterStatus() == other.hasClusterStatus()); - if (hasClusterStatus()) { - result = result && getClusterStatus() - .equals(other.getClusterStatus()); + result = result && (hasTableState() == other.hasTableState()); + if (hasTableState()) { + result = result && getTableState() + .equals(other.getTableState()); } result = result && getUnknownFields().equals(other.getUnknownFields()); @@ -38065,62 +38288,62 @@ public final class MasterProtos { } int hash = 41; hash = (19 * hash) + getDescriptorForType().hashCode(); - if (hasClusterStatus()) { - hash = (37 * hash) + CLUSTER_STATUS_FIELD_NUMBER; - hash = (53 * hash) + getClusterStatus().hashCode(); + if (hasTableState()) { + hash = (37 * hash) + TABLE_STATE_FIELD_NUMBER; + hash = (53 * hash) + getTableState().hashCode(); } hash = (29 * hash) + getUnknownFields().hashCode(); memoizedHashCode = hash; return hash; } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse parseFrom( com.google.protobuf.ByteString data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse parseFrom( com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return 
PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom(byte[] data) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse parseFrom(byte[] data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse parseFrom( byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse parseFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse parseFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseDelimitedFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse parseDelimitedFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseDelimitedFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseDelimitedFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse parseDelimitedFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseDelimitedFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse parseFrom( com.google.protobuf.CodedInputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse parseFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { @@ -38129,7 +38352,7 @@ public final class MasterProtos { public static Builder newBuilder() { return Builder.create(); } public Builder newBuilderForType() { return newBuilder(); } - public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse prototype) { + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse prototype) { return newBuilder().mergeFrom(prototype); } public Builder toBuilder() { return newBuilder(this); } @@ -38141,24 +38364,24 @@ public final class MasterProtos { return builder; } /** - * Protobuf type 
{@code GetClusterStatusResponse} + * Protobuf type {@code GetTableStateResponse} */ public static final class Builder extends com.google.protobuf.GeneratedMessage.Builder - implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponseOrBuilder { + implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponseOrBuilder { public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusResponse_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetTableStateResponse_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusResponse_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetTableStateResponse_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.Builder.class); } - // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.newBuilder() + // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.newBuilder() private Builder() { maybeForceBuilderInitialization(); } @@ -38170,7 +38393,7 @@ public final class MasterProtos { } private void maybeForceBuilderInitialization() { if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { - getClusterStatusFieldBuilder(); + getTableStateFieldBuilder(); } } private static Builder create() { @@ -38179,10 +38402,10 @@ public final class MasterProtos { public Builder clear() { super.clear(); - if (clusterStatusBuilder_ == null) { - clusterStatus_ = org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.getDefaultInstance(); + if (tableStateBuilder_ == null) { + tableState_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.getDefaultInstance(); } else { - clusterStatusBuilder_.clear(); + tableStateBuilder_.clear(); } bitField0_ = (bitField0_ & ~0x00000001); return this; @@ -38194,32 +38417,32 @@ public final class MasterProtos { public com.google.protobuf.Descriptors.Descriptor getDescriptorForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusResponse_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetTableStateResponse_descriptor; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse getDefaultInstanceForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.getDefaultInstance(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.getDefaultInstance(); } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse build() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse result = 
buildPartial(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse build() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse result = buildPartial(); if (!result.isInitialized()) { throw newUninitializedMessageException(result); } return result; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse buildPartial() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse(this); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse(this); int from_bitField0_ = bitField0_; int to_bitField0_ = 0; if (((from_bitField0_ & 0x00000001) == 0x00000001)) { to_bitField0_ |= 0x00000001; } - if (clusterStatusBuilder_ == null) { - result.clusterStatus_ = clusterStatus_; + if (tableStateBuilder_ == null) { + result.tableState_ = tableState_; } else { - result.clusterStatus_ = clusterStatusBuilder_.build(); + result.tableState_ = tableStateBuilder_.build(); } result.bitField0_ = to_bitField0_; onBuilt(); @@ -38227,29 +38450,29 @@ public final class MasterProtos { } public Builder mergeFrom(com.google.protobuf.Message other) { - if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse) { - return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse)other); + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse)other); } else { super.mergeFrom(other); return this; } } - public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse other) { - if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.getDefaultInstance()) return this; - if (other.hasClusterStatus()) { - mergeClusterStatus(other.getClusterStatus()); + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.getDefaultInstance()) return this; + if (other.hasTableState()) { + mergeTableState(other.getTableState()); } this.mergeUnknownFields(other.getUnknownFields()); return this; } public final boolean isInitialized() { - if (!hasClusterStatus()) { + if (!hasTableState()) { return false; } - if (!getClusterStatus().isInitialized()) { + if (!getTableState().isInitialized()) { return false; } @@ -38260,11 +38483,11 @@ public final class MasterProtos { com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parsedMessage = null; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse parsedMessage = null; try { parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); } catch (com.google.protobuf.InvalidProtocolBufferException e) { - parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse) e.getUnfinishedMessage(); + 
parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse) e.getUnfinishedMessage(); throw e; } finally { if (parsedMessage != null) { @@ -38275,156 +38498,156 @@ public final class MasterProtos { } private int bitField0_; - // required .ClusterStatus cluster_status = 1; - private org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus clusterStatus_ = org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.getDefaultInstance(); + // required .TableState table_state = 1; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState tableState_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.getDefaultInstance(); private com.google.protobuf.SingleFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus, org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.Builder, org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatusOrBuilder> clusterStatusBuilder_; + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableStateOrBuilder> tableStateBuilder_; /** - * required .ClusterStatus cluster_status = 1; + * required .TableState table_state = 1; */ - public boolean hasClusterStatus() { + public boolean hasTableState() { return ((bitField0_ & 0x00000001) == 0x00000001); } /** - * required .ClusterStatus cluster_status = 1; + * required .TableState table_state = 1; */ - public org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus getClusterStatus() { - if (clusterStatusBuilder_ == null) { - return clusterStatus_; + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState getTableState() { + if (tableStateBuilder_ == null) { + return tableState_; } else { - return clusterStatusBuilder_.getMessage(); + return tableStateBuilder_.getMessage(); } } /** - * required .ClusterStatus cluster_status = 1; + * required .TableState table_state = 1; */ - public Builder setClusterStatus(org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus value) { - if (clusterStatusBuilder_ == null) { + public Builder setTableState(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState value) { + if (tableStateBuilder_ == null) { if (value == null) { throw new NullPointerException(); } - clusterStatus_ = value; + tableState_ = value; onChanged(); } else { - clusterStatusBuilder_.setMessage(value); + tableStateBuilder_.setMessage(value); } bitField0_ |= 0x00000001; return this; } /** - * required .ClusterStatus cluster_status = 1; + * required .TableState table_state = 1; */ - public Builder setClusterStatus( - org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.Builder builderForValue) { - if (clusterStatusBuilder_ == null) { - clusterStatus_ = builderForValue.build(); + public Builder setTableState( + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.Builder builderForValue) { + if (tableStateBuilder_ == null) { + tableState_ = builderForValue.build(); onChanged(); } else { - clusterStatusBuilder_.setMessage(builderForValue.build()); + tableStateBuilder_.setMessage(builderForValue.build()); } bitField0_ |= 0x00000001; return this; } /** - * required .ClusterStatus cluster_status = 1; + * required .TableState table_state = 1; */ - public Builder 
mergeClusterStatus(org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus value) { - if (clusterStatusBuilder_ == null) { + public Builder mergeTableState(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState value) { + if (tableStateBuilder_ == null) { if (((bitField0_ & 0x00000001) == 0x00000001) && - clusterStatus_ != org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.getDefaultInstance()) { - clusterStatus_ = - org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.newBuilder(clusterStatus_).mergeFrom(value).buildPartial(); + tableState_ != org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.getDefaultInstance()) { + tableState_ = + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.newBuilder(tableState_).mergeFrom(value).buildPartial(); } else { - clusterStatus_ = value; + tableState_ = value; } onChanged(); } else { - clusterStatusBuilder_.mergeFrom(value); + tableStateBuilder_.mergeFrom(value); } bitField0_ |= 0x00000001; return this; } /** - * required .ClusterStatus cluster_status = 1; + * required .TableState table_state = 1; */ - public Builder clearClusterStatus() { - if (clusterStatusBuilder_ == null) { - clusterStatus_ = org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.getDefaultInstance(); + public Builder clearTableState() { + if (tableStateBuilder_ == null) { + tableState_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.getDefaultInstance(); onChanged(); } else { - clusterStatusBuilder_.clear(); + tableStateBuilder_.clear(); } bitField0_ = (bitField0_ & ~0x00000001); return this; } /** - * required .ClusterStatus cluster_status = 1; + * required .TableState table_state = 1; */ - public org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.Builder getClusterStatusBuilder() { + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.Builder getTableStateBuilder() { bitField0_ |= 0x00000001; onChanged(); - return getClusterStatusFieldBuilder().getBuilder(); + return getTableStateFieldBuilder().getBuilder(); } /** - * required .ClusterStatus cluster_status = 1; + * required .TableState table_state = 1; */ - public org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatusOrBuilder getClusterStatusOrBuilder() { - if (clusterStatusBuilder_ != null) { - return clusterStatusBuilder_.getMessageOrBuilder(); + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableStateOrBuilder getTableStateOrBuilder() { + if (tableStateBuilder_ != null) { + return tableStateBuilder_.getMessageOrBuilder(); } else { - return clusterStatus_; + return tableState_; } } /** - * required .ClusterStatus cluster_status = 1; + * required .TableState table_state = 1; */ private com.google.protobuf.SingleFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus, org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.Builder, org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatusOrBuilder> - getClusterStatusFieldBuilder() { - if (clusterStatusBuilder_ == null) { - clusterStatusBuilder_ = new com.google.protobuf.SingleFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus, org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.Builder, org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatusOrBuilder>( - clusterStatus_, + 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableStateOrBuilder> + getTableStateFieldBuilder() { + if (tableStateBuilder_ == null) { + tableStateBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableStateOrBuilder>( + tableState_, getParentForChildren(), isClean()); - clusterStatus_ = null; + tableState_ = null; } - return clusterStatusBuilder_; + return tableStateBuilder_; } - // @@protoc_insertion_point(builder_scope:GetClusterStatusResponse) + // @@protoc_insertion_point(builder_scope:GetTableStateResponse) } static { - defaultInstance = new GetClusterStatusResponse(true); + defaultInstance = new GetTableStateResponse(true); defaultInstance.initFields(); } - // @@protoc_insertion_point(class_scope:GetClusterStatusResponse) + // @@protoc_insertion_point(class_scope:GetTableStateResponse) } - public interface IsMasterRunningRequestOrBuilder + public interface GetClusterStatusRequestOrBuilder extends com.google.protobuf.MessageOrBuilder { } /** - * Protobuf type {@code IsMasterRunningRequest} + * Protobuf type {@code GetClusterStatusRequest} */ - public static final class IsMasterRunningRequest extends + public static final class GetClusterStatusRequest extends com.google.protobuf.GeneratedMessage - implements IsMasterRunningRequestOrBuilder { - // Use IsMasterRunningRequest.newBuilder() to construct. - private IsMasterRunningRequest(com.google.protobuf.GeneratedMessage.Builder builder) { + implements GetClusterStatusRequestOrBuilder { + // Use GetClusterStatusRequest.newBuilder() to construct. 
+ private GetClusterStatusRequest(com.google.protobuf.GeneratedMessage.Builder builder) { super(builder); this.unknownFields = builder.getUnknownFields(); } - private IsMasterRunningRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + private GetClusterStatusRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } - private static final IsMasterRunningRequest defaultInstance; - public static IsMasterRunningRequest getDefaultInstance() { + private static final GetClusterStatusRequest defaultInstance; + public static GetClusterStatusRequest getDefaultInstance() { return defaultInstance; } - public IsMasterRunningRequest getDefaultInstanceForType() { + public GetClusterStatusRequest getDefaultInstanceForType() { return defaultInstance; } @@ -38434,7 +38657,7 @@ public final class MasterProtos { getUnknownFields() { return this.unknownFields; } - private IsMasterRunningRequest( + private GetClusterStatusRequest( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { @@ -38470,28 +38693,28 @@ public final class MasterProtos { } public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningRequest_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusRequest_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningRequest_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusRequest_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.Builder.class); } - public static com.google.protobuf.Parser PARSER = - new com.google.protobuf.AbstractParser() { - public IsMasterRunningRequest parsePartialFrom( + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public GetClusterStatusRequest parsePartialFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { - return new IsMasterRunningRequest(input, extensionRegistry); + return new GetClusterStatusRequest(input, extensionRegistry); } }; @java.lang.Override - public com.google.protobuf.Parser getParserForType() { + public com.google.protobuf.Parser getParserForType() { return PARSER; } @@ -38535,10 +38758,10 @@ public final class MasterProtos { if (obj == this) { return true; } - if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest)) { + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest)) { return super.equals(obj); } - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest other = 
(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest) obj; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest) obj; boolean result = true; result = result && @@ -38559,53 +38782,53 @@ public final class MasterProtos { return hash; } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom( com.google.protobuf.ByteString data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom( com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom(byte[] data) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom(byte[] data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom( byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseDelimitedFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseDelimitedFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseDelimitedFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseDelimitedFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseDelimitedFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseDelimitedFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest 
parseFrom( com.google.protobuf.CodedInputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parseFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { @@ -38614,7 +38837,7 @@ public final class MasterProtos { public static Builder newBuilder() { return Builder.create(); } public Builder newBuilderForType() { return newBuilder(); } - public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest prototype) { + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest prototype) { return newBuilder().mergeFrom(prototype); } public Builder toBuilder() { return newBuilder(this); } @@ -38626,24 +38849,24 @@ public final class MasterProtos { return builder; } /** - * Protobuf type {@code IsMasterRunningRequest} + * Protobuf type {@code GetClusterStatusRequest} */ public static final class Builder extends com.google.protobuf.GeneratedMessage.Builder - implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequestOrBuilder { + implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequestOrBuilder { public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningRequest_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusRequest_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningRequest_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusRequest_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.Builder.class); } - // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.newBuilder() + // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.newBuilder() private Builder() { maybeForceBuilderInitialization(); } @@ -38672,38 +38895,38 @@ public final class MasterProtos { public com.google.protobuf.Descriptors.Descriptor getDescriptorForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningRequest_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusRequest_descriptor; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest getDefaultInstanceForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.getDefaultInstance(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest getDefaultInstanceForType() { + 
return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.getDefaultInstance(); } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest build() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest result = buildPartial(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest build() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest result = buildPartial(); if (!result.isInitialized()) { throw newUninitializedMessageException(result); } return result; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest buildPartial() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest(this); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest(this); onBuilt(); return result; } public Builder mergeFrom(com.google.protobuf.Message other) { - if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest) { - return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest)other); + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest)other); } else { super.mergeFrom(other); return this; } } - public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest other) { - if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.getDefaultInstance()) return this; + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest.getDefaultInstance()) return this; this.mergeUnknownFields(other.getUnknownFields()); return this; } @@ -38716,11 +38939,11 @@ public final class MasterProtos { com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parsedMessage = null; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest parsedMessage = null; try { parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); } catch (com.google.protobuf.InvalidProtocolBufferException e) { - parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest) e.getUnfinishedMessage(); + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusRequest) e.getUnfinishedMessage(); throw e; } finally { if (parsedMessage != null) { @@ -38730,49 +38953,53 @@ public final class MasterProtos { return this; } - // @@protoc_insertion_point(builder_scope:IsMasterRunningRequest) + // @@protoc_insertion_point(builder_scope:GetClusterStatusRequest) } static { - defaultInstance = new IsMasterRunningRequest(true); + defaultInstance = new GetClusterStatusRequest(true); defaultInstance.initFields(); } - // 
@@protoc_insertion_point(class_scope:IsMasterRunningRequest) + // @@protoc_insertion_point(class_scope:GetClusterStatusRequest) } - public interface IsMasterRunningResponseOrBuilder + public interface GetClusterStatusResponseOrBuilder extends com.google.protobuf.MessageOrBuilder { - // required bool is_master_running = 1; + // required .ClusterStatus cluster_status = 1; /** - * required bool is_master_running = 1; + * required .ClusterStatus cluster_status = 1; */ - boolean hasIsMasterRunning(); + boolean hasClusterStatus(); /** - * required bool is_master_running = 1; + * required .ClusterStatus cluster_status = 1; */ - boolean getIsMasterRunning(); + org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus getClusterStatus(); + /** + * required .ClusterStatus cluster_status = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatusOrBuilder getClusterStatusOrBuilder(); } /** - * Protobuf type {@code IsMasterRunningResponse} + * Protobuf type {@code GetClusterStatusResponse} */ - public static final class IsMasterRunningResponse extends + public static final class GetClusterStatusResponse extends com.google.protobuf.GeneratedMessage - implements IsMasterRunningResponseOrBuilder { - // Use IsMasterRunningResponse.newBuilder() to construct. - private IsMasterRunningResponse(com.google.protobuf.GeneratedMessage.Builder builder) { + implements GetClusterStatusResponseOrBuilder { + // Use GetClusterStatusResponse.newBuilder() to construct. + private GetClusterStatusResponse(com.google.protobuf.GeneratedMessage.Builder builder) { super(builder); this.unknownFields = builder.getUnknownFields(); } - private IsMasterRunningResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + private GetClusterStatusResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } - private static final IsMasterRunningResponse defaultInstance; - public static IsMasterRunningResponse getDefaultInstance() { + private static final GetClusterStatusResponse defaultInstance; + public static GetClusterStatusResponse getDefaultInstance() { return defaultInstance; } - public IsMasterRunningResponse getDefaultInstanceForType() { + public GetClusterStatusResponse getDefaultInstanceForType() { return defaultInstance; } @@ -38782,7 +39009,7 @@ public final class MasterProtos { getUnknownFields() { return this.unknownFields; } - private IsMasterRunningResponse( + private GetClusterStatusResponse( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { @@ -38805,9 +39032,17 @@ public final class MasterProtos { } break; } - case 8: { + case 10: { + org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.Builder subBuilder = null; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + subBuilder = clusterStatus_.toBuilder(); + } + clusterStatus_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(clusterStatus_); + clusterStatus_ = subBuilder.buildPartial(); + } bitField0_ |= 0x00000001; - isMasterRunning_ = input.readBool(); break; } } @@ -38824,57 +39059,67 @@ public final class MasterProtos { } public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningResponse_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusResponse_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningResponse_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusResponse_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.Builder.class); } - public static com.google.protobuf.Parser PARSER = - new com.google.protobuf.AbstractParser() { - public IsMasterRunningResponse parsePartialFrom( + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public GetClusterStatusResponse parsePartialFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { - return new IsMasterRunningResponse(input, extensionRegistry); + return new GetClusterStatusResponse(input, extensionRegistry); } }; @java.lang.Override - public com.google.protobuf.Parser getParserForType() { + public com.google.protobuf.Parser getParserForType() { return PARSER; } private int bitField0_; - // required bool is_master_running = 1; - public static final int IS_MASTER_RUNNING_FIELD_NUMBER = 1; - private boolean isMasterRunning_; + // required .ClusterStatus cluster_status = 1; + public static final int CLUSTER_STATUS_FIELD_NUMBER = 1; + private org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus clusterStatus_; /** - * required bool is_master_running = 1; + * required .ClusterStatus cluster_status = 1; */ - public boolean hasIsMasterRunning() { + public boolean hasClusterStatus() { return ((bitField0_ & 0x00000001) == 0x00000001); } /** - * required bool is_master_running = 1; + * required .ClusterStatus cluster_status = 1; */ - public boolean getIsMasterRunning() { - return isMasterRunning_; + public org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus getClusterStatus() { + return clusterStatus_; + } + /** + * required .ClusterStatus cluster_status = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatusOrBuilder getClusterStatusOrBuilder() { + return clusterStatus_; } private void initFields() { - isMasterRunning_ = false; + clusterStatus_ = org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.getDefaultInstance(); } private byte memoizedIsInitialized = -1; public final boolean isInitialized() { byte isInitialized = memoizedIsInitialized; if (isInitialized != -1) return isInitialized == 1; - if (!hasIsMasterRunning()) { + if (!hasClusterStatus()) { + memoizedIsInitialized = 0; + return false; + } + if (!getClusterStatus().isInitialized()) { memoizedIsInitialized = 0; return false; } @@ -38886,7 +39131,7 @@ public final class MasterProtos { throws java.io.IOException { getSerializedSize(); if (((bitField0_ & 0x00000001) == 0x00000001)) { - 
output.writeBool(1, isMasterRunning_); + output.writeMessage(1, clusterStatus_); } getUnknownFields().writeTo(output); } @@ -38899,7 +39144,7 @@ public final class MasterProtos { size = 0; if (((bitField0_ & 0x00000001) == 0x00000001)) { size += com.google.protobuf.CodedOutputStream - .computeBoolSize(1, isMasterRunning_); + .computeMessageSize(1, clusterStatus_); } size += getUnknownFields().getSerializedSize(); memoizedSerializedSize = size; @@ -38918,16 +39163,16 @@ public final class MasterProtos { if (obj == this) { return true; } - if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse)) { + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse)) { return super.equals(obj); } - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse) obj; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse) obj; boolean result = true; - result = result && (hasIsMasterRunning() == other.hasIsMasterRunning()); - if (hasIsMasterRunning()) { - result = result && (getIsMasterRunning() - == other.getIsMasterRunning()); + result = result && (hasClusterStatus() == other.hasClusterStatus()); + if (hasClusterStatus()) { + result = result && getClusterStatus() + .equals(other.getClusterStatus()); } result = result && getUnknownFields().equals(other.getUnknownFields()); @@ -38942,62 +39187,62 @@ public final class MasterProtos { } int hash = 41; hash = (19 * hash) + getDescriptorForType().hashCode(); - if (hasIsMasterRunning()) { - hash = (37 * hash) + IS_MASTER_RUNNING_FIELD_NUMBER; - hash = (53 * hash) + hashBoolean(getIsMasterRunning()); + if (hasClusterStatus()) { + hash = (37 * hash) + CLUSTER_STATUS_FIELD_NUMBER; + hash = (53 * hash) + getClusterStatus().hashCode(); } hash = (29 * hash) + getUnknownFields().hashCode(); memoizedHashCode = hash; return hash; } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom( com.google.protobuf.ByteString data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom( com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom(byte[] data) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom(byte[] data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom( byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { 
return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseDelimitedFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseDelimitedFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseDelimitedFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseDelimitedFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseDelimitedFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseDelimitedFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom( com.google.protobuf.CodedInputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parseFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { @@ -39006,7 +39251,7 @@ public final class MasterProtos { public static Builder newBuilder() { return Builder.create(); } public Builder newBuilderForType() { return newBuilder(); } - public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse prototype) { + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse prototype) { return newBuilder().mergeFrom(prototype); } public Builder toBuilder() { return newBuilder(this); } @@ -39018,24 +39263,24 @@ public final class MasterProtos { return builder; } /** - * Protobuf type {@code IsMasterRunningResponse} + * Protobuf type {@code GetClusterStatusResponse} */ public static final class Builder extends com.google.protobuf.GeneratedMessage.Builder - implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponseOrBuilder { + implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponseOrBuilder { public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningResponse_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusResponse_descriptor; } 
protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningResponse_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusResponse_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.Builder.class); } - // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.newBuilder() + // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.newBuilder() private Builder() { maybeForceBuilderInitialization(); } @@ -39047,6 +39292,7 @@ public final class MasterProtos { } private void maybeForceBuilderInitialization() { if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + getClusterStatusFieldBuilder(); } } private static Builder create() { @@ -39055,7 +39301,11 @@ public final class MasterProtos { public Builder clear() { super.clear(); - isMasterRunning_ = false; + if (clusterStatusBuilder_ == null) { + clusterStatus_ = org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.getDefaultInstance(); + } else { + clusterStatusBuilder_.clear(); + } bitField0_ = (bitField0_ & ~0x00000001); return this; } @@ -39066,54 +39316,62 @@ public final class MasterProtos { public com.google.protobuf.Descriptors.Descriptor getDescriptorForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningResponse_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_GetClusterStatusResponse_descriptor; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse getDefaultInstanceForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.getDefaultInstance(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.getDefaultInstance(); } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse build() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse result = buildPartial(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse build() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse result = buildPartial(); if (!result.isInitialized()) { throw newUninitializedMessageException(result); } return result; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse buildPartial() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse(this); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse result = new 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse(this); int from_bitField0_ = bitField0_; int to_bitField0_ = 0; if (((from_bitField0_ & 0x00000001) == 0x00000001)) { to_bitField0_ |= 0x00000001; } - result.isMasterRunning_ = isMasterRunning_; + if (clusterStatusBuilder_ == null) { + result.clusterStatus_ = clusterStatus_; + } else { + result.clusterStatus_ = clusterStatusBuilder_.build(); + } result.bitField0_ = to_bitField0_; onBuilt(); return result; } public Builder mergeFrom(com.google.protobuf.Message other) { - if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse) { - return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse)other); + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse)other); } else { super.mergeFrom(other); return this; } } - public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse other) { - if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.getDefaultInstance()) return this; - if (other.hasIsMasterRunning()) { - setIsMasterRunning(other.getIsMasterRunning()); + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse.getDefaultInstance()) return this; + if (other.hasClusterStatus()) { + mergeClusterStatus(other.getClusterStatus()); } this.mergeUnknownFields(other.getUnknownFields()); return this; } public final boolean isInitialized() { - if (!hasIsMasterRunning()) { + if (!hasClusterStatus()) { + + return false; + } + if (!getClusterStatus().isInitialized()) { return false; } @@ -39124,11 +39382,11 @@ public final class MasterProtos { com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parsedMessage = null; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse parsedMessage = null; try { parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); } catch (com.google.protobuf.InvalidProtocolBufferException e) { - parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse) e.getUnfinishedMessage(); + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetClusterStatusResponse) e.getUnfinishedMessage(); throw e; } finally { if (parsedMessage != null) { @@ -39139,86 +39397,156 @@ public final class MasterProtos { } private int bitField0_; - // required bool is_master_running = 1; - private boolean isMasterRunning_ ; + // required .ClusterStatus cluster_status = 1; + private org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus clusterStatus_ = org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus, org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.Builder, org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatusOrBuilder> clusterStatusBuilder_; /** - * required 
bool is_master_running = 1; + * required .ClusterStatus cluster_status = 1; */ - public boolean hasIsMasterRunning() { + public boolean hasClusterStatus() { return ((bitField0_ & 0x00000001) == 0x00000001); } /** - * required bool is_master_running = 1; + * required .ClusterStatus cluster_status = 1; */ - public boolean getIsMasterRunning() { - return isMasterRunning_; + public org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus getClusterStatus() { + if (clusterStatusBuilder_ == null) { + return clusterStatus_; + } else { + return clusterStatusBuilder_.getMessage(); + } } /** - * required bool is_master_running = 1; + * required .ClusterStatus cluster_status = 1; */ - public Builder setIsMasterRunning(boolean value) { + public Builder setClusterStatus(org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus value) { + if (clusterStatusBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + clusterStatus_ = value; + onChanged(); + } else { + clusterStatusBuilder_.setMessage(value); + } bitField0_ |= 0x00000001; - isMasterRunning_ = value; - onChanged(); return this; } /** - * required bool is_master_running = 1; + * required .ClusterStatus cluster_status = 1; */ - public Builder clearIsMasterRunning() { + public Builder setClusterStatus( + org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.Builder builderForValue) { + if (clusterStatusBuilder_ == null) { + clusterStatus_ = builderForValue.build(); + onChanged(); + } else { + clusterStatusBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000001; + return this; + } + /** + * required .ClusterStatus cluster_status = 1; + */ + public Builder mergeClusterStatus(org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus value) { + if (clusterStatusBuilder_ == null) { + if (((bitField0_ & 0x00000001) == 0x00000001) && + clusterStatus_ != org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.getDefaultInstance()) { + clusterStatus_ = + org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.newBuilder(clusterStatus_).mergeFrom(value).buildPartial(); + } else { + clusterStatus_ = value; + } + onChanged(); + } else { + clusterStatusBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000001; + return this; + } + /** + * required .ClusterStatus cluster_status = 1; + */ + public Builder clearClusterStatus() { + if (clusterStatusBuilder_ == null) { + clusterStatus_ = org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.getDefaultInstance(); + onChanged(); + } else { + clusterStatusBuilder_.clear(); + } bitField0_ = (bitField0_ & ~0x00000001); - isMasterRunning_ = false; - onChanged(); return this; } + /** + * required .ClusterStatus cluster_status = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.Builder getClusterStatusBuilder() { + bitField0_ |= 0x00000001; + onChanged(); + return getClusterStatusFieldBuilder().getBuilder(); + } + /** + * required .ClusterStatus cluster_status = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatusOrBuilder getClusterStatusOrBuilder() { + if (clusterStatusBuilder_ != null) { + return clusterStatusBuilder_.getMessageOrBuilder(); + } else { + return clusterStatus_; + } + } + /** + * required .ClusterStatus cluster_status = 1; + */ + private com.google.protobuf.SingleFieldBuilder< + 
org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus, org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.Builder, org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatusOrBuilder> + getClusterStatusFieldBuilder() { + if (clusterStatusBuilder_ == null) { + clusterStatusBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus, org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatus.Builder, org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.ClusterStatusOrBuilder>( + clusterStatus_, + getParentForChildren(), + isClean()); + clusterStatus_ = null; + } + return clusterStatusBuilder_; + } - // @@protoc_insertion_point(builder_scope:IsMasterRunningResponse) + // @@protoc_insertion_point(builder_scope:GetClusterStatusResponse) } static { - defaultInstance = new IsMasterRunningResponse(true); + defaultInstance = new GetClusterStatusResponse(true); defaultInstance.initFields(); } - // @@protoc_insertion_point(class_scope:IsMasterRunningResponse) + // @@protoc_insertion_point(class_scope:GetClusterStatusResponse) } - public interface ExecProcedureRequestOrBuilder + public interface IsMasterRunningRequestOrBuilder extends com.google.protobuf.MessageOrBuilder { - - // required .ProcedureDescription procedure = 1; - /** - * required .ProcedureDescription procedure = 1; - */ - boolean hasProcedure(); - /** - * required .ProcedureDescription procedure = 1; - */ - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getProcedure(); - /** - * required .ProcedureDescription procedure = 1; - */ - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getProcedureOrBuilder(); } /** - * Protobuf type {@code ExecProcedureRequest} + * Protobuf type {@code IsMasterRunningRequest} */ - public static final class ExecProcedureRequest extends + public static final class IsMasterRunningRequest extends com.google.protobuf.GeneratedMessage - implements ExecProcedureRequestOrBuilder { - // Use ExecProcedureRequest.newBuilder() to construct. - private ExecProcedureRequest(com.google.protobuf.GeneratedMessage.Builder builder) { + implements IsMasterRunningRequestOrBuilder { + // Use IsMasterRunningRequest.newBuilder() to construct. 
+ private IsMasterRunningRequest(com.google.protobuf.GeneratedMessage.Builder builder) { super(builder); this.unknownFields = builder.getUnknownFields(); } - private ExecProcedureRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + private IsMasterRunningRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } - private static final ExecProcedureRequest defaultInstance; - public static ExecProcedureRequest getDefaultInstance() { + private static final IsMasterRunningRequest defaultInstance; + public static IsMasterRunningRequest getDefaultInstance() { return defaultInstance; } - public ExecProcedureRequest getDefaultInstanceForType() { + public IsMasterRunningRequest getDefaultInstanceForType() { return defaultInstance; } @@ -39228,12 +39556,11 @@ public final class MasterProtos { getUnknownFields() { return this.unknownFields; } - private ExecProcedureRequest( + private IsMasterRunningRequest( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { initFields(); - int mutable_bitField0_ = 0; com.google.protobuf.UnknownFieldSet.Builder unknownFields = com.google.protobuf.UnknownFieldSet.newBuilder(); try { @@ -39251,19 +39578,6 @@ public final class MasterProtos { } break; } - case 10: { - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder subBuilder = null; - if (((bitField0_ & 0x00000001) == 0x00000001)) { - subBuilder = procedure_.toBuilder(); - } - procedure_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.PARSER, extensionRegistry); - if (subBuilder != null) { - subBuilder.mergeFrom(procedure_); - procedure_ = subBuilder.buildPartial(); - } - bitField0_ |= 0x00000001; - break; - } } } } catch (com.google.protobuf.InvalidProtocolBufferException e) { @@ -39278,70 +39592,38 @@ public final class MasterProtos { } public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureRequest_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningRequest_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureRequest_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningRequest_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.Builder.class); } - public static com.google.protobuf.Parser PARSER = - new com.google.protobuf.AbstractParser() { - public ExecProcedureRequest parsePartialFrom( + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public IsMasterRunningRequest parsePartialFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { - 
return new ExecProcedureRequest(input, extensionRegistry); + return new IsMasterRunningRequest(input, extensionRegistry); } }; @java.lang.Override - public com.google.protobuf.Parser getParserForType() { + public com.google.protobuf.Parser getParserForType() { return PARSER; } - private int bitField0_; - // required .ProcedureDescription procedure = 1; - public static final int PROCEDURE_FIELD_NUMBER = 1; - private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription procedure_; - /** - * required .ProcedureDescription procedure = 1; - */ - public boolean hasProcedure() { - return ((bitField0_ & 0x00000001) == 0x00000001); - } - /** - * required .ProcedureDescription procedure = 1; - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getProcedure() { - return procedure_; - } - /** - * required .ProcedureDescription procedure = 1; - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getProcedureOrBuilder() { - return procedure_; - } - private void initFields() { - procedure_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); } private byte memoizedIsInitialized = -1; public final boolean isInitialized() { byte isInitialized = memoizedIsInitialized; if (isInitialized != -1) return isInitialized == 1; - if (!hasProcedure()) { - memoizedIsInitialized = 0; - return false; - } - if (!getProcedure().isInitialized()) { - memoizedIsInitialized = 0; - return false; - } memoizedIsInitialized = 1; return true; } @@ -39349,9 +39631,6 @@ public final class MasterProtos { public void writeTo(com.google.protobuf.CodedOutputStream output) throws java.io.IOException { getSerializedSize(); - if (((bitField0_ & 0x00000001) == 0x00000001)) { - output.writeMessage(1, procedure_); - } getUnknownFields().writeTo(output); } @@ -39361,10 +39640,6 @@ public final class MasterProtos { if (size != -1) return size; size = 0; - if (((bitField0_ & 0x00000001) == 0x00000001)) { - size += com.google.protobuf.CodedOutputStream - .computeMessageSize(1, procedure_); - } size += getUnknownFields().getSerializedSize(); memoizedSerializedSize = size; return size; @@ -39382,17 +39657,12 @@ public final class MasterProtos { if (obj == this) { return true; } - if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest)) { + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest)) { return super.equals(obj); } - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest) obj; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest) obj; boolean result = true; - result = result && (hasProcedure() == other.hasProcedure()); - if (hasProcedure()) { - result = result && getProcedure() - .equals(other.getProcedure()); - } result = result && getUnknownFields().equals(other.getUnknownFields()); return result; @@ -39406,62 +39676,58 @@ public final class MasterProtos { } int hash = 41; hash = (19 * hash) + getDescriptorForType().hashCode(); - if (hasProcedure()) { - hash = (37 * hash) + PROCEDURE_FIELD_NUMBER; - hash = (53 * hash) + getProcedure().hashCode(); - } hash = (29 * hash) + getUnknownFields().hashCode(); memoizedHashCode = hash; return hash; } - public static 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom( com.google.protobuf.ByteString data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom( com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom(byte[] data) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom(byte[] data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom( byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseDelimitedFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseDelimitedFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseDelimitedFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseDelimitedFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseDelimitedFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseDelimitedFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom( com.google.protobuf.CodedInputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parseFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite 
extensionRegistry) throws java.io.IOException { @@ -39470,7 +39736,7 @@ public final class MasterProtos { public static Builder newBuilder() { return Builder.create(); } public Builder newBuilderForType() { return newBuilder(); } - public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest prototype) { + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest prototype) { return newBuilder().mergeFrom(prototype); } public Builder toBuilder() { return newBuilder(this); } @@ -39482,24 +39748,24 @@ public final class MasterProtos { return builder; } /** - * Protobuf type {@code ExecProcedureRequest} + * Protobuf type {@code IsMasterRunningRequest} */ public static final class Builder extends com.google.protobuf.GeneratedMessage.Builder - implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequestOrBuilder { + implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequestOrBuilder { public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureRequest_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningRequest_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureRequest_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningRequest_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.Builder.class); } - // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.newBuilder() + // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.newBuilder() private Builder() { maybeForceBuilderInitialization(); } @@ -39511,7 +39777,6 @@ public final class MasterProtos { } private void maybeForceBuilderInitialization() { if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { - getProcedureFieldBuilder(); } } private static Builder create() { @@ -39520,12 +39785,6 @@ public final class MasterProtos { public Builder clear() { super.clear(); - if (procedureBuilder_ == null) { - procedure_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); - } else { - procedureBuilder_.clear(); - } - bitField0_ = (bitField0_ & ~0x00000001); return this; } @@ -39535,65 +39794,43 @@ public final class MasterProtos { public com.google.protobuf.Descriptors.Descriptor getDescriptorForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureRequest_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningRequest_descriptor; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest getDefaultInstanceForType() { - return 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.getDefaultInstance(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.getDefaultInstance(); } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest build() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest result = buildPartial(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest build() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest result = buildPartial(); if (!result.isInitialized()) { throw newUninitializedMessageException(result); } return result; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest buildPartial() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest(this); - int from_bitField0_ = bitField0_; - int to_bitField0_ = 0; - if (((from_bitField0_ & 0x00000001) == 0x00000001)) { - to_bitField0_ |= 0x00000001; - } - if (procedureBuilder_ == null) { - result.procedure_ = procedure_; - } else { - result.procedure_ = procedureBuilder_.build(); - } - result.bitField0_ = to_bitField0_; + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest(this); onBuilt(); return result; } public Builder mergeFrom(com.google.protobuf.Message other) { - if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest) { - return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest)other); + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest)other); } else { super.mergeFrom(other); return this; } } - public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest other) { - if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.getDefaultInstance()) return this; - if (other.hasProcedure()) { - mergeProcedure(other.getProcedure()); - } + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest.getDefaultInstance()) return this; this.mergeUnknownFields(other.getUnknownFields()); return this; } public final boolean isInitialized() { - if (!hasProcedure()) { - - return false; - } - if (!getProcedure().isInitialized()) { - - return false; - } return true; } @@ -39601,11 +39838,11 @@ public final class MasterProtos { com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parsedMessage = null; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest parsedMessage = null; try { parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); } catch 
(com.google.protobuf.InvalidProtocolBufferException e) { - parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest) e.getUnfinishedMessage(); + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest) e.getUnfinishedMessage(); throw e; } finally { if (parsedMessage != null) { @@ -39614,178 +39851,50 @@ public final class MasterProtos { } return this; } - private int bitField0_; - // required .ProcedureDescription procedure = 1; - private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription procedure_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); - private com.google.protobuf.SingleFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder> procedureBuilder_; - /** - * required .ProcedureDescription procedure = 1; - */ - public boolean hasProcedure() { - return ((bitField0_ & 0x00000001) == 0x00000001); - } - /** - * required .ProcedureDescription procedure = 1; - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getProcedure() { - if (procedureBuilder_ == null) { - return procedure_; - } else { - return procedureBuilder_.getMessage(); - } - } - /** - * required .ProcedureDescription procedure = 1; - */ - public Builder setProcedure(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription value) { - if (procedureBuilder_ == null) { - if (value == null) { - throw new NullPointerException(); - } - procedure_ = value; - onChanged(); - } else { - procedureBuilder_.setMessage(value); - } - bitField0_ |= 0x00000001; - return this; - } - /** - * required .ProcedureDescription procedure = 1; - */ - public Builder setProcedure( - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder builderForValue) { - if (procedureBuilder_ == null) { - procedure_ = builderForValue.build(); - onChanged(); - } else { - procedureBuilder_.setMessage(builderForValue.build()); - } - bitField0_ |= 0x00000001; - return this; - } - /** - * required .ProcedureDescription procedure = 1; - */ - public Builder mergeProcedure(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription value) { - if (procedureBuilder_ == null) { - if (((bitField0_ & 0x00000001) == 0x00000001) && - procedure_ != org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance()) { - procedure_ = - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.newBuilder(procedure_).mergeFrom(value).buildPartial(); - } else { - procedure_ = value; - } - onChanged(); - } else { - procedureBuilder_.mergeFrom(value); - } - bitField0_ |= 0x00000001; - return this; - } - /** - * required .ProcedureDescription procedure = 1; - */ - public Builder clearProcedure() { - if (procedureBuilder_ == null) { - procedure_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); - onChanged(); - } else { - procedureBuilder_.clear(); - } - bitField0_ = (bitField0_ & ~0x00000001); - return this; - } - /** - * required .ProcedureDescription procedure = 1; - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder getProcedureBuilder() { - bitField0_ |= 0x00000001; - onChanged(); - return 
getProcedureFieldBuilder().getBuilder(); - } - /** - * required .ProcedureDescription procedure = 1; - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getProcedureOrBuilder() { - if (procedureBuilder_ != null) { - return procedureBuilder_.getMessageOrBuilder(); - } else { - return procedure_; - } - } - /** - * required .ProcedureDescription procedure = 1; - */ - private com.google.protobuf.SingleFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder> - getProcedureFieldBuilder() { - if (procedureBuilder_ == null) { - procedureBuilder_ = new com.google.protobuf.SingleFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder>( - procedure_, - getParentForChildren(), - isClean()); - procedure_ = null; - } - return procedureBuilder_; - } - - // @@protoc_insertion_point(builder_scope:ExecProcedureRequest) + // @@protoc_insertion_point(builder_scope:IsMasterRunningRequest) } static { - defaultInstance = new ExecProcedureRequest(true); + defaultInstance = new IsMasterRunningRequest(true); defaultInstance.initFields(); } - // @@protoc_insertion_point(class_scope:ExecProcedureRequest) + // @@protoc_insertion_point(class_scope:IsMasterRunningRequest) } - public interface ExecProcedureResponseOrBuilder + public interface IsMasterRunningResponseOrBuilder extends com.google.protobuf.MessageOrBuilder { - // optional int64 expected_timeout = 1; - /** - * optional int64 expected_timeout = 1; - */ - boolean hasExpectedTimeout(); - /** - * optional int64 expected_timeout = 1; - */ - long getExpectedTimeout(); - - // optional bytes return_data = 2; + // required bool is_master_running = 1; /** - * optional bytes return_data = 2; + * required bool is_master_running = 1; */ - boolean hasReturnData(); + boolean hasIsMasterRunning(); /** - * optional bytes return_data = 2; + * required bool is_master_running = 1; */ - com.google.protobuf.ByteString getReturnData(); + boolean getIsMasterRunning(); } /** - * Protobuf type {@code ExecProcedureResponse} + * Protobuf type {@code IsMasterRunningResponse} */ - public static final class ExecProcedureResponse extends + public static final class IsMasterRunningResponse extends com.google.protobuf.GeneratedMessage - implements ExecProcedureResponseOrBuilder { - // Use ExecProcedureResponse.newBuilder() to construct. - private ExecProcedureResponse(com.google.protobuf.GeneratedMessage.Builder builder) { + implements IsMasterRunningResponseOrBuilder { + // Use IsMasterRunningResponse.newBuilder() to construct. 
+ private IsMasterRunningResponse(com.google.protobuf.GeneratedMessage.Builder builder) { super(builder); this.unknownFields = builder.getUnknownFields(); } - private ExecProcedureResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + private IsMasterRunningResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } - private static final ExecProcedureResponse defaultInstance; - public static ExecProcedureResponse getDefaultInstance() { + private static final IsMasterRunningResponse defaultInstance; + public static IsMasterRunningResponse getDefaultInstance() { return defaultInstance; } - public ExecProcedureResponse getDefaultInstanceForType() { + public IsMasterRunningResponse getDefaultInstanceForType() { return defaultInstance; } @@ -39795,7 +39904,7 @@ public final class MasterProtos { getUnknownFields() { return this.unknownFields; } - private ExecProcedureResponse( + private IsMasterRunningResponse( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { @@ -39820,12 +39929,7 @@ public final class MasterProtos { } case 8: { bitField0_ |= 0x00000001; - expectedTimeout_ = input.readInt64(); - break; - } - case 18: { - bitField0_ |= 0x00000002; - returnData_ = input.readBytes(); + isMasterRunning_ = input.readBool(); break; } } @@ -39842,73 +39946,60 @@ public final class MasterProtos { } public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureResponse_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningResponse_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureResponse_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningResponse_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.Builder.class); } - public static com.google.protobuf.Parser PARSER = - new com.google.protobuf.AbstractParser() { - public ExecProcedureResponse parsePartialFrom( + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public IsMasterRunningResponse parsePartialFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { - return new ExecProcedureResponse(input, extensionRegistry); + return new IsMasterRunningResponse(input, extensionRegistry); } }; @java.lang.Override - public com.google.protobuf.Parser getParserForType() { + public com.google.protobuf.Parser getParserForType() { return PARSER; } private int bitField0_; - // optional int64 expected_timeout = 1; - public static final int EXPECTED_TIMEOUT_FIELD_NUMBER = 1; - private long expectedTimeout_; + // required bool is_master_running = 1; + public static final 
int IS_MASTER_RUNNING_FIELD_NUMBER = 1; + private boolean isMasterRunning_; /** - * optional int64 expected_timeout = 1; + * required bool is_master_running = 1; */ - public boolean hasExpectedTimeout() { + public boolean hasIsMasterRunning() { return ((bitField0_ & 0x00000001) == 0x00000001); } /** - * optional int64 expected_timeout = 1; - */ - public long getExpectedTimeout() { - return expectedTimeout_; - } - - // optional bytes return_data = 2; - public static final int RETURN_DATA_FIELD_NUMBER = 2; - private com.google.protobuf.ByteString returnData_; - /** - * optional bytes return_data = 2; - */ - public boolean hasReturnData() { - return ((bitField0_ & 0x00000002) == 0x00000002); - } - /** - * optional bytes return_data = 2; + * required bool is_master_running = 1; */ - public com.google.protobuf.ByteString getReturnData() { - return returnData_; + public boolean getIsMasterRunning() { + return isMasterRunning_; } private void initFields() { - expectedTimeout_ = 0L; - returnData_ = com.google.protobuf.ByteString.EMPTY; + isMasterRunning_ = false; } private byte memoizedIsInitialized = -1; public final boolean isInitialized() { byte isInitialized = memoizedIsInitialized; if (isInitialized != -1) return isInitialized == 1; + if (!hasIsMasterRunning()) { + memoizedIsInitialized = 0; + return false; + } memoizedIsInitialized = 1; return true; } @@ -39917,10 +40008,7 @@ public final class MasterProtos { throws java.io.IOException { getSerializedSize(); if (((bitField0_ & 0x00000001) == 0x00000001)) { - output.writeInt64(1, expectedTimeout_); - } - if (((bitField0_ & 0x00000002) == 0x00000002)) { - output.writeBytes(2, returnData_); + output.writeBool(1, isMasterRunning_); } getUnknownFields().writeTo(output); } @@ -39933,11 +40021,7 @@ public final class MasterProtos { size = 0; if (((bitField0_ & 0x00000001) == 0x00000001)) { size += com.google.protobuf.CodedOutputStream - .computeInt64Size(1, expectedTimeout_); - } - if (((bitField0_ & 0x00000002) == 0x00000002)) { - size += com.google.protobuf.CodedOutputStream - .computeBytesSize(2, returnData_); + .computeBoolSize(1, isMasterRunning_); } size += getUnknownFields().getSerializedSize(); memoizedSerializedSize = size; @@ -39956,21 +40040,16 @@ public final class MasterProtos { if (obj == this) { return true; } - if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse)) { + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse)) { return super.equals(obj); } - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse) obj; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse) obj; boolean result = true; - result = result && (hasExpectedTimeout() == other.hasExpectedTimeout()); - if (hasExpectedTimeout()) { - result = result && (getExpectedTimeout() - == other.getExpectedTimeout()); - } - result = result && (hasReturnData() == other.hasReturnData()); - if (hasReturnData()) { - result = result && getReturnData() - .equals(other.getReturnData()); + result = result && (hasIsMasterRunning() == other.hasIsMasterRunning()); + if (hasIsMasterRunning()) { + result = result && (getIsMasterRunning() + == other.getIsMasterRunning()); } result = result && getUnknownFields().equals(other.getUnknownFields()); @@ -39985,66 +40064,62 @@ 
public final class MasterProtos { } int hash = 41; hash = (19 * hash) + getDescriptorForType().hashCode(); - if (hasExpectedTimeout()) { - hash = (37 * hash) + EXPECTED_TIMEOUT_FIELD_NUMBER; - hash = (53 * hash) + hashLong(getExpectedTimeout()); - } - if (hasReturnData()) { - hash = (37 * hash) + RETURN_DATA_FIELD_NUMBER; - hash = (53 * hash) + getReturnData().hashCode(); + if (hasIsMasterRunning()) { + hash = (37 * hash) + IS_MASTER_RUNNING_FIELD_NUMBER; + hash = (53 * hash) + hashBoolean(getIsMasterRunning()); } hash = (29 * hash) + getUnknownFields().hashCode(); memoizedHashCode = hash; return hash; } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom( com.google.protobuf.ByteString data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom( com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom(byte[] data) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom(byte[] data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom( byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseDelimitedFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseDelimitedFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseDelimitedFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseDelimitedFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseDelimitedFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseDelimitedFrom(input, 
extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom( com.google.protobuf.CodedInputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parseFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { @@ -40053,7 +40128,7 @@ public final class MasterProtos { public static Builder newBuilder() { return Builder.create(); } public Builder newBuilderForType() { return newBuilder(); } - public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse prototype) { + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse prototype) { return newBuilder().mergeFrom(prototype); } public Builder toBuilder() { return newBuilder(this); } @@ -40065,24 +40140,24 @@ public final class MasterProtos { return builder; } /** - * Protobuf type {@code ExecProcedureResponse} + * Protobuf type {@code IsMasterRunningResponse} */ public static final class Builder extends com.google.protobuf.GeneratedMessage.Builder - implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponseOrBuilder { + implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponseOrBuilder { public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureResponse_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningResponse_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureResponse_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningResponse_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.Builder.class); } - // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.newBuilder() + // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.newBuilder() private Builder() { maybeForceBuilderInitialization(); } @@ -40102,10 +40177,8 @@ public final class MasterProtos { public Builder clear() { super.clear(); - expectedTimeout_ = 0L; + isMasterRunning_ = false; bitField0_ = (bitField0_ & ~0x00000001); - returnData_ = com.google.protobuf.ByteString.EMPTY; - bitField0_ = (bitField0_ & ~0x00000002); return this; } @@ -40115,60 +40188,57 @@ public final class MasterProtos { public com.google.protobuf.Descriptors.Descriptor getDescriptorForType() { - return 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureResponse_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsMasterRunningResponse_descriptor; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse getDefaultInstanceForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.getDefaultInstance(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.getDefaultInstance(); } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse build() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse result = buildPartial(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse build() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse result = buildPartial(); if (!result.isInitialized()) { throw newUninitializedMessageException(result); } return result; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse buildPartial() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse(this); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse(this); int from_bitField0_ = bitField0_; int to_bitField0_ = 0; if (((from_bitField0_ & 0x00000001) == 0x00000001)) { to_bitField0_ |= 0x00000001; } - result.expectedTimeout_ = expectedTimeout_; - if (((from_bitField0_ & 0x00000002) == 0x00000002)) { - to_bitField0_ |= 0x00000002; - } - result.returnData_ = returnData_; + result.isMasterRunning_ = isMasterRunning_; result.bitField0_ = to_bitField0_; onBuilt(); return result; } public Builder mergeFrom(com.google.protobuf.Message other) { - if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse) { - return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse)other); + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse)other); } else { super.mergeFrom(other); return this; } } - public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse other) { - if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.getDefaultInstance()) return this; - if (other.hasExpectedTimeout()) { - setExpectedTimeout(other.getExpectedTimeout()); - } - if (other.hasReturnData()) { - setReturnData(other.getReturnData()); + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse.getDefaultInstance()) return this; + if (other.hasIsMasterRunning()) { + setIsMasterRunning(other.getIsMasterRunning()); } this.mergeUnknownFields(other.getUnknownFields()); return this; } public final boolean 
isInitialized() { + if (!hasIsMasterRunning()) { + + return false; + } return true; } @@ -40176,11 +40246,11 @@ public final class MasterProtos { com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parsedMessage = null; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse parsedMessage = null; try { parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); } catch (com.google.protobuf.InvalidProtocolBufferException e) { - parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse) e.getUnfinishedMessage(); + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningResponse) e.getUnfinishedMessage(); throw e; } finally { if (parsedMessage != null) { @@ -40191,122 +40261,86 @@ public final class MasterProtos { } private int bitField0_; - // optional int64 expected_timeout = 1; - private long expectedTimeout_ ; + // required bool is_master_running = 1; + private boolean isMasterRunning_ ; /** - * optional int64 expected_timeout = 1; + * required bool is_master_running = 1; */ - public boolean hasExpectedTimeout() { + public boolean hasIsMasterRunning() { return ((bitField0_ & 0x00000001) == 0x00000001); } /** - * optional int64 expected_timeout = 1; + * required bool is_master_running = 1; */ - public long getExpectedTimeout() { - return expectedTimeout_; + public boolean getIsMasterRunning() { + return isMasterRunning_; } /** - * optional int64 expected_timeout = 1; + * required bool is_master_running = 1; */ - public Builder setExpectedTimeout(long value) { + public Builder setIsMasterRunning(boolean value) { bitField0_ |= 0x00000001; - expectedTimeout_ = value; + isMasterRunning_ = value; onChanged(); return this; } /** - * optional int64 expected_timeout = 1; + * required bool is_master_running = 1; */ - public Builder clearExpectedTimeout() { + public Builder clearIsMasterRunning() { bitField0_ = (bitField0_ & ~0x00000001); - expectedTimeout_ = 0L; - onChanged(); - return this; - } - - // optional bytes return_data = 2; - private com.google.protobuf.ByteString returnData_ = com.google.protobuf.ByteString.EMPTY; - /** - * optional bytes return_data = 2; - */ - public boolean hasReturnData() { - return ((bitField0_ & 0x00000002) == 0x00000002); - } - /** - * optional bytes return_data = 2; - */ - public com.google.protobuf.ByteString getReturnData() { - return returnData_; - } - /** - * optional bytes return_data = 2; - */ - public Builder setReturnData(com.google.protobuf.ByteString value) { - if (value == null) { - throw new NullPointerException(); - } - bitField0_ |= 0x00000002; - returnData_ = value; - onChanged(); - return this; - } - /** - * optional bytes return_data = 2; - */ - public Builder clearReturnData() { - bitField0_ = (bitField0_ & ~0x00000002); - returnData_ = getDefaultInstance().getReturnData(); + isMasterRunning_ = false; onChanged(); return this; } - // @@protoc_insertion_point(builder_scope:ExecProcedureResponse) + // @@protoc_insertion_point(builder_scope:IsMasterRunningResponse) } static { - defaultInstance = new ExecProcedureResponse(true); + defaultInstance = new IsMasterRunningResponse(true); defaultInstance.initFields(); } - // @@protoc_insertion_point(class_scope:ExecProcedureResponse) + // @@protoc_insertion_point(class_scope:IsMasterRunningResponse) } - public interface 
IsProcedureDoneRequestOrBuilder + public interface ExecProcedureRequestOrBuilder extends com.google.protobuf.MessageOrBuilder { - // optional .ProcedureDescription procedure = 1; + // required .ProcedureDescription procedure = 1; /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ boolean hasProcedure(); /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getProcedure(); /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getProcedureOrBuilder(); } /** - * Protobuf type {@code IsProcedureDoneRequest} + * Protobuf type {@code ExecProcedureRequest} */ - public static final class IsProcedureDoneRequest extends + public static final class ExecProcedureRequest extends com.google.protobuf.GeneratedMessage - implements IsProcedureDoneRequestOrBuilder { - // Use IsProcedureDoneRequest.newBuilder() to construct. - private IsProcedureDoneRequest(com.google.protobuf.GeneratedMessage.Builder builder) { + implements ExecProcedureRequestOrBuilder { + // Use ExecProcedureRequest.newBuilder() to construct. + private ExecProcedureRequest(com.google.protobuf.GeneratedMessage.Builder builder) { super(builder); this.unknownFields = builder.getUnknownFields(); } - private IsProcedureDoneRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + private ExecProcedureRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } - private static final IsProcedureDoneRequest defaultInstance; - public static IsProcedureDoneRequest getDefaultInstance() { + private static final ExecProcedureRequest defaultInstance; + public static ExecProcedureRequest getDefaultInstance() { return defaultInstance; } - public IsProcedureDoneRequest getDefaultInstanceForType() { + public ExecProcedureRequest getDefaultInstanceForType() { return defaultInstance; } @@ -40316,7 +40350,7 @@ public final class MasterProtos { getUnknownFields() { return this.unknownFields; } - private IsProcedureDoneRequest( + private ExecProcedureRequest( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { @@ -40366,49 +40400,49 @@ public final class MasterProtos { } public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneRequest_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureRequest_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneRequest_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureRequest_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.class, 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.Builder.class); } - public static com.google.protobuf.Parser PARSER = - new com.google.protobuf.AbstractParser() { - public IsProcedureDoneRequest parsePartialFrom( + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public ExecProcedureRequest parsePartialFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { - return new IsProcedureDoneRequest(input, extensionRegistry); + return new ExecProcedureRequest(input, extensionRegistry); } }; @java.lang.Override - public com.google.protobuf.Parser getParserForType() { + public com.google.protobuf.Parser getParserForType() { return PARSER; } private int bitField0_; - // optional .ProcedureDescription procedure = 1; + // required .ProcedureDescription procedure = 1; public static final int PROCEDURE_FIELD_NUMBER = 1; private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription procedure_; /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ public boolean hasProcedure() { return ((bitField0_ & 0x00000001) == 0x00000001); } /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getProcedure() { return procedure_; } /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getProcedureOrBuilder() { return procedure_; @@ -40422,11 +40456,13 @@ public final class MasterProtos { byte isInitialized = memoizedIsInitialized; if (isInitialized != -1) return isInitialized == 1; - if (hasProcedure()) { - if (!getProcedure().isInitialized()) { - memoizedIsInitialized = 0; - return false; - } + if (!hasProcedure()) { + memoizedIsInitialized = 0; + return false; + } + if (!getProcedure().isInitialized()) { + memoizedIsInitialized = 0; + return false; } memoizedIsInitialized = 1; return true; @@ -40468,10 +40504,10 @@ public final class MasterProtos { if (obj == this) { return true; } - if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest)) { + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest)) { return super.equals(obj); } - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest) obj; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest) obj; boolean result = true; result = result && (hasProcedure() == other.hasProcedure()); @@ -40501,53 +40537,53 @@ public final class MasterProtos { return hash; } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom( com.google.protobuf.ByteString data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom( + public static 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom( com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom(byte[] data) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom(byte[] data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom( byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseDelimitedFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseDelimitedFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseDelimitedFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseDelimitedFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseDelimitedFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseDelimitedFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom( com.google.protobuf.CodedInputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parseFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { @@ -40556,7 +40592,7 @@ public final class MasterProtos { public static Builder newBuilder() { return Builder.create(); } public Builder newBuilderForType() { return newBuilder(); } - public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest prototype) { + public static Builder 
newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest prototype) { return newBuilder().mergeFrom(prototype); } public Builder toBuilder() { return newBuilder(this); } @@ -40568,24 +40604,24 @@ public final class MasterProtos { return builder; } /** - * Protobuf type {@code IsProcedureDoneRequest} + * Protobuf type {@code ExecProcedureRequest} */ public static final class Builder extends com.google.protobuf.GeneratedMessage.Builder - implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequestOrBuilder { + implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequestOrBuilder { public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneRequest_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureRequest_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneRequest_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureRequest_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.Builder.class); } - // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.newBuilder() + // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.newBuilder() private Builder() { maybeForceBuilderInitialization(); } @@ -40621,23 +40657,23 @@ public final class MasterProtos { public com.google.protobuf.Descriptors.Descriptor getDescriptorForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneRequest_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureRequest_descriptor; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest getDefaultInstanceForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.getDefaultInstance(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.getDefaultInstance(); } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest build() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest result = buildPartial(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest build() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest result = buildPartial(); if (!result.isInitialized()) { throw newUninitializedMessageException(result); } return result; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest buildPartial() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest result = new 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest(this); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest(this); int from_bitField0_ = bitField0_; int to_bitField0_ = 0; if (((from_bitField0_ & 0x00000001) == 0x00000001)) { @@ -40654,16 +40690,16 @@ public final class MasterProtos { } public Builder mergeFrom(com.google.protobuf.Message other) { - if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest) { - return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest)other); + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest)other); } else { super.mergeFrom(other); return this; } } - public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest other) { - if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.getDefaultInstance()) return this; + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest.getDefaultInstance()) return this; if (other.hasProcedure()) { mergeProcedure(other.getProcedure()); } @@ -40672,11 +40708,13 @@ public final class MasterProtos { } public final boolean isInitialized() { - if (hasProcedure()) { - if (!getProcedure().isInitialized()) { - - return false; - } + if (!hasProcedure()) { + + return false; + } + if (!getProcedure().isInitialized()) { + + return false; } return true; } @@ -40685,11 +40723,11 @@ public final class MasterProtos { com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parsedMessage = null; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest parsedMessage = null; try { parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); } catch (com.google.protobuf.InvalidProtocolBufferException e) { - parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest) e.getUnfinishedMessage(); + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureRequest) e.getUnfinishedMessage(); throw e; } finally { if (parsedMessage != null) { @@ -40700,18 +40738,18 @@ public final class MasterProtos { } private int bitField0_; - // optional .ProcedureDescription procedure = 1; + // required .ProcedureDescription procedure = 1; private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription procedure_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); private com.google.protobuf.SingleFieldBuilder< org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder> procedureBuilder_; /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription 
procedure = 1; */ public boolean hasProcedure() { return ((bitField0_ & 0x00000001) == 0x00000001); } /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getProcedure() { if (procedureBuilder_ == null) { @@ -40721,7 +40759,7 @@ public final class MasterProtos { } } /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ public Builder setProcedure(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription value) { if (procedureBuilder_ == null) { @@ -40737,7 +40775,7 @@ public final class MasterProtos { return this; } /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ public Builder setProcedure( org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder builderForValue) { @@ -40751,7 +40789,7 @@ public final class MasterProtos { return this; } /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ public Builder mergeProcedure(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription value) { if (procedureBuilder_ == null) { @@ -40770,7 +40808,7 @@ public final class MasterProtos { return this; } /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ public Builder clearProcedure() { if (procedureBuilder_ == null) { @@ -40783,7 +40821,7 @@ public final class MasterProtos { return this; } /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder getProcedureBuilder() { bitField0_ |= 0x00000001; @@ -40791,7 +40829,7 @@ public final class MasterProtos { return getProcedureFieldBuilder().getBuilder(); } /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getProcedureOrBuilder() { if (procedureBuilder_ != null) { @@ -40801,7 +40839,7 @@ public final class MasterProtos { } } /** - * optional .ProcedureDescription procedure = 1; + * required .ProcedureDescription procedure = 1; */ private com.google.protobuf.SingleFieldBuilder< org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder> @@ -40817,63 +40855,59 @@ public final class MasterProtos { return procedureBuilder_; } - // @@protoc_insertion_point(builder_scope:IsProcedureDoneRequest) + // @@protoc_insertion_point(builder_scope:ExecProcedureRequest) } static { - defaultInstance = new IsProcedureDoneRequest(true); + defaultInstance = new ExecProcedureRequest(true); defaultInstance.initFields(); } - // @@protoc_insertion_point(class_scope:IsProcedureDoneRequest) + // @@protoc_insertion_point(class_scope:ExecProcedureRequest) } - public interface IsProcedureDoneResponseOrBuilder + public interface ExecProcedureResponseOrBuilder extends com.google.protobuf.MessageOrBuilder { - // optional bool done = 1 [default = false]; + // optional int64 expected_timeout = 1; /** - * optional bool done = 1 [default = false]; + * optional int64 expected_timeout = 1; */ - boolean hasDone(); + 
boolean hasExpectedTimeout(); /** - * optional bool done = 1 [default = false]; + * optional int64 expected_timeout = 1; */ - boolean getDone(); + long getExpectedTimeout(); - // optional .ProcedureDescription snapshot = 2; - /** - * optional .ProcedureDescription snapshot = 2; - */ - boolean hasSnapshot(); + // optional bytes return_data = 2; /** - * optional .ProcedureDescription snapshot = 2; + * optional bytes return_data = 2; */ - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getSnapshot(); + boolean hasReturnData(); /** - * optional .ProcedureDescription snapshot = 2; + * optional bytes return_data = 2; */ - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getSnapshotOrBuilder(); + com.google.protobuf.ByteString getReturnData(); } /** - * Protobuf type {@code IsProcedureDoneResponse} + * Protobuf type {@code ExecProcedureResponse} */ - public static final class IsProcedureDoneResponse extends + public static final class ExecProcedureResponse extends com.google.protobuf.GeneratedMessage - implements IsProcedureDoneResponseOrBuilder { - // Use IsProcedureDoneResponse.newBuilder() to construct. - private IsProcedureDoneResponse(com.google.protobuf.GeneratedMessage.Builder builder) { + implements ExecProcedureResponseOrBuilder { + // Use ExecProcedureResponse.newBuilder() to construct. + private ExecProcedureResponse(com.google.protobuf.GeneratedMessage.Builder builder) { super(builder); this.unknownFields = builder.getUnknownFields(); } - private IsProcedureDoneResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + private ExecProcedureResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } - private static final IsProcedureDoneResponse defaultInstance; - public static IsProcedureDoneResponse getDefaultInstance() { + private static final ExecProcedureResponse defaultInstance; + public static ExecProcedureResponse getDefaultInstance() { return defaultInstance; } - public IsProcedureDoneResponse getDefaultInstanceForType() { + public ExecProcedureResponse getDefaultInstanceForType() { return defaultInstance; } @@ -40883,7 +40917,7 @@ public final class MasterProtos { getUnknownFields() { return this.unknownFields; } - private IsProcedureDoneResponse( + private ExecProcedureResponse( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { @@ -40908,20 +40942,12 @@ public final class MasterProtos { } case 8: { bitField0_ |= 0x00000001; - done_ = input.readBool(); + expectedTimeout_ = input.readInt64(); break; } case 18: { - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder subBuilder = null; - if (((bitField0_ & 0x00000002) == 0x00000002)) { - subBuilder = snapshot_.toBuilder(); - } - snapshot_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.PARSER, extensionRegistry); - if (subBuilder != null) { - subBuilder.mergeFrom(snapshot_); - snapshot_ = subBuilder.buildPartial(); - } bitField0_ |= 0x00000002; + returnData_ = input.readBytes(); break; } } @@ -40938,85 +40964,73 @@ public final class MasterProtos { } public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneResponse_descriptor; + return 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureResponse_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneResponse_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureResponse_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.Builder.class); } - public static com.google.protobuf.Parser PARSER = - new com.google.protobuf.AbstractParser() { - public IsProcedureDoneResponse parsePartialFrom( + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public ExecProcedureResponse parsePartialFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { - return new IsProcedureDoneResponse(input, extensionRegistry); + return new ExecProcedureResponse(input, extensionRegistry); } }; @java.lang.Override - public com.google.protobuf.Parser getParserForType() { + public com.google.protobuf.Parser getParserForType() { return PARSER; } private int bitField0_; - // optional bool done = 1 [default = false]; - public static final int DONE_FIELD_NUMBER = 1; - private boolean done_; + // optional int64 expected_timeout = 1; + public static final int EXPECTED_TIMEOUT_FIELD_NUMBER = 1; + private long expectedTimeout_; /** - * optional bool done = 1 [default = false]; + * optional int64 expected_timeout = 1; */ - public boolean hasDone() { + public boolean hasExpectedTimeout() { return ((bitField0_ & 0x00000001) == 0x00000001); } /** - * optional bool done = 1 [default = false]; + * optional int64 expected_timeout = 1; */ - public boolean getDone() { - return done_; + public long getExpectedTimeout() { + return expectedTimeout_; } - // optional .ProcedureDescription snapshot = 2; - public static final int SNAPSHOT_FIELD_NUMBER = 2; - private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription snapshot_; + // optional bytes return_data = 2; + public static final int RETURN_DATA_FIELD_NUMBER = 2; + private com.google.protobuf.ByteString returnData_; /** - * optional .ProcedureDescription snapshot = 2; + * optional bytes return_data = 2; */ - public boolean hasSnapshot() { + public boolean hasReturnData() { return ((bitField0_ & 0x00000002) == 0x00000002); } /** - * optional .ProcedureDescription snapshot = 2; - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getSnapshot() { - return snapshot_; - } - /** - * optional .ProcedureDescription snapshot = 2; + * optional bytes return_data = 2; */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getSnapshotOrBuilder() { - return snapshot_; + public com.google.protobuf.ByteString getReturnData() { + return returnData_; } private void initFields() { - done_ = false; - snapshot_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); + expectedTimeout_ = 0L; + returnData_ = 
com.google.protobuf.ByteString.EMPTY; } private byte memoizedIsInitialized = -1; public final boolean isInitialized() { byte isInitialized = memoizedIsInitialized; if (isInitialized != -1) return isInitialized == 1; - if (hasSnapshot()) { - if (!getSnapshot().isInitialized()) { - memoizedIsInitialized = 0; - return false; - } - } memoizedIsInitialized = 1; return true; } @@ -41025,10 +41039,10 @@ public final class MasterProtos { throws java.io.IOException { getSerializedSize(); if (((bitField0_ & 0x00000001) == 0x00000001)) { - output.writeBool(1, done_); + output.writeInt64(1, expectedTimeout_); } if (((bitField0_ & 0x00000002) == 0x00000002)) { - output.writeMessage(2, snapshot_); + output.writeBytes(2, returnData_); } getUnknownFields().writeTo(output); } @@ -41041,11 +41055,11 @@ public final class MasterProtos { size = 0; if (((bitField0_ & 0x00000001) == 0x00000001)) { size += com.google.protobuf.CodedOutputStream - .computeBoolSize(1, done_); + .computeInt64Size(1, expectedTimeout_); } if (((bitField0_ & 0x00000002) == 0x00000002)) { size += com.google.protobuf.CodedOutputStream - .computeMessageSize(2, snapshot_); + .computeBytesSize(2, returnData_); } size += getUnknownFields().getSerializedSize(); memoizedSerializedSize = size; @@ -41064,21 +41078,21 @@ public final class MasterProtos { if (obj == this) { return true; } - if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse)) { + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse)) { return super.equals(obj); } - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse) obj; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse) obj; boolean result = true; - result = result && (hasDone() == other.hasDone()); - if (hasDone()) { - result = result && (getDone() - == other.getDone()); + result = result && (hasExpectedTimeout() == other.hasExpectedTimeout()); + if (hasExpectedTimeout()) { + result = result && (getExpectedTimeout() + == other.getExpectedTimeout()); } - result = result && (hasSnapshot() == other.hasSnapshot()); - if (hasSnapshot()) { - result = result && getSnapshot() - .equals(other.getSnapshot()); + result = result && (hasReturnData() == other.hasReturnData()); + if (hasReturnData()) { + result = result && getReturnData() + .equals(other.getReturnData()); } result = result && getUnknownFields().equals(other.getUnknownFields()); @@ -41093,66 +41107,66 @@ public final class MasterProtos { } int hash = 41; hash = (19 * hash) + getDescriptorForType().hashCode(); - if (hasDone()) { - hash = (37 * hash) + DONE_FIELD_NUMBER; - hash = (53 * hash) + hashBoolean(getDone()); + if (hasExpectedTimeout()) { + hash = (37 * hash) + EXPECTED_TIMEOUT_FIELD_NUMBER; + hash = (53 * hash) + hashLong(getExpectedTimeout()); } - if (hasSnapshot()) { - hash = (37 * hash) + SNAPSHOT_FIELD_NUMBER; - hash = (53 * hash) + getSnapshot().hashCode(); + if (hasReturnData()) { + hash = (37 * hash) + RETURN_DATA_FIELD_NUMBER; + hash = (53 * hash) + getReturnData().hashCode(); } hash = (29 * hash) + getUnknownFields().hashCode(); memoizedHashCode = hash; return hash; } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom( + public static 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom( com.google.protobuf.ByteString data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom( com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom(byte[] data) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom(byte[] data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom( byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseDelimitedFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseDelimitedFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseDelimitedFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseDelimitedFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseDelimitedFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseDelimitedFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom( com.google.protobuf.CodedInputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parseFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { @@ -41161,7 +41175,7 @@ public final class 
MasterProtos { public static Builder newBuilder() { return Builder.create(); } public Builder newBuilderForType() { return newBuilder(); } - public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse prototype) { + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse prototype) { return newBuilder().mergeFrom(prototype); } public Builder toBuilder() { return newBuilder(this); } @@ -41173,24 +41187,24 @@ public final class MasterProtos { return builder; } /** - * Protobuf type {@code IsProcedureDoneResponse} + * Protobuf type {@code ExecProcedureResponse} */ public static final class Builder extends com.google.protobuf.GeneratedMessage.Builder - implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponseOrBuilder { + implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponseOrBuilder { public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneResponse_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureResponse_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneResponse_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureResponse_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.Builder.class); } - // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.newBuilder() + // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.newBuilder() private Builder() { maybeForceBuilderInitialization(); } @@ -41202,7 +41216,6 @@ public final class MasterProtos { } private void maybeForceBuilderInitialization() { if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { - getSnapshotFieldBuilder(); } } private static Builder create() { @@ -41211,13 +41224,9 @@ public final class MasterProtos { public Builder clear() { super.clear(); - done_ = false; + expectedTimeout_ = 0L; bitField0_ = (bitField0_ & ~0x00000001); - if (snapshotBuilder_ == null) { - snapshot_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); - } else { - snapshotBuilder_.clear(); - } + returnData_ = com.google.protobuf.ByteString.EMPTY; bitField0_ = (bitField0_ & ~0x00000002); return this; } @@ -41228,70 +41237,60 @@ public final class MasterProtos { public com.google.protobuf.Descriptors.Descriptor getDescriptorForType() { - return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneResponse_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_ExecProcedureResponse_descriptor; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse getDefaultInstanceForType() { - return 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.getDefaultInstance(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.getDefaultInstance(); } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse build() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse result = buildPartial(); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse build() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse result = buildPartial(); if (!result.isInitialized()) { throw newUninitializedMessageException(result); } return result; } - public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse buildPartial() { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse(this); + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse(this); int from_bitField0_ = bitField0_; int to_bitField0_ = 0; if (((from_bitField0_ & 0x00000001) == 0x00000001)) { to_bitField0_ |= 0x00000001; } - result.done_ = done_; + result.expectedTimeout_ = expectedTimeout_; if (((from_bitField0_ & 0x00000002) == 0x00000002)) { to_bitField0_ |= 0x00000002; } - if (snapshotBuilder_ == null) { - result.snapshot_ = snapshot_; - } else { - result.snapshot_ = snapshotBuilder_.build(); - } + result.returnData_ = returnData_; result.bitField0_ = to_bitField0_; onBuilt(); return result; } public Builder mergeFrom(com.google.protobuf.Message other) { - if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse) { - return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse)other); + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse)other); } else { super.mergeFrom(other); return this; } } - public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse other) { - if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.getDefaultInstance()) return this; - if (other.hasDone()) { - setDone(other.getDone()); + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse.getDefaultInstance()) return this; + if (other.hasExpectedTimeout()) { + setExpectedTimeout(other.getExpectedTimeout()); } - if (other.hasSnapshot()) { - mergeSnapshot(other.getSnapshot()); + if (other.hasReturnData()) { + setReturnData(other.getReturnData()); } this.mergeUnknownFields(other.getUnknownFields()); return this; } public final boolean isInitialized() { - if (hasSnapshot()) { - if (!getSnapshot().isInitialized()) { - - return false; - } - } return true; } @@ -41299,11 +41298,11 @@ public final class MasterProtos { com.google.protobuf.CodedInputStream input, 
com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { - org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parsedMessage = null; + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse parsedMessage = null; try { parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); } catch (com.google.protobuf.InvalidProtocolBufferException e) { - parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse) e.getUnfinishedMessage(); + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ExecProcedureResponse) e.getUnfinishedMessage(); throw e; } finally { if (parsedMessage != null) { @@ -41314,165 +41313,3071 @@ public final class MasterProtos { } private int bitField0_; - // optional bool done = 1 [default = false]; - private boolean done_ ; + // optional int64 expected_timeout = 1; + private long expectedTimeout_ ; /** - * optional bool done = 1 [default = false]; + * optional int64 expected_timeout = 1; */ - public boolean hasDone() { + public boolean hasExpectedTimeout() { return ((bitField0_ & 0x00000001) == 0x00000001); } /** - * optional bool done = 1 [default = false]; + * optional int64 expected_timeout = 1; */ - public boolean getDone() { - return done_; + public long getExpectedTimeout() { + return expectedTimeout_; } /** - * optional bool done = 1 [default = false]; + * optional int64 expected_timeout = 1; */ - public Builder setDone(boolean value) { + public Builder setExpectedTimeout(long value) { bitField0_ |= 0x00000001; - done_ = value; + expectedTimeout_ = value; onChanged(); return this; } /** - * optional bool done = 1 [default = false]; + * optional int64 expected_timeout = 1; */ - public Builder clearDone() { + public Builder clearExpectedTimeout() { bitField0_ = (bitField0_ & ~0x00000001); - done_ = false; + expectedTimeout_ = 0L; + onChanged(); + return this; + } + + // optional bytes return_data = 2; + private com.google.protobuf.ByteString returnData_ = com.google.protobuf.ByteString.EMPTY; + /** + * optional bytes return_data = 2; + */ + public boolean hasReturnData() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional bytes return_data = 2; + */ + public com.google.protobuf.ByteString getReturnData() { + return returnData_; + } + /** + * optional bytes return_data = 2; + */ + public Builder setReturnData(com.google.protobuf.ByteString value) { + if (value == null) { + throw new NullPointerException(); + } + bitField0_ |= 0x00000002; + returnData_ = value; + onChanged(); + return this; + } + /** + * optional bytes return_data = 2; + */ + public Builder clearReturnData() { + bitField0_ = (bitField0_ & ~0x00000002); + returnData_ = getDefaultInstance().getReturnData(); onChanged(); return this; } - // optional .ProcedureDescription snapshot = 2; - private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription snapshot_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); - private com.google.protobuf.SingleFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder> snapshotBuilder_; - /** - * optional .ProcedureDescription snapshot = 2; - */ - public boolean hasSnapshot() { - return ((bitField0_ & 0x00000002) == 0x00000002); + // 
@@protoc_insertion_point(builder_scope:ExecProcedureResponse) + } + + static { + defaultInstance = new ExecProcedureResponse(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:ExecProcedureResponse) + } + + public interface IsProcedureDoneRequestOrBuilder + extends com.google.protobuf.MessageOrBuilder { + + // optional .ProcedureDescription procedure = 1; + /** + * optional .ProcedureDescription procedure = 1; + */ + boolean hasProcedure(); + /** + * optional .ProcedureDescription procedure = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getProcedure(); + /** + * optional .ProcedureDescription procedure = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getProcedureOrBuilder(); + } + /** + * Protobuf type {@code IsProcedureDoneRequest} + */ + public static final class IsProcedureDoneRequest extends + com.google.protobuf.GeneratedMessage + implements IsProcedureDoneRequestOrBuilder { + // Use IsProcedureDoneRequest.newBuilder() to construct. + private IsProcedureDoneRequest(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private IsProcedureDoneRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final IsProcedureDoneRequest defaultInstance; + public static IsProcedureDoneRequest getDefaultInstance() { + return defaultInstance; + } + + public IsProcedureDoneRequest getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private IsProcedureDoneRequest( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + int mutable_bitField0_ = 0; + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + case 10: { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder subBuilder = null; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + subBuilder = procedure_.toBuilder(); + } + procedure_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(procedure_); + procedure_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000001; + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneRequest_descriptor; + } + + protected 
com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneRequest_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.Builder.class); + } + + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public IsProcedureDoneRequest parsePartialFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return new IsProcedureDoneRequest(input, extensionRegistry); + } + }; + + @java.lang.Override + public com.google.protobuf.Parser getParserForType() { + return PARSER; + } + + private int bitField0_; + // optional .ProcedureDescription procedure = 1; + public static final int PROCEDURE_FIELD_NUMBER = 1; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription procedure_; + /** + * optional .ProcedureDescription procedure = 1; + */ + public boolean hasProcedure() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional .ProcedureDescription procedure = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getProcedure() { + return procedure_; + } + /** + * optional .ProcedureDescription procedure = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getProcedureOrBuilder() { + return procedure_; + } + + private void initFields() { + procedure_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); + } + private byte memoizedIsInitialized = -1; + public final boolean isInitialized() { + byte isInitialized = memoizedIsInitialized; + if (isInitialized != -1) return isInitialized == 1; + + if (hasProcedure()) { + if (!getProcedure().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + } + memoizedIsInitialized = 1; + return true; + } + + public void writeTo(com.google.protobuf.CodedOutputStream output) + throws java.io.IOException { + getSerializedSize(); + if (((bitField0_ & 0x00000001) == 0x00000001)) { + output.writeMessage(1, procedure_); + } + getUnknownFields().writeTo(output); + } + + private int memoizedSerializedSize = -1; + public int getSerializedSize() { + int size = memoizedSerializedSize; + if (size != -1) return size; + + size = 0; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(1, procedure_); + } + size += getUnknownFields().getSerializedSize(); + memoizedSerializedSize = size; + return size; + } + + private static final long serialVersionUID = 0L; + @java.lang.Override + protected java.lang.Object writeReplace() + throws java.io.ObjectStreamException { + return super.writeReplace(); + } + + @java.lang.Override + public boolean equals(final java.lang.Object obj) { + if (obj == this) { + return true; + } + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest)) { + return super.equals(obj); + } + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest) obj; + + boolean result = true; + result = result && 
(hasProcedure() == other.hasProcedure()); + if (hasProcedure()) { + result = result && getProcedure() + .equals(other.getProcedure()); + } + result = result && + getUnknownFields().equals(other.getUnknownFields()); + return result; + } + + private int memoizedHashCode = 0; + @java.lang.Override + public int hashCode() { + if (memoizedHashCode != 0) { + return memoizedHashCode; + } + int hash = 41; + hash = (19 * hash) + getDescriptorForType().hashCode(); + if (hasProcedure()) { + hash = (37 * hash) + PROCEDURE_FIELD_NUMBER; + hash = (53 * hash) + getProcedure().hashCode(); + } + hash = (29 * hash) + getUnknownFields().hashCode(); + memoizedHashCode = hash; + return hash; + } + + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom( + com.google.protobuf.ByteString data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom( + com.google.protobuf.ByteString data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom(byte[] data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom( + byte[] data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseDelimitedFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseDelimitedFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom( + com.google.protobuf.CodedInputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parseFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + + public static Builder newBuilder() { return Builder.create(); } + public Builder newBuilderForType() { return newBuilder(); } + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest 
prototype) { + return newBuilder().mergeFrom(prototype); + } + public Builder toBuilder() { return newBuilder(this); } + + @java.lang.Override + protected Builder newBuilderForType( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + Builder builder = new Builder(parent); + return builder; + } + /** + * Protobuf type {@code IsProcedureDoneRequest} + */ + public static final class Builder extends + com.google.protobuf.GeneratedMessage.Builder + implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequestOrBuilder { + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneRequest_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneRequest_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.Builder.class); + } + + // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.newBuilder() + private Builder() { + maybeForceBuilderInitialization(); + } + + private Builder( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + super(parent); + maybeForceBuilderInitialization(); + } + private void maybeForceBuilderInitialization() { + if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + getProcedureFieldBuilder(); + } + } + private static Builder create() { + return new Builder(); + } + + public Builder clear() { + super.clear(); + if (procedureBuilder_ == null) { + procedure_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); + } else { + procedureBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000001); + return this; + } + + public Builder clone() { + return create().mergeFrom(buildPartial()); + } + + public com.google.protobuf.Descriptors.Descriptor + getDescriptorForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneRequest_descriptor; + } + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.getDefaultInstance(); + } + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest build() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest result = buildPartial(); + if (!result.isInitialized()) { + throw newUninitializedMessageException(result); + } + return result; + } + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest(this); + int from_bitField0_ = bitField0_; + int to_bitField0_ = 0; + if (((from_bitField0_ & 0x00000001) == 0x00000001)) { + to_bitField0_ |= 0x00000001; + } + if (procedureBuilder_ == null) { + result.procedure_ = procedure_; + } else { + result.procedure_ = procedureBuilder_.build(); + } + result.bitField0_ = to_bitField0_; + onBuilt(); + return result; + } + + public Builder 
mergeFrom(com.google.protobuf.Message other) { + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest)other); + } else { + super.mergeFrom(other); + return this; + } + } + + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest.getDefaultInstance()) return this; + if (other.hasProcedure()) { + mergeProcedure(other.getProcedure()); + } + this.mergeUnknownFields(other.getUnknownFields()); + return this; + } + + public final boolean isInitialized() { + if (hasProcedure()) { + if (!getProcedure().isInitialized()) { + + return false; + } + } + return true; + } + + public Builder mergeFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest parsedMessage = null; + try { + parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneRequest) e.getUnfinishedMessage(); + throw e; + } finally { + if (parsedMessage != null) { + mergeFrom(parsedMessage); + } + } + return this; + } + private int bitField0_; + + // optional .ProcedureDescription procedure = 1; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription procedure_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder> procedureBuilder_; + /** + * optional .ProcedureDescription procedure = 1; + */ + public boolean hasProcedure() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional .ProcedureDescription procedure = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getProcedure() { + if (procedureBuilder_ == null) { + return procedure_; + } else { + return procedureBuilder_.getMessage(); + } + } + /** + * optional .ProcedureDescription procedure = 1; + */ + public Builder setProcedure(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription value) { + if (procedureBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + procedure_ = value; + onChanged(); + } else { + procedureBuilder_.setMessage(value); + } + bitField0_ |= 0x00000001; + return this; + } + /** + * optional .ProcedureDescription procedure = 1; + */ + public Builder setProcedure( + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder builderForValue) { + if (procedureBuilder_ == null) { + procedure_ = builderForValue.build(); + onChanged(); + } else { + procedureBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000001; + return this; + } + /** + * optional .ProcedureDescription procedure = 1; + */ + public Builder mergeProcedure(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription value) { + if 
(procedureBuilder_ == null) { + if (((bitField0_ & 0x00000001) == 0x00000001) && + procedure_ != org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance()) { + procedure_ = + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.newBuilder(procedure_).mergeFrom(value).buildPartial(); + } else { + procedure_ = value; + } + onChanged(); + } else { + procedureBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000001; + return this; + } + /** + * optional .ProcedureDescription procedure = 1; + */ + public Builder clearProcedure() { + if (procedureBuilder_ == null) { + procedure_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); + onChanged(); + } else { + procedureBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000001); + return this; + } + /** + * optional .ProcedureDescription procedure = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder getProcedureBuilder() { + bitField0_ |= 0x00000001; + onChanged(); + return getProcedureFieldBuilder().getBuilder(); + } + /** + * optional .ProcedureDescription procedure = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getProcedureOrBuilder() { + if (procedureBuilder_ != null) { + return procedureBuilder_.getMessageOrBuilder(); + } else { + return procedure_; + } + } + /** + * optional .ProcedureDescription procedure = 1; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder> + getProcedureFieldBuilder() { + if (procedureBuilder_ == null) { + procedureBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder>( + procedure_, + getParentForChildren(), + isClean()); + procedure_ = null; + } + return procedureBuilder_; + } + + // @@protoc_insertion_point(builder_scope:IsProcedureDoneRequest) + } + + static { + defaultInstance = new IsProcedureDoneRequest(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:IsProcedureDoneRequest) + } + + public interface IsProcedureDoneResponseOrBuilder + extends com.google.protobuf.MessageOrBuilder { + + // optional bool done = 1 [default = false]; + /** + * optional bool done = 1 [default = false]; + */ + boolean hasDone(); + /** + * optional bool done = 1 [default = false]; + */ + boolean getDone(); + + // optional .ProcedureDescription snapshot = 2; + /** + * optional .ProcedureDescription snapshot = 2; + */ + boolean hasSnapshot(); + /** + * optional .ProcedureDescription snapshot = 2; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getSnapshot(); + /** + * optional .ProcedureDescription snapshot = 2; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getSnapshotOrBuilder(); + } + /** + * Protobuf type {@code IsProcedureDoneResponse} + */ + public static final class IsProcedureDoneResponse extends + com.google.protobuf.GeneratedMessage + implements IsProcedureDoneResponseOrBuilder { + // Use 
IsProcedureDoneResponse.newBuilder() to construct. + private IsProcedureDoneResponse(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private IsProcedureDoneResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final IsProcedureDoneResponse defaultInstance; + public static IsProcedureDoneResponse getDefaultInstance() { + return defaultInstance; + } + + public IsProcedureDoneResponse getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private IsProcedureDoneResponse( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + int mutable_bitField0_ = 0; + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + case 8: { + bitField0_ |= 0x00000001; + done_ = input.readBool(); + break; + } + case 18: { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder subBuilder = null; + if (((bitField0_ & 0x00000002) == 0x00000002)) { + subBuilder = snapshot_.toBuilder(); + } + snapshot_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(snapshot_); + snapshot_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000002; + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneResponse_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneResponse_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.Builder.class); + } + + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public IsProcedureDoneResponse parsePartialFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return new IsProcedureDoneResponse(input, extensionRegistry); + } + }; + + @java.lang.Override + public com.google.protobuf.Parser getParserForType() { + return PARSER; + } + + private int bitField0_; + // optional bool 
done = 1 [default = false]; + public static final int DONE_FIELD_NUMBER = 1; + private boolean done_; + /** + * optional bool done = 1 [default = false]; + */ + public boolean hasDone() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional bool done = 1 [default = false]; + */ + public boolean getDone() { + return done_; + } + + // optional .ProcedureDescription snapshot = 2; + public static final int SNAPSHOT_FIELD_NUMBER = 2; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription snapshot_; + /** + * optional .ProcedureDescription snapshot = 2; + */ + public boolean hasSnapshot() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional .ProcedureDescription snapshot = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getSnapshot() { + return snapshot_; + } + /** + * optional .ProcedureDescription snapshot = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getSnapshotOrBuilder() { + return snapshot_; + } + + private void initFields() { + done_ = false; + snapshot_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); + } + private byte memoizedIsInitialized = -1; + public final boolean isInitialized() { + byte isInitialized = memoizedIsInitialized; + if (isInitialized != -1) return isInitialized == 1; + + if (hasSnapshot()) { + if (!getSnapshot().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + } + memoizedIsInitialized = 1; + return true; + } + + public void writeTo(com.google.protobuf.CodedOutputStream output) + throws java.io.IOException { + getSerializedSize(); + if (((bitField0_ & 0x00000001) == 0x00000001)) { + output.writeBool(1, done_); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + output.writeMessage(2, snapshot_); + } + getUnknownFields().writeTo(output); + } + + private int memoizedSerializedSize = -1; + public int getSerializedSize() { + int size = memoizedSerializedSize; + if (size != -1) return size; + + size = 0; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + size += com.google.protobuf.CodedOutputStream + .computeBoolSize(1, done_); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(2, snapshot_); + } + size += getUnknownFields().getSerializedSize(); + memoizedSerializedSize = size; + return size; + } + + private static final long serialVersionUID = 0L; + @java.lang.Override + protected java.lang.Object writeReplace() + throws java.io.ObjectStreamException { + return super.writeReplace(); + } + + @java.lang.Override + public boolean equals(final java.lang.Object obj) { + if (obj == this) { + return true; + } + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse)) { + return super.equals(obj); + } + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse) obj; + + boolean result = true; + result = result && (hasDone() == other.hasDone()); + if (hasDone()) { + result = result && (getDone() + == other.getDone()); + } + result = result && (hasSnapshot() == other.hasSnapshot()); + if (hasSnapshot()) { + result = result && getSnapshot() + .equals(other.getSnapshot()); + } + result = result && + getUnknownFields().equals(other.getUnknownFields()); + return result; + } + + private int 
memoizedHashCode = 0; + @java.lang.Override + public int hashCode() { + if (memoizedHashCode != 0) { + return memoizedHashCode; + } + int hash = 41; + hash = (19 * hash) + getDescriptorForType().hashCode(); + if (hasDone()) { + hash = (37 * hash) + DONE_FIELD_NUMBER; + hash = (53 * hash) + hashBoolean(getDone()); + } + if (hasSnapshot()) { + hash = (37 * hash) + SNAPSHOT_FIELD_NUMBER; + hash = (53 * hash) + getSnapshot().hashCode(); + } + hash = (29 * hash) + getUnknownFields().hashCode(); + memoizedHashCode = hash; + return hash; + } + + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom( + com.google.protobuf.ByteString data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom( + com.google.protobuf.ByteString data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom(byte[] data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom( + byte[] data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseDelimitedFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseDelimitedFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom( + com.google.protobuf.CodedInputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parseFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + + public static Builder newBuilder() { return Builder.create(); } + public Builder newBuilderForType() { return newBuilder(); } + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse prototype) { + return newBuilder().mergeFrom(prototype); + } + public Builder toBuilder() { return newBuilder(this); } + + 
@java.lang.Override + protected Builder newBuilderForType( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + Builder builder = new Builder(parent); + return builder; + } + /** + * Protobuf type {@code IsProcedureDoneResponse} + */ + public static final class Builder extends + com.google.protobuf.GeneratedMessage.Builder + implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponseOrBuilder { + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneResponse_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneResponse_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.Builder.class); + } + + // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.newBuilder() + private Builder() { + maybeForceBuilderInitialization(); + } + + private Builder( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + super(parent); + maybeForceBuilderInitialization(); + } + private void maybeForceBuilderInitialization() { + if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + getSnapshotFieldBuilder(); + } + } + private static Builder create() { + return new Builder(); + } + + public Builder clear() { + super.clear(); + done_ = false; + bitField0_ = (bitField0_ & ~0x00000001); + if (snapshotBuilder_ == null) { + snapshot_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); + } else { + snapshotBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000002); + return this; + } + + public Builder clone() { + return create().mergeFrom(buildPartial()); + } + + public com.google.protobuf.Descriptors.Descriptor + getDescriptorForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_IsProcedureDoneResponse_descriptor; + } + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.getDefaultInstance(); + } + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse build() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse result = buildPartial(); + if (!result.isInitialized()) { + throw newUninitializedMessageException(result); + } + return result; + } + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse(this); + int from_bitField0_ = bitField0_; + int to_bitField0_ = 0; + if (((from_bitField0_ & 0x00000001) == 0x00000001)) { + to_bitField0_ |= 0x00000001; + } + result.done_ = done_; + if (((from_bitField0_ & 0x00000002) == 0x00000002)) { + to_bitField0_ |= 0x00000002; + } + if (snapshotBuilder_ == null) { + result.snapshot_ = snapshot_; + } else { + result.snapshot_ = snapshotBuilder_.build(); + } + result.bitField0_ = to_bitField0_; 
+ onBuilt(); + return result; + } + + public Builder mergeFrom(com.google.protobuf.Message other) { + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse)other); + } else { + super.mergeFrom(other); + return this; + } + } + + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse.getDefaultInstance()) return this; + if (other.hasDone()) { + setDone(other.getDone()); + } + if (other.hasSnapshot()) { + mergeSnapshot(other.getSnapshot()); + } + this.mergeUnknownFields(other.getUnknownFields()); + return this; + } + + public final boolean isInitialized() { + if (hasSnapshot()) { + if (!getSnapshot().isInitialized()) { + + return false; + } + } + return true; + } + + public Builder mergeFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse parsedMessage = null; + try { + parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsProcedureDoneResponse) e.getUnfinishedMessage(); + throw e; + } finally { + if (parsedMessage != null) { + mergeFrom(parsedMessage); + } + } + return this; + } + private int bitField0_; + + // optional bool done = 1 [default = false]; + private boolean done_ ; + /** + * optional bool done = 1 [default = false]; + */ + public boolean hasDone() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional bool done = 1 [default = false]; + */ + public boolean getDone() { + return done_; + } + /** + * optional bool done = 1 [default = false]; + */ + public Builder setDone(boolean value) { + bitField0_ |= 0x00000001; + done_ = value; + onChanged(); + return this; + } + /** + * optional bool done = 1 [default = false]; + */ + public Builder clearDone() { + bitField0_ = (bitField0_ & ~0x00000001); + done_ = false; + onChanged(); + return this; + } + + // optional .ProcedureDescription snapshot = 2; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription snapshot_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder> snapshotBuilder_; + /** + * optional .ProcedureDescription snapshot = 2; + */ + public boolean hasSnapshot() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional .ProcedureDescription snapshot = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription getSnapshot() { + if (snapshotBuilder_ == null) { + return snapshot_; + } else { + return snapshotBuilder_.getMessage(); + } + } + /** + * optional .ProcedureDescription snapshot = 2; + */ + public Builder setSnapshot(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription value) { + if (snapshotBuilder_ == null) { + if (value == null) 
{ + throw new NullPointerException(); + } + snapshot_ = value; + onChanged(); + } else { + snapshotBuilder_.setMessage(value); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * optional .ProcedureDescription snapshot = 2; + */ + public Builder setSnapshot( + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder builderForValue) { + if (snapshotBuilder_ == null) { + snapshot_ = builderForValue.build(); + onChanged(); + } else { + snapshotBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * optional .ProcedureDescription snapshot = 2; + */ + public Builder mergeSnapshot(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription value) { + if (snapshotBuilder_ == null) { + if (((bitField0_ & 0x00000002) == 0x00000002) && + snapshot_ != org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance()) { + snapshot_ = + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.newBuilder(snapshot_).mergeFrom(value).buildPartial(); + } else { + snapshot_ = value; + } + onChanged(); + } else { + snapshotBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * optional .ProcedureDescription snapshot = 2; + */ + public Builder clearSnapshot() { + if (snapshotBuilder_ == null) { + snapshot_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); + onChanged(); + } else { + snapshotBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000002); + return this; + } + /** + * optional .ProcedureDescription snapshot = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder getSnapshotBuilder() { + bitField0_ |= 0x00000002; + onChanged(); + return getSnapshotFieldBuilder().getBuilder(); + } + /** + * optional .ProcedureDescription snapshot = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getSnapshotOrBuilder() { + if (snapshotBuilder_ != null) { + return snapshotBuilder_.getMessageOrBuilder(); + } else { + return snapshot_; + } + } + /** + * optional .ProcedureDescription snapshot = 2; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder> + getSnapshotFieldBuilder() { + if (snapshotBuilder_ == null) { + snapshotBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder>( + snapshot_, + getParentForChildren(), + isClean()); + snapshot_ = null; + } + return snapshotBuilder_; + } + + // @@protoc_insertion_point(builder_scope:IsProcedureDoneResponse) + } + + static { + defaultInstance = new IsProcedureDoneResponse(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:IsProcedureDoneResponse) + } + + public interface SetQuotaRequestOrBuilder + extends com.google.protobuf.MessageOrBuilder { + + // optional string user_name = 1; + /** + * optional string user_name = 1; + */ + boolean hasUserName(); + /** + * optional string user_name = 1; + */ + java.lang.String 
getUserName(); + /** + * optional string user_name = 1; + */ + com.google.protobuf.ByteString + getUserNameBytes(); + + // optional string user_group = 2; + /** + * optional string user_group = 2; + */ + boolean hasUserGroup(); + /** + * optional string user_group = 2; + */ + java.lang.String getUserGroup(); + /** + * optional string user_group = 2; + */ + com.google.protobuf.ByteString + getUserGroupBytes(); + + // optional string namespace = 3; + /** + * optional string namespace = 3; + */ + boolean hasNamespace(); + /** + * optional string namespace = 3; + */ + java.lang.String getNamespace(); + /** + * optional string namespace = 3; + */ + com.google.protobuf.ByteString + getNamespaceBytes(); + + // optional .TableName table_name = 4; + /** + * optional .TableName table_name = 4; + */ + boolean hasTableName(); + /** + * optional .TableName table_name = 4; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName getTableName(); + /** + * optional .TableName table_name = 4; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder getTableNameOrBuilder(); + + // optional bool remove_all = 5; + /** + * optional bool remove_all = 5; + */ + boolean hasRemoveAll(); + /** + * optional bool remove_all = 5; + */ + boolean getRemoveAll(); + + // optional bool bypass_globals = 6; + /** + * optional bool bypass_globals = 6; + */ + boolean hasBypassGlobals(); + /** + * optional bool bypass_globals = 6; + */ + boolean getBypassGlobals(); + + // optional .ThrottleRequest throttle = 7; + /** + * optional .ThrottleRequest throttle = 7; + */ + boolean hasThrottle(); + /** + * optional .ThrottleRequest throttle = 7; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest getThrottle(); + /** + * optional .ThrottleRequest throttle = 7; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequestOrBuilder getThrottleOrBuilder(); + } + /** + * Protobuf type {@code SetQuotaRequest} + */ + public static final class SetQuotaRequest extends + com.google.protobuf.GeneratedMessage + implements SetQuotaRequestOrBuilder { + // Use SetQuotaRequest.newBuilder() to construct. 
+ private SetQuotaRequest(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private SetQuotaRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final SetQuotaRequest defaultInstance; + public static SetQuotaRequest getDefaultInstance() { + return defaultInstance; + } + + public SetQuotaRequest getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private SetQuotaRequest( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + int mutable_bitField0_ = 0; + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + case 10: { + bitField0_ |= 0x00000001; + userName_ = input.readBytes(); + break; + } + case 18: { + bitField0_ |= 0x00000002; + userGroup_ = input.readBytes(); + break; + } + case 26: { + bitField0_ |= 0x00000004; + namespace_ = input.readBytes(); + break; + } + case 34: { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder subBuilder = null; + if (((bitField0_ & 0x00000008) == 0x00000008)) { + subBuilder = tableName_.toBuilder(); + } + tableName_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(tableName_); + tableName_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000008; + break; + } + case 40: { + bitField0_ |= 0x00000010; + removeAll_ = input.readBool(); + break; + } + case 48: { + bitField0_ |= 0x00000020; + bypassGlobals_ = input.readBool(); + break; + } + case 58: { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.Builder subBuilder = null; + if (((bitField0_ & 0x00000040) == 0x00000040)) { + subBuilder = throttle_.toBuilder(); + } + throttle_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(throttle_); + throttle_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000040; + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_SetQuotaRequest_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_SetQuotaRequest_fieldAccessorTable + .ensureFieldAccessorsInitialized( + 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest.Builder.class); + } + + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public SetQuotaRequest parsePartialFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return new SetQuotaRequest(input, extensionRegistry); + } + }; + + @java.lang.Override + public com.google.protobuf.Parser getParserForType() { + return PARSER; + } + + private int bitField0_; + // optional string user_name = 1; + public static final int USER_NAME_FIELD_NUMBER = 1; + private java.lang.Object userName_; + /** + * optional string user_name = 1; + */ + public boolean hasUserName() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional string user_name = 1; + */ + public java.lang.String getUserName() { + java.lang.Object ref = userName_; + if (ref instanceof java.lang.String) { + return (java.lang.String) ref; + } else { + com.google.protobuf.ByteString bs = + (com.google.protobuf.ByteString) ref; + java.lang.String s = bs.toStringUtf8(); + if (bs.isValidUtf8()) { + userName_ = s; + } + return s; + } + } + /** + * optional string user_name = 1; + */ + public com.google.protobuf.ByteString + getUserNameBytes() { + java.lang.Object ref = userName_; + if (ref instanceof java.lang.String) { + com.google.protobuf.ByteString b = + com.google.protobuf.ByteString.copyFromUtf8( + (java.lang.String) ref); + userName_ = b; + return b; + } else { + return (com.google.protobuf.ByteString) ref; + } + } + + // optional string user_group = 2; + public static final int USER_GROUP_FIELD_NUMBER = 2; + private java.lang.Object userGroup_; + /** + * optional string user_group = 2; + */ + public boolean hasUserGroup() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional string user_group = 2; + */ + public java.lang.String getUserGroup() { + java.lang.Object ref = userGroup_; + if (ref instanceof java.lang.String) { + return (java.lang.String) ref; + } else { + com.google.protobuf.ByteString bs = + (com.google.protobuf.ByteString) ref; + java.lang.String s = bs.toStringUtf8(); + if (bs.isValidUtf8()) { + userGroup_ = s; + } + return s; + } + } + /** + * optional string user_group = 2; + */ + public com.google.protobuf.ByteString + getUserGroupBytes() { + java.lang.Object ref = userGroup_; + if (ref instanceof java.lang.String) { + com.google.protobuf.ByteString b = + com.google.protobuf.ByteString.copyFromUtf8( + (java.lang.String) ref); + userGroup_ = b; + return b; + } else { + return (com.google.protobuf.ByteString) ref; + } + } + + // optional string namespace = 3; + public static final int NAMESPACE_FIELD_NUMBER = 3; + private java.lang.Object namespace_; + /** + * optional string namespace = 3; + */ + public boolean hasNamespace() { + return ((bitField0_ & 0x00000004) == 0x00000004); + } + /** + * optional string namespace = 3; + */ + public java.lang.String getNamespace() { + java.lang.Object ref = namespace_; + if (ref instanceof java.lang.String) { + return (java.lang.String) ref; + } else { + com.google.protobuf.ByteString bs = + (com.google.protobuf.ByteString) ref; + java.lang.String s = bs.toStringUtf8(); + if (bs.isValidUtf8()) { + namespace_ = s; + } + return s; + } + } + /** + * optional string namespace = 3; + */ + public com.google.protobuf.ByteString + 
getNamespaceBytes() { + java.lang.Object ref = namespace_; + if (ref instanceof java.lang.String) { + com.google.protobuf.ByteString b = + com.google.protobuf.ByteString.copyFromUtf8( + (java.lang.String) ref); + namespace_ = b; + return b; + } else { + return (com.google.protobuf.ByteString) ref; + } + } + + // optional .TableName table_name = 4; + public static final int TABLE_NAME_FIELD_NUMBER = 4; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName tableName_; + /** + * optional .TableName table_name = 4; + */ + public boolean hasTableName() { + return ((bitField0_ & 0x00000008) == 0x00000008); + } + /** + * optional .TableName table_name = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName getTableName() { + return tableName_; + } + /** + * optional .TableName table_name = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder getTableNameOrBuilder() { + return tableName_; + } + + // optional bool remove_all = 5; + public static final int REMOVE_ALL_FIELD_NUMBER = 5; + private boolean removeAll_; + /** + * optional bool remove_all = 5; + */ + public boolean hasRemoveAll() { + return ((bitField0_ & 0x00000010) == 0x00000010); + } + /** + * optional bool remove_all = 5; + */ + public boolean getRemoveAll() { + return removeAll_; + } + + // optional bool bypass_globals = 6; + public static final int BYPASS_GLOBALS_FIELD_NUMBER = 6; + private boolean bypassGlobals_; + /** + * optional bool bypass_globals = 6; + */ + public boolean hasBypassGlobals() { + return ((bitField0_ & 0x00000020) == 0x00000020); + } + /** + * optional bool bypass_globals = 6; + */ + public boolean getBypassGlobals() { + return bypassGlobals_; + } + + // optional .ThrottleRequest throttle = 7; + public static final int THROTTLE_FIELD_NUMBER = 7; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest throttle_; + /** + * optional .ThrottleRequest throttle = 7; + */ + public boolean hasThrottle() { + return ((bitField0_ & 0x00000040) == 0x00000040); + } + /** + * optional .ThrottleRequest throttle = 7; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest getThrottle() { + return throttle_; + } + /** + * optional .ThrottleRequest throttle = 7; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequestOrBuilder getThrottleOrBuilder() { + return throttle_; + } + + private void initFields() { + userName_ = ""; + userGroup_ = ""; + namespace_ = ""; + tableName_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance(); + removeAll_ = false; + bypassGlobals_ = false; + throttle_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.getDefaultInstance(); + } + private byte memoizedIsInitialized = -1; + public final boolean isInitialized() { + byte isInitialized = memoizedIsInitialized; + if (isInitialized != -1) return isInitialized == 1; + + if (hasTableName()) { + if (!getTableName().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + } + if (hasThrottle()) { + if (!getThrottle().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + } + memoizedIsInitialized = 1; + return true; + } + + public void writeTo(com.google.protobuf.CodedOutputStream output) + throws java.io.IOException { + getSerializedSize(); + if (((bitField0_ & 0x00000001) == 0x00000001)) { + output.writeBytes(1, getUserNameBytes()); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + 
output.writeBytes(2, getUserGroupBytes()); + } + if (((bitField0_ & 0x00000004) == 0x00000004)) { + output.writeBytes(3, getNamespaceBytes()); + } + if (((bitField0_ & 0x00000008) == 0x00000008)) { + output.writeMessage(4, tableName_); + } + if (((bitField0_ & 0x00000010) == 0x00000010)) { + output.writeBool(5, removeAll_); + } + if (((bitField0_ & 0x00000020) == 0x00000020)) { + output.writeBool(6, bypassGlobals_); + } + if (((bitField0_ & 0x00000040) == 0x00000040)) { + output.writeMessage(7, throttle_); + } + getUnknownFields().writeTo(output); + } + + private int memoizedSerializedSize = -1; + public int getSerializedSize() { + int size = memoizedSerializedSize; + if (size != -1) return size; + + size = 0; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + size += com.google.protobuf.CodedOutputStream + .computeBytesSize(1, getUserNameBytes()); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + size += com.google.protobuf.CodedOutputStream + .computeBytesSize(2, getUserGroupBytes()); + } + if (((bitField0_ & 0x00000004) == 0x00000004)) { + size += com.google.protobuf.CodedOutputStream + .computeBytesSize(3, getNamespaceBytes()); + } + if (((bitField0_ & 0x00000008) == 0x00000008)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(4, tableName_); + } + if (((bitField0_ & 0x00000010) == 0x00000010)) { + size += com.google.protobuf.CodedOutputStream + .computeBoolSize(5, removeAll_); + } + if (((bitField0_ & 0x00000020) == 0x00000020)) { + size += com.google.protobuf.CodedOutputStream + .computeBoolSize(6, bypassGlobals_); + } + if (((bitField0_ & 0x00000040) == 0x00000040)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(7, throttle_); + } + size += getUnknownFields().getSerializedSize(); + memoizedSerializedSize = size; + return size; + } + + private static final long serialVersionUID = 0L; + @java.lang.Override + protected java.lang.Object writeReplace() + throws java.io.ObjectStreamException { + return super.writeReplace(); + } + + @java.lang.Override + public boolean equals(final java.lang.Object obj) { + if (obj == this) { + return true; + } + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest)) { + return super.equals(obj); + } + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest) obj; + + boolean result = true; + result = result && (hasUserName() == other.hasUserName()); + if (hasUserName()) { + result = result && getUserName() + .equals(other.getUserName()); + } + result = result && (hasUserGroup() == other.hasUserGroup()); + if (hasUserGroup()) { + result = result && getUserGroup() + .equals(other.getUserGroup()); + } + result = result && (hasNamespace() == other.hasNamespace()); + if (hasNamespace()) { + result = result && getNamespace() + .equals(other.getNamespace()); + } + result = result && (hasTableName() == other.hasTableName()); + if (hasTableName()) { + result = result && getTableName() + .equals(other.getTableName()); + } + result = result && (hasRemoveAll() == other.hasRemoveAll()); + if (hasRemoveAll()) { + result = result && (getRemoveAll() + == other.getRemoveAll()); + } + result = result && (hasBypassGlobals() == other.hasBypassGlobals()); + if (hasBypassGlobals()) { + result = result && (getBypassGlobals() + == other.getBypassGlobals()); + } + result = result && (hasThrottle() == other.hasThrottle()); + if (hasThrottle()) { + result = result && getThrottle() + 
.equals(other.getThrottle()); + } + result = result && + getUnknownFields().equals(other.getUnknownFields()); + return result; + } + + private int memoizedHashCode = 0; + @java.lang.Override + public int hashCode() { + if (memoizedHashCode != 0) { + return memoizedHashCode; + } + int hash = 41; + hash = (19 * hash) + getDescriptorForType().hashCode(); + if (hasUserName()) { + hash = (37 * hash) + USER_NAME_FIELD_NUMBER; + hash = (53 * hash) + getUserName().hashCode(); + } + if (hasUserGroup()) { + hash = (37 * hash) + USER_GROUP_FIELD_NUMBER; + hash = (53 * hash) + getUserGroup().hashCode(); + } + if (hasNamespace()) { + hash = (37 * hash) + NAMESPACE_FIELD_NUMBER; + hash = (53 * hash) + getNamespace().hashCode(); + } + if (hasTableName()) { + hash = (37 * hash) + TABLE_NAME_FIELD_NUMBER; + hash = (53 * hash) + getTableName().hashCode(); + } + if (hasRemoveAll()) { + hash = (37 * hash) + REMOVE_ALL_FIELD_NUMBER; + hash = (53 * hash) + hashBoolean(getRemoveAll()); + } + if (hasBypassGlobals()) { + hash = (37 * hash) + BYPASS_GLOBALS_FIELD_NUMBER; + hash = (53 * hash) + hashBoolean(getBypassGlobals()); + } + if (hasThrottle()) { + hash = (37 * hash) + THROTTLE_FIELD_NUMBER; + hash = (53 * hash) + getThrottle().hashCode(); + } + hash = (29 * hash) + getUnknownFields().hashCode(); + memoizedHashCode = hash; + return hash; + } + + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest parseFrom( + com.google.protobuf.ByteString data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest parseFrom( + com.google.protobuf.ByteString data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest parseFrom(byte[] data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest parseFrom( + byte[] data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest parseFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest parseFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest parseDelimitedFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest parseDelimitedFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest parseFrom( + com.google.protobuf.CodedInputStream input) + throws 
java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest parseFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + + public static Builder newBuilder() { return Builder.create(); } + public Builder newBuilderForType() { return newBuilder(); } + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest prototype) { + return newBuilder().mergeFrom(prototype); + } + public Builder toBuilder() { return newBuilder(this); } + + @java.lang.Override + protected Builder newBuilderForType( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + Builder builder = new Builder(parent); + return builder; + } + /** + * Protobuf type {@code SetQuotaRequest} + */ + public static final class Builder extends + com.google.protobuf.GeneratedMessage.Builder + implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequestOrBuilder { + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_SetQuotaRequest_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_SetQuotaRequest_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest.Builder.class); + } + + // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest.newBuilder() + private Builder() { + maybeForceBuilderInitialization(); + } + + private Builder( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + super(parent); + maybeForceBuilderInitialization(); + } + private void maybeForceBuilderInitialization() { + if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + getTableNameFieldBuilder(); + getThrottleFieldBuilder(); + } + } + private static Builder create() { + return new Builder(); + } + + public Builder clear() { + super.clear(); + userName_ = ""; + bitField0_ = (bitField0_ & ~0x00000001); + userGroup_ = ""; + bitField0_ = (bitField0_ & ~0x00000002); + namespace_ = ""; + bitField0_ = (bitField0_ & ~0x00000004); + if (tableNameBuilder_ == null) { + tableName_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance(); + } else { + tableNameBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000008); + removeAll_ = false; + bitField0_ = (bitField0_ & ~0x00000010); + bypassGlobals_ = false; + bitField0_ = (bitField0_ & ~0x00000020); + if (throttleBuilder_ == null) { + throttle_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.getDefaultInstance(); + } else { + throttleBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000040); + return this; + } + + public Builder clone() { + return create().mergeFrom(buildPartial()); + } + + public com.google.protobuf.Descriptors.Descriptor + getDescriptorForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_SetQuotaRequest_descriptor; + } + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest getDefaultInstanceForType() { 
+ return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest.getDefaultInstance(); + } + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest build() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest result = buildPartial(); + if (!result.isInitialized()) { + throw newUninitializedMessageException(result); + } + return result; + } + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest(this); + int from_bitField0_ = bitField0_; + int to_bitField0_ = 0; + if (((from_bitField0_ & 0x00000001) == 0x00000001)) { + to_bitField0_ |= 0x00000001; + } + result.userName_ = userName_; + if (((from_bitField0_ & 0x00000002) == 0x00000002)) { + to_bitField0_ |= 0x00000002; + } + result.userGroup_ = userGroup_; + if (((from_bitField0_ & 0x00000004) == 0x00000004)) { + to_bitField0_ |= 0x00000004; + } + result.namespace_ = namespace_; + if (((from_bitField0_ & 0x00000008) == 0x00000008)) { + to_bitField0_ |= 0x00000008; + } + if (tableNameBuilder_ == null) { + result.tableName_ = tableName_; + } else { + result.tableName_ = tableNameBuilder_.build(); + } + if (((from_bitField0_ & 0x00000010) == 0x00000010)) { + to_bitField0_ |= 0x00000010; + } + result.removeAll_ = removeAll_; + if (((from_bitField0_ & 0x00000020) == 0x00000020)) { + to_bitField0_ |= 0x00000020; + } + result.bypassGlobals_ = bypassGlobals_; + if (((from_bitField0_ & 0x00000040) == 0x00000040)) { + to_bitField0_ |= 0x00000040; + } + if (throttleBuilder_ == null) { + result.throttle_ = throttle_; + } else { + result.throttle_ = throttleBuilder_.build(); + } + result.bitField0_ = to_bitField0_; + onBuilt(); + return result; + } + + public Builder mergeFrom(com.google.protobuf.Message other) { + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest)other); + } else { + super.mergeFrom(other); + return this; + } + } + + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest.getDefaultInstance()) return this; + if (other.hasUserName()) { + bitField0_ |= 0x00000001; + userName_ = other.userName_; + onChanged(); + } + if (other.hasUserGroup()) { + bitField0_ |= 0x00000002; + userGroup_ = other.userGroup_; + onChanged(); + } + if (other.hasNamespace()) { + bitField0_ |= 0x00000004; + namespace_ = other.namespace_; + onChanged(); + } + if (other.hasTableName()) { + mergeTableName(other.getTableName()); + } + if (other.hasRemoveAll()) { + setRemoveAll(other.getRemoveAll()); + } + if (other.hasBypassGlobals()) { + setBypassGlobals(other.getBypassGlobals()); + } + if (other.hasThrottle()) { + mergeThrottle(other.getThrottle()); + } + this.mergeUnknownFields(other.getUnknownFields()); + return this; + } + + public final boolean isInitialized() { + if (hasTableName()) { + if (!getTableName().isInitialized()) { + + return false; + } + } + if (hasThrottle()) { + if (!getThrottle().isInitialized()) { + + return false; + } + } + return true; + } + + public Builder mergeFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest parsedMessage = null; + try { + parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest) e.getUnfinishedMessage(); + throw e; + } finally { + if (parsedMessage != null) { + mergeFrom(parsedMessage); + } + } + return this; + } + private int bitField0_; + + // optional string user_name = 1; + private java.lang.Object userName_ = ""; + /** + * optional string user_name = 1; + */ + public boolean hasUserName() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional string user_name = 1; + */ + public java.lang.String getUserName() { + java.lang.Object ref = userName_; + if (!(ref instanceof java.lang.String)) { + java.lang.String s = ((com.google.protobuf.ByteString) ref) + .toStringUtf8(); + userName_ = s; + return s; + } else { + return (java.lang.String) ref; + } + } + /** + * optional string user_name = 1; + */ + public com.google.protobuf.ByteString + getUserNameBytes() { + java.lang.Object ref = userName_; + if (ref instanceof String) { + com.google.protobuf.ByteString b = + com.google.protobuf.ByteString.copyFromUtf8( + (java.lang.String) ref); + userName_ = b; + return b; + } else { + return (com.google.protobuf.ByteString) ref; + } + } + /** + * optional string user_name = 1; + */ + public Builder setUserName( + java.lang.String value) { + if (value == null) { + throw new NullPointerException(); + } + bitField0_ |= 0x00000001; + userName_ = value; + onChanged(); + return this; + } + /** + * optional string user_name = 1; + */ + public Builder clearUserName() { + bitField0_ = (bitField0_ & ~0x00000001); + userName_ = getDefaultInstance().getUserName(); + onChanged(); + return this; + } + /** + * optional string user_name = 1; + */ + public Builder setUserNameBytes( + com.google.protobuf.ByteString value) { + if (value == null) { + throw new NullPointerException(); + } + bitField0_ |= 0x00000001; + userName_ = value; + onChanged(); + return this; + } + + // optional string user_group = 2; + private java.lang.Object userGroup_ = ""; + /** + * optional string user_group = 2; + */ + public boolean hasUserGroup() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional string user_group = 2; + */ + public java.lang.String getUserGroup() { + java.lang.Object ref = userGroup_; + if (!(ref instanceof java.lang.String)) { + java.lang.String s = ((com.google.protobuf.ByteString) ref) + .toStringUtf8(); + userGroup_ = s; + return s; + } else { + return (java.lang.String) ref; + } + } + /** + * optional string user_group = 2; + */ + public com.google.protobuf.ByteString + getUserGroupBytes() { + java.lang.Object ref = userGroup_; + if (ref instanceof String) { + com.google.protobuf.ByteString b = + com.google.protobuf.ByteString.copyFromUtf8( + (java.lang.String) ref); + userGroup_ = b; + return b; + } else { + return (com.google.protobuf.ByteString) ref; + } + } + /** + * optional string user_group = 2; + */ + public Builder setUserGroup( + java.lang.String value) { + if (value == null) { + throw new NullPointerException(); + } + bitField0_ |= 0x00000002; + userGroup_ = value; + onChanged(); + return this; + } + /** + * optional string user_group = 2; + */ + public Builder clearUserGroup() { + bitField0_ = (bitField0_ & ~0x00000002); + userGroup_ = getDefaultInstance().getUserGroup(); + onChanged(); + return this; + } + 
/** + * optional string user_group = 2; + */ + public Builder setUserGroupBytes( + com.google.protobuf.ByteString value) { + if (value == null) { + throw new NullPointerException(); + } + bitField0_ |= 0x00000002; + userGroup_ = value; + onChanged(); + return this; + } + + // optional string namespace = 3; + private java.lang.Object namespace_ = ""; + /** + * optional string namespace = 3; + */ + public boolean hasNamespace() { + return ((bitField0_ & 0x00000004) == 0x00000004); + } + /** + * optional string namespace = 3; + */ + public java.lang.String getNamespace() { + java.lang.Object ref = namespace_; + if (!(ref instanceof java.lang.String)) { + java.lang.String s = ((com.google.protobuf.ByteString) ref) + .toStringUtf8(); + namespace_ = s; + return s; + } else { + return (java.lang.String) ref; + } + } + /** + * optional string namespace = 3; + */ + public com.google.protobuf.ByteString + getNamespaceBytes() { + java.lang.Object ref = namespace_; + if (ref instanceof String) { + com.google.protobuf.ByteString b = + com.google.protobuf.ByteString.copyFromUtf8( + (java.lang.String) ref); + namespace_ = b; + return b; + } else { + return (com.google.protobuf.ByteString) ref; + } + } + /** + * optional string namespace = 3; + */ + public Builder setNamespace( + java.lang.String value) { + if (value == null) { + throw new NullPointerException(); + } + bitField0_ |= 0x00000004; + namespace_ = value; + onChanged(); + return this; + } + /** + * optional string namespace = 3; + */ + public Builder clearNamespace() { + bitField0_ = (bitField0_ & ~0x00000004); + namespace_ = getDefaultInstance().getNamespace(); + onChanged(); + return this; + } + /** + * optional string namespace = 3; + */ + public Builder setNamespaceBytes( + com.google.protobuf.ByteString value) { + if (value == null) { + throw new NullPointerException(); + } + bitField0_ |= 0x00000004; + namespace_ = value; + onChanged(); + return this; + } + + // optional .TableName table_name = 4; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName tableName_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder> tableNameBuilder_; + /** + * optional .TableName table_name = 4; + */ + public boolean hasTableName() { + return ((bitField0_ & 0x00000008) == 0x00000008); + } + /** + * optional .TableName table_name = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName getTableName() { + if (tableNameBuilder_ == null) { + return tableName_; + } else { + return tableNameBuilder_.getMessage(); + } + } + /** + * optional .TableName table_name = 4; + */ + public Builder setTableName(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName value) { + if (tableNameBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + tableName_ = value; + onChanged(); + } else { + tableNameBuilder_.setMessage(value); + } + bitField0_ |= 0x00000008; + return this; + } + /** + * optional .TableName table_name = 4; + */ + public Builder setTableName( + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder builderForValue) { + if (tableNameBuilder_ == null) { + tableName_ = builderForValue.build(); + onChanged(); + } else { + 
tableNameBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000008; + return this; + } + /** + * optional .TableName table_name = 4; + */ + public Builder mergeTableName(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName value) { + if (tableNameBuilder_ == null) { + if (((bitField0_ & 0x00000008) == 0x00000008) && + tableName_ != org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance()) { + tableName_ = + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.newBuilder(tableName_).mergeFrom(value).buildPartial(); + } else { + tableName_ = value; + } + onChanged(); + } else { + tableNameBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000008; + return this; + } + /** + * optional .TableName table_name = 4; + */ + public Builder clearTableName() { + if (tableNameBuilder_ == null) { + tableName_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.getDefaultInstance(); + onChanged(); + } else { + tableNameBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000008); + return this; + } + /** + * optional .TableName table_name = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder getTableNameBuilder() { + bitField0_ |= 0x00000008; + onChanged(); + return getTableNameFieldBuilder().getBuilder(); + } + /** + * optional .TableName table_name = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder getTableNameOrBuilder() { + if (tableNameBuilder_ != null) { + return tableNameBuilder_.getMessageOrBuilder(); + } else { + return tableName_; + } + } + /** + * optional .TableName table_name = 4; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder> + getTableNameFieldBuilder() { + if (tableNameBuilder_ == null) { + tableNameBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableNameOrBuilder>( + tableName_, + getParentForChildren(), + isClean()); + tableName_ = null; + } + return tableNameBuilder_; + } + + // optional bool remove_all = 5; + private boolean removeAll_ ; + /** + * optional bool remove_all = 5; + */ + public boolean hasRemoveAll() { + return ((bitField0_ & 0x00000010) == 0x00000010); + } + /** + * optional bool remove_all = 5; + */ + public boolean getRemoveAll() { + return removeAll_; + } + /** + * optional bool remove_all = 5; + */ + public Builder setRemoveAll(boolean value) { + bitField0_ |= 0x00000010; + removeAll_ = value; + onChanged(); + return this; + } + /** + * optional bool remove_all = 5; + */ + public Builder clearRemoveAll() { + bitField0_ = (bitField0_ & ~0x00000010); + removeAll_ = false; + onChanged(); + return this; + } + + // optional bool bypass_globals = 6; + private boolean bypassGlobals_ ; + /** + * optional bool bypass_globals = 6; + */ + public boolean hasBypassGlobals() { + return ((bitField0_ & 0x00000020) == 0x00000020); + } + /** + * optional bool bypass_globals = 6; + */ + public boolean getBypassGlobals() { + return bypassGlobals_; + } + /** + * optional bool bypass_globals = 6; + */ + public Builder setBypassGlobals(boolean value) { + bitField0_ |= 0x00000020; + bypassGlobals_ = 
value; + onChanged(); + return this; + } + /** + * optional bool bypass_globals = 6; + */ + public Builder clearBypassGlobals() { + bitField0_ = (bitField0_ & ~0x00000020); + bypassGlobals_ = false; + onChanged(); + return this; + } + + // optional .ThrottleRequest throttle = 7; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest throttle_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequestOrBuilder> throttleBuilder_; + /** + * optional .ThrottleRequest throttle = 7; + */ + public boolean hasThrottle() { + return ((bitField0_ & 0x00000040) == 0x00000040); + } + /** + * optional .ThrottleRequest throttle = 7; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest getThrottle() { + if (throttleBuilder_ == null) { + return throttle_; + } else { + return throttleBuilder_.getMessage(); + } + } + /** + * optional .ThrottleRequest throttle = 7; + */ + public Builder setThrottle(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest value) { + if (throttleBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + throttle_ = value; + onChanged(); + } else { + throttleBuilder_.setMessage(value); + } + bitField0_ |= 0x00000040; + return this; + } + /** + * optional .ThrottleRequest throttle = 7; + */ + public Builder setThrottle( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.Builder builderForValue) { + if (throttleBuilder_ == null) { + throttle_ = builderForValue.build(); + onChanged(); + } else { + throttleBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000040; + return this; + } + /** + * optional .ThrottleRequest throttle = 7; + */ + public Builder mergeThrottle(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest value) { + if (throttleBuilder_ == null) { + if (((bitField0_ & 0x00000040) == 0x00000040) && + throttle_ != org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.getDefaultInstance()) { + throttle_ = + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.newBuilder(throttle_).mergeFrom(value).buildPartial(); + } else { + throttle_ = value; + } + onChanged(); + } else { + throttleBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000040; + return this; + } + /** + * optional .ThrottleRequest throttle = 7; + */ + public Builder clearThrottle() { + if (throttleBuilder_ == null) { + throttle_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.getDefaultInstance(); + onChanged(); + } else { + throttleBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000040); + return this; + } + /** + * optional .ThrottleRequest throttle = 7; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.Builder getThrottleBuilder() { + bitField0_ |= 0x00000040; + onChanged(); + return getThrottleFieldBuilder().getBuilder(); + } + /** + * optional .ThrottleRequest throttle = 7; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequestOrBuilder getThrottleOrBuilder() { + if (throttleBuilder_ != null) { + return throttleBuilder_.getMessageOrBuilder(); + } else { + return throttle_; + } + } + /** + * optional 
.ThrottleRequest throttle = 7; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequestOrBuilder> + getThrottleFieldBuilder() { + if (throttleBuilder_ == null) { + throttleBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequestOrBuilder>( + throttle_, + getParentForChildren(), + isClean()); + throttle_ = null; + } + return throttleBuilder_; + } + + // @@protoc_insertion_point(builder_scope:SetQuotaRequest) + } + + static { + defaultInstance = new SetQuotaRequest(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:SetQuotaRequest) + } + + public interface SetQuotaResponseOrBuilder + extends com.google.protobuf.MessageOrBuilder { + } + /** + * Protobuf type {@code SetQuotaResponse} + */ + public static final class SetQuotaResponse extends + com.google.protobuf.GeneratedMessage + implements SetQuotaResponseOrBuilder { + // Use SetQuotaResponse.newBuilder() to construct. + private SetQuotaResponse(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private SetQuotaResponse(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final SetQuotaResponse defaultInstance; + public static SetQuotaResponse getDefaultInstance() { + return defaultInstance; + } + + public SetQuotaResponse getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private SetQuotaResponse( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_SetQuotaResponse_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_SetQuotaResponse_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.class, 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.Builder.class); + } + + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public SetQuotaResponse parsePartialFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return new SetQuotaResponse(input, extensionRegistry); + } + }; + + @java.lang.Override + public com.google.protobuf.Parser getParserForType() { + return PARSER; + } + + private void initFields() { + } + private byte memoizedIsInitialized = -1; + public final boolean isInitialized() { + byte isInitialized = memoizedIsInitialized; + if (isInitialized != -1) return isInitialized == 1; + + memoizedIsInitialized = 1; + return true; + } + + public void writeTo(com.google.protobuf.CodedOutputStream output) + throws java.io.IOException { + getSerializedSize(); + getUnknownFields().writeTo(output); + } + + private int memoizedSerializedSize = -1; + public int getSerializedSize() { + int size = memoizedSerializedSize; + if (size != -1) return size; + + size = 0; + size += getUnknownFields().getSerializedSize(); + memoizedSerializedSize = size; + return size; + } + + private static final long serialVersionUID = 0L; + @java.lang.Override + protected java.lang.Object writeReplace() + throws java.io.ObjectStreamException { + return super.writeReplace(); + } + + @java.lang.Override + public boolean equals(final java.lang.Object obj) { + if (obj == this) { + return true; + } + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse)) { + return super.equals(obj); + } + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse other = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse) obj; + + boolean result = true; + result = result && + getUnknownFields().equals(other.getUnknownFields()); + return result; + } + + private int memoizedHashCode = 0; + @java.lang.Override + public int hashCode() { + if (memoizedHashCode != 0) { + return memoizedHashCode; + } + int hash = 41; + hash = (19 * hash) + getDescriptorForType().hashCode(); + hash = (29 * hash) + getUnknownFields().hashCode(); + memoizedHashCode = hash; + return hash; + } + + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse parseFrom( + com.google.protobuf.ByteString data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse parseFrom( + com.google.protobuf.ByteString data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse parseFrom(byte[] data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse parseFrom( + byte[] data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse parseFrom(java.io.InputStream input) + throws java.io.IOException { 
+ return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse parseFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse parseDelimitedFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse parseDelimitedFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse parseFrom( + com.google.protobuf.CodedInputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse parseFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + + public static Builder newBuilder() { return Builder.create(); } + public Builder newBuilderForType() { return newBuilder(); } + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse prototype) { + return newBuilder().mergeFrom(prototype); + } + public Builder toBuilder() { return newBuilder(this); } + + @java.lang.Override + protected Builder newBuilderForType( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + Builder builder = new Builder(parent); + return builder; + } + /** + * Protobuf type {@code SetQuotaResponse} + */ + public static final class Builder extends + com.google.protobuf.GeneratedMessage.Builder + implements org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponseOrBuilder { + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_SetQuotaResponse_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_SetQuotaResponse_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.Builder.class); + } + + // Construct using org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.newBuilder() + private Builder() { + maybeForceBuilderInitialization(); + } + + private Builder( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + super(parent); + maybeForceBuilderInitialization(); + } + private void maybeForceBuilderInitialization() { + if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + } + } + private static Builder create() { + return new Builder(); + } + + public Builder clear() { + super.clear(); + return this; + } + + public Builder clone() { + return create().mergeFrom(buildPartial()); } - /** - * optional .ProcedureDescription snapshot = 2; - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription 
getSnapshot() { - if (snapshotBuilder_ == null) { - return snapshot_; - } else { - return snapshotBuilder_.getMessage(); - } + + public com.google.protobuf.Descriptors.Descriptor + getDescriptorForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.internal_static_SetQuotaResponse_descriptor; } - /** - * optional .ProcedureDescription snapshot = 2; - */ - public Builder setSnapshot(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription value) { - if (snapshotBuilder_ == null) { - if (value == null) { - throw new NullPointerException(); - } - snapshot_ = value; - onChanged(); - } else { - snapshotBuilder_.setMessage(value); - } - bitField0_ |= 0x00000002; - return this; + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.getDefaultInstance(); } - /** - * optional .ProcedureDescription snapshot = 2; - */ - public Builder setSnapshot( - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder builderForValue) { - if (snapshotBuilder_ == null) { - snapshot_ = builderForValue.build(); - onChanged(); - } else { - snapshotBuilder_.setMessage(builderForValue.build()); + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse build() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse result = buildPartial(); + if (!result.isInitialized()) { + throw newUninitializedMessageException(result); } - bitField0_ |= 0x00000002; - return this; + return result; } - /** - * optional .ProcedureDescription snapshot = 2; - */ - public Builder mergeSnapshot(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription value) { - if (snapshotBuilder_ == null) { - if (((bitField0_ & 0x00000002) == 0x00000002) && - snapshot_ != org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance()) { - snapshot_ = - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.newBuilder(snapshot_).mergeFrom(value).buildPartial(); - } else { - snapshot_ = value; - } - onChanged(); - } else { - snapshotBuilder_.mergeFrom(value); - } - bitField0_ |= 0x00000002; - return this; + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse result = new org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse(this); + onBuilt(); + return result; } - /** - * optional .ProcedureDescription snapshot = 2; - */ - public Builder clearSnapshot() { - if (snapshotBuilder_ == null) { - snapshot_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.getDefaultInstance(); - onChanged(); + + public Builder mergeFrom(com.google.protobuf.Message other) { + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse)other); } else { - snapshotBuilder_.clear(); + super.mergeFrom(other); + return this; } - bitField0_ = (bitField0_ & ~0x00000002); - return this; } - /** - * optional .ProcedureDescription snapshot = 2; - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder getSnapshotBuilder() { - bitField0_ |= 0x00000002; - onChanged(); - return getSnapshotFieldBuilder().getBuilder(); + + public Builder 
mergeFrom(org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.getDefaultInstance()) return this; + this.mergeUnknownFields(other.getUnknownFields()); + return this; } - /** - * optional .ProcedureDescription snapshot = 2; - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder getSnapshotOrBuilder() { - if (snapshotBuilder_ != null) { - return snapshotBuilder_.getMessageOrBuilder(); - } else { - return snapshot_; - } + + public final boolean isInitialized() { + return true; } - /** - * optional .ProcedureDescription snapshot = 2; - */ - private com.google.protobuf.SingleFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder> - getSnapshotFieldBuilder() { - if (snapshotBuilder_ == null) { - snapshotBuilder_ = new com.google.protobuf.SingleFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescriptionOrBuilder>( - snapshot_, - getParentForChildren(), - isClean()); - snapshot_ = null; + + public Builder mergeFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse parsedMessage = null; + try { + parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse) e.getUnfinishedMessage(); + throw e; + } finally { + if (parsedMessage != null) { + mergeFrom(parsedMessage); + } } - return snapshotBuilder_; + return this; } - // @@protoc_insertion_point(builder_scope:IsProcedureDoneResponse) + // @@protoc_insertion_point(builder_scope:SetQuotaResponse) } static { - defaultInstance = new IsProcedureDoneResponse(true); + defaultInstance = new SetQuotaResponse(true); defaultInstance.initFields(); } - // @@protoc_insertion_point(class_scope:IsProcedureDoneResponse) + // @@protoc_insertion_point(class_scope:SetQuotaResponse) } /** @@ -42025,6 +44930,30 @@ public final class MasterProtos { org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableNamesByNamespaceRequest request, com.google.protobuf.RpcCallback done); + /** + * rpc GetTableState(.GetTableStateRequest) returns (.GetTableStateResponse); + * + *
    +       ** returns table state 
    +       * 
    + */ + public abstract void getTableState( + com.google.protobuf.RpcController controller, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest request, + com.google.protobuf.RpcCallback done); + + /** + * rpc SetQuota(.SetQuotaRequest) returns (.SetQuotaResponse); + * + *
    +       ** Apply the new quota settings 
    +       * 
    + */ + public abstract void setQuota( + com.google.protobuf.RpcController controller, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest request, + com.google.protobuf.RpcCallback done); + } public static com.google.protobuf.Service newReflectiveService( @@ -42374,6 +45303,22 @@ public final class MasterProtos { impl.listTableNamesByNamespace(controller, request, done); } + @java.lang.Override + public void getTableState( + com.google.protobuf.RpcController controller, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest request, + com.google.protobuf.RpcCallback done) { + impl.getTableState(controller, request, done); + } + + @java.lang.Override + public void setQuota( + com.google.protobuf.RpcController controller, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest request, + com.google.protobuf.RpcCallback done) { + impl.setQuota(controller, request, done); + } + }; } @@ -42482,6 +45427,10 @@ public final class MasterProtos { return impl.listTableDescriptorsByNamespace(controller, (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableDescriptorsByNamespaceRequest)request); case 42: return impl.listTableNamesByNamespace(controller, (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableNamesByNamespaceRequest)request); + case 43: + return impl.getTableState(controller, (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest)request); + case 44: + return impl.setQuota(controller, (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest)request); default: throw new java.lang.AssertionError("Can't get here."); } @@ -42582,6 +45531,10 @@ public final class MasterProtos { return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableDescriptorsByNamespaceRequest.getDefaultInstance(); case 42: return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableNamesByNamespaceRequest.getDefaultInstance(); + case 43: + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest.getDefaultInstance(); + case 44: + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest.getDefaultInstance(); default: throw new java.lang.AssertionError("Can't get here."); } @@ -42682,6 +45635,10 @@ public final class MasterProtos { return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableDescriptorsByNamespaceResponse.getDefaultInstance(); case 42: return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableNamesByNamespaceResponse.getDefaultInstance(); + case 43: + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.getDefaultInstance(); + case 44: + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.getDefaultInstance(); default: throw new java.lang.AssertionError("Can't get here."); } @@ -43232,6 +46189,30 @@ public final class MasterProtos { org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableNamesByNamespaceRequest request, com.google.protobuf.RpcCallback done); + /** + * rpc GetTableState(.GetTableStateRequest) returns (.GetTableStateResponse); + * + *
    +     ** returns table state 
    +     * 
    + */ + public abstract void getTableState( + com.google.protobuf.RpcController controller, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest request, + com.google.protobuf.RpcCallback done); + + /** + * rpc SetQuota(.SetQuotaRequest) returns (.SetQuotaResponse); + * + *
    +     ** Apply the new quota settings 
    +     * 
    + */ + public abstract void setQuota( + com.google.protobuf.RpcController controller, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest request, + com.google.protobuf.RpcCallback done); + public static final com.google.protobuf.Descriptors.ServiceDescriptor getDescriptor() { @@ -43469,6 +46450,16 @@ public final class MasterProtos { com.google.protobuf.RpcUtil.specializeCallback( done)); return; + case 43: + this.getTableState(controller, (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest)request, + com.google.protobuf.RpcUtil.specializeCallback( + done)); + return; + case 44: + this.setQuota(controller, (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest)request, + com.google.protobuf.RpcUtil.specializeCallback( + done)); + return; default: throw new java.lang.AssertionError("Can't get here."); } @@ -43569,6 +46560,10 @@ public final class MasterProtos { return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableDescriptorsByNamespaceRequest.getDefaultInstance(); case 42: return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableNamesByNamespaceRequest.getDefaultInstance(); + case 43: + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest.getDefaultInstance(); + case 44: + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest.getDefaultInstance(); default: throw new java.lang.AssertionError("Can't get here."); } @@ -43669,6 +46664,10 @@ public final class MasterProtos { return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableDescriptorsByNamespaceResponse.getDefaultInstance(); case 42: return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableNamesByNamespaceResponse.getDefaultInstance(); + case 43: + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.getDefaultInstance(); + case 44: + return org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.getDefaultInstance(); default: throw new java.lang.AssertionError("Can't get here."); } @@ -44334,6 +47333,36 @@ public final class MasterProtos { org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableNamesByNamespaceResponse.class, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableNamesByNamespaceResponse.getDefaultInstance())); } + + public void getTableState( + com.google.protobuf.RpcController controller, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest request, + com.google.protobuf.RpcCallback done) { + channel.callMethod( + getDescriptor().getMethods().get(43), + controller, + request, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.getDefaultInstance(), + com.google.protobuf.RpcUtil.generalizeCallback( + done, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.class, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.getDefaultInstance())); + } + + public void setQuota( + com.google.protobuf.RpcController controller, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest request, + com.google.protobuf.RpcCallback done) { + channel.callMethod( + getDescriptor().getMethods().get(44), + controller, + request, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.getDefaultInstance(), + com.google.protobuf.RpcUtil.generalizeCallback( + done, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.class, + 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.getDefaultInstance())); + } } public static BlockingInterface newBlockingStub( @@ -44556,6 +47585,16 @@ public final class MasterProtos { com.google.protobuf.RpcController controller, org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableNamesByNamespaceRequest request) throws com.google.protobuf.ServiceException; + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse getTableState( + com.google.protobuf.RpcController controller, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest request) + throws com.google.protobuf.ServiceException; + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse setQuota( + com.google.protobuf.RpcController controller, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest request) + throws com.google.protobuf.ServiceException; } private static final class BlockingStub implements BlockingInterface { @@ -45080,6 +48119,30 @@ public final class MasterProtos { org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ListTableNamesByNamespaceResponse.getDefaultInstance()); } + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse getTableState( + com.google.protobuf.RpcController controller, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest request) + throws com.google.protobuf.ServiceException { + return (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse) channel.callBlockingMethod( + getDescriptor().getMethods().get(43), + controller, + request, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateResponse.getDefaultInstance()); + } + + + public org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse setQuota( + com.google.protobuf.RpcController controller, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest request) + throws com.google.protobuf.ServiceException { + return (org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse) channel.callBlockingMethod( + getDescriptor().getMethods().get(44), + controller, + request, + org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse.getDefaultInstance()); + } + } // @@protoc_insertion_point(class_scope:MasterService) @@ -45456,6 +48519,16 @@ public final class MasterProtos { com.google.protobuf.GeneratedMessage.FieldAccessorTable internal_static_GetTableNamesResponse_fieldAccessorTable; private static com.google.protobuf.Descriptors.Descriptor + internal_static_GetTableStateRequest_descriptor; + private static + com.google.protobuf.GeneratedMessage.FieldAccessorTable + internal_static_GetTableStateRequest_fieldAccessorTable; + private static com.google.protobuf.Descriptors.Descriptor + internal_static_GetTableStateResponse_descriptor; + private static + com.google.protobuf.GeneratedMessage.FieldAccessorTable + internal_static_GetTableStateResponse_fieldAccessorTable; + private static com.google.protobuf.Descriptors.Descriptor internal_static_GetClusterStatusRequest_descriptor; private static com.google.protobuf.GeneratedMessage.FieldAccessorTable @@ -45495,6 +48568,16 @@ public final class MasterProtos { private static com.google.protobuf.GeneratedMessage.FieldAccessorTable internal_static_IsProcedureDoneResponse_fieldAccessorTable; + private static com.google.protobuf.Descriptors.Descriptor + internal_static_SetQuotaRequest_descriptor; + private static 
+ com.google.protobuf.GeneratedMessage.FieldAccessorTable + internal_static_SetQuotaRequest_fieldAccessorTable; + private static com.google.protobuf.Descriptors.Descriptor + internal_static_SetQuotaResponse_descriptor; + private static + com.google.protobuf.GeneratedMessage.FieldAccessorTable + internal_static_SetQuotaResponse_fieldAccessorTable; public static com.google.protobuf.Descriptors.FileDescriptor getDescriptor() { @@ -45505,196 +48588,207 @@ public final class MasterProtos { static { java.lang.String[] descriptorData = { "\n\014Master.proto\032\013HBase.proto\032\014Client.prot" + - "o\032\023ClusterStatus.proto\"`\n\020AddColumnReque" + - "st\022\036\n\ntable_name\030\001 \002(\0132\n.TableName\022,\n\017co" + - "lumn_families\030\002 \002(\0132\023.ColumnFamilySchema" + - "\"\023\n\021AddColumnResponse\"J\n\023DeleteColumnReq" + - "uest\022\036\n\ntable_name\030\001 \002(\0132\n.TableName\022\023\n\013" + - "column_name\030\002 \002(\014\"\026\n\024DeleteColumnRespons" + - "e\"c\n\023ModifyColumnRequest\022\036\n\ntable_name\030\001" + - " \002(\0132\n.TableName\022,\n\017column_families\030\002 \002(" + - "\0132\023.ColumnFamilySchema\"\026\n\024ModifyColumnRe", - "sponse\"\\\n\021MoveRegionRequest\022 \n\006region\030\001 " + - "\002(\0132\020.RegionSpecifier\022%\n\020dest_server_nam" + - "e\030\002 \001(\0132\013.ServerName\"\024\n\022MoveRegionRespon" + - "se\"\200\001\n\035DispatchMergingRegionsRequest\022\"\n\010" + - "region_a\030\001 \002(\0132\020.RegionSpecifier\022\"\n\010regi" + - "on_b\030\002 \002(\0132\020.RegionSpecifier\022\027\n\010forcible" + - "\030\003 \001(\010:\005false\" \n\036DispatchMergingRegionsR" + - "esponse\"7\n\023AssignRegionRequest\022 \n\006region" + - "\030\001 \002(\0132\020.RegionSpecifier\"\026\n\024AssignRegion" + - "Response\"O\n\025UnassignRegionRequest\022 \n\006reg", - "ion\030\001 \002(\0132\020.RegionSpecifier\022\024\n\005force\030\002 \001" + - "(\010:\005false\"\030\n\026UnassignRegionResponse\"8\n\024O" + - "fflineRegionRequest\022 \n\006region\030\001 \002(\0132\020.Re" + - "gionSpecifier\"\027\n\025OfflineRegionResponse\"L" + - "\n\022CreateTableRequest\022\"\n\014table_schema\030\001 \002" + - "(\0132\014.TableSchema\022\022\n\nsplit_keys\030\002 \003(\014\"\025\n\023" + - "CreateTableResponse\"4\n\022DeleteTableReques" + - "t\022\036\n\ntable_name\030\001 \002(\0132\n.TableName\"\025\n\023Del" + - "eteTableResponse\"T\n\024TruncateTableRequest" + - "\022\035\n\ttableName\030\001 \002(\0132\n.TableName\022\035\n\016prese", - "rveSplits\030\002 \001(\010:\005false\"\027\n\025TruncateTableR" + - "esponse\"4\n\022EnableTableRequest\022\036\n\ntable_n" + - "ame\030\001 \002(\0132\n.TableName\"\025\n\023EnableTableResp" + - "onse\"5\n\023DisableTableRequest\022\036\n\ntable_nam" + - "e\030\001 \002(\0132\n.TableName\"\026\n\024DisableTableRespo" + - "nse\"X\n\022ModifyTableRequest\022\036\n\ntable_name\030" + - "\001 \002(\0132\n.TableName\022\"\n\014table_schema\030\002 \002(\0132" + - "\014.TableSchema\"\025\n\023ModifyTableResponse\"K\n\026" + - "CreateNamespaceRequest\0221\n\023namespaceDescr" + - "iptor\030\001 \002(\0132\024.NamespaceDescriptor\"\031\n\027Cre", - "ateNamespaceResponse\"/\n\026DeleteNamespaceR" + - "equest\022\025\n\rnamespaceName\030\001 \002(\t\"\031\n\027DeleteN" + - "amespaceResponse\"K\n\026ModifyNamespaceReque" + - "st\0221\n\023namespaceDescriptor\030\001 \002(\0132\024.Namesp" + - "aceDescriptor\"\031\n\027ModifyNamespaceResponse" + - "\"6\n\035GetNamespaceDescriptorRequest\022\025\n\rnam" + - 
"espaceName\030\001 \002(\t\"S\n\036GetNamespaceDescript" + - "orResponse\0221\n\023namespaceDescriptor\030\001 \002(\0132" + - "\024.NamespaceDescriptor\"!\n\037ListNamespaceDe" + - "scriptorsRequest\"U\n ListNamespaceDescrip", - "torsResponse\0221\n\023namespaceDescriptor\030\001 \003(" + - "\0132\024.NamespaceDescriptor\"?\n&ListTableDesc" + - "riptorsByNamespaceRequest\022\025\n\rnamespaceNa" + - "me\030\001 \002(\t\"L\n\'ListTableDescriptorsByNamesp" + - "aceResponse\022!\n\013tableSchema\030\001 \003(\0132\014.Table" + - "Schema\"9\n ListTableNamesByNamespaceReque" + - "st\022\025\n\rnamespaceName\030\001 \002(\t\"B\n!ListTableNa" + - "mesByNamespaceResponse\022\035\n\ttableName\030\001 \003(" + - "\0132\n.TableName\"\021\n\017ShutdownRequest\"\022\n\020Shut" + - "downResponse\"\023\n\021StopMasterRequest\"\024\n\022Sto", - "pMasterResponse\"\020\n\016BalanceRequest\"\'\n\017Bal" + - "anceResponse\022\024\n\014balancer_ran\030\001 \002(\010\"<\n\031Se" + - "tBalancerRunningRequest\022\n\n\002on\030\001 \002(\010\022\023\n\013s" + - "ynchronous\030\002 \001(\010\"8\n\032SetBalancerRunningRe" + - "sponse\022\032\n\022prev_balance_value\030\001 \001(\010\"\027\n\025Ru" + - "nCatalogScanRequest\"-\n\026RunCatalogScanRes" + - "ponse\022\023\n\013scan_result\030\001 \001(\005\"-\n\033EnableCata" + - "logJanitorRequest\022\016\n\006enable\030\001 \002(\010\"2\n\034Ena" + - "bleCatalogJanitorResponse\022\022\n\nprev_value\030" + - "\001 \001(\010\" \n\036IsCatalogJanitorEnabledRequest\"", - "0\n\037IsCatalogJanitorEnabledResponse\022\r\n\005va" + - "lue\030\001 \002(\010\"9\n\017SnapshotRequest\022&\n\010snapshot" + - "\030\001 \002(\0132\024.SnapshotDescription\",\n\020Snapshot" + - "Response\022\030\n\020expected_timeout\030\001 \002(\003\"\036\n\034Ge" + - "tCompletedSnapshotsRequest\"H\n\035GetComplet" + - "edSnapshotsResponse\022\'\n\tsnapshots\030\001 \003(\0132\024" + - ".SnapshotDescription\"?\n\025DeleteSnapshotRe" + - "quest\022&\n\010snapshot\030\001 \002(\0132\024.SnapshotDescri" + - "ption\"\030\n\026DeleteSnapshotResponse\"@\n\026Resto" + - "reSnapshotRequest\022&\n\010snapshot\030\001 \002(\0132\024.Sn", - "apshotDescription\"\031\n\027RestoreSnapshotResp" + - "onse\"?\n\025IsSnapshotDoneRequest\022&\n\010snapsho" + - "t\030\001 \001(\0132\024.SnapshotDescription\"U\n\026IsSnaps" + - "hotDoneResponse\022\023\n\004done\030\001 \001(\010:\005false\022&\n\010" + - "snapshot\030\002 \001(\0132\024.SnapshotDescription\"F\n\034" + - "IsRestoreSnapshotDoneRequest\022&\n\010snapshot" + - "\030\001 \001(\0132\024.SnapshotDescription\"4\n\035IsRestor" + - "eSnapshotDoneResponse\022\023\n\004done\030\001 \001(\010:\005fal" + - "se\"=\n\033GetSchemaAlterStatusRequest\022\036\n\ntab" + - "le_name\030\001 \002(\0132\n.TableName\"T\n\034GetSchemaAl", - "terStatusResponse\022\035\n\025yet_to_update_regio" + - "ns\030\001 \001(\r\022\025\n\rtotal_regions\030\002 \001(\r\"\202\001\n\032GetT" + - "ableDescriptorsRequest\022\037\n\013table_names\030\001 " + - "\003(\0132\n.TableName\022\r\n\005regex\030\002 \001(\t\022!\n\022includ" + - "e_sys_tables\030\003 \001(\010:\005false\022\021\n\tnamespace\030\004" + - " \001(\t\"A\n\033GetTableDescriptorsResponse\022\"\n\014t" + - "able_schema\030\001 \003(\0132\014.TableSchema\"[\n\024GetTa" + - "bleNamesRequest\022\r\n\005regex\030\001 \001(\t\022!\n\022includ" + - "e_sys_tables\030\002 \001(\010:\005false\022\021\n\tnamespace\030\003" + - " \001(\t\"8\n\025GetTableNamesResponse\022\037\n\013table_n", - "ames\030\001 
\003(\0132\n.TableName\"\031\n\027GetClusterStat" + - "usRequest\"B\n\030GetClusterStatusResponse\022&\n" + - "\016cluster_status\030\001 \002(\0132\016.ClusterStatus\"\030\n" + - "\026IsMasterRunningRequest\"4\n\027IsMasterRunni" + - "ngResponse\022\031\n\021is_master_running\030\001 \002(\010\"@\n" + - "\024ExecProcedureRequest\022(\n\tprocedure\030\001 \002(\013" + - "2\025.ProcedureDescription\"F\n\025ExecProcedure" + - "Response\022\030\n\020expected_timeout\030\001 \001(\003\022\023\n\013re" + - "turn_data\030\002 \001(\014\"B\n\026IsProcedureDoneReques" + - "t\022(\n\tprocedure\030\001 \001(\0132\025.ProcedureDescript", - "ion\"W\n\027IsProcedureDoneResponse\022\023\n\004done\030\001" + - " \001(\010:\005false\022\'\n\010snapshot\030\002 \001(\0132\025.Procedur" + - "eDescription2\365\027\n\rMasterService\022S\n\024GetSch" + - "emaAlterStatus\022\034.GetSchemaAlterStatusReq" + - "uest\032\035.GetSchemaAlterStatusResponse\022P\n\023G" + - "etTableDescriptors\022\033.GetTableDescriptors" + - "Request\032\034.GetTableDescriptorsResponse\022>\n" + - "\rGetTableNames\022\025.GetTableNamesRequest\032\026." + - "GetTableNamesResponse\022G\n\020GetClusterStatu" + - "s\022\030.GetClusterStatusRequest\032\031.GetCluster", - "StatusResponse\022D\n\017IsMasterRunning\022\027.IsMa" + - "sterRunningRequest\032\030.IsMasterRunningResp" + - "onse\0222\n\tAddColumn\022\021.AddColumnRequest\032\022.A" + - "ddColumnResponse\022;\n\014DeleteColumn\022\024.Delet" + - "eColumnRequest\032\025.DeleteColumnResponse\022;\n" + - "\014ModifyColumn\022\024.ModifyColumnRequest\032\025.Mo" + - "difyColumnResponse\0225\n\nMoveRegion\022\022.MoveR" + - "egionRequest\032\023.MoveRegionResponse\022Y\n\026Dis" + - "patchMergingRegions\022\036.DispatchMergingReg" + - "ionsRequest\032\037.DispatchMergingRegionsResp", - "onse\022;\n\014AssignRegion\022\024.AssignRegionReque" + - "st\032\025.AssignRegionResponse\022A\n\016UnassignReg" + - "ion\022\026.UnassignRegionRequest\032\027.UnassignRe" + - "gionResponse\022>\n\rOfflineRegion\022\025.OfflineR" + - "egionRequest\032\026.OfflineRegionResponse\0228\n\013" + - "DeleteTable\022\023.DeleteTableRequest\032\024.Delet" + - "eTableResponse\022>\n\rtruncateTable\022\025.Trunca" + - "teTableRequest\032\026.TruncateTableResponse\0228" + - "\n\013EnableTable\022\023.EnableTableRequest\032\024.Ena" + - "bleTableResponse\022;\n\014DisableTable\022\024.Disab", - "leTableRequest\032\025.DisableTableResponse\0228\n" + - "\013ModifyTable\022\023.ModifyTableRequest\032\024.Modi" + - "fyTableResponse\0228\n\013CreateTable\022\023.CreateT" + - "ableRequest\032\024.CreateTableResponse\022/\n\010Shu" + - "tdown\022\020.ShutdownRequest\032\021.ShutdownRespon" + - "se\0225\n\nStopMaster\022\022.StopMasterRequest\032\023.S" + - "topMasterResponse\022,\n\007Balance\022\017.BalanceRe" + - "quest\032\020.BalanceResponse\022M\n\022SetBalancerRu" + - "nning\022\032.SetBalancerRunningRequest\032\033.SetB" + - "alancerRunningResponse\022A\n\016RunCatalogScan", - "\022\026.RunCatalogScanRequest\032\027.RunCatalogSca" + - "nResponse\022S\n\024EnableCatalogJanitor\022\034.Enab" + - "leCatalogJanitorRequest\032\035.EnableCatalogJ" + - "anitorResponse\022\\\n\027IsCatalogJanitorEnable" + - "d\022\037.IsCatalogJanitorEnabledRequest\032 .IsC" + - "atalogJanitorEnabledResponse\022L\n\021ExecMast" + - "erService\022\032.CoprocessorServiceRequest\032\033." 
+ - "CoprocessorServiceResponse\022/\n\010Snapshot\022\020" + - ".SnapshotRequest\032\021.SnapshotResponse\022V\n\025G" + - "etCompletedSnapshots\022\035.GetCompletedSnaps", - "hotsRequest\032\036.GetCompletedSnapshotsRespo" + - "nse\022A\n\016DeleteSnapshot\022\026.DeleteSnapshotRe" + - "quest\032\027.DeleteSnapshotResponse\022A\n\016IsSnap" + - "shotDone\022\026.IsSnapshotDoneRequest\032\027.IsSna" + - "pshotDoneResponse\022D\n\017RestoreSnapshot\022\027.R" + - "estoreSnapshotRequest\032\030.RestoreSnapshotR" + - "esponse\022V\n\025IsRestoreSnapshotDone\022\035.IsRes" + - "toreSnapshotDoneRequest\032\036.IsRestoreSnaps" + - "hotDoneResponse\022>\n\rExecProcedure\022\025.ExecP" + - "rocedureRequest\032\026.ExecProcedureResponse\022", - "E\n\024ExecProcedureWithRet\022\025.ExecProcedureR" + - "equest\032\026.ExecProcedureResponse\022D\n\017IsProc" + - "edureDone\022\027.IsProcedureDoneRequest\032\030.IsP" + - "rocedureDoneResponse\022D\n\017ModifyNamespace\022" + - "\027.ModifyNamespaceRequest\032\030.ModifyNamespa" + - "ceResponse\022D\n\017CreateNamespace\022\027.CreateNa" + - "mespaceRequest\032\030.CreateNamespaceResponse" + - "\022D\n\017DeleteNamespace\022\027.DeleteNamespaceReq" + - "uest\032\030.DeleteNamespaceResponse\022Y\n\026GetNam" + - "espaceDescriptor\022\036.GetNamespaceDescripto", - "rRequest\032\037.GetNamespaceDescriptorRespons" + - "e\022_\n\030ListNamespaceDescriptors\022 .ListName" + - "spaceDescriptorsRequest\032!.ListNamespaceD" + - "escriptorsResponse\022t\n\037ListTableDescripto" + - "rsByNamespace\022\'.ListTableDescriptorsByNa" + - "mespaceRequest\032(.ListTableDescriptorsByN" + - "amespaceResponse\022b\n\031ListTableNamesByName" + - "space\022!.ListTableNamesByNamespaceRequest" + - "\032\".ListTableNamesByNamespaceResponseBB\n*" + - "org.apache.hadoop.hbase.protobuf.generat", - "edB\014MasterProtosH\001\210\001\001\240\001\001" + "o\032\023ClusterStatus.proto\032\013Quota.proto\"`\n\020A" + + "ddColumnRequest\022\036\n\ntable_name\030\001 \002(\0132\n.Ta" + + "bleName\022,\n\017column_families\030\002 \002(\0132\023.Colum" + + "nFamilySchema\"\023\n\021AddColumnResponse\"J\n\023De" + + "leteColumnRequest\022\036\n\ntable_name\030\001 \002(\0132\n." 
+ + "TableName\022\023\n\013column_name\030\002 \002(\014\"\026\n\024Delete" + + "ColumnResponse\"c\n\023ModifyColumnRequest\022\036\n" + + "\ntable_name\030\001 \002(\0132\n.TableName\022,\n\017column_" + + "families\030\002 \002(\0132\023.ColumnFamilySchema\"\026\n\024M", + "odifyColumnResponse\"\\\n\021MoveRegionRequest" + + "\022 \n\006region\030\001 \002(\0132\020.RegionSpecifier\022%\n\020de" + + "st_server_name\030\002 \001(\0132\013.ServerName\"\024\n\022Mov" + + "eRegionResponse\"\200\001\n\035DispatchMergingRegio" + + "nsRequest\022\"\n\010region_a\030\001 \002(\0132\020.RegionSpec" + + "ifier\022\"\n\010region_b\030\002 \002(\0132\020.RegionSpecifie" + + "r\022\027\n\010forcible\030\003 \001(\010:\005false\" \n\036DispatchMe" + + "rgingRegionsResponse\"7\n\023AssignRegionRequ" + + "est\022 \n\006region\030\001 \002(\0132\020.RegionSpecifier\"\026\n" + + "\024AssignRegionResponse\"O\n\025UnassignRegionR", + "equest\022 \n\006region\030\001 \002(\0132\020.RegionSpecifier" + + "\022\024\n\005force\030\002 \001(\010:\005false\"\030\n\026UnassignRegion" + + "Response\"8\n\024OfflineRegionRequest\022 \n\006regi" + + "on\030\001 \002(\0132\020.RegionSpecifier\"\027\n\025OfflineReg" + + "ionResponse\"L\n\022CreateTableRequest\022\"\n\014tab" + + "le_schema\030\001 \002(\0132\014.TableSchema\022\022\n\nsplit_k" + + "eys\030\002 \003(\014\"\025\n\023CreateTableResponse\"4\n\022Dele" + + "teTableRequest\022\036\n\ntable_name\030\001 \002(\0132\n.Tab" + + "leName\"\025\n\023DeleteTableResponse\"T\n\024Truncat" + + "eTableRequest\022\035\n\ttableName\030\001 \002(\0132\n.Table", + "Name\022\035\n\016preserveSplits\030\002 \001(\010:\005false\"\027\n\025T" + + "runcateTableResponse\"4\n\022EnableTableReque" + + "st\022\036\n\ntable_name\030\001 \002(\0132\n.TableName\"\025\n\023En" + + "ableTableResponse\"5\n\023DisableTableRequest" + + "\022\036\n\ntable_name\030\001 \002(\0132\n.TableName\"\026\n\024Disa" + + "bleTableResponse\"X\n\022ModifyTableRequest\022\036" + + "\n\ntable_name\030\001 \002(\0132\n.TableName\022\"\n\014table_" + + "schema\030\002 \002(\0132\014.TableSchema\"\025\n\023ModifyTabl" + + "eResponse\"K\n\026CreateNamespaceRequest\0221\n\023n" + + "amespaceDescriptor\030\001 \002(\0132\024.NamespaceDesc", + "riptor\"\031\n\027CreateNamespaceResponse\"/\n\026Del" + + "eteNamespaceRequest\022\025\n\rnamespaceName\030\001 \002" + + "(\t\"\031\n\027DeleteNamespaceResponse\"K\n\026ModifyN" + + "amespaceRequest\0221\n\023namespaceDescriptor\030\001" + + " \002(\0132\024.NamespaceDescriptor\"\031\n\027ModifyName" + + "spaceResponse\"6\n\035GetNamespaceDescriptorR" + + "equest\022\025\n\rnamespaceName\030\001 \002(\t\"S\n\036GetName" + + "spaceDescriptorResponse\0221\n\023namespaceDesc" + + "riptor\030\001 \002(\0132\024.NamespaceDescriptor\"!\n\037Li" + + "stNamespaceDescriptorsRequest\"U\n ListNam", + "espaceDescriptorsResponse\0221\n\023namespaceDe" + + "scriptor\030\001 \003(\0132\024.NamespaceDescriptor\"?\n&" + + "ListTableDescriptorsByNamespaceRequest\022\025" + + "\n\rnamespaceName\030\001 \002(\t\"L\n\'ListTableDescri" + + "ptorsByNamespaceResponse\022!\n\013tableSchema\030" + + "\001 \003(\0132\014.TableSchema\"9\n ListTableNamesByN" + + "amespaceRequest\022\025\n\rnamespaceName\030\001 \002(\t\"B" + + "\n!ListTableNamesByNamespaceResponse\022\035\n\tt" + + "ableName\030\001 \003(\0132\n.TableName\"\021\n\017ShutdownRe" + + "quest\"\022\n\020ShutdownResponse\"\023\n\021StopMasterR", + "equest\"\024\n\022StopMasterResponse\"\020\n\016BalanceR" + + 
"equest\"\'\n\017BalanceResponse\022\024\n\014balancer_ra" + + "n\030\001 \002(\010\"<\n\031SetBalancerRunningRequest\022\n\n\002" + + "on\030\001 \002(\010\022\023\n\013synchronous\030\002 \001(\010\"8\n\032SetBala" + + "ncerRunningResponse\022\032\n\022prev_balance_valu" + + "e\030\001 \001(\010\"\027\n\025RunCatalogScanRequest\"-\n\026RunC" + + "atalogScanResponse\022\023\n\013scan_result\030\001 \001(\005\"" + + "-\n\033EnableCatalogJanitorRequest\022\016\n\006enable" + + "\030\001 \002(\010\"2\n\034EnableCatalogJanitorResponse\022\022" + + "\n\nprev_value\030\001 \001(\010\" \n\036IsCatalogJanitorEn", + "abledRequest\"0\n\037IsCatalogJanitorEnabledR" + + "esponse\022\r\n\005value\030\001 \002(\010\"9\n\017SnapshotReques" + + "t\022&\n\010snapshot\030\001 \002(\0132\024.SnapshotDescriptio" + + "n\",\n\020SnapshotResponse\022\030\n\020expected_timeou" + + "t\030\001 \002(\003\"\036\n\034GetCompletedSnapshotsRequest\"" + + "H\n\035GetCompletedSnapshotsResponse\022\'\n\tsnap" + + "shots\030\001 \003(\0132\024.SnapshotDescription\"?\n\025Del" + + "eteSnapshotRequest\022&\n\010snapshot\030\001 \002(\0132\024.S" + + "napshotDescription\"\030\n\026DeleteSnapshotResp" + + "onse\"@\n\026RestoreSnapshotRequest\022&\n\010snapsh", + "ot\030\001 \002(\0132\024.SnapshotDescription\"\031\n\027Restor" + + "eSnapshotResponse\"?\n\025IsSnapshotDoneReque" + + "st\022&\n\010snapshot\030\001 \001(\0132\024.SnapshotDescripti" + + "on\"U\n\026IsSnapshotDoneResponse\022\023\n\004done\030\001 \001" + + "(\010:\005false\022&\n\010snapshot\030\002 \001(\0132\024.SnapshotDe" + + "scription\"F\n\034IsRestoreSnapshotDoneReques" + + "t\022&\n\010snapshot\030\001 \001(\0132\024.SnapshotDescriptio" + + "n\"4\n\035IsRestoreSnapshotDoneResponse\022\023\n\004do" + + "ne\030\001 \001(\010:\005false\"=\n\033GetSchemaAlterStatusR" + + "equest\022\036\n\ntable_name\030\001 \002(\0132\n.TableName\"T", + "\n\034GetSchemaAlterStatusResponse\022\035\n\025yet_to" + + "_update_regions\030\001 \001(\r\022\025\n\rtotal_regions\030\002" + + " \001(\r\"\202\001\n\032GetTableDescriptorsRequest\022\037\n\013t" + + "able_names\030\001 \003(\0132\n.TableName\022\r\n\005regex\030\002 " + + "\001(\t\022!\n\022include_sys_tables\030\003 \001(\010:\005false\022\021" + + "\n\tnamespace\030\004 \001(\t\"A\n\033GetTableDescriptors" + + "Response\022\"\n\014table_schema\030\001 \003(\0132\014.TableSc" + + "hema\"[\n\024GetTableNamesRequest\022\r\n\005regex\030\001 " + + "\001(\t\022!\n\022include_sys_tables\030\002 \001(\010:\005false\022\021" + + "\n\tnamespace\030\003 \001(\t\"8\n\025GetTableNamesRespon", + "se\022\037\n\013table_names\030\001 \003(\0132\n.TableName\"6\n\024G" + + "etTableStateRequest\022\036\n\ntable_name\030\001 \002(\0132" + + "\n.TableName\"9\n\025GetTableStateResponse\022 \n\013" + + "table_state\030\001 \002(\0132\013.TableState\"\031\n\027GetClu" + + "sterStatusRequest\"B\n\030GetClusterStatusRes" + + "ponse\022&\n\016cluster_status\030\001 \002(\0132\016.ClusterS" + + "tatus\"\030\n\026IsMasterRunningRequest\"4\n\027IsMas" + + "terRunningResponse\022\031\n\021is_master_running\030" + + "\001 \002(\010\"@\n\024ExecProcedureRequest\022(\n\tprocedu" + + "re\030\001 \002(\0132\025.ProcedureDescription\"F\n\025ExecP", + "rocedureResponse\022\030\n\020expected_timeout\030\001 \001" + + "(\003\022\023\n\013return_data\030\002 \001(\014\"B\n\026IsProcedureDo" + + "neRequest\022(\n\tprocedure\030\001 \001(\0132\025.Procedure" + + "Description\"W\n\027IsProcedureDoneResponse\022\023" + + "\n\004done\030\001 
\001(\010:\005false\022\'\n\010snapshot\030\002 \001(\0132\025." + + "ProcedureDescription\"\273\001\n\017SetQuotaRequest" + + "\022\021\n\tuser_name\030\001 \001(\t\022\022\n\nuser_group\030\002 \001(\t\022" + + "\021\n\tnamespace\030\003 \001(\t\022\036\n\ntable_name\030\004 \001(\0132\n" + + ".TableName\022\022\n\nremove_all\030\005 \001(\010\022\026\n\016bypass" + + "_globals\030\006 \001(\010\022\"\n\010throttle\030\007 \001(\0132\020.Throt", + "tleRequest\"\022\n\020SetQuotaResponse2\346\030\n\rMaste" + + "rService\022S\n\024GetSchemaAlterStatus\022\034.GetSc" + + "hemaAlterStatusRequest\032\035.GetSchemaAlterS" + + "tatusResponse\022P\n\023GetTableDescriptors\022\033.G" + + "etTableDescriptorsRequest\032\034.GetTableDesc" + + "riptorsResponse\022>\n\rGetTableNames\022\025.GetTa" + + "bleNamesRequest\032\026.GetTableNamesResponse\022" + + "G\n\020GetClusterStatus\022\030.GetClusterStatusRe" + + "quest\032\031.GetClusterStatusResponse\022D\n\017IsMa" + + "sterRunning\022\027.IsMasterRunningRequest\032\030.I", + "sMasterRunningResponse\0222\n\tAddColumn\022\021.Ad" + + "dColumnRequest\032\022.AddColumnResponse\022;\n\014De" + + "leteColumn\022\024.DeleteColumnRequest\032\025.Delet" + + "eColumnResponse\022;\n\014ModifyColumn\022\024.Modify" + + "ColumnRequest\032\025.ModifyColumnResponse\0225\n\n" + + "MoveRegion\022\022.MoveRegionRequest\032\023.MoveReg" + + "ionResponse\022Y\n\026DispatchMergingRegions\022\036." + + "DispatchMergingRegionsRequest\032\037.Dispatch" + + "MergingRegionsResponse\022;\n\014AssignRegion\022\024" + + ".AssignRegionRequest\032\025.AssignRegionRespo", + "nse\022A\n\016UnassignRegion\022\026.UnassignRegionRe" + + "quest\032\027.UnassignRegionResponse\022>\n\rOfflin" + + "eRegion\022\025.OfflineRegionRequest\032\026.Offline" + + "RegionResponse\0228\n\013DeleteTable\022\023.DeleteTa" + + "bleRequest\032\024.DeleteTableResponse\022>\n\rtrun" + + "cateTable\022\025.TruncateTableRequest\032\026.Trunc" + + "ateTableResponse\0228\n\013EnableTable\022\023.Enable" + + "TableRequest\032\024.EnableTableResponse\022;\n\014Di" + + "sableTable\022\024.DisableTableRequest\032\025.Disab" + + "leTableResponse\0228\n\013ModifyTable\022\023.ModifyT", + "ableRequest\032\024.ModifyTableResponse\0228\n\013Cre" + + "ateTable\022\023.CreateTableRequest\032\024.CreateTa" + + "bleResponse\022/\n\010Shutdown\022\020.ShutdownReques" + + "t\032\021.ShutdownResponse\0225\n\nStopMaster\022\022.Sto" + + "pMasterRequest\032\023.StopMasterResponse\022,\n\007B" + + "alance\022\017.BalanceRequest\032\020.BalanceRespons" + + "e\022M\n\022SetBalancerRunning\022\032.SetBalancerRun" + + "ningRequest\032\033.SetBalancerRunningResponse" + + "\022A\n\016RunCatalogScan\022\026.RunCatalogScanReque" + + "st\032\027.RunCatalogScanResponse\022S\n\024EnableCat", + "alogJanitor\022\034.EnableCatalogJanitorReques" + + "t\032\035.EnableCatalogJanitorResponse\022\\\n\027IsCa" + + "talogJanitorEnabled\022\037.IsCatalogJanitorEn" + + "abledRequest\032 .IsCatalogJanitorEnabledRe" + + "sponse\022L\n\021ExecMasterService\022\032.Coprocesso" + + "rServiceRequest\032\033.CoprocessorServiceResp" + + "onse\022/\n\010Snapshot\022\020.SnapshotRequest\032\021.Sna" + + "pshotResponse\022V\n\025GetCompletedSnapshots\022\035" + + ".GetCompletedSnapshotsRequest\032\036.GetCompl" + + "etedSnapshotsResponse\022A\n\016DeleteSnapshot\022", + "\026.DeleteSnapshotRequest\032\027.DeleteSnapshot" + + "Response\022A\n\016IsSnapshotDone\022\026.IsSnapshotD" + + "oneRequest\032\027.IsSnapshotDoneResponse\022D\n\017R" + + 
"estoreSnapshot\022\027.RestoreSnapshotRequest\032" + + "\030.RestoreSnapshotResponse\022V\n\025IsRestoreSn" + + "apshotDone\022\035.IsRestoreSnapshotDoneReques" + + "t\032\036.IsRestoreSnapshotDoneResponse\022>\n\rExe" + + "cProcedure\022\025.ExecProcedureRequest\032\026.Exec" + + "ProcedureResponse\022E\n\024ExecProcedureWithRe" + + "t\022\025.ExecProcedureRequest\032\026.ExecProcedure", + "Response\022D\n\017IsProcedureDone\022\027.IsProcedur" + + "eDoneRequest\032\030.IsProcedureDoneResponse\022D" + + "\n\017ModifyNamespace\022\027.ModifyNamespaceReque" + + "st\032\030.ModifyNamespaceResponse\022D\n\017CreateNa" + + "mespace\022\027.CreateNamespaceRequest\032\030.Creat" + + "eNamespaceResponse\022D\n\017DeleteNamespace\022\027." + + "DeleteNamespaceRequest\032\030.DeleteNamespace" + + "Response\022Y\n\026GetNamespaceDescriptor\022\036.Get" + + "NamespaceDescriptorRequest\032\037.GetNamespac" + + "eDescriptorResponse\022_\n\030ListNamespaceDesc", + "riptors\022 .ListNamespaceDescriptorsReques" + + "t\032!.ListNamespaceDescriptorsResponse\022t\n\037" + + "ListTableDescriptorsByNamespace\022\'.ListTa" + + "bleDescriptorsByNamespaceRequest\032(.ListT" + + "ableDescriptorsByNamespaceResponse\022b\n\031Li" + + "stTableNamesByNamespace\022!.ListTableNames" + + "ByNamespaceRequest\032\".ListTableNamesByNam" + + "espaceResponse\022>\n\rGetTableState\022\025.GetTab" + + "leStateRequest\032\026.GetTableStateResponse\022/" + + "\n\010SetQuota\022\020.SetQuotaRequest\032\021.SetQuotaR", + "esponseBB\n*org.apache.hadoop.hbase.proto" + + "buf.generatedB\014MasterProtosH\001\210\001\001\240\001\001" }; com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner = new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() { @@ -46145,54 +49239,78 @@ public final class MasterProtos { com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_GetTableNamesResponse_descriptor, new java.lang.String[] { "TableNames", }); - internal_static_GetClusterStatusRequest_descriptor = + internal_static_GetTableStateRequest_descriptor = getDescriptor().getMessageTypes().get(74); + internal_static_GetTableStateRequest_fieldAccessorTable = new + com.google.protobuf.GeneratedMessage.FieldAccessorTable( + internal_static_GetTableStateRequest_descriptor, + new java.lang.String[] { "TableName", }); + internal_static_GetTableStateResponse_descriptor = + getDescriptor().getMessageTypes().get(75); + internal_static_GetTableStateResponse_fieldAccessorTable = new + com.google.protobuf.GeneratedMessage.FieldAccessorTable( + internal_static_GetTableStateResponse_descriptor, + new java.lang.String[] { "TableState", }); + internal_static_GetClusterStatusRequest_descriptor = + getDescriptor().getMessageTypes().get(76); internal_static_GetClusterStatusRequest_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_GetClusterStatusRequest_descriptor, new java.lang.String[] { }); internal_static_GetClusterStatusResponse_descriptor = - getDescriptor().getMessageTypes().get(75); + getDescriptor().getMessageTypes().get(77); internal_static_GetClusterStatusResponse_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_GetClusterStatusResponse_descriptor, new java.lang.String[] { "ClusterStatus", }); internal_static_IsMasterRunningRequest_descriptor = - getDescriptor().getMessageTypes().get(76); + getDescriptor().getMessageTypes().get(78); internal_static_IsMasterRunningRequest_fieldAccessorTable = 
new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_IsMasterRunningRequest_descriptor, new java.lang.String[] { }); internal_static_IsMasterRunningResponse_descriptor = - getDescriptor().getMessageTypes().get(77); + getDescriptor().getMessageTypes().get(79); internal_static_IsMasterRunningResponse_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_IsMasterRunningResponse_descriptor, new java.lang.String[] { "IsMasterRunning", }); internal_static_ExecProcedureRequest_descriptor = - getDescriptor().getMessageTypes().get(78); + getDescriptor().getMessageTypes().get(80); internal_static_ExecProcedureRequest_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_ExecProcedureRequest_descriptor, new java.lang.String[] { "Procedure", }); internal_static_ExecProcedureResponse_descriptor = - getDescriptor().getMessageTypes().get(79); + getDescriptor().getMessageTypes().get(81); internal_static_ExecProcedureResponse_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_ExecProcedureResponse_descriptor, new java.lang.String[] { "ExpectedTimeout", "ReturnData", }); internal_static_IsProcedureDoneRequest_descriptor = - getDescriptor().getMessageTypes().get(80); + getDescriptor().getMessageTypes().get(82); internal_static_IsProcedureDoneRequest_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_IsProcedureDoneRequest_descriptor, new java.lang.String[] { "Procedure", }); internal_static_IsProcedureDoneResponse_descriptor = - getDescriptor().getMessageTypes().get(81); + getDescriptor().getMessageTypes().get(83); internal_static_IsProcedureDoneResponse_fieldAccessorTable = new com.google.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_IsProcedureDoneResponse_descriptor, new java.lang.String[] { "Done", "Snapshot", }); + internal_static_SetQuotaRequest_descriptor = + getDescriptor().getMessageTypes().get(84); + internal_static_SetQuotaRequest_fieldAccessorTable = new + com.google.protobuf.GeneratedMessage.FieldAccessorTable( + internal_static_SetQuotaRequest_descriptor, + new java.lang.String[] { "UserName", "UserGroup", "Namespace", "TableName", "RemoveAll", "BypassGlobals", "Throttle", }); + internal_static_SetQuotaResponse_descriptor = + getDescriptor().getMessageTypes().get(85); + internal_static_SetQuotaResponse_fieldAccessorTable = new + com.google.protobuf.GeneratedMessage.FieldAccessorTable( + internal_static_SetQuotaResponse_descriptor, + new java.lang.String[] { }); return null; } }; @@ -46202,6 +49320,7 @@ public final class MasterProtos { org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.getDescriptor(), org.apache.hadoop.hbase.protobuf.generated.ClientProtos.getDescriptor(), org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos.getDescriptor(), + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.getDescriptor(), }, assigner); } diff --git hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java new file mode 100644 index 0000000..5eac192 --- /dev/null +++ hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/QuotaProtos.java @@ -0,0 +1,4378 @@ +// Generated by the protocol buffer compiler. DO NOT EDIT! 
+// source: Quota.proto + +package org.apache.hadoop.hbase.protobuf.generated; + +public final class QuotaProtos { + private QuotaProtos() {} + public static void registerAllExtensions( + com.google.protobuf.ExtensionRegistry registry) { + } + /** + * Protobuf enum {@code QuotaScope} + */ + public enum QuotaScope + implements com.google.protobuf.ProtocolMessageEnum { + /** + * CLUSTER = 1; + */ + CLUSTER(0, 1), + /** + * MACHINE = 2; + */ + MACHINE(1, 2), + ; + + /** + * CLUSTER = 1; + */ + public static final int CLUSTER_VALUE = 1; + /** + * MACHINE = 2; + */ + public static final int MACHINE_VALUE = 2; + + + public final int getNumber() { return value; } + + public static QuotaScope valueOf(int value) { + switch (value) { + case 1: return CLUSTER; + case 2: return MACHINE; + default: return null; + } + } + + public static com.google.protobuf.Internal.EnumLiteMap + internalGetValueMap() { + return internalValueMap; + } + private static com.google.protobuf.Internal.EnumLiteMap + internalValueMap = + new com.google.protobuf.Internal.EnumLiteMap() { + public QuotaScope findValueByNumber(int number) { + return QuotaScope.valueOf(number); + } + }; + + public final com.google.protobuf.Descriptors.EnumValueDescriptor + getValueDescriptor() { + return getDescriptor().getValues().get(index); + } + public final com.google.protobuf.Descriptors.EnumDescriptor + getDescriptorForType() { + return getDescriptor(); + } + public static final com.google.protobuf.Descriptors.EnumDescriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.getDescriptor().getEnumTypes().get(0); + } + + private static final QuotaScope[] VALUES = values(); + + public static QuotaScope valueOf( + com.google.protobuf.Descriptors.EnumValueDescriptor desc) { + if (desc.getType() != getDescriptor()) { + throw new java.lang.IllegalArgumentException( + "EnumValueDescriptor is not for this type."); + } + return VALUES[desc.getIndex()]; + } + + private final int index; + private final int value; + + private QuotaScope(int index, int value) { + this.index = index; + this.value = value; + } + + // @@protoc_insertion_point(enum_scope:QuotaScope) + } + + /** + * Protobuf enum {@code ThrottleType} + */ + public enum ThrottleType + implements com.google.protobuf.ProtocolMessageEnum { + /** + * REQUEST_NUMBER = 1; + */ + REQUEST_NUMBER(0, 1), + /** + * REQUEST_SIZE = 2; + */ + REQUEST_SIZE(1, 2), + /** + * WRITE_NUMBER = 3; + */ + WRITE_NUMBER(2, 3), + /** + * WRITE_SIZE = 4; + */ + WRITE_SIZE(3, 4), + /** + * READ_NUMBER = 5; + */ + READ_NUMBER(4, 5), + /** + * READ_SIZE = 6; + */ + READ_SIZE(5, 6), + ; + + /** + * REQUEST_NUMBER = 1; + */ + public static final int REQUEST_NUMBER_VALUE = 1; + /** + * REQUEST_SIZE = 2; + */ + public static final int REQUEST_SIZE_VALUE = 2; + /** + * WRITE_NUMBER = 3; + */ + public static final int WRITE_NUMBER_VALUE = 3; + /** + * WRITE_SIZE = 4; + */ + public static final int WRITE_SIZE_VALUE = 4; + /** + * READ_NUMBER = 5; + */ + public static final int READ_NUMBER_VALUE = 5; + /** + * READ_SIZE = 6; + */ + public static final int READ_SIZE_VALUE = 6; + + + public final int getNumber() { return value; } + + public static ThrottleType valueOf(int value) { + switch (value) { + case 1: return REQUEST_NUMBER; + case 2: return REQUEST_SIZE; + case 3: return WRITE_NUMBER; + case 4: return WRITE_SIZE; + case 5: return READ_NUMBER; + case 6: return READ_SIZE; + default: return null; + } + } + + public static com.google.protobuf.Internal.EnumLiteMap + internalGetValueMap() { + 
return internalValueMap; + } + private static com.google.protobuf.Internal.EnumLiteMap + internalValueMap = + new com.google.protobuf.Internal.EnumLiteMap() { + public ThrottleType findValueByNumber(int number) { + return ThrottleType.valueOf(number); + } + }; + + public final com.google.protobuf.Descriptors.EnumValueDescriptor + getValueDescriptor() { + return getDescriptor().getValues().get(index); + } + public final com.google.protobuf.Descriptors.EnumDescriptor + getDescriptorForType() { + return getDescriptor(); + } + public static final com.google.protobuf.Descriptors.EnumDescriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.getDescriptor().getEnumTypes().get(1); + } + + private static final ThrottleType[] VALUES = values(); + + public static ThrottleType valueOf( + com.google.protobuf.Descriptors.EnumValueDescriptor desc) { + if (desc.getType() != getDescriptor()) { + throw new java.lang.IllegalArgumentException( + "EnumValueDescriptor is not for this type."); + } + return VALUES[desc.getIndex()]; + } + + private final int index; + private final int value; + + private ThrottleType(int index, int value) { + this.index = index; + this.value = value; + } + + // @@protoc_insertion_point(enum_scope:ThrottleType) + } + + /** + * Protobuf enum {@code QuotaType} + */ + public enum QuotaType + implements com.google.protobuf.ProtocolMessageEnum { + /** + * THROTTLE = 1; + */ + THROTTLE(0, 1), + ; + + /** + * THROTTLE = 1; + */ + public static final int THROTTLE_VALUE = 1; + + + public final int getNumber() { return value; } + + public static QuotaType valueOf(int value) { + switch (value) { + case 1: return THROTTLE; + default: return null; + } + } + + public static com.google.protobuf.Internal.EnumLiteMap + internalGetValueMap() { + return internalValueMap; + } + private static com.google.protobuf.Internal.EnumLiteMap + internalValueMap = + new com.google.protobuf.Internal.EnumLiteMap() { + public QuotaType findValueByNumber(int number) { + return QuotaType.valueOf(number); + } + }; + + public final com.google.protobuf.Descriptors.EnumValueDescriptor + getValueDescriptor() { + return getDescriptor().getValues().get(index); + } + public final com.google.protobuf.Descriptors.EnumDescriptor + getDescriptorForType() { + return getDescriptor(); + } + public static final com.google.protobuf.Descriptors.EnumDescriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.getDescriptor().getEnumTypes().get(2); + } + + private static final QuotaType[] VALUES = values(); + + public static QuotaType valueOf( + com.google.protobuf.Descriptors.EnumValueDescriptor desc) { + if (desc.getType() != getDescriptor()) { + throw new java.lang.IllegalArgumentException( + "EnumValueDescriptor is not for this type."); + } + return VALUES[desc.getIndex()]; + } + + private final int index; + private final int value; + + private QuotaType(int index, int value) { + this.index = index; + this.value = value; + } + + // @@protoc_insertion_point(enum_scope:QuotaType) + } + + public interface TimedQuotaOrBuilder + extends com.google.protobuf.MessageOrBuilder { + + // required .TimeUnit time_unit = 1; + /** + * required .TimeUnit time_unit = 1; + */ + boolean hasTimeUnit(); + /** + * required .TimeUnit time_unit = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit getTimeUnit(); + + // optional uint64 soft_limit = 2; + /** + * optional uint64 soft_limit = 2; + */ + boolean hasSoftLimit(); + /** + * optional uint64 soft_limit = 2; + 
*/ + long getSoftLimit(); + + // optional float share = 3; + /** + * optional float share = 3; + */ + boolean hasShare(); + /** + * optional float share = 3; + */ + float getShare(); + + // optional .QuotaScope scope = 4 [default = MACHINE]; + /** + * optional .QuotaScope scope = 4 [default = MACHINE]; + */ + boolean hasScope(); + /** + * optional .QuotaScope scope = 4 [default = MACHINE]; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope getScope(); + } + /** + * Protobuf type {@code TimedQuota} + */ + public static final class TimedQuota extends + com.google.protobuf.GeneratedMessage + implements TimedQuotaOrBuilder { + // Use TimedQuota.newBuilder() to construct. + private TimedQuota(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private TimedQuota(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final TimedQuota defaultInstance; + public static TimedQuota getDefaultInstance() { + return defaultInstance; + } + + public TimedQuota getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private TimedQuota( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + int mutable_bitField0_ = 0; + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + case 8: { + int rawValue = input.readEnum(); + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit value = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit.valueOf(rawValue); + if (value == null) { + unknownFields.mergeVarintField(1, rawValue); + } else { + bitField0_ |= 0x00000001; + timeUnit_ = value; + } + break; + } + case 16: { + bitField0_ |= 0x00000002; + softLimit_ = input.readUInt64(); + break; + } + case 29: { + bitField0_ |= 0x00000004; + share_ = input.readFloat(); + break; + } + case 32: { + int rawValue = input.readEnum(); + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope value = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope.valueOf(rawValue); + if (value == null) { + unknownFields.mergeVarintField(4, rawValue); + } else { + bitField0_ |= 0x00000008; + scope_ = value; + } + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_TimedQuota_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return 
org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_TimedQuota_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.class, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder.class); + } + + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public TimedQuota parsePartialFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return new TimedQuota(input, extensionRegistry); + } + }; + + @java.lang.Override + public com.google.protobuf.Parser getParserForType() { + return PARSER; + } + + private int bitField0_; + // required .TimeUnit time_unit = 1; + public static final int TIME_UNIT_FIELD_NUMBER = 1; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit timeUnit_; + /** + * required .TimeUnit time_unit = 1; + */ + public boolean hasTimeUnit() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * required .TimeUnit time_unit = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit getTimeUnit() { + return timeUnit_; + } + + // optional uint64 soft_limit = 2; + public static final int SOFT_LIMIT_FIELD_NUMBER = 2; + private long softLimit_; + /** + * optional uint64 soft_limit = 2; + */ + public boolean hasSoftLimit() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional uint64 soft_limit = 2; + */ + public long getSoftLimit() { + return softLimit_; + } + + // optional float share = 3; + public static final int SHARE_FIELD_NUMBER = 3; + private float share_; + /** + * optional float share = 3; + */ + public boolean hasShare() { + return ((bitField0_ & 0x00000004) == 0x00000004); + } + /** + * optional float share = 3; + */ + public float getShare() { + return share_; + } + + // optional .QuotaScope scope = 4 [default = MACHINE]; + public static final int SCOPE_FIELD_NUMBER = 4; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope scope_; + /** + * optional .QuotaScope scope = 4 [default = MACHINE]; + */ + public boolean hasScope() { + return ((bitField0_ & 0x00000008) == 0x00000008); + } + /** + * optional .QuotaScope scope = 4 [default = MACHINE]; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope getScope() { + return scope_; + } + + private void initFields() { + timeUnit_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit.NANOSECONDS; + softLimit_ = 0L; + share_ = 0F; + scope_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope.MACHINE; + } + private byte memoizedIsInitialized = -1; + public final boolean isInitialized() { + byte isInitialized = memoizedIsInitialized; + if (isInitialized != -1) return isInitialized == 1; + + if (!hasTimeUnit()) { + memoizedIsInitialized = 0; + return false; + } + memoizedIsInitialized = 1; + return true; + } + + public void writeTo(com.google.protobuf.CodedOutputStream output) + throws java.io.IOException { + getSerializedSize(); + if (((bitField0_ & 0x00000001) == 0x00000001)) { + output.writeEnum(1, timeUnit_.getNumber()); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + output.writeUInt64(2, softLimit_); + } + if (((bitField0_ & 0x00000004) == 0x00000004)) { + output.writeFloat(3, share_); + } + if (((bitField0_ & 0x00000008) == 0x00000008)) { + output.writeEnum(4, scope_.getNumber()); + } + 
getUnknownFields().writeTo(output); + } + + private int memoizedSerializedSize = -1; + public int getSerializedSize() { + int size = memoizedSerializedSize; + if (size != -1) return size; + + size = 0; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + size += com.google.protobuf.CodedOutputStream + .computeEnumSize(1, timeUnit_.getNumber()); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + size += com.google.protobuf.CodedOutputStream + .computeUInt64Size(2, softLimit_); + } + if (((bitField0_ & 0x00000004) == 0x00000004)) { + size += com.google.protobuf.CodedOutputStream + .computeFloatSize(3, share_); + } + if (((bitField0_ & 0x00000008) == 0x00000008)) { + size += com.google.protobuf.CodedOutputStream + .computeEnumSize(4, scope_.getNumber()); + } + size += getUnknownFields().getSerializedSize(); + memoizedSerializedSize = size; + return size; + } + + private static final long serialVersionUID = 0L; + @java.lang.Override + protected java.lang.Object writeReplace() + throws java.io.ObjectStreamException { + return super.writeReplace(); + } + + @java.lang.Override + public boolean equals(final java.lang.Object obj) { + if (obj == this) { + return true; + } + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota)) { + return super.equals(obj); + } + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota other = (org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota) obj; + + boolean result = true; + result = result && (hasTimeUnit() == other.hasTimeUnit()); + if (hasTimeUnit()) { + result = result && + (getTimeUnit() == other.getTimeUnit()); + } + result = result && (hasSoftLimit() == other.hasSoftLimit()); + if (hasSoftLimit()) { + result = result && (getSoftLimit() + == other.getSoftLimit()); + } + result = result && (hasShare() == other.hasShare()); + if (hasShare()) { + result = result && (Float.floatToIntBits(getShare()) == Float.floatToIntBits(other.getShare())); + } + result = result && (hasScope() == other.hasScope()); + if (hasScope()) { + result = result && + (getScope() == other.getScope()); + } + result = result && + getUnknownFields().equals(other.getUnknownFields()); + return result; + } + + private int memoizedHashCode = 0; + @java.lang.Override + public int hashCode() { + if (memoizedHashCode != 0) { + return memoizedHashCode; + } + int hash = 41; + hash = (19 * hash) + getDescriptorForType().hashCode(); + if (hasTimeUnit()) { + hash = (37 * hash) + TIME_UNIT_FIELD_NUMBER; + hash = (53 * hash) + hashEnum(getTimeUnit()); + } + if (hasSoftLimit()) { + hash = (37 * hash) + SOFT_LIMIT_FIELD_NUMBER; + hash = (53 * hash) + hashLong(getSoftLimit()); + } + if (hasShare()) { + hash = (37 * hash) + SHARE_FIELD_NUMBER; + hash = (53 * hash) + Float.floatToIntBits( + getShare()); + } + if (hasScope()) { + hash = (37 * hash) + SCOPE_FIELD_NUMBER; + hash = (53 * hash) + hashEnum(getScope()); + } + hash = (29 * hash) + getUnknownFields().hashCode(); + memoizedHashCode = hash; + return hash; + } + + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota parseFrom( + com.google.protobuf.ByteString data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota parseFrom( + com.google.protobuf.ByteString data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, 
extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota parseFrom(byte[] data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota parseFrom( + byte[] data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota parseFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota parseFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota parseDelimitedFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota parseDelimitedFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota parseFrom( + com.google.protobuf.CodedInputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota parseFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + + public static Builder newBuilder() { return Builder.create(); } + public Builder newBuilderForType() { return newBuilder(); } + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota prototype) { + return newBuilder().mergeFrom(prototype); + } + public Builder toBuilder() { return newBuilder(this); } + + @java.lang.Override + protected Builder newBuilderForType( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + Builder builder = new Builder(parent); + return builder; + } + /** + * Protobuf type {@code TimedQuota} + */ + public static final class Builder extends + com.google.protobuf.GeneratedMessage.Builder + implements org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder { + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_TimedQuota_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_TimedQuota_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.class, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder.class); + } + + // Construct using org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.newBuilder() + private Builder() { + maybeForceBuilderInitialization(); + } + + private Builder( + 
com.google.protobuf.GeneratedMessage.BuilderParent parent) { + super(parent); + maybeForceBuilderInitialization(); + } + private void maybeForceBuilderInitialization() { + if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + } + } + private static Builder create() { + return new Builder(); + } + + public Builder clear() { + super.clear(); + timeUnit_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit.NANOSECONDS; + bitField0_ = (bitField0_ & ~0x00000001); + softLimit_ = 0L; + bitField0_ = (bitField0_ & ~0x00000002); + share_ = 0F; + bitField0_ = (bitField0_ & ~0x00000004); + scope_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope.MACHINE; + bitField0_ = (bitField0_ & ~0x00000008); + return this; + } + + public Builder clone() { + return create().mergeFrom(buildPartial()); + } + + public com.google.protobuf.Descriptors.Descriptor + getDescriptorForType() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_TimedQuota_descriptor; + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota build() { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota result = buildPartial(); + if (!result.isInitialized()) { + throw newUninitializedMessageException(result); + } + return result; + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota result = new org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota(this); + int from_bitField0_ = bitField0_; + int to_bitField0_ = 0; + if (((from_bitField0_ & 0x00000001) == 0x00000001)) { + to_bitField0_ |= 0x00000001; + } + result.timeUnit_ = timeUnit_; + if (((from_bitField0_ & 0x00000002) == 0x00000002)) { + to_bitField0_ |= 0x00000002; + } + result.softLimit_ = softLimit_; + if (((from_bitField0_ & 0x00000004) == 0x00000004)) { + to_bitField0_ |= 0x00000004; + } + result.share_ = share_; + if (((from_bitField0_ & 0x00000008) == 0x00000008)) { + to_bitField0_ |= 0x00000008; + } + result.scope_ = scope_; + result.bitField0_ = to_bitField0_; + onBuilt(); + return result; + } + + public Builder mergeFrom(com.google.protobuf.Message other) { + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota)other); + } else { + super.mergeFrom(other); + return this; + } + } + + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance()) return this; + if (other.hasTimeUnit()) { + setTimeUnit(other.getTimeUnit()); + } + if (other.hasSoftLimit()) { + setSoftLimit(other.getSoftLimit()); + } + if (other.hasShare()) { + setShare(other.getShare()); + } + if (other.hasScope()) { + setScope(other.getScope()); + } + this.mergeUnknownFields(other.getUnknownFields()); + return this; + } + + public final boolean isInitialized() { + if (!hasTimeUnit()) { + + return false; + } + return true; + } + + public Builder mergeFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + 
org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota parsedMessage = null; + try { + parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota) e.getUnfinishedMessage(); + throw e; + } finally { + if (parsedMessage != null) { + mergeFrom(parsedMessage); + } + } + return this; + } + private int bitField0_; + + // required .TimeUnit time_unit = 1; + private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit timeUnit_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit.NANOSECONDS; + /** + * required .TimeUnit time_unit = 1; + */ + public boolean hasTimeUnit() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * required .TimeUnit time_unit = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit getTimeUnit() { + return timeUnit_; + } + /** + * required .TimeUnit time_unit = 1; + */ + public Builder setTimeUnit(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit value) { + if (value == null) { + throw new NullPointerException(); + } + bitField0_ |= 0x00000001; + timeUnit_ = value; + onChanged(); + return this; + } + /** + * required .TimeUnit time_unit = 1; + */ + public Builder clearTimeUnit() { + bitField0_ = (bitField0_ & ~0x00000001); + timeUnit_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit.NANOSECONDS; + onChanged(); + return this; + } + + // optional uint64 soft_limit = 2; + private long softLimit_ ; + /** + * optional uint64 soft_limit = 2; + */ + public boolean hasSoftLimit() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional uint64 soft_limit = 2; + */ + public long getSoftLimit() { + return softLimit_; + } + /** + * optional uint64 soft_limit = 2; + */ + public Builder setSoftLimit(long value) { + bitField0_ |= 0x00000002; + softLimit_ = value; + onChanged(); + return this; + } + /** + * optional uint64 soft_limit = 2; + */ + public Builder clearSoftLimit() { + bitField0_ = (bitField0_ & ~0x00000002); + softLimit_ = 0L; + onChanged(); + return this; + } + + // optional float share = 3; + private float share_ ; + /** + * optional float share = 3; + */ + public boolean hasShare() { + return ((bitField0_ & 0x00000004) == 0x00000004); + } + /** + * optional float share = 3; + */ + public float getShare() { + return share_; + } + /** + * optional float share = 3; + */ + public Builder setShare(float value) { + bitField0_ |= 0x00000004; + share_ = value; + onChanged(); + return this; + } + /** + * optional float share = 3; + */ + public Builder clearShare() { + bitField0_ = (bitField0_ & ~0x00000004); + share_ = 0F; + onChanged(); + return this; + } + + // optional .QuotaScope scope = 4 [default = MACHINE]; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope scope_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope.MACHINE; + /** + * optional .QuotaScope scope = 4 [default = MACHINE]; + */ + public boolean hasScope() { + return ((bitField0_ & 0x00000008) == 0x00000008); + } + /** + * optional .QuotaScope scope = 4 [default = MACHINE]; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope getScope() { + return scope_; + } + /** + * optional .QuotaScope scope = 4 [default = MACHINE]; + */ + public Builder setScope(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope value) { + if (value == null) { + throw new 
NullPointerException(); + } + bitField0_ |= 0x00000008; + scope_ = value; + onChanged(); + return this; + } + /** + * optional .QuotaScope scope = 4 [default = MACHINE]; + */ + public Builder clearScope() { + bitField0_ = (bitField0_ & ~0x00000008); + scope_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope.MACHINE; + onChanged(); + return this; + } + + // @@protoc_insertion_point(builder_scope:TimedQuota) + } + + static { + defaultInstance = new TimedQuota(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:TimedQuota) + } + + public interface ThrottleOrBuilder + extends com.google.protobuf.MessageOrBuilder { + + // optional .TimedQuota req_num = 1; + /** + * optional .TimedQuota req_num = 1; + */ + boolean hasReqNum(); + /** + * optional .TimedQuota req_num = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getReqNum(); + /** + * optional .TimedQuota req_num = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getReqNumOrBuilder(); + + // optional .TimedQuota req_size = 2; + /** + * optional .TimedQuota req_size = 2; + */ + boolean hasReqSize(); + /** + * optional .TimedQuota req_size = 2; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getReqSize(); + /** + * optional .TimedQuota req_size = 2; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getReqSizeOrBuilder(); + + // optional .TimedQuota write_num = 3; + /** + * optional .TimedQuota write_num = 3; + */ + boolean hasWriteNum(); + /** + * optional .TimedQuota write_num = 3; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getWriteNum(); + /** + * optional .TimedQuota write_num = 3; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getWriteNumOrBuilder(); + + // optional .TimedQuota write_size = 4; + /** + * optional .TimedQuota write_size = 4; + */ + boolean hasWriteSize(); + /** + * optional .TimedQuota write_size = 4; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getWriteSize(); + /** + * optional .TimedQuota write_size = 4; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getWriteSizeOrBuilder(); + + // optional .TimedQuota read_num = 5; + /** + * optional .TimedQuota read_num = 5; + */ + boolean hasReadNum(); + /** + * optional .TimedQuota read_num = 5; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getReadNum(); + /** + * optional .TimedQuota read_num = 5; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getReadNumOrBuilder(); + + // optional .TimedQuota read_size = 6; + /** + * optional .TimedQuota read_size = 6; + */ + boolean hasReadSize(); + /** + * optional .TimedQuota read_size = 6; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getReadSize(); + /** + * optional .TimedQuota read_size = 6; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getReadSizeOrBuilder(); + } + /** + * Protobuf type {@code Throttle} + */ + public static final class Throttle extends + com.google.protobuf.GeneratedMessage + implements ThrottleOrBuilder { + // Use Throttle.newBuilder() to construct. 
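+    // Usage sketch (illustrative only; not part of the protoc output): the generated
+    // builders in this file can assemble a request-count throttle roughly like so —
+    //
+    //   TimedQuota tq = TimedQuota.newBuilder()
+    //       .setTimeUnit(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit.NANOSECONDS)
+    //       .setSoftLimit(1000L)   // time_unit is required; soft_limit, share, scope are optional
+    //       .build();
+    //   Throttle throttle = Throttle.newBuilder().setReqNum(tq).build();
+    //
+    // All six Throttle fields (req_num, req_size, write_num, write_size, read_num,
+    // read_size) are optional TimedQuota sub-messages, per ThrottleOrBuilder above.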
+ private Throttle(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private Throttle(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final Throttle defaultInstance; + public static Throttle getDefaultInstance() { + return defaultInstance; + } + + public Throttle getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private Throttle( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + int mutable_bitField0_ = 0; + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + case 10: { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder subBuilder = null; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + subBuilder = reqNum_.toBuilder(); + } + reqNum_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(reqNum_); + reqNum_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000001; + break; + } + case 18: { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder subBuilder = null; + if (((bitField0_ & 0x00000002) == 0x00000002)) { + subBuilder = reqSize_.toBuilder(); + } + reqSize_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(reqSize_); + reqSize_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000002; + break; + } + case 26: { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder subBuilder = null; + if (((bitField0_ & 0x00000004) == 0x00000004)) { + subBuilder = writeNum_.toBuilder(); + } + writeNum_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(writeNum_); + writeNum_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000004; + break; + } + case 34: { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder subBuilder = null; + if (((bitField0_ & 0x00000008) == 0x00000008)) { + subBuilder = writeSize_.toBuilder(); + } + writeSize_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(writeSize_); + writeSize_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000008; + break; + } + case 42: { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder subBuilder = null; + if (((bitField0_ & 0x00000010) == 0x00000010)) { + subBuilder = readNum_.toBuilder(); + } + readNum_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.PARSER, extensionRegistry); + if (subBuilder != null) { + 
subBuilder.mergeFrom(readNum_); + readNum_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000010; + break; + } + case 50: { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder subBuilder = null; + if (((bitField0_ & 0x00000020) == 0x00000020)) { + subBuilder = readSize_.toBuilder(); + } + readSize_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(readSize_); + readSize_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000020; + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_Throttle_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_Throttle_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.class, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.Builder.class); + } + + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public Throttle parsePartialFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return new Throttle(input, extensionRegistry); + } + }; + + @java.lang.Override + public com.google.protobuf.Parser getParserForType() { + return PARSER; + } + + private int bitField0_; + // optional .TimedQuota req_num = 1; + public static final int REQ_NUM_FIELD_NUMBER = 1; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota reqNum_; + /** + * optional .TimedQuota req_num = 1; + */ + public boolean hasReqNum() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional .TimedQuota req_num = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getReqNum() { + return reqNum_; + } + /** + * optional .TimedQuota req_num = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getReqNumOrBuilder() { + return reqNum_; + } + + // optional .TimedQuota req_size = 2; + public static final int REQ_SIZE_FIELD_NUMBER = 2; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota reqSize_; + /** + * optional .TimedQuota req_size = 2; + */ + public boolean hasReqSize() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional .TimedQuota req_size = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getReqSize() { + return reqSize_; + } + /** + * optional .TimedQuota req_size = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getReqSizeOrBuilder() { + return reqSize_; + } + + // optional .TimedQuota write_num = 3; + public static final int WRITE_NUM_FIELD_NUMBER = 3; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota writeNum_; + /** + * 
optional .TimedQuota write_num = 3; + */ + public boolean hasWriteNum() { + return ((bitField0_ & 0x00000004) == 0x00000004); + } + /** + * optional .TimedQuota write_num = 3; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getWriteNum() { + return writeNum_; + } + /** + * optional .TimedQuota write_num = 3; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getWriteNumOrBuilder() { + return writeNum_; + } + + // optional .TimedQuota write_size = 4; + public static final int WRITE_SIZE_FIELD_NUMBER = 4; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota writeSize_; + /** + * optional .TimedQuota write_size = 4; + */ + public boolean hasWriteSize() { + return ((bitField0_ & 0x00000008) == 0x00000008); + } + /** + * optional .TimedQuota write_size = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getWriteSize() { + return writeSize_; + } + /** + * optional .TimedQuota write_size = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getWriteSizeOrBuilder() { + return writeSize_; + } + + // optional .TimedQuota read_num = 5; + public static final int READ_NUM_FIELD_NUMBER = 5; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota readNum_; + /** + * optional .TimedQuota read_num = 5; + */ + public boolean hasReadNum() { + return ((bitField0_ & 0x00000010) == 0x00000010); + } + /** + * optional .TimedQuota read_num = 5; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getReadNum() { + return readNum_; + } + /** + * optional .TimedQuota read_num = 5; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getReadNumOrBuilder() { + return readNum_; + } + + // optional .TimedQuota read_size = 6; + public static final int READ_SIZE_FIELD_NUMBER = 6; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota readSize_; + /** + * optional .TimedQuota read_size = 6; + */ + public boolean hasReadSize() { + return ((bitField0_ & 0x00000020) == 0x00000020); + } + /** + * optional .TimedQuota read_size = 6; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getReadSize() { + return readSize_; + } + /** + * optional .TimedQuota read_size = 6; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getReadSizeOrBuilder() { + return readSize_; + } + + private void initFields() { + reqNum_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + reqSize_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + writeNum_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + writeSize_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + readNum_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + readSize_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + } + private byte memoizedIsInitialized = -1; + public final boolean isInitialized() { + byte isInitialized = memoizedIsInitialized; + if (isInitialized != -1) return isInitialized == 1; + + if (hasReqNum()) { + if (!getReqNum().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + } + if (hasReqSize()) { + if (!getReqSize().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } 
+ } + if (hasWriteNum()) { + if (!getWriteNum().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + } + if (hasWriteSize()) { + if (!getWriteSize().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + } + if (hasReadNum()) { + if (!getReadNum().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + } + if (hasReadSize()) { + if (!getReadSize().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + } + memoizedIsInitialized = 1; + return true; + } + + public void writeTo(com.google.protobuf.CodedOutputStream output) + throws java.io.IOException { + getSerializedSize(); + if (((bitField0_ & 0x00000001) == 0x00000001)) { + output.writeMessage(1, reqNum_); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + output.writeMessage(2, reqSize_); + } + if (((bitField0_ & 0x00000004) == 0x00000004)) { + output.writeMessage(3, writeNum_); + } + if (((bitField0_ & 0x00000008) == 0x00000008)) { + output.writeMessage(4, writeSize_); + } + if (((bitField0_ & 0x00000010) == 0x00000010)) { + output.writeMessage(5, readNum_); + } + if (((bitField0_ & 0x00000020) == 0x00000020)) { + output.writeMessage(6, readSize_); + } + getUnknownFields().writeTo(output); + } + + private int memoizedSerializedSize = -1; + public int getSerializedSize() { + int size = memoizedSerializedSize; + if (size != -1) return size; + + size = 0; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(1, reqNum_); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(2, reqSize_); + } + if (((bitField0_ & 0x00000004) == 0x00000004)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(3, writeNum_); + } + if (((bitField0_ & 0x00000008) == 0x00000008)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(4, writeSize_); + } + if (((bitField0_ & 0x00000010) == 0x00000010)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(5, readNum_); + } + if (((bitField0_ & 0x00000020) == 0x00000020)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(6, readSize_); + } + size += getUnknownFields().getSerializedSize(); + memoizedSerializedSize = size; + return size; + } + + private static final long serialVersionUID = 0L; + @java.lang.Override + protected java.lang.Object writeReplace() + throws java.io.ObjectStreamException { + return super.writeReplace(); + } + + @java.lang.Override + public boolean equals(final java.lang.Object obj) { + if (obj == this) { + return true; + } + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle)) { + return super.equals(obj); + } + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle other = (org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle) obj; + + boolean result = true; + result = result && (hasReqNum() == other.hasReqNum()); + if (hasReqNum()) { + result = result && getReqNum() + .equals(other.getReqNum()); + } + result = result && (hasReqSize() == other.hasReqSize()); + if (hasReqSize()) { + result = result && getReqSize() + .equals(other.getReqSize()); + } + result = result && (hasWriteNum() == other.hasWriteNum()); + if (hasWriteNum()) { + result = result && getWriteNum() + .equals(other.getWriteNum()); + } + result = result && (hasWriteSize() == other.hasWriteSize()); + if (hasWriteSize()) { + result = result && getWriteSize() + .equals(other.getWriteSize()); + } + 
result = result && (hasReadNum() == other.hasReadNum()); + if (hasReadNum()) { + result = result && getReadNum() + .equals(other.getReadNum()); + } + result = result && (hasReadSize() == other.hasReadSize()); + if (hasReadSize()) { + result = result && getReadSize() + .equals(other.getReadSize()); + } + result = result && + getUnknownFields().equals(other.getUnknownFields()); + return result; + } + + private int memoizedHashCode = 0; + @java.lang.Override + public int hashCode() { + if (memoizedHashCode != 0) { + return memoizedHashCode; + } + int hash = 41; + hash = (19 * hash) + getDescriptorForType().hashCode(); + if (hasReqNum()) { + hash = (37 * hash) + REQ_NUM_FIELD_NUMBER; + hash = (53 * hash) + getReqNum().hashCode(); + } + if (hasReqSize()) { + hash = (37 * hash) + REQ_SIZE_FIELD_NUMBER; + hash = (53 * hash) + getReqSize().hashCode(); + } + if (hasWriteNum()) { + hash = (37 * hash) + WRITE_NUM_FIELD_NUMBER; + hash = (53 * hash) + getWriteNum().hashCode(); + } + if (hasWriteSize()) { + hash = (37 * hash) + WRITE_SIZE_FIELD_NUMBER; + hash = (53 * hash) + getWriteSize().hashCode(); + } + if (hasReadNum()) { + hash = (37 * hash) + READ_NUM_FIELD_NUMBER; + hash = (53 * hash) + getReadNum().hashCode(); + } + if (hasReadSize()) { + hash = (37 * hash) + READ_SIZE_FIELD_NUMBER; + hash = (53 * hash) + getReadSize().hashCode(); + } + hash = (29 * hash) + getUnknownFields().hashCode(); + memoizedHashCode = hash; + return hash; + } + + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle parseFrom( + com.google.protobuf.ByteString data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle parseFrom( + com.google.protobuf.ByteString data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle parseFrom(byte[] data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle parseFrom( + byte[] data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle parseFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle parseFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle parseDelimitedFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle parseDelimitedFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle parseFrom( + 
com.google.protobuf.CodedInputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle parseFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + + public static Builder newBuilder() { return Builder.create(); } + public Builder newBuilderForType() { return newBuilder(); } + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle prototype) { + return newBuilder().mergeFrom(prototype); + } + public Builder toBuilder() { return newBuilder(this); } + + @java.lang.Override + protected Builder newBuilderForType( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + Builder builder = new Builder(parent); + return builder; + } + /** + * Protobuf type {@code Throttle} + */ + public static final class Builder extends + com.google.protobuf.GeneratedMessage.Builder + implements org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleOrBuilder { + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_Throttle_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_Throttle_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.class, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.Builder.class); + } + + // Construct using org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.newBuilder() + private Builder() { + maybeForceBuilderInitialization(); + } + + private Builder( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + super(parent); + maybeForceBuilderInitialization(); + } + private void maybeForceBuilderInitialization() { + if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + getReqNumFieldBuilder(); + getReqSizeFieldBuilder(); + getWriteNumFieldBuilder(); + getWriteSizeFieldBuilder(); + getReadNumFieldBuilder(); + getReadSizeFieldBuilder(); + } + } + private static Builder create() { + return new Builder(); + } + + public Builder clear() { + super.clear(); + if (reqNumBuilder_ == null) { + reqNum_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + } else { + reqNumBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000001); + if (reqSizeBuilder_ == null) { + reqSize_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + } else { + reqSizeBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000002); + if (writeNumBuilder_ == null) { + writeNum_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + } else { + writeNumBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000004); + if (writeSizeBuilder_ == null) { + writeSize_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + } else { + writeSizeBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000008); + if (readNumBuilder_ == null) { + readNum_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + } else { + readNumBuilder_.clear(); + } + bitField0_ = 
(bitField0_ & ~0x00000010); + if (readSizeBuilder_ == null) { + readSize_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + } else { + readSizeBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000020); + return this; + } + + public Builder clone() { + return create().mergeFrom(buildPartial()); + } + + public com.google.protobuf.Descriptors.Descriptor + getDescriptorForType() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_Throttle_descriptor; + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.getDefaultInstance(); + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle build() { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle result = buildPartial(); + if (!result.isInitialized()) { + throw newUninitializedMessageException(result); + } + return result; + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle result = new org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle(this); + int from_bitField0_ = bitField0_; + int to_bitField0_ = 0; + if (((from_bitField0_ & 0x00000001) == 0x00000001)) { + to_bitField0_ |= 0x00000001; + } + if (reqNumBuilder_ == null) { + result.reqNum_ = reqNum_; + } else { + result.reqNum_ = reqNumBuilder_.build(); + } + if (((from_bitField0_ & 0x00000002) == 0x00000002)) { + to_bitField0_ |= 0x00000002; + } + if (reqSizeBuilder_ == null) { + result.reqSize_ = reqSize_; + } else { + result.reqSize_ = reqSizeBuilder_.build(); + } + if (((from_bitField0_ & 0x00000004) == 0x00000004)) { + to_bitField0_ |= 0x00000004; + } + if (writeNumBuilder_ == null) { + result.writeNum_ = writeNum_; + } else { + result.writeNum_ = writeNumBuilder_.build(); + } + if (((from_bitField0_ & 0x00000008) == 0x00000008)) { + to_bitField0_ |= 0x00000008; + } + if (writeSizeBuilder_ == null) { + result.writeSize_ = writeSize_; + } else { + result.writeSize_ = writeSizeBuilder_.build(); + } + if (((from_bitField0_ & 0x00000010) == 0x00000010)) { + to_bitField0_ |= 0x00000010; + } + if (readNumBuilder_ == null) { + result.readNum_ = readNum_; + } else { + result.readNum_ = readNumBuilder_.build(); + } + if (((from_bitField0_ & 0x00000020) == 0x00000020)) { + to_bitField0_ |= 0x00000020; + } + if (readSizeBuilder_ == null) { + result.readSize_ = readSize_; + } else { + result.readSize_ = readSizeBuilder_.build(); + } + result.bitField0_ = to_bitField0_; + onBuilt(); + return result; + } + + public Builder mergeFrom(com.google.protobuf.Message other) { + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle)other); + } else { + super.mergeFrom(other); + return this; + } + } + + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.getDefaultInstance()) return this; + if (other.hasReqNum()) { + mergeReqNum(other.getReqNum()); + } + if (other.hasReqSize()) { + mergeReqSize(other.getReqSize()); + } + if (other.hasWriteNum()) { + mergeWriteNum(other.getWriteNum()); + } + if (other.hasWriteSize()) { + mergeWriteSize(other.getWriteSize()); + } + if (other.hasReadNum()) { + 
mergeReadNum(other.getReadNum()); + } + if (other.hasReadSize()) { + mergeReadSize(other.getReadSize()); + } + this.mergeUnknownFields(other.getUnknownFields()); + return this; + } + + public final boolean isInitialized() { + if (hasReqNum()) { + if (!getReqNum().isInitialized()) { + + return false; + } + } + if (hasReqSize()) { + if (!getReqSize().isInitialized()) { + + return false; + } + } + if (hasWriteNum()) { + if (!getWriteNum().isInitialized()) { + + return false; + } + } + if (hasWriteSize()) { + if (!getWriteSize().isInitialized()) { + + return false; + } + } + if (hasReadNum()) { + if (!getReadNum().isInitialized()) { + + return false; + } + } + if (hasReadSize()) { + if (!getReadSize().isInitialized()) { + + return false; + } + } + return true; + } + + public Builder mergeFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle parsedMessage = null; + try { + parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle) e.getUnfinishedMessage(); + throw e; + } finally { + if (parsedMessage != null) { + mergeFrom(parsedMessage); + } + } + return this; + } + private int bitField0_; + + // optional .TimedQuota req_num = 1; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota reqNum_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> reqNumBuilder_; + /** + * optional .TimedQuota req_num = 1; + */ + public boolean hasReqNum() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional .TimedQuota req_num = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getReqNum() { + if (reqNumBuilder_ == null) { + return reqNum_; + } else { + return reqNumBuilder_.getMessage(); + } + } + /** + * optional .TimedQuota req_num = 1; + */ + public Builder setReqNum(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (reqNumBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + reqNum_ = value; + onChanged(); + } else { + reqNumBuilder_.setMessage(value); + } + bitField0_ |= 0x00000001; + return this; + } + /** + * optional .TimedQuota req_num = 1; + */ + public Builder setReqNum( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder builderForValue) { + if (reqNumBuilder_ == null) { + reqNum_ = builderForValue.build(); + onChanged(); + } else { + reqNumBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000001; + return this; + } + /** + * optional .TimedQuota req_num = 1; + */ + public Builder mergeReqNum(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (reqNumBuilder_ == null) { + if (((bitField0_ & 0x00000001) == 0x00000001) && + reqNum_ != org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance()) { + reqNum_ = + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.newBuilder(reqNum_).mergeFrom(value).buildPartial(); + } else { + reqNum_ = value; 
+ } + onChanged(); + } else { + reqNumBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000001; + return this; + } + /** + * optional .TimedQuota req_num = 1; + */ + public Builder clearReqNum() { + if (reqNumBuilder_ == null) { + reqNum_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + onChanged(); + } else { + reqNumBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000001); + return this; + } + /** + * optional .TimedQuota req_num = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder getReqNumBuilder() { + bitField0_ |= 0x00000001; + onChanged(); + return getReqNumFieldBuilder().getBuilder(); + } + /** + * optional .TimedQuota req_num = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getReqNumOrBuilder() { + if (reqNumBuilder_ != null) { + return reqNumBuilder_.getMessageOrBuilder(); + } else { + return reqNum_; + } + } + /** + * optional .TimedQuota req_num = 1; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> + getReqNumFieldBuilder() { + if (reqNumBuilder_ == null) { + reqNumBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder>( + reqNum_, + getParentForChildren(), + isClean()); + reqNum_ = null; + } + return reqNumBuilder_; + } + + // optional .TimedQuota req_size = 2; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota reqSize_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> reqSizeBuilder_; + /** + * optional .TimedQuota req_size = 2; + */ + public boolean hasReqSize() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional .TimedQuota req_size = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getReqSize() { + if (reqSizeBuilder_ == null) { + return reqSize_; + } else { + return reqSizeBuilder_.getMessage(); + } + } + /** + * optional .TimedQuota req_size = 2; + */ + public Builder setReqSize(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (reqSizeBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + reqSize_ = value; + onChanged(); + } else { + reqSizeBuilder_.setMessage(value); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * optional .TimedQuota req_size = 2; + */ + public Builder setReqSize( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder builderForValue) { + if (reqSizeBuilder_ == null) { + reqSize_ = builderForValue.build(); + onChanged(); + } else { + reqSizeBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * optional .TimedQuota req_size = 2; + */ + public Builder mergeReqSize(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if 
(reqSizeBuilder_ == null) { + if (((bitField0_ & 0x00000002) == 0x00000002) && + reqSize_ != org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance()) { + reqSize_ = + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.newBuilder(reqSize_).mergeFrom(value).buildPartial(); + } else { + reqSize_ = value; + } + onChanged(); + } else { + reqSizeBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * optional .TimedQuota req_size = 2; + */ + public Builder clearReqSize() { + if (reqSizeBuilder_ == null) { + reqSize_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + onChanged(); + } else { + reqSizeBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000002); + return this; + } + /** + * optional .TimedQuota req_size = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder getReqSizeBuilder() { + bitField0_ |= 0x00000002; + onChanged(); + return getReqSizeFieldBuilder().getBuilder(); + } + /** + * optional .TimedQuota req_size = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getReqSizeOrBuilder() { + if (reqSizeBuilder_ != null) { + return reqSizeBuilder_.getMessageOrBuilder(); + } else { + return reqSize_; + } + } + /** + * optional .TimedQuota req_size = 2; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> + getReqSizeFieldBuilder() { + if (reqSizeBuilder_ == null) { + reqSizeBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder>( + reqSize_, + getParentForChildren(), + isClean()); + reqSize_ = null; + } + return reqSizeBuilder_; + } + + // optional .TimedQuota write_num = 3; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota writeNum_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> writeNumBuilder_; + /** + * optional .TimedQuota write_num = 3; + */ + public boolean hasWriteNum() { + return ((bitField0_ & 0x00000004) == 0x00000004); + } + /** + * optional .TimedQuota write_num = 3; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getWriteNum() { + if (writeNumBuilder_ == null) { + return writeNum_; + } else { + return writeNumBuilder_.getMessage(); + } + } + /** + * optional .TimedQuota write_num = 3; + */ + public Builder setWriteNum(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (writeNumBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + writeNum_ = value; + onChanged(); + } else { + writeNumBuilder_.setMessage(value); + } + bitField0_ |= 0x00000004; + return this; + } + /** + * optional .TimedQuota write_num = 3; + */ + public Builder setWriteNum( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder 
builderForValue) { + if (writeNumBuilder_ == null) { + writeNum_ = builderForValue.build(); + onChanged(); + } else { + writeNumBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000004; + return this; + } + /** + * optional .TimedQuota write_num = 3; + */ + public Builder mergeWriteNum(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (writeNumBuilder_ == null) { + if (((bitField0_ & 0x00000004) == 0x00000004) && + writeNum_ != org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance()) { + writeNum_ = + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.newBuilder(writeNum_).mergeFrom(value).buildPartial(); + } else { + writeNum_ = value; + } + onChanged(); + } else { + writeNumBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000004; + return this; + } + /** + * optional .TimedQuota write_num = 3; + */ + public Builder clearWriteNum() { + if (writeNumBuilder_ == null) { + writeNum_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + onChanged(); + } else { + writeNumBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000004); + return this; + } + /** + * optional .TimedQuota write_num = 3; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder getWriteNumBuilder() { + bitField0_ |= 0x00000004; + onChanged(); + return getWriteNumFieldBuilder().getBuilder(); + } + /** + * optional .TimedQuota write_num = 3; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getWriteNumOrBuilder() { + if (writeNumBuilder_ != null) { + return writeNumBuilder_.getMessageOrBuilder(); + } else { + return writeNum_; + } + } + /** + * optional .TimedQuota write_num = 3; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> + getWriteNumFieldBuilder() { + if (writeNumBuilder_ == null) { + writeNumBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder>( + writeNum_, + getParentForChildren(), + isClean()); + writeNum_ = null; + } + return writeNumBuilder_; + } + + // optional .TimedQuota write_size = 4; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota writeSize_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> writeSizeBuilder_; + /** + * optional .TimedQuota write_size = 4; + */ + public boolean hasWriteSize() { + return ((bitField0_ & 0x00000008) == 0x00000008); + } + /** + * optional .TimedQuota write_size = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getWriteSize() { + if (writeSizeBuilder_ == null) { + return writeSize_; + } else { + return writeSizeBuilder_.getMessage(); + } + } + /** + * optional .TimedQuota write_size = 4; + */ + public Builder 
setWriteSize(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (writeSizeBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + writeSize_ = value; + onChanged(); + } else { + writeSizeBuilder_.setMessage(value); + } + bitField0_ |= 0x00000008; + return this; + } + /** + * optional .TimedQuota write_size = 4; + */ + public Builder setWriteSize( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder builderForValue) { + if (writeSizeBuilder_ == null) { + writeSize_ = builderForValue.build(); + onChanged(); + } else { + writeSizeBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000008; + return this; + } + /** + * optional .TimedQuota write_size = 4; + */ + public Builder mergeWriteSize(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (writeSizeBuilder_ == null) { + if (((bitField0_ & 0x00000008) == 0x00000008) && + writeSize_ != org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance()) { + writeSize_ = + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.newBuilder(writeSize_).mergeFrom(value).buildPartial(); + } else { + writeSize_ = value; + } + onChanged(); + } else { + writeSizeBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000008; + return this; + } + /** + * optional .TimedQuota write_size = 4; + */ + public Builder clearWriteSize() { + if (writeSizeBuilder_ == null) { + writeSize_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + onChanged(); + } else { + writeSizeBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000008); + return this; + } + /** + * optional .TimedQuota write_size = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder getWriteSizeBuilder() { + bitField0_ |= 0x00000008; + onChanged(); + return getWriteSizeFieldBuilder().getBuilder(); + } + /** + * optional .TimedQuota write_size = 4; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getWriteSizeOrBuilder() { + if (writeSizeBuilder_ != null) { + return writeSizeBuilder_.getMessageOrBuilder(); + } else { + return writeSize_; + } + } + /** + * optional .TimedQuota write_size = 4; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> + getWriteSizeFieldBuilder() { + if (writeSizeBuilder_ == null) { + writeSizeBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder>( + writeSize_, + getParentForChildren(), + isClean()); + writeSize_ = null; + } + return writeSizeBuilder_; + } + + // optional .TimedQuota read_num = 5; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota readNum_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> readNumBuilder_; + /** + * optional 
.TimedQuota read_num = 5; + */ + public boolean hasReadNum() { + return ((bitField0_ & 0x00000010) == 0x00000010); + } + /** + * optional .TimedQuota read_num = 5; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getReadNum() { + if (readNumBuilder_ == null) { + return readNum_; + } else { + return readNumBuilder_.getMessage(); + } + } + /** + * optional .TimedQuota read_num = 5; + */ + public Builder setReadNum(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (readNumBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + readNum_ = value; + onChanged(); + } else { + readNumBuilder_.setMessage(value); + } + bitField0_ |= 0x00000010; + return this; + } + /** + * optional .TimedQuota read_num = 5; + */ + public Builder setReadNum( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder builderForValue) { + if (readNumBuilder_ == null) { + readNum_ = builderForValue.build(); + onChanged(); + } else { + readNumBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000010; + return this; + } + /** + * optional .TimedQuota read_num = 5; + */ + public Builder mergeReadNum(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (readNumBuilder_ == null) { + if (((bitField0_ & 0x00000010) == 0x00000010) && + readNum_ != org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance()) { + readNum_ = + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.newBuilder(readNum_).mergeFrom(value).buildPartial(); + } else { + readNum_ = value; + } + onChanged(); + } else { + readNumBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000010; + return this; + } + /** + * optional .TimedQuota read_num = 5; + */ + public Builder clearReadNum() { + if (readNumBuilder_ == null) { + readNum_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + onChanged(); + } else { + readNumBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000010); + return this; + } + /** + * optional .TimedQuota read_num = 5; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder getReadNumBuilder() { + bitField0_ |= 0x00000010; + onChanged(); + return getReadNumFieldBuilder().getBuilder(); + } + /** + * optional .TimedQuota read_num = 5; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getReadNumOrBuilder() { + if (readNumBuilder_ != null) { + return readNumBuilder_.getMessageOrBuilder(); + } else { + return readNum_; + } + } + /** + * optional .TimedQuota read_num = 5; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> + getReadNumFieldBuilder() { + if (readNumBuilder_ == null) { + readNumBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder>( + readNum_, + getParentForChildren(), + isClean()); + readNum_ = null; + } + return readNumBuilder_; + } + + // optional .TimedQuota read_size = 6; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota readSize_ = 
org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> readSizeBuilder_; + /** + * optional .TimedQuota read_size = 6; + */ + public boolean hasReadSize() { + return ((bitField0_ & 0x00000020) == 0x00000020); + } + /** + * optional .TimedQuota read_size = 6; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getReadSize() { + if (readSizeBuilder_ == null) { + return readSize_; + } else { + return readSizeBuilder_.getMessage(); + } + } + /** + * optional .TimedQuota read_size = 6; + */ + public Builder setReadSize(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (readSizeBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + readSize_ = value; + onChanged(); + } else { + readSizeBuilder_.setMessage(value); + } + bitField0_ |= 0x00000020; + return this; + } + /** + * optional .TimedQuota read_size = 6; + */ + public Builder setReadSize( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder builderForValue) { + if (readSizeBuilder_ == null) { + readSize_ = builderForValue.build(); + onChanged(); + } else { + readSizeBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000020; + return this; + } + /** + * optional .TimedQuota read_size = 6; + */ + public Builder mergeReadSize(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (readSizeBuilder_ == null) { + if (((bitField0_ & 0x00000020) == 0x00000020) && + readSize_ != org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance()) { + readSize_ = + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.newBuilder(readSize_).mergeFrom(value).buildPartial(); + } else { + readSize_ = value; + } + onChanged(); + } else { + readSizeBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000020; + return this; + } + /** + * optional .TimedQuota read_size = 6; + */ + public Builder clearReadSize() { + if (readSizeBuilder_ == null) { + readSize_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + onChanged(); + } else { + readSizeBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000020); + return this; + } + /** + * optional .TimedQuota read_size = 6; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder getReadSizeBuilder() { + bitField0_ |= 0x00000020; + onChanged(); + return getReadSizeFieldBuilder().getBuilder(); + } + /** + * optional .TimedQuota read_size = 6; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getReadSizeOrBuilder() { + if (readSizeBuilder_ != null) { + return readSizeBuilder_.getMessageOrBuilder(); + } else { + return readSize_; + } + } + /** + * optional .TimedQuota read_size = 6; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> + getReadSizeFieldBuilder() { + if (readSizeBuilder_ == null) { + readSizeBuilder_ = new com.google.protobuf.SingleFieldBuilder< + 
org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder>( + readSize_, + getParentForChildren(), + isClean()); + readSize_ = null; + } + return readSizeBuilder_; + } + + // @@protoc_insertion_point(builder_scope:Throttle) + } + + static { + defaultInstance = new Throttle(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:Throttle) + } + + public interface ThrottleRequestOrBuilder + extends com.google.protobuf.MessageOrBuilder { + + // optional .ThrottleType type = 1; + /** + * optional .ThrottleType type = 1; + */ + boolean hasType(); + /** + * optional .ThrottleType type = 1; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType getType(); + + // optional .TimedQuota timed_quota = 2; + /** + * optional .TimedQuota timed_quota = 2; + */ + boolean hasTimedQuota(); + /** + * optional .TimedQuota timed_quota = 2; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getTimedQuota(); + /** + * optional .TimedQuota timed_quota = 2; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getTimedQuotaOrBuilder(); + } + /** + * Protobuf type {@code ThrottleRequest} + */ + public static final class ThrottleRequest extends + com.google.protobuf.GeneratedMessage + implements ThrottleRequestOrBuilder { + // Use ThrottleRequest.newBuilder() to construct. + private ThrottleRequest(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private ThrottleRequest(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final ThrottleRequest defaultInstance; + public static ThrottleRequest getDefaultInstance() { + return defaultInstance; + } + + public ThrottleRequest getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private ThrottleRequest( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + int mutable_bitField0_ = 0; + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + case 8: { + int rawValue = input.readEnum(); + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType value = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType.valueOf(rawValue); + if (value == null) { + unknownFields.mergeVarintField(1, rawValue); + } else { + bitField0_ |= 0x00000001; + type_ = value; + } + break; + } + case 18: { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder subBuilder = null; + if (((bitField0_ & 0x00000002) == 0x00000002)) { + subBuilder = timedQuota_.toBuilder(); + } + timedQuota_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.PARSER, extensionRegistry); + if (subBuilder != null) 
{ + subBuilder.mergeFrom(timedQuota_); + timedQuota_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000002; + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_ThrottleRequest_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_ThrottleRequest_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.class, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.Builder.class); + } + + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public ThrottleRequest parsePartialFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return new ThrottleRequest(input, extensionRegistry); + } + }; + + @java.lang.Override + public com.google.protobuf.Parser getParserForType() { + return PARSER; + } + + private int bitField0_; + // optional .ThrottleType type = 1; + public static final int TYPE_FIELD_NUMBER = 1; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType type_; + /** + * optional .ThrottleType type = 1; + */ + public boolean hasType() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional .ThrottleType type = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType getType() { + return type_; + } + + // optional .TimedQuota timed_quota = 2; + public static final int TIMED_QUOTA_FIELD_NUMBER = 2; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota timedQuota_; + /** + * optional .TimedQuota timed_quota = 2; + */ + public boolean hasTimedQuota() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional .TimedQuota timed_quota = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getTimedQuota() { + return timedQuota_; + } + /** + * optional .TimedQuota timed_quota = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getTimedQuotaOrBuilder() { + return timedQuota_; + } + + private void initFields() { + type_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType.REQUEST_NUMBER; + timedQuota_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + } + private byte memoizedIsInitialized = -1; + public final boolean isInitialized() { + byte isInitialized = memoizedIsInitialized; + if (isInitialized != -1) return isInitialized == 1; + + if (hasTimedQuota()) { + if (!getTimedQuota().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + } + memoizedIsInitialized = 1; + return true; + } + + public void writeTo(com.google.protobuf.CodedOutputStream output) + throws java.io.IOException { + getSerializedSize(); + if (((bitField0_ & 0x00000001) == 0x00000001)) { + 
output.writeEnum(1, type_.getNumber()); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + output.writeMessage(2, timedQuota_); + } + getUnknownFields().writeTo(output); + } + + private int memoizedSerializedSize = -1; + public int getSerializedSize() { + int size = memoizedSerializedSize; + if (size != -1) return size; + + size = 0; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + size += com.google.protobuf.CodedOutputStream + .computeEnumSize(1, type_.getNumber()); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(2, timedQuota_); + } + size += getUnknownFields().getSerializedSize(); + memoizedSerializedSize = size; + return size; + } + + private static final long serialVersionUID = 0L; + @java.lang.Override + protected java.lang.Object writeReplace() + throws java.io.ObjectStreamException { + return super.writeReplace(); + } + + @java.lang.Override + public boolean equals(final java.lang.Object obj) { + if (obj == this) { + return true; + } + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest)) { + return super.equals(obj); + } + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest other = (org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest) obj; + + boolean result = true; + result = result && (hasType() == other.hasType()); + if (hasType()) { + result = result && + (getType() == other.getType()); + } + result = result && (hasTimedQuota() == other.hasTimedQuota()); + if (hasTimedQuota()) { + result = result && getTimedQuota() + .equals(other.getTimedQuota()); + } + result = result && + getUnknownFields().equals(other.getUnknownFields()); + return result; + } + + private int memoizedHashCode = 0; + @java.lang.Override + public int hashCode() { + if (memoizedHashCode != 0) { + return memoizedHashCode; + } + int hash = 41; + hash = (19 * hash) + getDescriptorForType().hashCode(); + if (hasType()) { + hash = (37 * hash) + TYPE_FIELD_NUMBER; + hash = (53 * hash) + hashEnum(getType()); + } + if (hasTimedQuota()) { + hash = (37 * hash) + TIMED_QUOTA_FIELD_NUMBER; + hash = (53 * hash) + getTimedQuota().hashCode(); + } + hash = (29 * hash) + getUnknownFields().hashCode(); + memoizedHashCode = hash; + return hash; + } + + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest parseFrom( + com.google.protobuf.ByteString data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest parseFrom( + com.google.protobuf.ByteString data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest parseFrom(byte[] data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest parseFrom( + byte[] data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest parseFrom(java.io.InputStream input) + throws java.io.IOException { + return 
PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest parseFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest parseDelimitedFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest parseDelimitedFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest parseFrom( + com.google.protobuf.CodedInputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest parseFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + + public static Builder newBuilder() { return Builder.create(); } + public Builder newBuilderForType() { return newBuilder(); } + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest prototype) { + return newBuilder().mergeFrom(prototype); + } + public Builder toBuilder() { return newBuilder(this); } + + @java.lang.Override + protected Builder newBuilderForType( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + Builder builder = new Builder(parent); + return builder; + } + /** + * Protobuf type {@code ThrottleRequest} + */ + public static final class Builder extends + com.google.protobuf.GeneratedMessage.Builder + implements org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequestOrBuilder { + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_ThrottleRequest_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_ThrottleRequest_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.class, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.Builder.class); + } + + // Construct using org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.newBuilder() + private Builder() { + maybeForceBuilderInitialization(); + } + + private Builder( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + super(parent); + maybeForceBuilderInitialization(); + } + private void maybeForceBuilderInitialization() { + if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + getTimedQuotaFieldBuilder(); + } + } + private static Builder create() { + return new Builder(); + } + + public Builder clear() { + super.clear(); + type_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType.REQUEST_NUMBER; + bitField0_ = (bitField0_ & ~0x00000001); + if (timedQuotaBuilder_ == null) { + timedQuota_ = 
org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + } else { + timedQuotaBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000002); + return this; + } + + public Builder clone() { + return create().mergeFrom(buildPartial()); + } + + public com.google.protobuf.Descriptors.Descriptor + getDescriptorForType() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_ThrottleRequest_descriptor; + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.getDefaultInstance(); + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest build() { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest result = buildPartial(); + if (!result.isInitialized()) { + throw newUninitializedMessageException(result); + } + return result; + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest result = new org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest(this); + int from_bitField0_ = bitField0_; + int to_bitField0_ = 0; + if (((from_bitField0_ & 0x00000001) == 0x00000001)) { + to_bitField0_ |= 0x00000001; + } + result.type_ = type_; + if (((from_bitField0_ & 0x00000002) == 0x00000002)) { + to_bitField0_ |= 0x00000002; + } + if (timedQuotaBuilder_ == null) { + result.timedQuota_ = timedQuota_; + } else { + result.timedQuota_ = timedQuotaBuilder_.build(); + } + result.bitField0_ = to_bitField0_; + onBuilt(); + return result; + } + + public Builder mergeFrom(com.google.protobuf.Message other) { + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest)other); + } else { + super.mergeFrom(other); + return this; + } + } + + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest.getDefaultInstance()) return this; + if (other.hasType()) { + setType(other.getType()); + } + if (other.hasTimedQuota()) { + mergeTimedQuota(other.getTimedQuota()); + } + this.mergeUnknownFields(other.getUnknownFields()); + return this; + } + + public final boolean isInitialized() { + if (hasTimedQuota()) { + if (!getTimedQuota().isInitialized()) { + + return false; + } + } + return true; + } + + public Builder mergeFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest parsedMessage = null; + try { + parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest) e.getUnfinishedMessage(); + throw e; + } finally { + if (parsedMessage != null) { + mergeFrom(parsedMessage); + } + } + return this; + } + private int bitField0_; + + // optional .ThrottleType type = 1; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType type_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType.REQUEST_NUMBER; + /** + * optional .ThrottleType type = 1; + */ 
+ public boolean hasType() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional .ThrottleType type = 1; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType getType() { + return type_; + } + /** + * optional .ThrottleType type = 1; + */ + public Builder setType(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType value) { + if (value == null) { + throw new NullPointerException(); + } + bitField0_ |= 0x00000001; + type_ = value; + onChanged(); + return this; + } + /** + * optional .ThrottleType type = 1; + */ + public Builder clearType() { + bitField0_ = (bitField0_ & ~0x00000001); + type_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType.REQUEST_NUMBER; + onChanged(); + return this; + } + + // optional .TimedQuota timed_quota = 2; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota timedQuota_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> timedQuotaBuilder_; + /** + * optional .TimedQuota timed_quota = 2; + */ + public boolean hasTimedQuota() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional .TimedQuota timed_quota = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota getTimedQuota() { + if (timedQuotaBuilder_ == null) { + return timedQuota_; + } else { + return timedQuotaBuilder_.getMessage(); + } + } + /** + * optional .TimedQuota timed_quota = 2; + */ + public Builder setTimedQuota(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (timedQuotaBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + timedQuota_ = value; + onChanged(); + } else { + timedQuotaBuilder_.setMessage(value); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * optional .TimedQuota timed_quota = 2; + */ + public Builder setTimedQuota( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder builderForValue) { + if (timedQuotaBuilder_ == null) { + timedQuota_ = builderForValue.build(); + onChanged(); + } else { + timedQuotaBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * optional .TimedQuota timed_quota = 2; + */ + public Builder mergeTimedQuota(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota value) { + if (timedQuotaBuilder_ == null) { + if (((bitField0_ & 0x00000002) == 0x00000002) && + timedQuota_ != org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance()) { + timedQuota_ = + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.newBuilder(timedQuota_).mergeFrom(value).buildPartial(); + } else { + timedQuota_ = value; + } + onChanged(); + } else { + timedQuotaBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * optional .TimedQuota timed_quota = 2; + */ + public Builder clearTimedQuota() { + if (timedQuotaBuilder_ == null) { + timedQuota_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.getDefaultInstance(); + onChanged(); + } else { + timedQuotaBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000002); + return this; + } + /** + * optional .TimedQuota timed_quota = 2; + */ + 
public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder getTimedQuotaBuilder() { + bitField0_ |= 0x00000002; + onChanged(); + return getTimedQuotaFieldBuilder().getBuilder(); + } + /** + * optional .TimedQuota timed_quota = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder getTimedQuotaOrBuilder() { + if (timedQuotaBuilder_ != null) { + return timedQuotaBuilder_.getMessageOrBuilder(); + } else { + return timedQuota_; + } + } + /** + * optional .TimedQuota timed_quota = 2; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder> + getTimedQuotaFieldBuilder() { + if (timedQuotaBuilder_ == null) { + timedQuotaBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuotaOrBuilder>( + timedQuota_, + getParentForChildren(), + isClean()); + timedQuota_ = null; + } + return timedQuotaBuilder_; + } + + // @@protoc_insertion_point(builder_scope:ThrottleRequest) + } + + static { + defaultInstance = new ThrottleRequest(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:ThrottleRequest) + } + + public interface QuotasOrBuilder + extends com.google.protobuf.MessageOrBuilder { + + // optional bool bypass_globals = 1 [default = false]; + /** + * optional bool bypass_globals = 1 [default = false]; + */ + boolean hasBypassGlobals(); + /** + * optional bool bypass_globals = 1 [default = false]; + */ + boolean getBypassGlobals(); + + // optional .Throttle throttle = 2; + /** + * optional .Throttle throttle = 2; + */ + boolean hasThrottle(); + /** + * optional .Throttle throttle = 2; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle getThrottle(); + /** + * optional .Throttle throttle = 2; + */ + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleOrBuilder getThrottleOrBuilder(); + } + /** + * Protobuf type {@code Quotas} + */ + public static final class Quotas extends + com.google.protobuf.GeneratedMessage + implements QuotasOrBuilder { + // Use Quotas.newBuilder() to construct. 
+ private Quotas(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private Quotas(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final Quotas defaultInstance; + public static Quotas getDefaultInstance() { + return defaultInstance; + } + + public Quotas getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private Quotas( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + int mutable_bitField0_ = 0; + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + case 8: { + bitField0_ |= 0x00000001; + bypassGlobals_ = input.readBool(); + break; + } + case 18: { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.Builder subBuilder = null; + if (((bitField0_ & 0x00000002) == 0x00000002)) { + subBuilder = throttle_.toBuilder(); + } + throttle_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(throttle_); + throttle_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000002; + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_Quotas_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_Quotas_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas.class, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas.Builder.class); + } + + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public Quotas parsePartialFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return new Quotas(input, extensionRegistry); + } + }; + + @java.lang.Override + public com.google.protobuf.Parser getParserForType() { + return PARSER; + } + + private int bitField0_; + // optional bool bypass_globals = 1 [default = false]; + public static final int BYPASS_GLOBALS_FIELD_NUMBER = 1; + private boolean bypassGlobals_; + /** + * optional bool bypass_globals = 1 [default = false]; + */ + public boolean hasBypassGlobals() { + return ((bitField0_ & 0x00000001) == 
0x00000001); + } + /** + * optional bool bypass_globals = 1 [default = false]; + */ + public boolean getBypassGlobals() { + return bypassGlobals_; + } + + // optional .Throttle throttle = 2; + public static final int THROTTLE_FIELD_NUMBER = 2; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle throttle_; + /** + * optional .Throttle throttle = 2; + */ + public boolean hasThrottle() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional .Throttle throttle = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle getThrottle() { + return throttle_; + } + /** + * optional .Throttle throttle = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleOrBuilder getThrottleOrBuilder() { + return throttle_; + } + + private void initFields() { + bypassGlobals_ = false; + throttle_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.getDefaultInstance(); + } + private byte memoizedIsInitialized = -1; + public final boolean isInitialized() { + byte isInitialized = memoizedIsInitialized; + if (isInitialized != -1) return isInitialized == 1; + + if (hasThrottle()) { + if (!getThrottle().isInitialized()) { + memoizedIsInitialized = 0; + return false; + } + } + memoizedIsInitialized = 1; + return true; + } + + public void writeTo(com.google.protobuf.CodedOutputStream output) + throws java.io.IOException { + getSerializedSize(); + if (((bitField0_ & 0x00000001) == 0x00000001)) { + output.writeBool(1, bypassGlobals_); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + output.writeMessage(2, throttle_); + } + getUnknownFields().writeTo(output); + } + + private int memoizedSerializedSize = -1; + public int getSerializedSize() { + int size = memoizedSerializedSize; + if (size != -1) return size; + + size = 0; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + size += com.google.protobuf.CodedOutputStream + .computeBoolSize(1, bypassGlobals_); + } + if (((bitField0_ & 0x00000002) == 0x00000002)) { + size += com.google.protobuf.CodedOutputStream + .computeMessageSize(2, throttle_); + } + size += getUnknownFields().getSerializedSize(); + memoizedSerializedSize = size; + return size; + } + + private static final long serialVersionUID = 0L; + @java.lang.Override + protected java.lang.Object writeReplace() + throws java.io.ObjectStreamException { + return super.writeReplace(); + } + + @java.lang.Override + public boolean equals(final java.lang.Object obj) { + if (obj == this) { + return true; + } + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas)) { + return super.equals(obj); + } + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas other = (org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas) obj; + + boolean result = true; + result = result && (hasBypassGlobals() == other.hasBypassGlobals()); + if (hasBypassGlobals()) { + result = result && (getBypassGlobals() + == other.getBypassGlobals()); + } + result = result && (hasThrottle() == other.hasThrottle()); + if (hasThrottle()) { + result = result && getThrottle() + .equals(other.getThrottle()); + } + result = result && + getUnknownFields().equals(other.getUnknownFields()); + return result; + } + + private int memoizedHashCode = 0; + @java.lang.Override + public int hashCode() { + if (memoizedHashCode != 0) { + return memoizedHashCode; + } + int hash = 41; + hash = (19 * hash) + getDescriptorForType().hashCode(); + if (hasBypassGlobals()) { + hash = (37 * hash) + BYPASS_GLOBALS_FIELD_NUMBER; + 
hash = (53 * hash) + hashBoolean(getBypassGlobals()); + } + if (hasThrottle()) { + hash = (37 * hash) + THROTTLE_FIELD_NUMBER; + hash = (53 * hash) + getThrottle().hashCode(); + } + hash = (29 * hash) + getUnknownFields().hashCode(); + memoizedHashCode = hash; + return hash; + } + + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas parseFrom( + com.google.protobuf.ByteString data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas parseFrom( + com.google.protobuf.ByteString data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas parseFrom(byte[] data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas parseFrom( + byte[] data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas parseFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas parseFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas parseDelimitedFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas parseDelimitedFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas parseFrom( + com.google.protobuf.CodedInputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas parseFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + + public static Builder newBuilder() { return Builder.create(); } + public Builder newBuilderForType() { return newBuilder(); } + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas prototype) { + return newBuilder().mergeFrom(prototype); + } + public Builder toBuilder() { return newBuilder(this); } + + @java.lang.Override + protected Builder newBuilderForType( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + Builder builder = new Builder(parent); + return builder; + } + /** + * Protobuf type {@code Quotas} + */ + public static final class Builder extends + com.google.protobuf.GeneratedMessage.Builder + implements org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotasOrBuilder { + public static final 
com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_Quotas_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_Quotas_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas.class, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas.Builder.class); + } + + // Construct using org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas.newBuilder() + private Builder() { + maybeForceBuilderInitialization(); + } + + private Builder( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + super(parent); + maybeForceBuilderInitialization(); + } + private void maybeForceBuilderInitialization() { + if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + getThrottleFieldBuilder(); + } + } + private static Builder create() { + return new Builder(); + } + + public Builder clear() { + super.clear(); + bypassGlobals_ = false; + bitField0_ = (bitField0_ & ~0x00000001); + if (throttleBuilder_ == null) { + throttle_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.getDefaultInstance(); + } else { + throttleBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000002); + return this; + } + + public Builder clone() { + return create().mergeFrom(buildPartial()); + } + + public com.google.protobuf.Descriptors.Descriptor + getDescriptorForType() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_Quotas_descriptor; + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas.getDefaultInstance(); + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas build() { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas result = buildPartial(); + if (!result.isInitialized()) { + throw newUninitializedMessageException(result); + } + return result; + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas result = new org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas(this); + int from_bitField0_ = bitField0_; + int to_bitField0_ = 0; + if (((from_bitField0_ & 0x00000001) == 0x00000001)) { + to_bitField0_ |= 0x00000001; + } + result.bypassGlobals_ = bypassGlobals_; + if (((from_bitField0_ & 0x00000002) == 0x00000002)) { + to_bitField0_ |= 0x00000002; + } + if (throttleBuilder_ == null) { + result.throttle_ = throttle_; + } else { + result.throttle_ = throttleBuilder_.build(); + } + result.bitField0_ = to_bitField0_; + onBuilt(); + return result; + } + + public Builder mergeFrom(com.google.protobuf.Message other) { + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas)other); + } else { + super.mergeFrom(other); + return this; + } + } + + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas.getDefaultInstance()) return this; + if (other.hasBypassGlobals()) { + setBypassGlobals(other.getBypassGlobals()); + } + if (other.hasThrottle()) { + 
mergeThrottle(other.getThrottle()); + } + this.mergeUnknownFields(other.getUnknownFields()); + return this; + } + + public final boolean isInitialized() { + if (hasThrottle()) { + if (!getThrottle().isInitialized()) { + + return false; + } + } + return true; + } + + public Builder mergeFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas parsedMessage = null; + try { + parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas) e.getUnfinishedMessage(); + throw e; + } finally { + if (parsedMessage != null) { + mergeFrom(parsedMessage); + } + } + return this; + } + private int bitField0_; + + // optional bool bypass_globals = 1 [default = false]; + private boolean bypassGlobals_ ; + /** + * optional bool bypass_globals = 1 [default = false]; + */ + public boolean hasBypassGlobals() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional bool bypass_globals = 1 [default = false]; + */ + public boolean getBypassGlobals() { + return bypassGlobals_; + } + /** + * optional bool bypass_globals = 1 [default = false]; + */ + public Builder setBypassGlobals(boolean value) { + bitField0_ |= 0x00000001; + bypassGlobals_ = value; + onChanged(); + return this; + } + /** + * optional bool bypass_globals = 1 [default = false]; + */ + public Builder clearBypassGlobals() { + bitField0_ = (bitField0_ & ~0x00000001); + bypassGlobals_ = false; + onChanged(); + return this; + } + + // optional .Throttle throttle = 2; + private org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle throttle_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.getDefaultInstance(); + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleOrBuilder> throttleBuilder_; + /** + * optional .Throttle throttle = 2; + */ + public boolean hasThrottle() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional .Throttle throttle = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle getThrottle() { + if (throttleBuilder_ == null) { + return throttle_; + } else { + return throttleBuilder_.getMessage(); + } + } + /** + * optional .Throttle throttle = 2; + */ + public Builder setThrottle(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle value) { + if (throttleBuilder_ == null) { + if (value == null) { + throw new NullPointerException(); + } + throttle_ = value; + onChanged(); + } else { + throttleBuilder_.setMessage(value); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * optional .Throttle throttle = 2; + */ + public Builder setThrottle( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.Builder builderForValue) { + if (throttleBuilder_ == null) { + throttle_ = builderForValue.build(); + onChanged(); + } else { + throttleBuilder_.setMessage(builderForValue.build()); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * optional .Throttle throttle = 2; + */ + public Builder mergeThrottle(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle value) { + if (throttleBuilder_ == null) { + if 
(((bitField0_ & 0x00000002) == 0x00000002) && + throttle_ != org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.getDefaultInstance()) { + throttle_ = + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.newBuilder(throttle_).mergeFrom(value).buildPartial(); + } else { + throttle_ = value; + } + onChanged(); + } else { + throttleBuilder_.mergeFrom(value); + } + bitField0_ |= 0x00000002; + return this; + } + /** + * optional .Throttle throttle = 2; + */ + public Builder clearThrottle() { + if (throttleBuilder_ == null) { + throttle_ = org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.getDefaultInstance(); + onChanged(); + } else { + throttleBuilder_.clear(); + } + bitField0_ = (bitField0_ & ~0x00000002); + return this; + } + /** + * optional .Throttle throttle = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.Builder getThrottleBuilder() { + bitField0_ |= 0x00000002; + onChanged(); + return getThrottleFieldBuilder().getBuilder(); + } + /** + * optional .Throttle throttle = 2; + */ + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleOrBuilder getThrottleOrBuilder() { + if (throttleBuilder_ != null) { + return throttleBuilder_.getMessageOrBuilder(); + } else { + return throttle_; + } + } + /** + * optional .Throttle throttle = 2; + */ + private com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleOrBuilder> + getThrottleFieldBuilder() { + if (throttleBuilder_ == null) { + throttleBuilder_ = new com.google.protobuf.SingleFieldBuilder< + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle.Builder, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleOrBuilder>( + throttle_, + getParentForChildren(), + isClean()); + throttle_ = null; + } + return throttleBuilder_; + } + + // @@protoc_insertion_point(builder_scope:Quotas) + } + + static { + defaultInstance = new Quotas(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:Quotas) + } + + public interface QuotaUsageOrBuilder + extends com.google.protobuf.MessageOrBuilder { + } + /** + * Protobuf type {@code QuotaUsage} + */ + public static final class QuotaUsage extends + com.google.protobuf.GeneratedMessage + implements QuotaUsageOrBuilder { + // Use QuotaUsage.newBuilder() to construct. 
+ private QuotaUsage(com.google.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private QuotaUsage(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final QuotaUsage defaultInstance; + public static QuotaUsage getDefaultInstance() { + return defaultInstance; + } + + public QuotaUsage getDefaultInstanceForType() { + return defaultInstance; + } + + private final com.google.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final com.google.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private QuotaUsage( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + initFields(); + com.google.protobuf.UnknownFieldSet.Builder unknownFields = + com.google.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + } + } + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new com.google.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_QuotaUsage_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_QuotaUsage_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage.class, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage.Builder.class); + } + + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public QuotaUsage parsePartialFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return new QuotaUsage(input, extensionRegistry); + } + }; + + @java.lang.Override + public com.google.protobuf.Parser getParserForType() { + return PARSER; + } + + private void initFields() { + } + private byte memoizedIsInitialized = -1; + public final boolean isInitialized() { + byte isInitialized = memoizedIsInitialized; + if (isInitialized != -1) return isInitialized == 1; + + memoizedIsInitialized = 1; + return true; + } + + public void writeTo(com.google.protobuf.CodedOutputStream output) + throws java.io.IOException { + getSerializedSize(); + getUnknownFields().writeTo(output); + } + + private int memoizedSerializedSize = -1; + public int getSerializedSize() { + int size = memoizedSerializedSize; + if (size != -1) return size; + + size = 0; + size += getUnknownFields().getSerializedSize(); + memoizedSerializedSize = size; + return size; + } + + private static final long serialVersionUID = 0L; + @java.lang.Override + protected java.lang.Object writeReplace() + throws java.io.ObjectStreamException { + 
return super.writeReplace(); + } + + @java.lang.Override + public boolean equals(final java.lang.Object obj) { + if (obj == this) { + return true; + } + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage)) { + return super.equals(obj); + } + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage other = (org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage) obj; + + boolean result = true; + result = result && + getUnknownFields().equals(other.getUnknownFields()); + return result; + } + + private int memoizedHashCode = 0; + @java.lang.Override + public int hashCode() { + if (memoizedHashCode != 0) { + return memoizedHashCode; + } + int hash = 41; + hash = (19 * hash) + getDescriptorForType().hashCode(); + hash = (29 * hash) + getUnknownFields().hashCode(); + memoizedHashCode = hash; + return hash; + } + + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage parseFrom( + com.google.protobuf.ByteString data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage parseFrom( + com.google.protobuf.ByteString data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage parseFrom(byte[] data) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage parseFrom( + byte[] data, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws com.google.protobuf.InvalidProtocolBufferException { + return PARSER.parseFrom(data, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage parseFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage parseFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage parseDelimitedFrom(java.io.InputStream input) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage parseDelimitedFrom( + java.io.InputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseDelimitedFrom(input, extensionRegistry); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage parseFrom( + com.google.protobuf.CodedInputStream input) + throws java.io.IOException { + return PARSER.parseFrom(input); + } + public static org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage parseFrom( + com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + return PARSER.parseFrom(input, extensionRegistry); + } + + public static Builder newBuilder() { return Builder.create(); } + public Builder newBuilderForType() { return newBuilder(); } + public static Builder 
newBuilder(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage prototype) { + return newBuilder().mergeFrom(prototype); + } + public Builder toBuilder() { return newBuilder(this); } + + @java.lang.Override + protected Builder newBuilderForType( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + Builder builder = new Builder(parent); + return builder; + } + /** + * Protobuf type {@code QuotaUsage} + */ + public static final class Builder extends + com.google.protobuf.GeneratedMessage.Builder + implements org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsageOrBuilder { + public static final com.google.protobuf.Descriptors.Descriptor + getDescriptor() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_QuotaUsage_descriptor; + } + + protected com.google.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_QuotaUsage_fieldAccessorTable + .ensureFieldAccessorsInitialized( + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage.class, org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage.Builder.class); + } + + // Construct using org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage.newBuilder() + private Builder() { + maybeForceBuilderInitialization(); + } + + private Builder( + com.google.protobuf.GeneratedMessage.BuilderParent parent) { + super(parent); + maybeForceBuilderInitialization(); + } + private void maybeForceBuilderInitialization() { + if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { + } + } + private static Builder create() { + return new Builder(); + } + + public Builder clear() { + super.clear(); + return this; + } + + public Builder clone() { + return create().mergeFrom(buildPartial()); + } + + public com.google.protobuf.Descriptors.Descriptor + getDescriptorForType() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.internal_static_QuotaUsage_descriptor; + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage.getDefaultInstance(); + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage build() { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage result = buildPartial(); + if (!result.isInitialized()) { + throw newUninitializedMessageException(result); + } + return result; + } + + public org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage buildPartial() { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage result = new org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage(this); + onBuilt(); + return result; + } + + public Builder mergeFrom(com.google.protobuf.Message other) { + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage)other); + } else { + super.mergeFrom(other); + return this; + } + } + + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage.getDefaultInstance()) return this; + this.mergeUnknownFields(other.getUnknownFields()); + return this; + } + + public final boolean isInitialized() { + return true; + } + + public Builder mergeFrom( + 
com.google.protobuf.CodedInputStream input, + com.google.protobuf.ExtensionRegistryLite extensionRegistry) + throws java.io.IOException { + org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage parsedMessage = null; + try { + parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); + } catch (com.google.protobuf.InvalidProtocolBufferException e) { + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaUsage) e.getUnfinishedMessage(); + throw e; + } finally { + if (parsedMessage != null) { + mergeFrom(parsedMessage); + } + } + return this; + } + + // @@protoc_insertion_point(builder_scope:QuotaUsage) + } + + static { + defaultInstance = new QuotaUsage(true); + defaultInstance.initFields(); + } + + // @@protoc_insertion_point(class_scope:QuotaUsage) + } + + private static com.google.protobuf.Descriptors.Descriptor + internal_static_TimedQuota_descriptor; + private static + com.google.protobuf.GeneratedMessage.FieldAccessorTable + internal_static_TimedQuota_fieldAccessorTable; + private static com.google.protobuf.Descriptors.Descriptor + internal_static_Throttle_descriptor; + private static + com.google.protobuf.GeneratedMessage.FieldAccessorTable + internal_static_Throttle_fieldAccessorTable; + private static com.google.protobuf.Descriptors.Descriptor + internal_static_ThrottleRequest_descriptor; + private static + com.google.protobuf.GeneratedMessage.FieldAccessorTable + internal_static_ThrottleRequest_fieldAccessorTable; + private static com.google.protobuf.Descriptors.Descriptor + internal_static_Quotas_descriptor; + private static + com.google.protobuf.GeneratedMessage.FieldAccessorTable + internal_static_Quotas_fieldAccessorTable; + private static com.google.protobuf.Descriptors.Descriptor + internal_static_QuotaUsage_descriptor; + private static + com.google.protobuf.GeneratedMessage.FieldAccessorTable + internal_static_QuotaUsage_fieldAccessorTable; + + public static com.google.protobuf.Descriptors.FileDescriptor + getDescriptor() { + return descriptor; + } + private static com.google.protobuf.Descriptors.FileDescriptor + descriptor; + static { + java.lang.String[] descriptorData = { + "\n\013Quota.proto\032\013HBase.proto\"r\n\nTimedQuota" + + "\022\034\n\ttime_unit\030\001 \002(\0162\t.TimeUnit\022\022\n\nsoft_l" + + "imit\030\002 \001(\004\022\r\n\005share\030\003 \001(\002\022#\n\005scope\030\004 \001(\016" + + "2\013.QuotaScope:\007MACHINE\"\307\001\n\010Throttle\022\034\n\007r" + + "eq_num\030\001 \001(\0132\013.TimedQuota\022\035\n\010req_size\030\002 " + + "\001(\0132\013.TimedQuota\022\036\n\twrite_num\030\003 \001(\0132\013.Ti" + + "medQuota\022\037\n\nwrite_size\030\004 \001(\0132\013.TimedQuot" + + "a\022\035\n\010read_num\030\005 \001(\0132\013.TimedQuota\022\036\n\tread" + + "_size\030\006 \001(\0132\013.TimedQuota\"P\n\017ThrottleRequ" + + "est\022\033\n\004type\030\001 \001(\0162\r.ThrottleType\022 \n\013time", + "d_quota\030\002 \001(\0132\013.TimedQuota\"D\n\006Quotas\022\035\n\016" + + "bypass_globals\030\001 \001(\010:\005false\022\033\n\010throttle\030" + + "\002 \001(\0132\t.Throttle\"\014\n\nQuotaUsage*&\n\nQuotaS" + + "cope\022\013\n\007CLUSTER\020\001\022\013\n\007MACHINE\020\002*v\n\014Thrott" + + "leType\022\022\n\016REQUEST_NUMBER\020\001\022\020\n\014REQUEST_SI" + + "ZE\020\002\022\020\n\014WRITE_NUMBER\020\003\022\016\n\nWRITE_SIZE\020\004\022\017" + + "\n\013READ_NUMBER\020\005\022\r\n\tREAD_SIZE\020\006*\031\n\tQuotaT" + + "ype\022\014\n\010THROTTLE\020\001BA\n*org.apache.hadoop.h" + + 
"base.protobuf.generatedB\013QuotaProtosH\001\210\001" + + "\001\240\001\001" + }; + com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner = + new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() { + public com.google.protobuf.ExtensionRegistry assignDescriptors( + com.google.protobuf.Descriptors.FileDescriptor root) { + descriptor = root; + internal_static_TimedQuota_descriptor = + getDescriptor().getMessageTypes().get(0); + internal_static_TimedQuota_fieldAccessorTable = new + com.google.protobuf.GeneratedMessage.FieldAccessorTable( + internal_static_TimedQuota_descriptor, + new java.lang.String[] { "TimeUnit", "SoftLimit", "Share", "Scope", }); + internal_static_Throttle_descriptor = + getDescriptor().getMessageTypes().get(1); + internal_static_Throttle_fieldAccessorTable = new + com.google.protobuf.GeneratedMessage.FieldAccessorTable( + internal_static_Throttle_descriptor, + new java.lang.String[] { "ReqNum", "ReqSize", "WriteNum", "WriteSize", "ReadNum", "ReadSize", }); + internal_static_ThrottleRequest_descriptor = + getDescriptor().getMessageTypes().get(2); + internal_static_ThrottleRequest_fieldAccessorTable = new + com.google.protobuf.GeneratedMessage.FieldAccessorTable( + internal_static_ThrottleRequest_descriptor, + new java.lang.String[] { "Type", "TimedQuota", }); + internal_static_Quotas_descriptor = + getDescriptor().getMessageTypes().get(3); + internal_static_Quotas_fieldAccessorTable = new + com.google.protobuf.GeneratedMessage.FieldAccessorTable( + internal_static_Quotas_descriptor, + new java.lang.String[] { "BypassGlobals", "Throttle", }); + internal_static_QuotaUsage_descriptor = + getDescriptor().getMessageTypes().get(4); + internal_static_QuotaUsage_fieldAccessorTable = new + com.google.protobuf.GeneratedMessage.FieldAccessorTable( + internal_static_QuotaUsage_descriptor, + new java.lang.String[] { }); + return null; + } + }; + com.google.protobuf.Descriptors.FileDescriptor + .internalBuildGeneratedFileFrom(descriptorData, + new com.google.protobuf.Descriptors.FileDescriptor[] { + org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.getDescriptor(), + }, assigner); + } + + // @@protoc_insertion_point(outer_class_scope) +} diff --git hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ZooKeeperProtos.java hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ZooKeeperProtos.java index 6a6cb5e..5a1fbf1 100644 --- hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ZooKeeperProtos.java +++ hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/ZooKeeperProtos.java @@ -2353,1093 +2353,6 @@ public final class ZooKeeperProtos { // @@protoc_insertion_point(class_scope:ClusterUp) } - public interface RegionTransitionOrBuilder - extends com.google.protobuf.MessageOrBuilder { - - // required uint32 event_type_code = 1; - /** - * required uint32 event_type_code = 1; - * - *
    -     * Code for EventType gotten by doing o.a.h.h.EventHandler.EventType.getCode()
    -     * 
    - */ - boolean hasEventTypeCode(); - /** - * required uint32 event_type_code = 1; - * - *
    -     * Code for EventType gotten by doing o.a.h.h.EventHandler.EventType.getCode()
    -     * 
    - */ - int getEventTypeCode(); - - // required bytes region_name = 2; - /** - * required bytes region_name = 2; - * - *
    -     * Full regionname in bytes
    -     * 
    - */ - boolean hasRegionName(); - /** - * required bytes region_name = 2; - * - *
    -     * Full regionname in bytes
    -     * 
    - */ - com.google.protobuf.ByteString getRegionName(); - - // required uint64 create_time = 3; - /** - * required uint64 create_time = 3; - */ - boolean hasCreateTime(); - /** - * required uint64 create_time = 3; - */ - long getCreateTime(); - - // required .ServerName server_name = 4; - /** - * required .ServerName server_name = 4; - * - *
    -     * The region server where the transition will happen or is happening
    -     * 
    - */ - boolean hasServerName(); - /** - * required .ServerName server_name = 4; - * - *
    -     * The region server where the transition will happen or is happening
    -     * 
    - */ - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName getServerName(); - /** - * required .ServerName server_name = 4; - * - *
    -     * The region server where the transition will happen or is happening
    -     * 
    - */ - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerNameOrBuilder getServerNameOrBuilder(); - - // optional bytes payload = 5; - /** - * optional bytes payload = 5; - */ - boolean hasPayload(); - /** - * optional bytes payload = 5; - */ - com.google.protobuf.ByteString getPayload(); - } - /** - * Protobuf type {@code RegionTransition} - * - *
    -   **
    -   * What we write under unassigned up in zookeeper as a region moves through
    -   * open/close, etc., regions.  Details a region in transition.
    -   * 
    - */ - public static final class RegionTransition extends - com.google.protobuf.GeneratedMessage - implements RegionTransitionOrBuilder { - // Use RegionTransition.newBuilder() to construct. - private RegionTransition(com.google.protobuf.GeneratedMessage.Builder builder) { - super(builder); - this.unknownFields = builder.getUnknownFields(); - } - private RegionTransition(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } - - private static final RegionTransition defaultInstance; - public static RegionTransition getDefaultInstance() { - return defaultInstance; - } - - public RegionTransition getDefaultInstanceForType() { - return defaultInstance; - } - - private final com.google.protobuf.UnknownFieldSet unknownFields; - @java.lang.Override - public final com.google.protobuf.UnknownFieldSet - getUnknownFields() { - return this.unknownFields; - } - private RegionTransition( - com.google.protobuf.CodedInputStream input, - com.google.protobuf.ExtensionRegistryLite extensionRegistry) - throws com.google.protobuf.InvalidProtocolBufferException { - initFields(); - int mutable_bitField0_ = 0; - com.google.protobuf.UnknownFieldSet.Builder unknownFields = - com.google.protobuf.UnknownFieldSet.newBuilder(); - try { - boolean done = false; - while (!done) { - int tag = input.readTag(); - switch (tag) { - case 0: - done = true; - break; - default: { - if (!parseUnknownField(input, unknownFields, - extensionRegistry, tag)) { - done = true; - } - break; - } - case 8: { - bitField0_ |= 0x00000001; - eventTypeCode_ = input.readUInt32(); - break; - } - case 18: { - bitField0_ |= 0x00000002; - regionName_ = input.readBytes(); - break; - } - case 24: { - bitField0_ |= 0x00000004; - createTime_ = input.readUInt64(); - break; - } - case 34: { - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.Builder subBuilder = null; - if (((bitField0_ & 0x00000008) == 0x00000008)) { - subBuilder = serverName_.toBuilder(); - } - serverName_ = input.readMessage(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.PARSER, extensionRegistry); - if (subBuilder != null) { - subBuilder.mergeFrom(serverName_); - serverName_ = subBuilder.buildPartial(); - } - bitField0_ |= 0x00000008; - break; - } - case 42: { - bitField0_ |= 0x00000010; - payload_ = input.readBytes(); - break; - } - } - } - } catch (com.google.protobuf.InvalidProtocolBufferException e) { - throw e.setUnfinishedMessage(this); - } catch (java.io.IOException e) { - throw new com.google.protobuf.InvalidProtocolBufferException( - e.getMessage()).setUnfinishedMessage(this); - } finally { - this.unknownFields = unknownFields.build(); - makeExtensionsImmutable(); - } - } - public static final com.google.protobuf.Descriptors.Descriptor - getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_RegionTransition_descriptor; - } - - protected com.google.protobuf.GeneratedMessage.FieldAccessorTable - internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_RegionTransition_fieldAccessorTable - .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition.class, org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition.Builder.class); - } - - public static com.google.protobuf.Parser PARSER = - new com.google.protobuf.AbstractParser() { - public RegionTransition parsePartialFrom( - com.google.protobuf.CodedInputStream input, - 
com.google.protobuf.ExtensionRegistryLite extensionRegistry) - throws com.google.protobuf.InvalidProtocolBufferException { - return new RegionTransition(input, extensionRegistry); - } - }; - - @java.lang.Override - public com.google.protobuf.Parser getParserForType() { - return PARSER; - } - - private int bitField0_; - // required uint32 event_type_code = 1; - public static final int EVENT_TYPE_CODE_FIELD_NUMBER = 1; - private int eventTypeCode_; - /** - * required uint32 event_type_code = 1; - * - *
    -     * Code for EventType gotten by doing o.a.h.h.EventHandler.EventType.getCode()
    -     * 
    - */ - public boolean hasEventTypeCode() { - return ((bitField0_ & 0x00000001) == 0x00000001); - } - /** - * required uint32 event_type_code = 1; - * - *
    -     * Code for EventType gotten by doing o.a.h.h.EventHandler.EventType.getCode()
    -     * 
    - */ - public int getEventTypeCode() { - return eventTypeCode_; - } - - // required bytes region_name = 2; - public static final int REGION_NAME_FIELD_NUMBER = 2; - private com.google.protobuf.ByteString regionName_; - /** - * required bytes region_name = 2; - * - *
    -     * Full regionname in bytes
    -     * 
    - */ - public boolean hasRegionName() { - return ((bitField0_ & 0x00000002) == 0x00000002); - } - /** - * required bytes region_name = 2; - * - *
    -     * Full regionname in bytes
    -     * 
    - */ - public com.google.protobuf.ByteString getRegionName() { - return regionName_; - } - - // required uint64 create_time = 3; - public static final int CREATE_TIME_FIELD_NUMBER = 3; - private long createTime_; - /** - * required uint64 create_time = 3; - */ - public boolean hasCreateTime() { - return ((bitField0_ & 0x00000004) == 0x00000004); - } - /** - * required uint64 create_time = 3; - */ - public long getCreateTime() { - return createTime_; - } - - // required .ServerName server_name = 4; - public static final int SERVER_NAME_FIELD_NUMBER = 4; - private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName serverName_; - /** - * required .ServerName server_name = 4; - * - *
    -     * The region server where the transition will happen or is happening
    -     * 
    - */ - public boolean hasServerName() { - return ((bitField0_ & 0x00000008) == 0x00000008); - } - /** - * required .ServerName server_name = 4; - * - *
    -     * The region server where the transition will happen or is happening
    -     * 
    - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName getServerName() { - return serverName_; - } - /** - * required .ServerName server_name = 4; - * - *
    -     * The region server where the transition will happen or is happening
    -     * 
    - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerNameOrBuilder getServerNameOrBuilder() { - return serverName_; - } - - // optional bytes payload = 5; - public static final int PAYLOAD_FIELD_NUMBER = 5; - private com.google.protobuf.ByteString payload_; - /** - * optional bytes payload = 5; - */ - public boolean hasPayload() { - return ((bitField0_ & 0x00000010) == 0x00000010); - } - /** - * optional bytes payload = 5; - */ - public com.google.protobuf.ByteString getPayload() { - return payload_; - } - - private void initFields() { - eventTypeCode_ = 0; - regionName_ = com.google.protobuf.ByteString.EMPTY; - createTime_ = 0L; - serverName_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.getDefaultInstance(); - payload_ = com.google.protobuf.ByteString.EMPTY; - } - private byte memoizedIsInitialized = -1; - public final boolean isInitialized() { - byte isInitialized = memoizedIsInitialized; - if (isInitialized != -1) return isInitialized == 1; - - if (!hasEventTypeCode()) { - memoizedIsInitialized = 0; - return false; - } - if (!hasRegionName()) { - memoizedIsInitialized = 0; - return false; - } - if (!hasCreateTime()) { - memoizedIsInitialized = 0; - return false; - } - if (!hasServerName()) { - memoizedIsInitialized = 0; - return false; - } - if (!getServerName().isInitialized()) { - memoizedIsInitialized = 0; - return false; - } - memoizedIsInitialized = 1; - return true; - } - - public void writeTo(com.google.protobuf.CodedOutputStream output) - throws java.io.IOException { - getSerializedSize(); - if (((bitField0_ & 0x00000001) == 0x00000001)) { - output.writeUInt32(1, eventTypeCode_); - } - if (((bitField0_ & 0x00000002) == 0x00000002)) { - output.writeBytes(2, regionName_); - } - if (((bitField0_ & 0x00000004) == 0x00000004)) { - output.writeUInt64(3, createTime_); - } - if (((bitField0_ & 0x00000008) == 0x00000008)) { - output.writeMessage(4, serverName_); - } - if (((bitField0_ & 0x00000010) == 0x00000010)) { - output.writeBytes(5, payload_); - } - getUnknownFields().writeTo(output); - } - - private int memoizedSerializedSize = -1; - public int getSerializedSize() { - int size = memoizedSerializedSize; - if (size != -1) return size; - - size = 0; - if (((bitField0_ & 0x00000001) == 0x00000001)) { - size += com.google.protobuf.CodedOutputStream - .computeUInt32Size(1, eventTypeCode_); - } - if (((bitField0_ & 0x00000002) == 0x00000002)) { - size += com.google.protobuf.CodedOutputStream - .computeBytesSize(2, regionName_); - } - if (((bitField0_ & 0x00000004) == 0x00000004)) { - size += com.google.protobuf.CodedOutputStream - .computeUInt64Size(3, createTime_); - } - if (((bitField0_ & 0x00000008) == 0x00000008)) { - size += com.google.protobuf.CodedOutputStream - .computeMessageSize(4, serverName_); - } - if (((bitField0_ & 0x00000010) == 0x00000010)) { - size += com.google.protobuf.CodedOutputStream - .computeBytesSize(5, payload_); - } - size += getUnknownFields().getSerializedSize(); - memoizedSerializedSize = size; - return size; - } - - private static final long serialVersionUID = 0L; - @java.lang.Override - protected java.lang.Object writeReplace() - throws java.io.ObjectStreamException { - return super.writeReplace(); - } - - @java.lang.Override - public boolean equals(final java.lang.Object obj) { - if (obj == this) { - return true; - } - if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition)) { - return super.equals(obj); - } - 
org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition other = (org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition) obj; - - boolean result = true; - result = result && (hasEventTypeCode() == other.hasEventTypeCode()); - if (hasEventTypeCode()) { - result = result && (getEventTypeCode() - == other.getEventTypeCode()); - } - result = result && (hasRegionName() == other.hasRegionName()); - if (hasRegionName()) { - result = result && getRegionName() - .equals(other.getRegionName()); - } - result = result && (hasCreateTime() == other.hasCreateTime()); - if (hasCreateTime()) { - result = result && (getCreateTime() - == other.getCreateTime()); - } - result = result && (hasServerName() == other.hasServerName()); - if (hasServerName()) { - result = result && getServerName() - .equals(other.getServerName()); - } - result = result && (hasPayload() == other.hasPayload()); - if (hasPayload()) { - result = result && getPayload() - .equals(other.getPayload()); - } - result = result && - getUnknownFields().equals(other.getUnknownFields()); - return result; - } - - private int memoizedHashCode = 0; - @java.lang.Override - public int hashCode() { - if (memoizedHashCode != 0) { - return memoizedHashCode; - } - int hash = 41; - hash = (19 * hash) + getDescriptorForType().hashCode(); - if (hasEventTypeCode()) { - hash = (37 * hash) + EVENT_TYPE_CODE_FIELD_NUMBER; - hash = (53 * hash) + getEventTypeCode(); - } - if (hasRegionName()) { - hash = (37 * hash) + REGION_NAME_FIELD_NUMBER; - hash = (53 * hash) + getRegionName().hashCode(); - } - if (hasCreateTime()) { - hash = (37 * hash) + CREATE_TIME_FIELD_NUMBER; - hash = (53 * hash) + hashLong(getCreateTime()); - } - if (hasServerName()) { - hash = (37 * hash) + SERVER_NAME_FIELD_NUMBER; - hash = (53 * hash) + getServerName().hashCode(); - } - if (hasPayload()) { - hash = (37 * hash) + PAYLOAD_FIELD_NUMBER; - hash = (53 * hash) + getPayload().hashCode(); - } - hash = (29 * hash) + getUnknownFields().hashCode(); - memoizedHashCode = hash; - return hash; - } - - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition parseFrom( - com.google.protobuf.ByteString data) - throws com.google.protobuf.InvalidProtocolBufferException { - return PARSER.parseFrom(data); - } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition parseFrom( - com.google.protobuf.ByteString data, - com.google.protobuf.ExtensionRegistryLite extensionRegistry) - throws com.google.protobuf.InvalidProtocolBufferException { - return PARSER.parseFrom(data, extensionRegistry); - } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition parseFrom(byte[] data) - throws com.google.protobuf.InvalidProtocolBufferException { - return PARSER.parseFrom(data); - } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition parseFrom( - byte[] data, - com.google.protobuf.ExtensionRegistryLite extensionRegistry) - throws com.google.protobuf.InvalidProtocolBufferException { - return PARSER.parseFrom(data, extensionRegistry); - } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition parseFrom(java.io.InputStream input) - throws java.io.IOException { - return PARSER.parseFrom(input); - } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition parseFrom( - java.io.InputStream input, - com.google.protobuf.ExtensionRegistryLite extensionRegistry) - throws 
java.io.IOException { - return PARSER.parseFrom(input, extensionRegistry); - } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition parseDelimitedFrom(java.io.InputStream input) - throws java.io.IOException { - return PARSER.parseDelimitedFrom(input); - } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition parseDelimitedFrom( - java.io.InputStream input, - com.google.protobuf.ExtensionRegistryLite extensionRegistry) - throws java.io.IOException { - return PARSER.parseDelimitedFrom(input, extensionRegistry); - } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition parseFrom( - com.google.protobuf.CodedInputStream input) - throws java.io.IOException { - return PARSER.parseFrom(input); - } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition parseFrom( - com.google.protobuf.CodedInputStream input, - com.google.protobuf.ExtensionRegistryLite extensionRegistry) - throws java.io.IOException { - return PARSER.parseFrom(input, extensionRegistry); - } - - public static Builder newBuilder() { return Builder.create(); } - public Builder newBuilderForType() { return newBuilder(); } - public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition prototype) { - return newBuilder().mergeFrom(prototype); - } - public Builder toBuilder() { return newBuilder(this); } - - @java.lang.Override - protected Builder newBuilderForType( - com.google.protobuf.GeneratedMessage.BuilderParent parent) { - Builder builder = new Builder(parent); - return builder; - } - /** - * Protobuf type {@code RegionTransition} - * - *
    -     **
    -     * What we write under unassigned up in zookeeper as a region moves through
    -     * open/close, etc., regions.  Details a region in transition.
    -     * 
    - */ - public static final class Builder extends - com.google.protobuf.GeneratedMessage.Builder - implements org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransitionOrBuilder { - public static final com.google.protobuf.Descriptors.Descriptor - getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_RegionTransition_descriptor; - } - - protected com.google.protobuf.GeneratedMessage.FieldAccessorTable - internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_RegionTransition_fieldAccessorTable - .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition.class, org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition.Builder.class); - } - - // Construct using org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition.newBuilder() - private Builder() { - maybeForceBuilderInitialization(); - } - - private Builder( - com.google.protobuf.GeneratedMessage.BuilderParent parent) { - super(parent); - maybeForceBuilderInitialization(); - } - private void maybeForceBuilderInitialization() { - if (com.google.protobuf.GeneratedMessage.alwaysUseFieldBuilders) { - getServerNameFieldBuilder(); - } - } - private static Builder create() { - return new Builder(); - } - - public Builder clear() { - super.clear(); - eventTypeCode_ = 0; - bitField0_ = (bitField0_ & ~0x00000001); - regionName_ = com.google.protobuf.ByteString.EMPTY; - bitField0_ = (bitField0_ & ~0x00000002); - createTime_ = 0L; - bitField0_ = (bitField0_ & ~0x00000004); - if (serverNameBuilder_ == null) { - serverName_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.getDefaultInstance(); - } else { - serverNameBuilder_.clear(); - } - bitField0_ = (bitField0_ & ~0x00000008); - payload_ = com.google.protobuf.ByteString.EMPTY; - bitField0_ = (bitField0_ & ~0x00000010); - return this; - } - - public Builder clone() { - return create().mergeFrom(buildPartial()); - } - - public com.google.protobuf.Descriptors.Descriptor - getDescriptorForType() { - return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_RegionTransition_descriptor; - } - - public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition getDefaultInstanceForType() { - return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition.getDefaultInstance(); - } - - public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition build() { - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition result = buildPartial(); - if (!result.isInitialized()) { - throw newUninitializedMessageException(result); - } - return result; - } - - public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition buildPartial() { - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition result = new org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition(this); - int from_bitField0_ = bitField0_; - int to_bitField0_ = 0; - if (((from_bitField0_ & 0x00000001) == 0x00000001)) { - to_bitField0_ |= 0x00000001; - } - result.eventTypeCode_ = eventTypeCode_; - if (((from_bitField0_ & 0x00000002) == 0x00000002)) { - to_bitField0_ |= 0x00000002; - } - result.regionName_ = regionName_; - if (((from_bitField0_ & 0x00000004) == 0x00000004)) { - to_bitField0_ |= 0x00000004; - } - result.createTime_ = createTime_; - if 
(((from_bitField0_ & 0x00000008) == 0x00000008)) { - to_bitField0_ |= 0x00000008; - } - if (serverNameBuilder_ == null) { - result.serverName_ = serverName_; - } else { - result.serverName_ = serverNameBuilder_.build(); - } - if (((from_bitField0_ & 0x00000010) == 0x00000010)) { - to_bitField0_ |= 0x00000010; - } - result.payload_ = payload_; - result.bitField0_ = to_bitField0_; - onBuilt(); - return result; - } - - public Builder mergeFrom(com.google.protobuf.Message other) { - if (other instanceof org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition) { - return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition)other); - } else { - super.mergeFrom(other); - return this; - } - } - - public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition other) { - if (other == org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition.getDefaultInstance()) return this; - if (other.hasEventTypeCode()) { - setEventTypeCode(other.getEventTypeCode()); - } - if (other.hasRegionName()) { - setRegionName(other.getRegionName()); - } - if (other.hasCreateTime()) { - setCreateTime(other.getCreateTime()); - } - if (other.hasServerName()) { - mergeServerName(other.getServerName()); - } - if (other.hasPayload()) { - setPayload(other.getPayload()); - } - this.mergeUnknownFields(other.getUnknownFields()); - return this; - } - - public final boolean isInitialized() { - if (!hasEventTypeCode()) { - - return false; - } - if (!hasRegionName()) { - - return false; - } - if (!hasCreateTime()) { - - return false; - } - if (!hasServerName()) { - - return false; - } - if (!getServerName().isInitialized()) { - - return false; - } - return true; - } - - public Builder mergeFrom( - com.google.protobuf.CodedInputStream input, - com.google.protobuf.ExtensionRegistryLite extensionRegistry) - throws java.io.IOException { - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition parsedMessage = null; - try { - parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); - } catch (com.google.protobuf.InvalidProtocolBufferException e) { - parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionTransition) e.getUnfinishedMessage(); - throw e; - } finally { - if (parsedMessage != null) { - mergeFrom(parsedMessage); - } - } - return this; - } - private int bitField0_; - - // required uint32 event_type_code = 1; - private int eventTypeCode_ ; - /** - * required uint32 event_type_code = 1; - * - *
    -       * Code for EventType gotten by doing o.a.h.h.EventHandler.EventType.getCode()
    -       * 
    - */ - public boolean hasEventTypeCode() { - return ((bitField0_ & 0x00000001) == 0x00000001); - } - /** - * required uint32 event_type_code = 1; - * - *
    -       * Code for EventType gotten by doing o.a.h.h.EventHandler.EventType.getCode()
    -       * 
    - */ - public int getEventTypeCode() { - return eventTypeCode_; - } - /** - * required uint32 event_type_code = 1; - * - *
    -       * Code for EventType gotten by doing o.a.h.h.EventHandler.EventType.getCode()
    -       * 
    - */ - public Builder setEventTypeCode(int value) { - bitField0_ |= 0x00000001; - eventTypeCode_ = value; - onChanged(); - return this; - } - /** - * required uint32 event_type_code = 1; - * - *
    -       * Code for EventType gotten by doing o.a.h.h.EventHandler.EventType.getCode()
    -       * 
    - */ - public Builder clearEventTypeCode() { - bitField0_ = (bitField0_ & ~0x00000001); - eventTypeCode_ = 0; - onChanged(); - return this; - } - - // required bytes region_name = 2; - private com.google.protobuf.ByteString regionName_ = com.google.protobuf.ByteString.EMPTY; - /** - * required bytes region_name = 2; - * - *
    -       * Full regionname in bytes
    -       * 
    - */ - public boolean hasRegionName() { - return ((bitField0_ & 0x00000002) == 0x00000002); - } - /** - * required bytes region_name = 2; - * - *
    -       * Full regionname in bytes
    -       * 
    - */ - public com.google.protobuf.ByteString getRegionName() { - return regionName_; - } - /** - * required bytes region_name = 2; - * - *
    -       * Full regionname in bytes
    -       * 
    - */ - public Builder setRegionName(com.google.protobuf.ByteString value) { - if (value == null) { - throw new NullPointerException(); - } - bitField0_ |= 0x00000002; - regionName_ = value; - onChanged(); - return this; - } - /** - * required bytes region_name = 2; - * - *
    -       * Full regionname in bytes
    -       * 
    - */ - public Builder clearRegionName() { - bitField0_ = (bitField0_ & ~0x00000002); - regionName_ = getDefaultInstance().getRegionName(); - onChanged(); - return this; - } - - // required uint64 create_time = 3; - private long createTime_ ; - /** - * required uint64 create_time = 3; - */ - public boolean hasCreateTime() { - return ((bitField0_ & 0x00000004) == 0x00000004); - } - /** - * required uint64 create_time = 3; - */ - public long getCreateTime() { - return createTime_; - } - /** - * required uint64 create_time = 3; - */ - public Builder setCreateTime(long value) { - bitField0_ |= 0x00000004; - createTime_ = value; - onChanged(); - return this; - } - /** - * required uint64 create_time = 3; - */ - public Builder clearCreateTime() { - bitField0_ = (bitField0_ & ~0x00000004); - createTime_ = 0L; - onChanged(); - return this; - } - - // required .ServerName server_name = 4; - private org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName serverName_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.getDefaultInstance(); - private com.google.protobuf.SingleFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerNameOrBuilder> serverNameBuilder_; - /** - * required .ServerName server_name = 4; - * - *
    -       * The region server where the transition will happen or is happening
    -       * 
    - */ - public boolean hasServerName() { - return ((bitField0_ & 0x00000008) == 0x00000008); - } - /** - * required .ServerName server_name = 4; - * - *
    -       * The region server where the transition will happen or is happening
    -       * 
    - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName getServerName() { - if (serverNameBuilder_ == null) { - return serverName_; - } else { - return serverNameBuilder_.getMessage(); - } - } - /** - * required .ServerName server_name = 4; - * - *
    -       * The region server where the transition will happen or is happening
    -       * 
    - */ - public Builder setServerName(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName value) { - if (serverNameBuilder_ == null) { - if (value == null) { - throw new NullPointerException(); - } - serverName_ = value; - onChanged(); - } else { - serverNameBuilder_.setMessage(value); - } - bitField0_ |= 0x00000008; - return this; - } - /** - * required .ServerName server_name = 4; - * - *
    -       * The region server where the transition will happen or is happening
    -       * 
    - */ - public Builder setServerName( - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.Builder builderForValue) { - if (serverNameBuilder_ == null) { - serverName_ = builderForValue.build(); - onChanged(); - } else { - serverNameBuilder_.setMessage(builderForValue.build()); - } - bitField0_ |= 0x00000008; - return this; - } - /** - * required .ServerName server_name = 4; - * - *
    -       * The region server where the transition will happen or is happening
    -       * 
    - */ - public Builder mergeServerName(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName value) { - if (serverNameBuilder_ == null) { - if (((bitField0_ & 0x00000008) == 0x00000008) && - serverName_ != org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.getDefaultInstance()) { - serverName_ = - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.newBuilder(serverName_).mergeFrom(value).buildPartial(); - } else { - serverName_ = value; - } - onChanged(); - } else { - serverNameBuilder_.mergeFrom(value); - } - bitField0_ |= 0x00000008; - return this; - } - /** - * required .ServerName server_name = 4; - * - *
    -       * The region server where the transition will happen or is happening
    -       * 
    - */ - public Builder clearServerName() { - if (serverNameBuilder_ == null) { - serverName_ = org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.getDefaultInstance(); - onChanged(); - } else { - serverNameBuilder_.clear(); - } - bitField0_ = (bitField0_ & ~0x00000008); - return this; - } - /** - * required .ServerName server_name = 4; - * - *
    -       * The region server where the transition will happen or is happening
    -       * 
    - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.Builder getServerNameBuilder() { - bitField0_ |= 0x00000008; - onChanged(); - return getServerNameFieldBuilder().getBuilder(); - } - /** - * required .ServerName server_name = 4; - * - *
    -       * The region server where the transition will happen or is happening
    -       * 
    - */ - public org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerNameOrBuilder getServerNameOrBuilder() { - if (serverNameBuilder_ != null) { - return serverNameBuilder_.getMessageOrBuilder(); - } else { - return serverName_; - } - } - /** - * required .ServerName server_name = 4; - * - *
    -       * The region server where the transition will happen or is happening
    -       * 
    - */ - private com.google.protobuf.SingleFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerNameOrBuilder> - getServerNameFieldBuilder() { - if (serverNameBuilder_ == null) { - serverNameBuilder_ = new com.google.protobuf.SingleFieldBuilder< - org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.Builder, org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerNameOrBuilder>( - serverName_, - getParentForChildren(), - isClean()); - serverName_ = null; - } - return serverNameBuilder_; - } - - // optional bytes payload = 5; - private com.google.protobuf.ByteString payload_ = com.google.protobuf.ByteString.EMPTY; - /** - * optional bytes payload = 5; - */ - public boolean hasPayload() { - return ((bitField0_ & 0x00000010) == 0x00000010); - } - /** - * optional bytes payload = 5; - */ - public com.google.protobuf.ByteString getPayload() { - return payload_; - } - /** - * optional bytes payload = 5; - */ - public Builder setPayload(com.google.protobuf.ByteString value) { - if (value == null) { - throw new NullPointerException(); - } - bitField0_ |= 0x00000010; - payload_ = value; - onChanged(); - return this; - } - /** - * optional bytes payload = 5; - */ - public Builder clearPayload() { - bitField0_ = (bitField0_ & ~0x00000010); - payload_ = getDefaultInstance().getPayload(); - onChanged(); - return this; - } - - // @@protoc_insertion_point(builder_scope:RegionTransition) - } - - static { - defaultInstance = new RegionTransition(true); - defaultInstance.initFields(); - } - - // @@protoc_insertion_point(class_scope:RegionTransition) - } - public interface SplitLogTaskOrBuilder extends com.google.protobuf.MessageOrBuilder { @@ -4419,12 +3332,12 @@ public final class ZooKeeperProtos { // @@protoc_insertion_point(class_scope:SplitLogTask) } - public interface TableOrBuilder + public interface DeprecatedTableStateOrBuilder extends com.google.protobuf.MessageOrBuilder { - // required .Table.State state = 1 [default = ENABLED]; + // required .DeprecatedTableState.State state = 1 [default = ENABLED]; /** - * required .Table.State state = 1 [default = ENABLED]; + * required .DeprecatedTableState.State state = 1 [default = ENABLED]; * *
          * This is the table's state.  If no znode for a table,
    @@ -4434,7 +3347,7 @@ public final class ZooKeeperProtos {
          */
         boolean hasState();
         /**
    -     * required .Table.State state = 1 [default = ENABLED];
    +     * required .DeprecatedTableState.State state = 1 [default = ENABLED];
          *
          * 
          * This is the table's state.  If no znode for a table,
    @@ -4442,32 +3355,33 @@ public final class ZooKeeperProtos {
          * for more.
          * 
    */ - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State getState(); + org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State getState(); } /** - * Protobuf type {@code Table} + * Protobuf type {@code DeprecatedTableState} * *
        **
        * The znode that holds state of table.
    +   * Deprecated, table state is stored in table descriptor on HDFS.
        * 
    */ - public static final class Table extends + public static final class DeprecatedTableState extends com.google.protobuf.GeneratedMessage - implements TableOrBuilder { - // Use Table.newBuilder() to construct. - private Table(com.google.protobuf.GeneratedMessage.Builder builder) { + implements DeprecatedTableStateOrBuilder { + // Use DeprecatedTableState.newBuilder() to construct. + private DeprecatedTableState(com.google.protobuf.GeneratedMessage.Builder builder) { super(builder); this.unknownFields = builder.getUnknownFields(); } - private Table(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } + private DeprecatedTableState(boolean noInit) { this.unknownFields = com.google.protobuf.UnknownFieldSet.getDefaultInstance(); } - private static final Table defaultInstance; - public static Table getDefaultInstance() { + private static final DeprecatedTableState defaultInstance; + public static DeprecatedTableState getDefaultInstance() { return defaultInstance; } - public Table getDefaultInstanceForType() { + public DeprecatedTableState getDefaultInstanceForType() { return defaultInstance; } @@ -4477,7 +3391,7 @@ public final class ZooKeeperProtos { getUnknownFields() { return this.unknownFields; } - private Table( + private DeprecatedTableState( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { @@ -4502,7 +3416,7 @@ public final class ZooKeeperProtos { } case 8: { int rawValue = input.readEnum(); - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State value = org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State.valueOf(rawValue); + org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State value = org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State.valueOf(rawValue); if (value == null) { unknownFields.mergeVarintField(1, rawValue); } else { @@ -4525,33 +3439,33 @@ public final class ZooKeeperProtos { } public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_Table_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_DeprecatedTableState_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_Table_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_DeprecatedTableState_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.class, org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.class, org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.Builder.class); } - public static com.google.protobuf.Parser
    PARSER = - new com.google.protobuf.AbstractParser
    () { - public Table parsePartialFrom( + public static com.google.protobuf.Parser PARSER = + new com.google.protobuf.AbstractParser() { + public DeprecatedTableState parsePartialFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { - return new Table(input, extensionRegistry); + return new DeprecatedTableState(input, extensionRegistry); } }; @java.lang.Override - public com.google.protobuf.Parser
    getParserForType() { + public com.google.protobuf.Parser getParserForType() { return PARSER; } /** - * Protobuf enum {@code Table.State} + * Protobuf enum {@code DeprecatedTableState.State} * *
          * Table's current state
    @@ -4629,7 +3543,7 @@ public final class ZooKeeperProtos {
           }
           public static final com.google.protobuf.Descriptors.EnumDescriptor
               getDescriptor() {
    -        return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.getDescriptor().getEnumTypes().get(0);
    +        return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.getDescriptor().getEnumTypes().get(0);
           }
     
           private static final State[] VALUES = values();
    @@ -4651,15 +3565,15 @@ public final class ZooKeeperProtos {
             this.value = value;
           }
     
    -      // @@protoc_insertion_point(enum_scope:Table.State)
    +      // @@protoc_insertion_point(enum_scope:DeprecatedTableState.State)
         }
     
         private int bitField0_;
    -    // required .Table.State state = 1 [default = ENABLED];
    +    // required .DeprecatedTableState.State state = 1 [default = ENABLED];
         public static final int STATE_FIELD_NUMBER = 1;
    -    private org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State state_;
    +    private org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State state_;
         /**
    -     * required .Table.State state = 1 [default = ENABLED];
    +     * required .DeprecatedTableState.State state = 1 [default = ENABLED];
          *
          * 
          * This is the table's state.  If no znode for a table,
    @@ -4671,7 +3585,7 @@ public final class ZooKeeperProtos {
           return ((bitField0_ & 0x00000001) == 0x00000001);
         }
         /**
    -     * required .Table.State state = 1 [default = ENABLED];
    +     * required .DeprecatedTableState.State state = 1 [default = ENABLED];
          *
          * 
          * This is the table's state.  If no znode for a table,
    @@ -4679,12 +3593,12 @@ public final class ZooKeeperProtos {
          * for more.
          * 
    */ - public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State getState() { + public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State getState() { return state_; } private void initFields() { - state_ = org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State.ENABLED; + state_ = org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State.ENABLED; } private byte memoizedIsInitialized = -1; public final boolean isInitialized() { @@ -4735,10 +3649,10 @@ public final class ZooKeeperProtos { if (obj == this) { return true; } - if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table)) { + if (!(obj instanceof org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState)) { return super.equals(obj); } - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table other = (org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table) obj; + org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState other = (org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState) obj; boolean result = true; result = result && (hasState() == other.hasState()); @@ -4768,53 +3682,53 @@ public final class ZooKeeperProtos { return hash; } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState parseFrom( com.google.protobuf.ByteString data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState parseFrom( com.google.protobuf.ByteString data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table parseFrom(byte[] data) + public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState parseFrom(byte[] data) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data); } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState parseFrom( byte[] data, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws com.google.protobuf.InvalidProtocolBufferException { return PARSER.parseFrom(data, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table parseFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState parseFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState parseFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table 
parseDelimitedFrom(java.io.InputStream input) + public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState parseDelimitedFrom(java.io.InputStream input) throws java.io.IOException { return PARSER.parseDelimitedFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table parseDelimitedFrom( + public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState parseDelimitedFrom( java.io.InputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { return PARSER.parseDelimitedFrom(input, extensionRegistry); } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState parseFrom( com.google.protobuf.CodedInputStream input) throws java.io.IOException { return PARSER.parseFrom(input); } - public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table parseFrom( + public static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState parseFrom( com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { @@ -4823,7 +3737,7 @@ public final class ZooKeeperProtos { public static Builder newBuilder() { return Builder.create(); } public Builder newBuilderForType() { return newBuilder(); } - public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table prototype) { + public static Builder newBuilder(org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState prototype) { return newBuilder().mergeFrom(prototype); } public Builder toBuilder() { return newBuilder(this); } @@ -4835,29 +3749,30 @@ public final class ZooKeeperProtos { return builder; } /** - * Protobuf type {@code Table} + * Protobuf type {@code DeprecatedTableState} * *
          **
          * The znode that holds state of table.
    +     * Deprecated, table state is stored in table descriptor on HDFS.
          * 
    */ public static final class Builder extends com.google.protobuf.GeneratedMessage.Builder - implements org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.TableOrBuilder { + implements org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableStateOrBuilder { public static final com.google.protobuf.Descriptors.Descriptor getDescriptor() { - return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_Table_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_DeprecatedTableState_descriptor; } protected com.google.protobuf.GeneratedMessage.FieldAccessorTable internalGetFieldAccessorTable() { - return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_Table_fieldAccessorTable + return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_DeprecatedTableState_fieldAccessorTable .ensureFieldAccessorsInitialized( - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.class, org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.Builder.class); + org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.class, org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.Builder.class); } - // Construct using org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.newBuilder() + // Construct using org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.newBuilder() private Builder() { maybeForceBuilderInitialization(); } @@ -4877,7 +3792,7 @@ public final class ZooKeeperProtos { public Builder clear() { super.clear(); - state_ = org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State.ENABLED; + state_ = org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State.ENABLED; bitField0_ = (bitField0_ & ~0x00000001); return this; } @@ -4888,23 +3803,23 @@ public final class ZooKeeperProtos { public com.google.protobuf.Descriptors.Descriptor getDescriptorForType() { - return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_Table_descriptor; + return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.internal_static_DeprecatedTableState_descriptor; } - public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table getDefaultInstanceForType() { - return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.getDefaultInstance(); + public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState getDefaultInstanceForType() { + return org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.getDefaultInstance(); } - public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table build() { - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table result = buildPartial(); + public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState build() { + org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState result = buildPartial(); if (!result.isInitialized()) { throw newUninitializedMessageException(result); } return result; } - public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table buildPartial() { - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table result = new org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table(this); + public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState buildPartial() { + 
org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState result = new org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState(this); int from_bitField0_ = bitField0_; int to_bitField0_ = 0; if (((from_bitField0_ & 0x00000001) == 0x00000001)) { @@ -4917,16 +3832,16 @@ public final class ZooKeeperProtos { } public Builder mergeFrom(com.google.protobuf.Message other) { - if (other instanceof org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table) { - return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table)other); + if (other instanceof org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState) { + return mergeFrom((org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState)other); } else { super.mergeFrom(other); return this; } } - public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table other) { - if (other == org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.getDefaultInstance()) return this; + public Builder mergeFrom(org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState other) { + if (other == org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.getDefaultInstance()) return this; if (other.hasState()) { setState(other.getState()); } @@ -4946,11 +3861,11 @@ public final class ZooKeeperProtos { com.google.protobuf.CodedInputStream input, com.google.protobuf.ExtensionRegistryLite extensionRegistry) throws java.io.IOException { - org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table parsedMessage = null; + org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState parsedMessage = null; try { parsedMessage = PARSER.parsePartialFrom(input, extensionRegistry); } catch (com.google.protobuf.InvalidProtocolBufferException e) { - parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table) e.getUnfinishedMessage(); + parsedMessage = (org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState) e.getUnfinishedMessage(); throw e; } finally { if (parsedMessage != null) { @@ -4961,10 +3876,10 @@ public final class ZooKeeperProtos { } private int bitField0_; - // required .Table.State state = 1 [default = ENABLED]; - private org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State state_ = org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State.ENABLED; + // required .DeprecatedTableState.State state = 1 [default = ENABLED]; + private org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State state_ = org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State.ENABLED; /** - * required .Table.State state = 1 [default = ENABLED]; + * required .DeprecatedTableState.State state = 1 [default = ENABLED]; * *
            * This is the table's state.  If no znode for a table,
    @@ -4976,7 +3891,7 @@ public final class ZooKeeperProtos {
             return ((bitField0_ & 0x00000001) == 0x00000001);
           }
           /**
    -       * required .Table.State state = 1 [default = ENABLED];
    +       * required .DeprecatedTableState.State state = 1 [default = ENABLED];
            *
            * 
            * This is the table's state.  If no znode for a table,
    @@ -4984,11 +3899,11 @@ public final class ZooKeeperProtos {
            * for more.
            * 
    */ - public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State getState() { + public org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State getState() { return state_; } /** - * required .Table.State state = 1 [default = ENABLED]; + * required .DeprecatedTableState.State state = 1 [default = ENABLED]; * *
            * This is the table's state.  If no znode for a table,
    @@ -4996,7 +3911,7 @@ public final class ZooKeeperProtos {
            * for more.
            * 
    */ - public Builder setState(org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State value) { + public Builder setState(org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State value) { if (value == null) { throw new NullPointerException(); } @@ -5006,7 +3921,7 @@ public final class ZooKeeperProtos { return this; } /** - * required .Table.State state = 1 [default = ENABLED]; + * required .DeprecatedTableState.State state = 1 [default = ENABLED]; * *
            * This is the table's state.  If no znode for a table,
    @@ -5016,20 +3931,20 @@ public final class ZooKeeperProtos {
            */
           public Builder clearState() {
             bitField0_ = (bitField0_ & ~0x00000001);
    -        state_ = org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State.ENABLED;
    +        state_ = org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState.State.ENABLED;
             onChanged();
             return this;
           }
     
    -      // @@protoc_insertion_point(builder_scope:Table)
    +      // @@protoc_insertion_point(builder_scope:DeprecatedTableState)
         }
     
         static {
    -      defaultInstance = new Table(true);
    +      defaultInstance = new DeprecatedTableState(true);
           defaultInstance.initFields();
         }
     
    -    // @@protoc_insertion_point(class_scope:Table)
    +    // @@protoc_insertion_point(class_scope:DeprecatedTableState)
       }
     
       public interface ReplicationPeerOrBuilder
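For reference, a minimal Java sketch (not part of this patch) of how the renamed DeprecatedTableState message round-trips through the generated API visible in the hunks above. Only newBuilder()/setState()/build(), parseFrom(byte[]) and getState() from the generated class are relied on; toByteArray() is the standard protobuf serialization method (not shown in this excerpt), and the wrapper class and main method are purely illustrative. Reading or writing the actual table-state znode (and any magic prefix HBase adds to znode payloads) is deliberately left out.

import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState;

public class DeprecatedTableStateSketch {
  public static void main(String[] args) throws Exception {
    // Build a state message with the generated Builder, exactly as exposed above.
    DeprecatedTableState disabled = DeprecatedTableState.newBuilder()
        .setState(DeprecatedTableState.State.DISABLED)
        .build();

    // Serialize to raw protobuf bytes.
    byte[] data = disabled.toByteArray();

    // Parse the bytes back and read the enum; the proto declares
    // `required .DeprecatedTableState.State state = 1 [default = ENABLED]`.
    DeprecatedTableState parsed = DeprecatedTableState.parseFrom(data);
    System.out.println("table state = " + parsed.getState());   // prints DISABLED
  }
}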
    @@ -10684,20 +9599,15 @@ public final class ZooKeeperProtos {
         com.google.protobuf.GeneratedMessage.FieldAccessorTable
           internal_static_ClusterUp_fieldAccessorTable;
       private static com.google.protobuf.Descriptors.Descriptor
    -    internal_static_RegionTransition_descriptor;
    -  private static
    -    com.google.protobuf.GeneratedMessage.FieldAccessorTable
    -      internal_static_RegionTransition_fieldAccessorTable;
    -  private static com.google.protobuf.Descriptors.Descriptor
         internal_static_SplitLogTask_descriptor;
       private static
         com.google.protobuf.GeneratedMessage.FieldAccessorTable
           internal_static_SplitLogTask_fieldAccessorTable;
       private static com.google.protobuf.Descriptors.Descriptor
    -    internal_static_Table_descriptor;
    +    internal_static_DeprecatedTableState_descriptor;
       private static
         com.google.protobuf.GeneratedMessage.FieldAccessorTable
    -      internal_static_Table_fieldAccessorTable;
    +      internal_static_DeprecatedTableState_fieldAccessorTable;
       private static com.google.protobuf.Descriptors.Descriptor
         internal_static_ReplicationPeer_descriptor;
       private static
    @@ -10748,38 +9658,35 @@ public final class ZooKeeperProtos {
           "\r\022!\n\005state\030\003 \001(\0162\022.RegionState.State\"M\n\006" +
           "Master\022\033\n\006master\030\001 \002(\0132\013.ServerName\022\023\n\013r" +
           "pc_version\030\002 \001(\r\022\021\n\tinfo_port\030\003 \001(\r\"\037\n\tC" +
    -      "lusterUp\022\022\n\nstart_date\030\001 \002(\t\"\210\001\n\020RegionT" +
    -      "ransition\022\027\n\017event_type_code\030\001 \002(\r\022\023\n\013re" +
    -      "gion_name\030\002 \002(\014\022\023\n\013create_time\030\003 \002(\004\022 \n\013" +
    -      "server_name\030\004 \002(\0132\013.ServerName\022\017\n\007payloa",
    -      "d\030\005 \001(\014\"\214\002\n\014SplitLogTask\022\"\n\005state\030\001 \002(\0162" +
    -      "\023.SplitLogTask.State\022 \n\013server_name\030\002 \002(" +
    -      "\0132\013.ServerName\0221\n\004mode\030\003 \001(\0162\032.SplitLogT" +
    -      "ask.RecoveryMode:\007UNKNOWN\"C\n\005State\022\016\n\nUN" +
    -      "ASSIGNED\020\000\022\t\n\005OWNED\020\001\022\014\n\010RESIGNED\020\002\022\010\n\004D" +
    -      "ONE\020\003\022\007\n\003ERR\020\004\">\n\014RecoveryMode\022\013\n\007UNKNOW" +
    -      "N\020\000\022\021\n\rLOG_SPLITTING\020\001\022\016\n\nLOG_REPLAY\020\002\"n" +
    -      "\n\005Table\022$\n\005state\030\001 \002(\0162\014.Table.State:\007EN" +
    -      "ABLED\"?\n\005State\022\013\n\007ENABLED\020\000\022\014\n\010DISABLED\020" +
    -      "\001\022\r\n\tDISABLING\020\002\022\014\n\010ENABLING\020\003\"\215\001\n\017Repli",
    -      "cationPeer\022\022\n\nclusterkey\030\001 \002(\t\022\037\n\027replic" +
    -      "ationEndpointImpl\030\002 \001(\t\022\035\n\004data\030\003 \003(\0132\017." +
    -      "BytesBytesPair\022&\n\rconfiguration\030\004 \003(\0132\017." +
    -      "NameStringPair\"^\n\020ReplicationState\022&\n\005st" +
    -      "ate\030\001 \002(\0162\027.ReplicationState.State\"\"\n\005St" +
    -      "ate\022\013\n\007ENABLED\020\000\022\014\n\010DISABLED\020\001\"+\n\027Replic" +
    -      "ationHLogPosition\022\020\n\010position\030\001 \002(\003\"%\n\017R" +
    -      "eplicationLock\022\022\n\nlock_owner\030\001 \002(\t\"\230\001\n\tT" +
    -      "ableLock\022\036\n\ntable_name\030\001 \001(\0132\n.TableName" +
    -      "\022\037\n\nlock_owner\030\002 \001(\0132\013.ServerName\022\021\n\tthr",
    -      "ead_id\030\003 \001(\003\022\021\n\tis_shared\030\004 \001(\010\022\017\n\007purpo" +
    -      "se\030\005 \001(\t\022\023\n\013create_time\030\006 \001(\003\";\n\017StoreSe" +
    -      "quenceId\022\023\n\013family_name\030\001 \002(\014\022\023\n\013sequenc" +
    -      "e_id\030\002 \002(\004\"g\n\026RegionStoreSequenceIds\022 \n\030" +
    -      "last_flushed_sequence_id\030\001 \002(\004\022+\n\021store_" +
    -      "sequence_id\030\002 \003(\0132\020.StoreSequenceIdBE\n*o" +
    -      "rg.apache.hadoop.hbase.protobuf.generate" +
    -      "dB\017ZooKeeperProtosH\001\210\001\001\240\001\001"
    +      "lusterUp\022\022\n\nstart_date\030\001 \002(\t\"\214\002\n\014SplitLo" +
    +      "gTask\022\"\n\005state\030\001 \002(\0162\023.SplitLogTask.Stat" +
    +      "e\022 \n\013server_name\030\002 \002(\0132\013.ServerName\0221\n\004m" +
    +      "ode\030\003 \001(\0162\032.SplitLogTask.RecoveryMode:\007U",
    +      "NKNOWN\"C\n\005State\022\016\n\nUNASSIGNED\020\000\022\t\n\005OWNED" +
    +      "\020\001\022\014\n\010RESIGNED\020\002\022\010\n\004DONE\020\003\022\007\n\003ERR\020\004\">\n\014R" +
    +      "ecoveryMode\022\013\n\007UNKNOWN\020\000\022\021\n\rLOG_SPLITTIN" +
    +      "G\020\001\022\016\n\nLOG_REPLAY\020\002\"\214\001\n\024DeprecatedTableS" +
    +      "tate\0223\n\005state\030\001 \002(\0162\033.DeprecatedTableSta" +
    +      "te.State:\007ENABLED\"?\n\005State\022\013\n\007ENABLED\020\000\022" +
    +      "\014\n\010DISABLED\020\001\022\r\n\tDISABLING\020\002\022\014\n\010ENABLING" +
    +      "\020\003\"\215\001\n\017ReplicationPeer\022\022\n\nclusterkey\030\001 \002" +
    +      "(\t\022\037\n\027replicationEndpointImpl\030\002 \001(\t\022\035\n\004d" +
    +      "ata\030\003 \003(\0132\017.BytesBytesPair\022&\n\rconfigurat",
    +      "ion\030\004 \003(\0132\017.NameStringPair\"^\n\020Replicatio" +
    +      "nState\022&\n\005state\030\001 \002(\0162\027.ReplicationState" +
    +      ".State\"\"\n\005State\022\013\n\007ENABLED\020\000\022\014\n\010DISABLED" +
    +      "\020\001\"+\n\027ReplicationHLogPosition\022\020\n\010positio" +
    +      "n\030\001 \002(\003\"%\n\017ReplicationLock\022\022\n\nlock_owner" +
    +      "\030\001 \002(\t\"\230\001\n\tTableLock\022\036\n\ntable_name\030\001 \001(\013" +
    +      "2\n.TableName\022\037\n\nlock_owner\030\002 \001(\0132\013.Serve" +
    +      "rName\022\021\n\tthread_id\030\003 \001(\003\022\021\n\tis_shared\030\004 " +
    +      "\001(\010\022\017\n\007purpose\030\005 \001(\t\022\023\n\013create_time\030\006 \001(" +
    +      "\003\";\n\017StoreSequenceId\022\023\n\013family_name\030\001 \002(",
    +      "\014\022\023\n\013sequence_id\030\002 \002(\004\"g\n\026RegionStoreSeq" +
    +      "uenceIds\022 \n\030last_flushed_sequence_id\030\001 \002" +
    +      "(\004\022+\n\021store_sequence_id\030\002 \003(\0132\020.StoreSeq" +
    +      "uenceIdBE\n*org.apache.hadoop.hbase.proto" +
    +      "buf.generatedB\017ZooKeeperProtosH\001\210\001\001\240\001\001"
         };
         com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
           new com.google.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
    @@ -10804,62 +9711,56 @@ public final class ZooKeeperProtos {
                 com.google.protobuf.GeneratedMessage.FieldAccessorTable(
                   internal_static_ClusterUp_descriptor,
                   new java.lang.String[] { "StartDate", });
    -          internal_static_RegionTransition_descriptor =
    -            getDescriptor().getMessageTypes().get(3);
    -          internal_static_RegionTransition_fieldAccessorTable = new
    -            com.google.protobuf.GeneratedMessage.FieldAccessorTable(
    -              internal_static_RegionTransition_descriptor,
    -              new java.lang.String[] { "EventTypeCode", "RegionName", "CreateTime", "ServerName", "Payload", });
               internal_static_SplitLogTask_descriptor =
    -            getDescriptor().getMessageTypes().get(4);
    +            getDescriptor().getMessageTypes().get(3);
               internal_static_SplitLogTask_fieldAccessorTable = new
                 com.google.protobuf.GeneratedMessage.FieldAccessorTable(
                   internal_static_SplitLogTask_descriptor,
                   new java.lang.String[] { "State", "ServerName", "Mode", });
    -          internal_static_Table_descriptor =
    -            getDescriptor().getMessageTypes().get(5);
    -          internal_static_Table_fieldAccessorTable = new
    +          internal_static_DeprecatedTableState_descriptor =
    +            getDescriptor().getMessageTypes().get(4);
    +          internal_static_DeprecatedTableState_fieldAccessorTable = new
                 com.google.protobuf.GeneratedMessage.FieldAccessorTable(
    -              internal_static_Table_descriptor,
    +              internal_static_DeprecatedTableState_descriptor,
                   new java.lang.String[] { "State", });
               internal_static_ReplicationPeer_descriptor =
    -            getDescriptor().getMessageTypes().get(6);
    +            getDescriptor().getMessageTypes().get(5);
               internal_static_ReplicationPeer_fieldAccessorTable = new
                 com.google.protobuf.GeneratedMessage.FieldAccessorTable(
                   internal_static_ReplicationPeer_descriptor,
                   new java.lang.String[] { "Clusterkey", "ReplicationEndpointImpl", "Data", "Configuration", });
               internal_static_ReplicationState_descriptor =
    -            getDescriptor().getMessageTypes().get(7);
    +            getDescriptor().getMessageTypes().get(6);
               internal_static_ReplicationState_fieldAccessorTable = new
                 com.google.protobuf.GeneratedMessage.FieldAccessorTable(
                   internal_static_ReplicationState_descriptor,
                   new java.lang.String[] { "State", });
               internal_static_ReplicationHLogPosition_descriptor =
    -            getDescriptor().getMessageTypes().get(8);
    +            getDescriptor().getMessageTypes().get(7);
               internal_static_ReplicationHLogPosition_fieldAccessorTable = new
                 com.google.protobuf.GeneratedMessage.FieldAccessorTable(
                   internal_static_ReplicationHLogPosition_descriptor,
                   new java.lang.String[] { "Position", });
               internal_static_ReplicationLock_descriptor =
    -            getDescriptor().getMessageTypes().get(9);
    +            getDescriptor().getMessageTypes().get(8);
               internal_static_ReplicationLock_fieldAccessorTable = new
                 com.google.protobuf.GeneratedMessage.FieldAccessorTable(
                   internal_static_ReplicationLock_descriptor,
                   new java.lang.String[] { "LockOwner", });
               internal_static_TableLock_descriptor =
    -            getDescriptor().getMessageTypes().get(10);
    +            getDescriptor().getMessageTypes().get(9);
               internal_static_TableLock_fieldAccessorTable = new
                 com.google.protobuf.GeneratedMessage.FieldAccessorTable(
                   internal_static_TableLock_descriptor,
                   new java.lang.String[] { "TableName", "LockOwner", "ThreadId", "IsShared", "Purpose", "CreateTime", });
               internal_static_StoreSequenceId_descriptor =
    -            getDescriptor().getMessageTypes().get(11);
    +            getDescriptor().getMessageTypes().get(10);
               internal_static_StoreSequenceId_fieldAccessorTable = new
                 com.google.protobuf.GeneratedMessage.FieldAccessorTable(
                   internal_static_StoreSequenceId_descriptor,
                   new java.lang.String[] { "FamilyName", "SequenceId", });
               internal_static_RegionStoreSequenceIds_descriptor =
    -            getDescriptor().getMessageTypes().get(12);
    +            getDescriptor().getMessageTypes().get(11);
               internal_static_RegionStoreSequenceIds_fieldAccessorTable = new
                 com.google.protobuf.GeneratedMessage.FieldAccessorTable(
                   internal_static_RegionStoreSequenceIds_descriptor,
    diff --git hbase-protocol/src/main/protobuf/Client.proto hbase-protocol/src/main/protobuf/Client.proto
    index ede1c26..1a3c43e 100644
    --- hbase-protocol/src/main/protobuf/Client.proto
    +++ hbase-protocol/src/main/protobuf/Client.proto
    @@ -353,6 +353,14 @@ message RegionAction {
       repeated Action action = 3;
     }
     
    +/*
    +* Statistics about the current load on the region
    +*/
    +message RegionLoadStats{
    +  // percent load on the memstore. Guaranteed to be positive, between 0 and 100
    +  optional int32 memstoreLoad = 1 [default = 0];
    +}
    +
     /**
      * Either a Result or an Exception NameBytesPair (keyed by
      * exception name whose value is the exception stringified)
    @@ -366,6 +374,8 @@ message ResultOrException {
       optional NameBytesPair exception = 3;
       // result if this was a coprocessor service call
       optional CoprocessorServiceResult service_result = 4;
    +  // current load on the region
    +  optional RegionLoadStats loadStats = 5;
     }
     
     /**
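As a rough illustration of how the new loadStats field might be consumed, here is a minimal sketch (not part of the patch) that reads the per-region memstore load off a ResultOrException; it assumes Client.proto is generated into the usual ClientProtos outer class, which is not shown in this hunk.

// Minimal sketch; ClientProtos as the generated outer class is an assumption here.
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ResultOrException;

public class RegionLoadStatsSketch {
  /** Prints the memstore pressure reported alongside a multi-op result, if present. */
  static void reportLoad(ResultOrException roe) {
    if (roe.hasLoadStats()) {                      // loadStats is an optional field
      RegionLoadStats stats = roe.getLoadStats();
      System.out.println("memstore load: " + stats.getMemstoreLoad() + "%");
    }
  }
}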
    diff --git hbase-protocol/src/main/protobuf/HBase.proto hbase-protocol/src/main/protobuf/HBase.proto
    index 24941ff..c3c8c6a 100644
    --- hbase-protocol/src/main/protobuf/HBase.proto
    +++ hbase-protocol/src/main/protobuf/HBase.proto
    @@ -44,6 +44,27 @@ message TableSchema {
       repeated NameStringPair configuration = 4;
     }
     
    +/** Denotes state of the table */
    +message TableState {
    +  // Table's current state
    +  enum State {
    +    ENABLED = 0;
    +    DISABLED = 1;
    +    DISABLING = 2;
    +    ENABLING = 3;
    +  }
    +  // This is the table's state.
    +  required State state = 1;
    +  required TableName table = 2;
    +  optional uint64 timestamp = 3;
    +}
    +
     +/** Representation of table state as stored on HDFS. */
    +message TableDescriptor {
    +  required TableSchema schema = 1;
    +  optional TableState.State state = 2 [ default = ENABLED ];
    +}
    +
     /**
      * Column Family Schema
      * Inspired by the rest ColumSchemaMessage
    @@ -164,6 +185,7 @@ message SnapshotDescription {
       }
       optional Type type = 4 [default = FLUSH];
       optional int32 version = 5;
    +  optional string owner = 6;
     }
     
     /**
    @@ -179,6 +201,16 @@ message ProcedureDescription {
     message EmptyMsg {
     }
     
    +enum TimeUnit {
    +  NANOSECONDS = 1;
    +  MICROSECONDS = 2;
    +  MILLISECONDS = 3;
    +  SECONDS = 4;
    +  MINUTES = 5;
    +  HOURS = 6;
    +  DAYS = 7;
    +}
    +
     message LongMsg {
       required int64 long_msg = 1;
     }
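To make the new HDFS-side table state concrete, the following sketch (not part of the patch) builds a TableDescriptor marked DISABLED; HBaseProtos as the generated outer classname for HBase.proto is an assumption, since the file's options are not shown in this hunk.

// Sketch only; HBaseProtos as the generated outer class is an assumption here.
import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableDescriptor;
import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableSchema;
import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState;

public class TableDescriptorSketch {
  /** Wraps an existing schema with an explicit DISABLED state (the default is ENABLED). */
  static TableDescriptor disabled(TableSchema schema) {
    return TableDescriptor.newBuilder()
        .setSchema(schema)                      // required field
        .setState(TableState.State.DISABLED)    // overrides the ENABLED default
        .build();
  }
}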
    diff --git hbase-protocol/src/main/protobuf/Master.proto hbase-protocol/src/main/protobuf/Master.proto
    index 34f68e9..e55dcc0 100644
    --- hbase-protocol/src/main/protobuf/Master.proto
    +++ hbase-protocol/src/main/protobuf/Master.proto
    @@ -28,6 +28,7 @@ option optimize_for = SPEED;
     import "HBase.proto";
     import "Client.proto";
     import "ClusterStatus.proto";
    +import "Quota.proto";
     
     /* Column-level protobufs */
     
    @@ -332,6 +333,14 @@ message GetTableNamesResponse {
       repeated TableName table_names = 1;
     }
     
    +message GetTableStateRequest {
    +  required TableName table_name = 1;
    +}
    +
    +message GetTableStateResponse {
    +  required TableState table_state = 1;
    +}
    +
     message GetClusterStatusRequest {
     }
     
    @@ -364,6 +373,20 @@ message IsProcedureDoneResponse {
             optional ProcedureDescription snapshot = 2;
     }
     
    +message SetQuotaRequest {
    +  optional string user_name = 1;
    +  optional string user_group = 2;
    +  optional string namespace = 3;
    +  optional TableName table_name = 4;
    +
    +  optional bool remove_all = 5;
    +  optional bool bypass_globals = 6;
    +  optional ThrottleRequest throttle = 7;
    +}
    +
    +message SetQuotaResponse {
    +}
    +
     service MasterService {
       /** Used by the client to get the number of regions that have received the updated schema */
       rpc GetSchemaAlterStatus(GetSchemaAlterStatusRequest)
    @@ -571,4 +594,11 @@ service MasterService {
       /** returns a list of tables for a given namespace*/
       rpc ListTableNamesByNamespace(ListTableNamesByNamespaceRequest)
         returns(ListTableNamesByNamespaceResponse);
    +
    +  /** returns table state */
    +  rpc GetTableState(GetTableStateRequest)
    +    returns(GetTableStateResponse);
    +
    +  /** Apply the new quota settings */
    +  rpc SetQuota(SetQuotaRequest) returns(SetQuotaResponse);
     }
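A small client-side sketch of the two new MasterService calls (not part of the patch): one request wipes all quota settings for a user, the other asks for a table's state. MasterProtos as the generated outer class for Master.proto, and HBaseProtos for HBase.proto, are assumptions here.

// Sketch only; MasterProtos/HBaseProtos outer classnames are assumptions.
import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableName;
import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableStateRequest;
import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest;

public class MasterRpcSketch {
  /** Removes every quota setting previously stored for the given user. */
  static SetQuotaRequest removeUserQuotas(String user) {
    return SetQuotaRequest.newBuilder()
        .setUserName(user)     // a request can also target a group, namespace or table
        .setRemoveAll(true)    // drop all existing settings instead of applying a throttle
        .build();
  }

  /** Asks the master for the current state of one table. */
  static GetTableStateRequest stateOf(TableName table) {
    return GetTableStateRequest.newBuilder().setTableName(table).build();
  }
}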
    diff --git hbase-protocol/src/main/protobuf/Quota.proto hbase-protocol/src/main/protobuf/Quota.proto
    new file mode 100644
    index 0000000..6ef15fe
    --- /dev/null
    +++ hbase-protocol/src/main/protobuf/Quota.proto
    @@ -0,0 +1,73 @@
    + /**
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +option java_package = "org.apache.hadoop.hbase.protobuf.generated";
    +option java_outer_classname = "QuotaProtos";
    +option java_generic_services = true;
    +option java_generate_equals_and_hash = true;
    +option optimize_for = SPEED;
    +
    +import "HBase.proto";
    +
    +enum QuotaScope {
    +  CLUSTER = 1;
    +  MACHINE = 2;
    +}
    +
    +message TimedQuota {
    +  required TimeUnit time_unit = 1;
    +  optional uint64 soft_limit  = 2;
    +  optional float share = 3;
    +  optional QuotaScope scope  = 4 [default = MACHINE];
    +}
    +
    +enum ThrottleType {
    +  REQUEST_NUMBER = 1;
    +  REQUEST_SIZE   = 2;
    +  WRITE_NUMBER   = 3;
    +  WRITE_SIZE     = 4;
    +  READ_NUMBER    = 5;
    +  READ_SIZE      = 6;
    +}
    +
    +message Throttle {
    +  optional TimedQuota req_num  = 1;
    +  optional TimedQuota req_size = 2;
    +
    +  optional TimedQuota write_num  = 3;
    +  optional TimedQuota write_size = 4;
    +
    +  optional TimedQuota read_num  = 5;
    +  optional TimedQuota read_size = 6;
    +}
    +
    +message ThrottleRequest {
    +  optional ThrottleType type = 1;
    +  optional TimedQuota timed_quota = 2;
    +}
    +
    +enum QuotaType {
    +  THROTTLE = 1;
    +}
    +
    +message Quotas {
    +  optional bool bypass_globals = 1 [default = false];
    +  optional Throttle throttle = 2;
    +}
    +
    +message QuotaUsage {
    +}
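The throttle messages compose as in the following sketch (not part of the patch), which caps each machine at 1000 read requests per second. QuotaProtos is the outer classname declared above; the HBaseProtos home of TimeUnit is an assumption.

// Sketch of composing the quota messages added in this file.
import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TimeUnit;   // assumed outer class
import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.QuotaScope;
import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest;
import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleType;
import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota;

public class ThrottleSketch {
  /** Limits read requests to 1000 per second on each machine. */
  static ThrottleRequest readLimitPerMachine() {
    TimedQuota perSecond = TimedQuota.newBuilder()
        .setTimeUnit(TimeUnit.SECONDS)
        .setSoftLimit(1000L)
        .setScope(QuotaScope.MACHINE)   // MACHINE is also the declared default
        .build();
    return ThrottleRequest.newBuilder()
        .setType(ThrottleType.READ_NUMBER)
        .setTimedQuota(perSecond)
        .build();
  }
}

A request built this way is what the throttle field of SetQuotaRequest in Master.proto expects.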
    diff --git hbase-protocol/src/main/protobuf/ZooKeeper.proto hbase-protocol/src/main/protobuf/ZooKeeper.proto
    index bd1dc30..bac881b 100644
    --- hbase-protocol/src/main/protobuf/ZooKeeper.proto
    +++ hbase-protocol/src/main/protobuf/ZooKeeper.proto
    @@ -65,21 +65,6 @@ message ClusterUp {
     }
     
     /**
    - * What we write under unassigned up in zookeeper as a region moves through
    - * open/close, etc., regions.  Details a region in transition.
    - */
    -message RegionTransition {
    -  // Code for EventType gotten by doing o.a.h.h.EventHandler.EventType.getCode()
    -  required uint32 event_type_code = 1;
    -  // Full regionname in bytes
    -  required bytes region_name = 2;
    -  required uint64 create_time = 3;
    -  // The region server where the transition will happen or is happening
    -  required ServerName server_name = 4;
    -  optional bytes payload = 5;
    -}
    -
    -/**
      * WAL SplitLog directory znodes have this for content.  Used doing distributed
      * WAL splitting.  Holds current state and name of server that originated split.
      */
    @@ -103,8 +88,9 @@ message SplitLogTask {
     
     /**
      * The znode that holds state of table.
     + * Deprecated: table state is stored in the table descriptor on HDFS.
      */
    -message Table {
    +message DeprecatedTableState {
       // Table's current state
       enum State {
         ENABLED = 0;
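Since the renamed DeprecatedTableState keeps the same State values as the new TableState in HBase.proto, a migration shim can map between them; the sketch below (not part of the patch) assumes the generated ZooKeeperProtos and HBaseProtos classes and relies on both enums declaring the same constant names.

// Sketch only; converts a legacy ZK-stored state into the new descriptor-stored state.
import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.TableState;
import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.DeprecatedTableState;

public class TableStateBridgeSketch {
  static TableState.State fromZk(DeprecatedTableState.State zkState) {
    return TableState.State.valueOf(zkState.name());
  }
}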
    diff --git hbase-rest/pom.xml hbase-rest/pom.xml
    index 913bec5..9b3fa63 100644
    --- hbase-rest/pom.xml
    +++ hbase-rest/pom.xml
     @@ -25,7 +25,7 @@
        <parent>
          <artifactId>hbase</artifactId>
          <groupId>org.apache.hbase</groupId>
     -    <version>1.0.0-SNAPSHOT</version>
     +    <version>2.0.0-SNAPSHOT</version>
          <relativePath>..</relativePath>
        </parent>
        <artifactId>hbase-rest</artifactId>
    diff --git hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RegionsResource.java hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RegionsResource.java
    index 1ecb7c6..001c6b5 100644
    --- hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RegionsResource.java
    +++ hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RegionsResource.java
    @@ -38,6 +38,8 @@ import org.apache.hadoop.hbase.TableName;
     import org.apache.hadoop.hbase.HRegionInfo;
     import org.apache.hadoop.hbase.ServerName;
     import org.apache.hadoop.hbase.TableNotFoundException;
    +import org.apache.hadoop.hbase.client.Connection;
    +import org.apache.hadoop.hbase.client.ConnectionFactory;
     import org.apache.hadoop.hbase.client.MetaScanner;
     import org.apache.hadoop.hbase.rest.model.TableInfoModel;
     import org.apache.hadoop.hbase.rest.model.TableRegionModel;
    @@ -76,8 +78,10 @@ public class RegionsResource extends ResourceBase {
         try {
           TableName tableName = TableName.valueOf(tableResource.getName());
           TableInfoModel model = new TableInfoModel(tableName.getNameAsString());
     -      Map<HRegionInfo, ServerName> regions = MetaScanner.allTableRegions(
     -        servlet.getConfiguration(), null, tableName);
    +
    +      Connection connection = ConnectionFactory.createConnection(servlet.getConfiguration());
     +      Map<HRegionInfo, ServerName> regions = MetaScanner.allTableRegions(connection, tableName);
    +      connection.close();
            for (Map.Entry<HRegionInfo, ServerName> e: regions.entrySet()) {
             HRegionInfo hri = e.getKey();
             ServerName addr = e.getValue();
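The patch closes the connection explicitly after the meta scan; a sketch of the same lookup with try-with-resources (not part of the patch) releases the connection even if the scan throws. The method signature and return type here are inferred from the surrounding patch context.

// Sketch only; Connection is Closeable, so try-with-resources handles cleanup.
import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.MetaScanner;

public class RegionLookupSketch {
  static Map<HRegionInfo, ServerName> regionsOf(Configuration conf, TableName table)
      throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      return MetaScanner.allTableRegions(connection, table);
    }
  }
}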
    diff --git hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/Cluster.java hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/Cluster.java
    index a2de329..2ad0541 100644
    --- hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/Cluster.java
    +++ hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/client/Cluster.java
    @@ -100,4 +100,11 @@ public class Cluster {
         sb.append(port);
         return remove(sb.toString());
       }
    +
    +  @Override public String toString() {
    +    return "Cluster{" +
    +        "nodes=" + nodes +
    +        ", lastHost='" + lastHost + '\'' +
    +        '}';
    +  }
     }
    diff --git hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/model/StorageClusterVersionModel.java hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/model/StorageClusterVersionModel.java
    index 54fc8de..e332d49 100644
    --- hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/model/StorageClusterVersionModel.java
    +++ hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/model/StorageClusterVersionModel.java
    @@ -68,11 +68,11 @@ public class StorageClusterVersionModel implements Serializable {
         return version;
       }
     
    -    //needed for jackson deserialization
    -    private static StorageClusterVersionModel valueOf(String value) {
    -      StorageClusterVersionModel versionModel
    -          = new StorageClusterVersionModel();
    -      versionModel.setVersion(value);
    -      return versionModel;
    -    }
    +  //needed for jackson deserialization
    +  private static StorageClusterVersionModel valueOf(String value) {
    +    StorageClusterVersionModel versionModel
    +        = new StorageClusterVersionModel();
    +    versionModel.setVersion(value);
    +    return versionModel;
    +  }
     }
    diff --git hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/model/TableSchemaModel.java hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/model/TableSchemaModel.java
    index d843e79..9e9fe47 100644
    --- hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/model/TableSchemaModel.java
    +++ hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/model/TableSchemaModel.java
    @@ -38,7 +38,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor;
     import org.apache.hadoop.hbase.HConstants;
     import org.apache.hadoop.hbase.HTableDescriptor;
     import org.apache.hadoop.hbase.TableName;
    -import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
     import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
     import org.apache.hadoop.hbase.rest.protobuf.generated.ColumnSchemaMessage.ColumnSchema;
     import org.apache.hadoop.hbase.rest.protobuf.generated.TableSchemaMessage.TableSchema;
    @@ -88,7 +87,7 @@ public class TableSchemaModel implements Serializable, ProtobufMessageHandler {
        */
       public TableSchemaModel(HTableDescriptor htd) {
         setName(htd.getTableName().getNameAsString());
    -    for (Map.Entry e:
    +    for (Map.Entry e:
             htd.getValues().entrySet()) {
           addAttribute(Bytes.toString(e.getKey().get()), 
             Bytes.toString(e.getValue().get()));
    @@ -96,9 +95,9 @@ public class TableSchemaModel implements Serializable, ProtobufMessageHandler {
         for (HColumnDescriptor hcd: htd.getFamilies()) {
           ColumnSchemaModel columnModel = new ColumnSchemaModel();
           columnModel.setName(hcd.getNameAsString());
    -      for (Map.Entry e:
    +      for (Map.Entry e:
               hcd.getValues().entrySet()) {
    -        columnModel.addAttribute(Bytes.toString(e.getKey().get()), 
    +        columnModel.addAttribute(Bytes.toString(e.getKey().get()),
                 Bytes.toString(e.getValue().get()));
           }
           addColumnFamily(columnModel);
    diff --git hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/PlainTextMessageBodyProducer.java hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/PlainTextMessageBodyProducer.java
    index d1817db..fca4544 100644
    --- hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/PlainTextMessageBodyProducer.java
    +++ hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/PlainTextMessageBodyProducer.java
    @@ -70,5 +70,5 @@ public class PlainTextMessageBodyProducer
         byte[] bytes = buffer.get();
         outStream.write(bytes);
         buffer.remove();
    -  }	
    +  }
     }
    diff --git hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/ProtobufMessageBodyProducer.java hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/ProtobufMessageBodyProducer.java
    index 0c2430f..12171a4 100644
    --- hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/ProtobufMessageBodyProducer.java
    +++ hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/ProtobufMessageBodyProducer.java
    @@ -52,8 +52,8 @@ public class ProtobufMessageBodyProducer
     
       @Override
        public boolean isWriteable(Class<?> type, Type genericType, 
    -    Annotation[] annotations, MediaType mediaType) {
    -      return ProtobufMessageHandler.class.isAssignableFrom(type);
    +      Annotation[] annotations, MediaType mediaType) {
    +    return ProtobufMessageHandler.class.isAssignableFrom(type);
       }
     
       @Override
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/PerformanceEvaluation.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/PerformanceEvaluation.java
    index aaf7d59..b02f069 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/PerformanceEvaluation.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/PerformanceEvaluation.java
    @@ -18,6 +18,22 @@
      */
     package org.apache.hadoop.hbase.rest;
     
    +import java.io.DataInput;
    +import java.io.DataOutput;
    +import java.io.IOException;
    +import java.io.PrintStream;
    +import java.lang.reflect.Constructor;
    +import java.text.SimpleDateFormat;
    +import java.util.ArrayList;
    +import java.util.Arrays;
    +import java.util.Date;
    +import java.util.List;
    +import java.util.Map;
    +import java.util.Random;
    +import java.util.TreeMap;
    +import java.util.regex.Matcher;
    +import java.util.regex.Pattern;
    +
     import org.apache.commons.logging.Log;
     import org.apache.commons.logging.LogFactory;
     import org.apache.hadoop.conf.Configuration;
    @@ -37,11 +53,11 @@ import org.apache.hadoop.hbase.client.Durability;
     import org.apache.hadoop.hbase.client.Get;
     import org.apache.hadoop.hbase.client.HConnection;
     import org.apache.hadoop.hbase.client.HConnectionManager;
    +import org.apache.hadoop.hbase.client.HTableInterface;
     import org.apache.hadoop.hbase.client.Put;
     import org.apache.hadoop.hbase.client.Result;
     import org.apache.hadoop.hbase.client.ResultScanner;
     import org.apache.hadoop.hbase.client.Scan;
    -import org.apache.hadoop.hbase.client.Table;
     import org.apache.hadoop.hbase.filter.BinaryComparator;
     import org.apache.hadoop.hbase.filter.CompareFilter;
     import org.apache.hadoop.hbase.filter.Filter;
    @@ -75,22 +91,6 @@ import org.apache.hadoop.util.LineReader;
     import org.apache.hadoop.util.Tool;
     import org.apache.hadoop.util.ToolRunner;
     
    -import java.io.DataInput;
    -import java.io.DataOutput;
    -import java.io.IOException;
    -import java.io.PrintStream;
    -import java.lang.reflect.Constructor;
    -import java.text.SimpleDateFormat;
    -import java.util.ArrayList;
    -import java.util.Arrays;
    -import java.util.Date;
    -import java.util.List;
    -import java.util.Map;
    -import java.util.Random;
    -import java.util.TreeMap;
    -import java.util.regex.Matcher;
    -import java.util.regex.Pattern;
    -
     /**
      * Script used evaluating Stargate performance and scalability.  Runs a SG
      * client that steps through one of a set of hardcoded tests or 'experiments'
    @@ -870,7 +870,7 @@ public class PerformanceEvaluation extends Configured implements Tool {
         protected final int totalRows;
         private final Status status;
         protected TableName tableName;
    -    protected Table table;
    +    protected HTableInterface table;
         protected volatile Configuration conf;
         protected boolean flushCommits;
         protected boolean writeToWAL;
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestDeleteRow.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestDeleteRow.java
    index 5af3831..516ce9e 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestDeleteRow.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestDeleteRow.java
    @@ -20,16 +20,14 @@ package org.apache.hadoop.hbase.rest;
     import static org.junit.Assert.assertEquals;
     
     import java.io.IOException;
    -
     import javax.xml.bind.JAXBException;
    -
    -import org.apache.hadoop.hbase.testclassification.MediumTests;
     import org.apache.hadoop.hbase.rest.client.Response;
    +import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.junit.Test;
     import org.junit.experimental.categories.Category;
     
    -
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestDeleteRow extends RowResourceBase {
     
       @Test
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGZIPResponseWrapper.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGZIPResponseWrapper.java
    index 4b7eb44..18beb24 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGZIPResponseWrapper.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGZIPResponseWrapper.java
    @@ -30,13 +30,14 @@ import java.io.IOException;
     import javax.servlet.ServletOutputStream;
     import javax.servlet.http.HttpServletResponse;
     
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.apache.hadoop.hbase.rest.filter.GZIPResponseStream;
     import org.apache.hadoop.hbase.rest.filter.GZIPResponseWrapper;
     import org.junit.Test;
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
     public class TestGZIPResponseWrapper {
     
       private final HttpServletResponse response = mock(HttpServletResponse.class);
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGetAndPutResource.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGetAndPutResource.java
    index 959cb50..a5326af 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGetAndPutResource.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGetAndPutResource.java
    @@ -15,7 +15,6 @@
      * See the License for the specific language governing permissions and
      * limitations under the License.
      */
    -
     package org.apache.hadoop.hbase.rest;
     
     import static org.junit.Assert.assertEquals;
    @@ -32,18 +31,19 @@ import javax.xml.bind.JAXBException;
     import org.apache.commons.httpclient.Header;
     import org.apache.hadoop.hbase.CompatibilityFactory;
     import org.apache.hadoop.hbase.HConstants;
    -import org.apache.hadoop.hbase.testclassification.MediumTests;
     import org.apache.hadoop.hbase.rest.client.Response;
     import org.apache.hadoop.hbase.rest.model.CellModel;
     import org.apache.hadoop.hbase.rest.model.CellSetModel;
     import org.apache.hadoop.hbase.rest.model.RowModel;
     import org.apache.hadoop.hbase.security.UserProvider;
     import org.apache.hadoop.hbase.test.MetricsAssertHelper;
    +import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.util.Bytes;
     import org.junit.Test;
     import org.junit.experimental.categories.Category;
     
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestGetAndPutResource extends RowResourceBase {
     
       private static final MetricsAssertHelper METRICS_ASSERT =
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGzipFilter.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGzipFilter.java
    index 5eca15a..66483d7 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGzipFilter.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGzipFilter.java
    @@ -16,7 +16,6 @@
      * See the License for the specific language governing permissions and
      * limitations under the License.
      */
    -
     package org.apache.hadoop.hbase.rest;
     
     import static org.junit.Assert.assertEquals;
    @@ -32,7 +31,6 @@ import org.apache.commons.httpclient.Header;
     import org.apache.hadoop.hbase.HBaseTestingUtility;
     import org.apache.hadoop.hbase.HColumnDescriptor;
     import org.apache.hadoop.hbase.HTableDescriptor;
    -import org.apache.hadoop.hbase.testclassification.MediumTests;
     import org.apache.hadoop.hbase.TableName;
     import org.apache.hadoop.hbase.client.Admin;
     import org.apache.hadoop.hbase.client.Get;
    @@ -42,13 +40,15 @@ import org.apache.hadoop.hbase.client.Table;
     import org.apache.hadoop.hbase.rest.client.Client;
     import org.apache.hadoop.hbase.rest.client.Cluster;
     import org.apache.hadoop.hbase.rest.client.Response;
    +import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.util.Bytes;
     import org.junit.AfterClass;
     import org.junit.BeforeClass;
     import org.junit.Test;
     import org.junit.experimental.categories.Category;
     
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestGzipFilter {
       private static final TableName TABLE = TableName.valueOf("TestGzipFilter");
       private static final String CFA = "a";
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestMultiRowResource.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestMultiRowResource.java
    index 412ccdb..c7da65a 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestMultiRowResource.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestMultiRowResource.java
    @@ -16,7 +16,6 @@
      * See the License for the specific language governing permissions and
      * limitations under the License.
      */
    -
     package org.apache.hadoop.hbase.rest;
     
     import org.apache.hadoop.conf.Configuration;
    @@ -30,6 +29,7 @@ import org.apache.hadoop.hbase.rest.model.CellSetModel;
     import org.apache.hadoop.hbase.rest.model.RowModel;
     import org.apache.hadoop.hbase.rest.provider.JacksonProvider;
     import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.util.Bytes;
     import org.codehaus.jackson.map.ObjectMapper;
     import org.junit.AfterClass;
    @@ -46,8 +46,7 @@ import java.io.IOException;
     
     import static org.junit.Assert.assertEquals;
     
    -
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestMultiRowResource {
     
       private static final TableName TABLE = TableName.valueOf("TestRowResource");
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestResourceFilter.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestResourceFilter.java
    index 1832942..11d465f 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestResourceFilter.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestResourceFilter.java
    @@ -20,16 +20,17 @@ package org.apache.hadoop.hbase.rest;
     import static org.junit.Assert.assertEquals;
     
     import org.apache.hadoop.hbase.HBaseTestingUtility;
    -import org.apache.hadoop.hbase.testclassification.MediumTests;
     import org.apache.hadoop.hbase.rest.client.Client;
     import org.apache.hadoop.hbase.rest.client.Cluster;
     import org.apache.hadoop.hbase.rest.client.Response;
    +import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.junit.AfterClass;
     import org.junit.BeforeClass;
     import org.junit.Test;
     import org.junit.experimental.categories.Category;
     
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestResourceFilter {
     
       private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannerResource.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannerResource.java
    index f5c83ab..2387b98 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannerResource.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannerResource.java
    @@ -46,6 +46,7 @@ import org.apache.hadoop.hbase.rest.model.CellSetModel;
     import org.apache.hadoop.hbase.rest.model.RowModel;
     import org.apache.hadoop.hbase.rest.model.ScannerModel;
     import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.util.Bytes;
     
     import static org.junit.Assert.*;
    @@ -55,7 +56,7 @@ import org.junit.BeforeClass;
     import org.junit.Test;
     import org.junit.experimental.categories.Category;
     
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestScannerResource {
       private static final TableName TABLE = TableName.valueOf("TestScannerResource");
       private static final String NONEXISTENT_TABLE = "ThisTableDoesNotExist";
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithFilters.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithFilters.java
    index a7498b8..7f0b1f5 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithFilters.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithFilters.java
    @@ -63,6 +63,7 @@ import org.apache.hadoop.hbase.rest.model.CellSetModel;
     import org.apache.hadoop.hbase.rest.model.RowModel;
     import org.apache.hadoop.hbase.rest.model.ScannerModel;
     import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.util.Bytes;
     
     import static org.junit.Assert.*;
    @@ -72,7 +73,7 @@ import org.junit.BeforeClass;
     import org.junit.Test;
     import org.junit.experimental.categories.Category;
     
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestScannersWithFilters {
     
       private static final Log LOG = LogFactory.getLog(TestScannersWithFilters.class);
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithLabels.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithLabels.java
    index 3620efd..ca48ea8 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithLabels.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithLabels.java
    @@ -14,7 +14,8 @@
      * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      * See the License for the specific language governing permissions and
      * limitations under the License.
    - */package org.apache.hadoop.hbase.rest;
    + */
    +package org.apache.hadoop.hbase.rest;
     
     import static org.junit.Assert.assertEquals;
     import static org.junit.Assert.assertNotNull;
    @@ -36,7 +37,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility;
     import org.apache.hadoop.hbase.HColumnDescriptor;
     import org.apache.hadoop.hbase.HTableDescriptor;
     import org.apache.hadoop.hbase.KeyValue;
    -import org.apache.hadoop.hbase.testclassification.MediumTests;
     import org.apache.hadoop.hbase.TableName;
     import org.apache.hadoop.hbase.client.Admin;
     import org.apache.hadoop.hbase.client.Durability;
    @@ -59,13 +59,15 @@ import org.apache.hadoop.hbase.security.visibility.VisibilityClient;
     import org.apache.hadoop.hbase.security.visibility.VisibilityConstants;
     import org.apache.hadoop.hbase.security.visibility.VisibilityController;
     import org.apache.hadoop.hbase.security.visibility.VisibilityUtils;
    +import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.util.Bytes;
     import org.junit.AfterClass;
     import org.junit.BeforeClass;
     import org.junit.Test;
     import org.junit.experimental.categories.Category;
     
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestScannersWithLabels {
       private static final TableName TABLE = TableName.valueOf("TestScannersWithLabels");
       private static final String CFA = "a";
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java
    index f389164..17bb733 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java
    @@ -16,7 +16,6 @@
      * See the License for the specific language governing permissions and
      * limitations under the License.
      */
    -
     package org.apache.hadoop.hbase.rest;
     
     import java.io.ByteArrayInputStream;
    @@ -28,7 +27,6 @@ import javax.xml.bind.JAXBException;
     
     import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.hbase.HBaseTestingUtility;
    -import org.apache.hadoop.hbase.testclassification.MediumTests;
     import org.apache.hadoop.hbase.TableName;
     import org.apache.hadoop.hbase.client.Admin;
     import org.apache.hadoop.hbase.rest.client.Client;
    @@ -37,6 +35,8 @@ import org.apache.hadoop.hbase.rest.client.Response;
     import org.apache.hadoop.hbase.rest.model.ColumnSchemaModel;
     import org.apache.hadoop.hbase.rest.model.TableSchemaModel;
     import org.apache.hadoop.hbase.rest.model.TestTableSchemaModel;
    +import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.util.Bytes;
     
     import static org.junit.Assert.*;
    @@ -46,7 +46,7 @@ import org.junit.BeforeClass;
     import org.junit.Test;
     import org.junit.experimental.categories.Category;
     
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestSchemaResource {
       private static String TABLE1 = "TestSchemaResource1";
       private static String TABLE2 = "TestSchemaResource2";
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestStatusResource.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestStatusResource.java
    index b332b1d..00c2049 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestStatusResource.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestStatusResource.java
    @@ -16,7 +16,6 @@
      * See the License for the specific language governing permissions and
      * limitations under the License.
      */
    -
     package org.apache.hadoop.hbase.rest;
     
     import java.io.ByteArrayInputStream;
    @@ -25,13 +24,19 @@ import java.io.IOException;
     import javax.xml.bind.JAXBContext;
     import javax.xml.bind.JAXBException;
     
    +import org.apache.commons.logging.Log;
    +import org.apache.commons.logging.LogFactory;
    +import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.hbase.HBaseTestingUtility;
    -import org.apache.hadoop.hbase.testclassification.MediumTests;
     import org.apache.hadoop.hbase.TableName;
    +import org.apache.hadoop.hbase.Waiter;
    +import org.apache.hadoop.hbase.client.HBaseAdmin;
     import org.apache.hadoop.hbase.rest.client.Client;
     import org.apache.hadoop.hbase.rest.client.Cluster;
     import org.apache.hadoop.hbase.rest.client.Response;
     import org.apache.hadoop.hbase.rest.model.StorageClusterStatusModel;
    +import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.util.Bytes;
     
     import static org.junit.Assert.*;
    @@ -41,16 +46,19 @@ import org.junit.BeforeClass;
     import org.junit.Test;
     import org.junit.experimental.categories.Category;
     
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestStatusResource {
    -  private static final byte[] META_REGION_NAME = Bytes.toBytes(TableName.META_TABLE_NAME+",,1");
    +  public static Log LOG = LogFactory.getLog(TestStatusResource.class);
    +
    +  private static final byte[] META_REGION_NAME = Bytes.toBytes(TableName.META_TABLE_NAME + ",,1");
     
       private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
    -  private static final HBaseRESTTestingUtility REST_TEST_UTIL = 
    -    new HBaseRESTTestingUtility();
    +  private static final HBaseRESTTestingUtility REST_TEST_UTIL =
    +      new HBaseRESTTestingUtility();
       private static Client client;
       private static JAXBContext context;
    -  
    +  private static Configuration conf;
    +
       private static void validate(StorageClusterStatusModel model) {
         assertNotNull(model);
         assertTrue(model.getRegions() + ">= 1", model.getRegions() >= 1);
    @@ -75,11 +83,21 @@ public class TestStatusResource {
     
       @BeforeClass
       public static void setUpBeforeClass() throws Exception {
    -    TEST_UTIL.startMiniCluster();
    -    REST_TEST_UTIL.startServletContainer(TEST_UTIL.getConfiguration());
    -    client = new Client(new Cluster().add("localhost", 
    -      REST_TEST_UTIL.getServletPort()));
    +    conf = TEST_UTIL.getConfiguration();
    +    TEST_UTIL.startMiniCluster(1, 1);
    +    TEST_UTIL.createTable(Bytes.toBytes("TestStatusResource"), Bytes.toBytes("D"));
    +    TEST_UTIL.createTable(Bytes.toBytes("TestStatusResource2"), Bytes.toBytes("D"));
    +    REST_TEST_UTIL.startServletContainer(conf);
    +    Cluster cluster = new Cluster();
    +    cluster.add("localhost", REST_TEST_UTIL.getServletPort());
    +    client = new Client(cluster);
         context = JAXBContext.newInstance(StorageClusterStatusModel.class);
     +    TEST_UTIL.waitFor(6000, new Waiter.Predicate<IOException>() {
    +      @Override
    +      public boolean evaluate() throws IOException {
    +        return TEST_UTIL.getMiniHBaseCluster().getClusterStatus().getAverageLoad() > 0;
    +      }
    +    });
       }
     
       @AfterClass
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestTableResource.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestTableResource.java
    index 81612ce..77e89cd 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestTableResource.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestTableResource.java
    @@ -43,6 +43,7 @@ import org.apache.hadoop.hbase.rest.model.TableInfoModel;
     import org.apache.hadoop.hbase.rest.model.TableListModel;
     import org.apache.hadoop.hbase.rest.model.TableRegionModel;
     import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.util.Bytes;
     import org.apache.hadoop.util.StringUtils;
     
    @@ -53,7 +54,7 @@ import org.junit.BeforeClass;
     import org.junit.Test;
     import org.junit.experimental.categories.Category;
     
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestTableResource {
       private static final Log LOG = LogFactory.getLog(TestTableResource.class);
     
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestTableScan.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestTableScan.java
    index 9fc9301..789e9e1 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestTableScan.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestTableScan.java
    @@ -46,7 +46,6 @@ import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.hbase.HBaseTestingUtility;
     import org.apache.hadoop.hbase.HColumnDescriptor;
     import org.apache.hadoop.hbase.HTableDescriptor;
    -import org.apache.hadoop.hbase.testclassification.MediumTests;
     import org.apache.hadoop.hbase.TableName;
     import org.apache.hadoop.hbase.client.Admin;
     import org.apache.hadoop.hbase.filter.Filter;
    @@ -59,6 +58,8 @@ import org.apache.hadoop.hbase.rest.model.CellModel;
     import org.apache.hadoop.hbase.rest.model.CellSetModel;
     import org.apache.hadoop.hbase.rest.model.RowModel;
     import org.apache.hadoop.hbase.rest.provider.JacksonProvider;
    +import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.util.Bytes;
     import org.codehaus.jackson.JsonFactory;
     import org.codehaus.jackson.JsonParser;
    @@ -71,7 +72,7 @@ import org.junit.experimental.categories.Category;
     import org.xml.sax.InputSource;
     import org.xml.sax.XMLReader;
     
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestTableScan {
     
       private static final TableName TABLE = TableName.valueOf("TestScanResource");
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestVersionResource.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestVersionResource.java
    index cbacc40..34973c2 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestVersionResource.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestVersionResource.java
    @@ -16,7 +16,6 @@
      * See the License for the specific language governing permissions and
      * limitations under the License.
      */
    -
     package org.apache.hadoop.hbase.rest;
     
     import java.io.ByteArrayInputStream;
    @@ -28,12 +27,13 @@ import javax.xml.bind.JAXBException;
     import org.apache.commons.logging.Log;
     import org.apache.commons.logging.LogFactory;
     import org.apache.hadoop.hbase.HBaseTestingUtility;
    -import org.apache.hadoop.hbase.testclassification.MediumTests;
     import org.apache.hadoop.hbase.rest.client.Client;
     import org.apache.hadoop.hbase.rest.client.Cluster;
     import org.apache.hadoop.hbase.rest.client.Response;
     import org.apache.hadoop.hbase.rest.model.StorageClusterVersionModel;
     import org.apache.hadoop.hbase.rest.model.VersionModel;
    +import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.util.Bytes;
     
     import static org.junit.Assert.*;
    @@ -45,7 +45,7 @@ import org.junit.Test;
     import com.sun.jersey.spi.container.servlet.ServletContainer;
     import org.junit.experimental.categories.Category;
     
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestVersionResource {
       private static final Log LOG = LogFactory.getLog(TestVersionResource.class);
     
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteAdminRetries.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteAdminRetries.java
    index ac986c5..7c888e0 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteAdminRetries.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteAdminRetries.java
    @@ -32,6 +32,7 @@ import java.util.regex.Pattern;
     import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.hbase.HBaseTestingUtility;
     import org.apache.hadoop.hbase.HTableDescriptor;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.apache.hadoop.hbase.util.Bytes;
     import org.junit.Before;
    @@ -41,7 +42,7 @@ import org.junit.experimental.categories.Category;
     /**
      * Tests {@link RemoteAdmin} retries.
      */
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
     public class TestRemoteAdminRetries {
     
       private static final int SLEEP_TIME = 50;
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteHTableRetries.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteHTableRetries.java
    index adfeafe..5b18a6a 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteHTableRetries.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteHTableRetries.java
    @@ -32,6 +32,7 @@ import java.util.regex.Pattern;
     
     import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.hbase.HBaseTestingUtility;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.apache.hadoop.hbase.client.Delete;
     import org.apache.hadoop.hbase.client.Get;
    @@ -46,7 +47,7 @@ import org.junit.experimental.categories.Category;
     /**
      * Test RemoteHTable retries.
      */
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
     public class TestRemoteHTableRetries {
     
       private static final int SLEEP_TIME = 50;
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
    index baf9961..9516995 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestRemoteTable.java
    @@ -16,7 +16,6 @@
      * See the License for the specific language governing permissions and
      * limitations under the License.
      */
    -
     package org.apache.hadoop.hbase.rest.client;
     
     import static org.junit.Assert.assertEquals;
    @@ -37,7 +36,6 @@ import org.apache.hadoop.hbase.CellUtil;
     import org.apache.hadoop.hbase.HBaseTestingUtility;
     import org.apache.hadoop.hbase.HColumnDescriptor;
     import org.apache.hadoop.hbase.HTableDescriptor;
    -import org.apache.hadoop.hbase.testclassification.MediumTests;
     import org.apache.hadoop.hbase.TableName;
     import org.apache.hadoop.hbase.client.Admin;
     import org.apache.hadoop.hbase.client.Delete;
    @@ -49,6 +47,8 @@ import org.apache.hadoop.hbase.client.ResultScanner;
     import org.apache.hadoop.hbase.client.Scan;
     import org.apache.hadoop.hbase.client.Table;
     import org.apache.hadoop.hbase.rest.HBaseRESTTestingUtility;
    +import org.apache.hadoop.hbase.testclassification.MediumTests;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.util.Bytes;
     import org.junit.After;
     import org.junit.AfterClass;
    @@ -57,7 +57,7 @@ import org.junit.BeforeClass;
     import org.junit.Test;
     import org.junit.experimental.categories.Category;
     
    -@Category(MediumTests.class)
    +@Category({RestTests.class, MediumTests.class})
     public class TestRemoteTable {
       private static final TableName TABLE = TableName.valueOf("TestRemoteTable");
       private static final byte[] ROW_1 = Bytes.toBytes("testrow1");
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellModel.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellModel.java
    index bc273b4..cdc6ee5 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellModel.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellModel.java
    @@ -19,12 +19,13 @@
     
     package org.apache.hadoop.hbase.rest.model;
     
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.apache.hadoop.hbase.util.Bytes;
     
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
      public class TestCellModel extends TestModelBase<CellModel> {
     
       private static final long TIMESTAMP = 1245219839331L;
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellSetModel.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellSetModel.java
    index 08cd0e4..2bef955 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellSetModel.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellSetModel.java
    @@ -21,12 +21,13 @@ package org.apache.hadoop.hbase.rest.model;
     
     import java.util.Iterator;
     
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.apache.hadoop.hbase.util.Bytes;
     
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
      public class TestCellSetModel extends TestModelBase<CellSetModel> {
     
       private static final byte[] ROW1 = Bytes.toBytes("testrow1");
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestColumnSchemaModel.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestColumnSchemaModel.java
    index bf1d204..af5545e 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestColumnSchemaModel.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestColumnSchemaModel.java
    @@ -19,10 +19,11 @@
     
     package org.apache.hadoop.hbase.rest.model;
     
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
      public class TestColumnSchemaModel extends TestModelBase<ColumnSchemaModel> {
     
       protected static final String COLUMN_NAME = "testcolumn";
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestModelBase.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestModelBase.java
    index f10b640..46df357 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestModelBase.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestModelBase.java
    @@ -21,6 +21,7 @@
     package org.apache.hadoop.hbase.rest.model;
     
     import junit.framework.TestCase;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.apache.hadoop.hbase.rest.ProtobufMessageHandler;
     import org.apache.hadoop.hbase.rest.provider.JAXBContextResolver;
    @@ -37,7 +38,7 @@ import java.io.IOException;
     import java.io.StringReader;
     import java.io.StringWriter;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
      public abstract class TestModelBase<T> extends TestCase {
     
       protected String AS_XML;
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestRowModel.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestRowModel.java
    index 98ccb66..b5dcf2f 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestRowModel.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestRowModel.java
    @@ -23,12 +23,13 @@ import java.util.Iterator;
     
     import javax.xml.bind.JAXBContext;
     
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.apache.hadoop.hbase.util.Bytes;
     
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
      public class TestRowModel extends TestModelBase<RowModel> {
     
       private static final byte[] ROW1 = Bytes.toBytes("testrow1");
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestScannerModel.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestScannerModel.java
    index f05d79f..a5ac2ca 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestScannerModel.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestScannerModel.java
    @@ -19,11 +19,12 @@
     
     package org.apache.hadoop.hbase.rest.model;
     
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.apache.hadoop.hbase.util.Bytes;
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
      public class TestScannerModel extends TestModelBase<ScannerModel> {
       private static final String PRIVATE = "private";
       private static final String PUBLIC = "public";
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterStatusModel.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterStatusModel.java
    index 7437096..36850a5 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterStatusModel.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterStatusModel.java
    @@ -21,13 +21,14 @@ package org.apache.hadoop.hbase.rest.model;
     
     import java.util.Iterator;
     
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.apache.hadoop.hbase.TableName;
     import org.apache.hadoop.hbase.util.Bytes;
     
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
     public class TestStorageClusterStatusModel extends TestModelBase<StorageClusterStatusModel> {
     
       public TestStorageClusterStatusModel() throws Exception {
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterVersionModel.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterVersionModel.java
    index fb004bf..602312d 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterVersionModel.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterVersionModel.java
    @@ -19,10 +19,11 @@
     
     package org.apache.hadoop.hbase.rest.model;
     
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
     public class TestStorageClusterVersionModel extends TestModelBase<StorageClusterVersionModel> {
       private static final String VERSION = "0.0.1-testing";
     
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableInfoModel.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableInfoModel.java
    index 88d1c96..a061b31 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableInfoModel.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableInfoModel.java
    @@ -21,12 +21,13 @@ package org.apache.hadoop.hbase.rest.model;
     
     import java.util.Iterator;
     
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.apache.hadoop.hbase.util.Bytes;
     
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
     public class TestTableInfoModel extends TestModelBase<TableInfoModel> {
       private static final String TABLE = "testtable";
       private static final byte[] START_KEY = Bytes.toBytes("abracadbra");
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableListModel.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableListModel.java
    index ea5960d..f20486d 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableListModel.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableListModel.java
    @@ -21,11 +21,12 @@ package org.apache.hadoop.hbase.rest.model;
     
     import java.util.Iterator;
     
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
     public class TestTableListModel extends TestModelBase<TableListModel> {
       private static final String TABLE1 = "table1";
       private static final String TABLE2 = "table2";
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableRegionModel.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableRegionModel.java
    index 5df67b0..d592381 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableRegionModel.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableRegionModel.java
    @@ -20,12 +20,13 @@
     package org.apache.hadoop.hbase.rest.model;
     
     import org.apache.hadoop.hbase.*;
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     import org.apache.hadoop.hbase.util.Bytes;
     
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
     public class TestTableRegionModel extends TestModelBase<TableRegionModel> {
       private static final String TABLE = "testtable";
       private static final byte[] START_KEY = Bytes.toBytes("abracadbra");
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableSchemaModel.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableSchemaModel.java
    index baaaf8c..4b2eb05 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableSchemaModel.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableSchemaModel.java
    @@ -23,11 +23,12 @@ import java.util.Iterator;
     
     import javax.xml.bind.JAXBContext;
     
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
     public class TestTableSchemaModel extends TestModelBase<TableSchemaModel> {
     
       public static final String TABLE_NAME = "testTable";
    diff --git hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestVersionModel.java hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestVersionModel.java
    index 4a9ceaf..e8da529 100644
    --- hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestVersionModel.java
    +++ hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestVersionModel.java
    @@ -16,14 +16,14 @@
      * See the License for the specific language governing permissions and
      * limitations under the License.
      */
    -
     package org.apache.hadoop.hbase.rest.model;
     
    +import org.apache.hadoop.hbase.testclassification.RestTests;
     import org.apache.hadoop.hbase.testclassification.SmallTests;
     
     import org.junit.experimental.categories.Category;
     
    -@Category(SmallTests.class)
    +@Category({RestTests.class, SmallTests.class})
     public class TestVersionModel extends TestModelBase<VersionModel> {
       private static final String REST_VERSION = "0.0.1";
       private static final String OS_VERSION = 
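
    The @Category additions above only take effect once a runner or build profile filters on them.
    As a minimal, illustrative sketch (not part of this patch; the suite class name is hypothetical),
    a plain JUnit 4 suite using the stock Categories runner could select just the RestTests-tagged
    model tests:

        import org.apache.hadoop.hbase.rest.model.TestRowModel;
        import org.apache.hadoop.hbase.rest.model.TestScannerModel;
        import org.apache.hadoop.hbase.testclassification.RestTests;
        import org.junit.experimental.categories.Categories;
        import org.junit.experimental.categories.Categories.IncludeCategory;
        import org.junit.runner.RunWith;
        import org.junit.runners.Suite.SuiteClasses;

        // Hypothetical suite: the JUnit Categories runner keeps only those of the
        // listed test classes that carry the RestTests category annotation.
        @RunWith(Categories.class)
        @IncludeCategory(RestTests.class)
        @SuiteClasses({ TestRowModel.class, TestScannerModel.class })
        public class RestModelTestsSuite {
        }

    In the HBase build itself these categories are normally consumed through surefire's test
    groups rather than an explicit suite class.
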
    diff --git hbase-server/pom.xml hbase-server/pom.xml
    index 32a6a74..f9c887f 100644
    --- hbase-server/pom.xml
    +++ hbase-server/pom.xml
    @@ -23,7 +23,7 @@
       
       <parent>
         <artifactId>hbase</artifactId>
         <groupId>org.apache.hbase</groupId>
    -    <version>1.0.0-SNAPSHOT</version>
    +    <version>2.0.0-SNAPSHOT</version>
         <relativePath>..</relativePath>
       </parent>
       <artifactId>hbase-server</artifactId>
    @@ -53,13 +53,27 @@
           
         
         
    -        
    -          org.apache.maven.plugins
    -          maven-site-plugin
    -          
    -            true
    -          
    -        
    +      
    +        maven-compiler-plugin
    +        
    +          
    +            default-compile
    +            
    +              ${java.default.compiler}
    +              true
    +            
    +          
    +          
    +       
    +      
    +      
    +        org.apache.maven.plugins
    +        maven-site-plugin
    +        
    +          true
    +        
    +      
           
    diff --git hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/AssignmentManagerStatusTmpl.jamon hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/AssignmentManagerStatusTmpl.jamon
    index 08ed672..f6ea464 100644
    --- hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/AssignmentManagerStatusTmpl.jamon
    +++ hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/master/AssignmentManagerStatusTmpl.jamon
    @@ -85,7 +85,9 @@ if (toRemove > 0) {
                 <%else>
                         
    - + diff --git hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RegionListTmpl.jamon hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RegionListTmpl.jamon index 40fe757..6ca8ec6 100644 --- hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RegionListTmpl.jamon +++ hbase-server/src/main/jamon/org/apache/hadoop/hbase/tmpl/regionserver/RegionListTmpl.jamon @@ -100,9 +100,12 @@ <%for HRegionInfo r: onlineRegions %> - - - + + + @@ -126,7 +129,8 @@ <%java> RegionLoad load = regionServer.createRegionLoad(r.getEncodedName()); - + <%if load != null %> @@ -159,7 +163,8 @@ <%java> RegionLoad load = regionServer.createRegionLoad(r.getEncodedName()); - + <%if load != null %> @@ -198,7 +203,8 @@ ((float) load.getCurrentCompactedKVs() / load.getTotalCompactingKVs())) + "%"; } - + <%if load != null %> @@ -225,7 +231,8 @@ <%java> RegionLoad load = regionServer.createRegionLoad(r.getEncodedName()); - + <%if load != null %> diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/CoordinatedStateManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/CoordinatedStateManager.java index b7bfa75..bd0268a 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/CoordinatedStateManager.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/CoordinatedStateManager.java @@ -55,12 +55,4 @@ public interface CoordinatedStateManager { * @return instance of Server coordinated state manager runs within */ Server getServer(); - - /** - * Returns implementation of TableStateManager. - * @throws InterruptedException if operation is interrupted - * @throws CoordinatedStateException if error happens in underlying coordination mechanism - */ - TableStateManager getTableStateManager() throws InterruptedException, - CoordinatedStateException; } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/CoordinatedStateManagerFactory.java hbase-server/src/main/java/org/apache/hadoop/hbase/CoordinatedStateManagerFactory.java index e7e7832..7cc3f6d 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/CoordinatedStateManagerFactory.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/CoordinatedStateManagerFactory.java @@ -27,7 +27,12 @@ import org.apache.hadoop.util.ReflectionUtils; * based on configuration. */ @InterfaceAudience.Private -public class CoordinatedStateManagerFactory { +public final class CoordinatedStateManagerFactory { + + /** + * Private to keep this class from being accidentally instantiated. + */ + private CoordinatedStateManagerFactory(){} /** * Creates consensus provider from the given configuration. 
diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/DaemonThreadFactory.java hbase-server/src/main/java/org/apache/hadoop/hbase/DaemonThreadFactory.java index d621cbf..11da20f 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/DaemonThreadFactory.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/DaemonThreadFactory.java @@ -24,10 +24,10 @@ import java.util.concurrent.atomic.AtomicInteger; * Thread factory that creates daemon threads */ public class DaemonThreadFactory implements ThreadFactory { - static final AtomicInteger poolNumber = new AtomicInteger(1); - final ThreadGroup group; - final AtomicInteger threadNumber = new AtomicInteger(1); - final String namePrefix; + private static final AtomicInteger poolNumber = new AtomicInteger(1); + private final ThreadGroup group; + private final AtomicInteger threadNumber = new AtomicInteger(1); + private final String namePrefix; public DaemonThreadFactory(String name) { SecurityManager s = System.getSecurityManager(); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/HealthCheckChore.java hbase-server/src/main/java/org/apache/hadoop/hbase/HealthCheckChore.java index 4226c3f..8d65c66 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/HealthCheckChore.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/HealthCheckChore.java @@ -58,7 +58,7 @@ import org.apache.hadoop.util.StringUtils; if (!isHealthy) { boolean needToStop = decideToStop(); if (needToStop) { - this.stopper.stop("The node reported unhealthy " + threshold + this.getStopper().stop("The node reported unhealthy " + threshold + " number of times consecutively."); } // Always log health report. diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/MetaMigrationConvertingToPB.java hbase-server/src/main/java/org/apache/hadoop/hbase/MetaMigrationConvertingToPB.java deleted file mode 100644 index 13bebd3..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/MetaMigrationConvertingToPB.java +++ /dev/null @@ -1,176 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.hbase; - -import java.io.IOException; -import java.util.List; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.client.HConnection; -import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.MetaTableAccessor.Visitor; -import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.master.MasterServices; -import org.apache.hadoop.hbase.protobuf.ProtobufUtil; -import org.apache.hadoop.hbase.util.Bytes; - -/** - * A tool to migrate the data stored in hbase:meta table to pbuf serialization. 
- * Supports migrating from 0.92.x and 0.94.x to 0.96.x for the catalog table. - * @deprecated will be removed for the major release after 0.96. - */ -@Deprecated -public class MetaMigrationConvertingToPB { - - private static final Log LOG = LogFactory.getLog(MetaMigrationConvertingToPB.class); - - private static class ConvertToPBMetaVisitor implements Visitor { - private final MasterServices services; - private long numMigratedRows; - - public ConvertToPBMetaVisitor(MasterServices services) { - this.services = services; - numMigratedRows = 0; - } - - @Override - public boolean visit(Result r) throws IOException { - if (r == null || r.isEmpty()) return true; - // Check info:regioninfo, info:splitA, and info:splitB. Make sure all - // have migrated HRegionInfos. - byte [] hriBytes = getBytes(r, HConstants.REGIONINFO_QUALIFIER); - // Presumes that an edit updating all three cells either succeeds or - // doesn't -- that we don't have case of info:regioninfo migrated but not - // info:splitA. - if (isMigrated(hriBytes)) return true; - // OK. Need to migrate this row in meta. - - //This will 'migrate' the HRI from 092.x and 0.94.x to 0.96+ by reading the - //writable serialization - HRegionInfo hri = parseFrom(hriBytes); - - // Now make a put to write back to meta. - Put p = MetaTableAccessor.makePutFromRegionInfo(hri); - - // Now migrate info:splitA and info:splitB if they are not null - migrateSplitIfNecessary(r, p, HConstants.SPLITA_QUALIFIER); - migrateSplitIfNecessary(r, p, HConstants.SPLITB_QUALIFIER); - - MetaTableAccessor.putToMetaTable(this.services.getConnection(), p); - if (LOG.isDebugEnabled()) { - LOG.debug("Migrated " + Bytes.toString(p.getRow())); - } - numMigratedRows++; - return true; - } - } - - static void migrateSplitIfNecessary(final Result r, final Put p, final byte [] which) - throws IOException { - byte [] hriSplitBytes = getBytes(r, which); - if (!isMigrated(hriSplitBytes)) { - //This will 'migrate' the HRI from 092.x and 0.94.x to 0.96+ by reading the - //writable serialization - HRegionInfo hri = parseFrom(hriSplitBytes); - p.addImmutable(HConstants.CATALOG_FAMILY, which, hri.toByteArray()); - } - } - - static HRegionInfo parseFrom(byte[] hriBytes) throws IOException { - try { - return HRegionInfo.parseFrom(hriBytes); - } catch (DeserializationException ex) { - throw new IOException(ex); - } - } - - /** - * @param r Result to dig in. - * @param qualifier Qualifier to look at in the passed r. - * @return Bytes for an HRegionInfo or null if no bytes or empty bytes found. - */ - static byte [] getBytes(final Result r, final byte [] qualifier) { - byte [] hriBytes = r.getValue(HConstants.CATALOG_FAMILY, qualifier); - if (hriBytes == null || hriBytes.length <= 0) return null; - return hriBytes; - } - - static boolean isMigrated(final byte [] hriBytes) { - if (hriBytes == null || hriBytes.length <= 0) return true; - - return ProtobufUtil.isPBMagicPrefix(hriBytes); - } - - /** - * Converting writable serialization to PB, if it is needed. 
- * @param services MasterServices to get a handle on master - * @return num migrated rows - * @throws IOException or RuntimeException if something goes wrong - */ - public static long updateMetaIfNecessary(final MasterServices services) - throws IOException { - if (isMetaTableUpdated(services.getConnection())) { - LOG.info("META already up-to date with PB serialization"); - return 0; - } - LOG.info("META has Writable serializations, migrating hbase:meta to PB serialization"); - try { - long rows = updateMeta(services); - LOG.info("META updated with PB serialization. Total rows updated: " + rows); - return rows; - } catch (IOException e) { - LOG.warn("Update hbase:meta with PB serialization failed." + "Master startup aborted."); - throw e; - } - } - - /** - * Update hbase:meta rows, converting writable serialization to PB - * @return num migrated rows - */ - static long updateMeta(final MasterServices masterServices) throws IOException { - LOG.info("Starting update of META"); - ConvertToPBMetaVisitor v = new ConvertToPBMetaVisitor(masterServices); - MetaTableAccessor.fullScan(masterServices.getConnection(), v); - LOG.info("Finished update of META. Total rows updated:" + v.numMigratedRows); - return v.numMigratedRows; - } - - /** - * @param hConnection connection to be used - * @return True if the meta table has been migrated. - * @throws IOException - */ - static boolean isMetaTableUpdated(final HConnection hConnection) throws IOException { - List results = MetaTableAccessor.fullScanOfMeta(hConnection); - if (results == null || results.isEmpty()) { - LOG.info("hbase:meta doesn't have any entries to update."); - return true; - } - for (Result r : results) { - byte[] value = r.getValue(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER); - if (!isMigrated(value)) { - return false; - } - } - return true; - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/TableDescriptor.java hbase-server/src/main/java/org/apache/hadoop/hbase/TableDescriptor.java new file mode 100644 index 0000000..d27bfb7 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/TableDescriptor.java @@ -0,0 +1,182 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase; + +import java.io.IOException; + +import com.google.common.annotations.VisibleForTesting; +import com.google.protobuf.InvalidProtocolBufferException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.client.TableState; +import org.apache.hadoop.hbase.exceptions.DeserializationException; +import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; +import org.apache.hadoop.hbase.regionserver.BloomType; + +/** + * Class represents table state on HDFS. + */ +@InterfaceAudience.Private +public class TableDescriptor { + private HTableDescriptor hTableDescriptor; + private TableState.State tableState; + + /** + * Creates TableDescriptor with all fields. + * @param hTableDescriptor HTableDescriptor to use + * @param tableState table state + */ + public TableDescriptor(HTableDescriptor hTableDescriptor, + TableState.State tableState) { + this.hTableDescriptor = hTableDescriptor; + this.tableState = tableState; + } + + /** + * Creates TableDescriptor with Enabled table. + * @param hTableDescriptor HTableDescriptor to use + */ + @VisibleForTesting + public TableDescriptor(HTableDescriptor hTableDescriptor) { + this(hTableDescriptor, TableState.State.ENABLED); + } + + /** + * Associated HTableDescriptor + * @return instance of HTableDescriptor + */ + public HTableDescriptor getHTableDescriptor() { + return hTableDescriptor; + } + + public void setHTableDescriptor(HTableDescriptor hTableDescriptor) { + this.hTableDescriptor = hTableDescriptor; + } + + public TableState.State getTableState() { + return tableState; + } + + public void setTableState(TableState.State tableState) { + this.tableState = tableState; + } + + /** + * Convert to PB. + */ + public HBaseProtos.TableDescriptor convert() { + return HBaseProtos.TableDescriptor.newBuilder() + .setSchema(hTableDescriptor.convert()) + .setState(tableState.convert()) + .build(); + } + + /** + * Convert from PB + */ + public static TableDescriptor convert(HBaseProtos.TableDescriptor proto) { + HTableDescriptor hTableDescriptor = HTableDescriptor.convert(proto.getSchema()); + TableState.State state = TableState.State.convert(proto.getState()); + return new TableDescriptor(hTableDescriptor, state); + } + + /** + * @return This instance serialized with pb with pb magic prefix + * @see #parseFrom(byte[]) + */ + public byte [] toByteArray() { + return ProtobufUtil.prependPBMagic(convert().toByteArray()); + } + + /** + * @param bytes A pb serialized {@link TableDescriptor} instance with pb magic prefix + * @see #toByteArray() + */ + public static TableDescriptor parseFrom(final byte [] bytes) + throws DeserializationException, IOException { + if (!ProtobufUtil.isPBMagicPrefix(bytes)) { + throw new DeserializationException("Expected PB encoded TableDescriptor"); + } + int pblen = ProtobufUtil.lengthOfPBMagic(); + HBaseProtos.TableDescriptor.Builder builder = HBaseProtos.TableDescriptor.newBuilder(); + HBaseProtos.TableDescriptor ts; + try { + ts = builder.mergeFrom(bytes, pblen, bytes.length - pblen).build(); + } catch (InvalidProtocolBufferException e) { + throw new DeserializationException(e); + } + return convert(ts); + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + TableDescriptor that = (TableDescriptor) o; + + if (hTableDescriptor != null ? 
+ !hTableDescriptor.equals(that.hTableDescriptor) : + that.hTableDescriptor != null) return false; + if (tableState != that.tableState) return false; + + return true; + } + + @Override + public int hashCode() { + int result = hTableDescriptor != null ? hTableDescriptor.hashCode() : 0; + result = 31 * result + (tableState != null ? tableState.hashCode() : 0); + return result; + } + + @Override + public String toString() { + return "TableDescriptor{" + + "hTableDescriptor=" + hTableDescriptor + + ", tableState=" + tableState + + '}'; + } + + public static HTableDescriptor metaTableDescriptor(final Configuration conf) + throws IOException { + HTableDescriptor metaDescriptor = new HTableDescriptor( + TableName.META_TABLE_NAME, + new HColumnDescriptor[] { + new HColumnDescriptor(HConstants.CATALOG_FAMILY) + .setMaxVersions(conf.getInt(HConstants.HBASE_META_VERSIONS, + HConstants.DEFAULT_HBASE_META_VERSIONS)) + .setInMemory(true) + .setBlocksize(conf.getInt(HConstants.HBASE_META_BLOCK_SIZE, + HConstants.DEFAULT_HBASE_META_BLOCK_SIZE)) + .setScope(HConstants.REPLICATION_SCOPE_LOCAL) + // Disable blooms for meta. Needs work. Seems to mess w/ getClosestOrBefore. + .setBloomFilterType(BloomType.NONE) + // Enable cache of data blocks in L1 if more than one caching tier deployed: + // e.g. if using CombinedBlockCache (BucketCache). + .setCacheDataInL1(true) + }) { + }; + metaDescriptor.addCoprocessor( + "org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint", + null, Coprocessor.PRIORITY_SYSTEM, null); + return metaDescriptor; + } + +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/TableDescriptors.java hbase-server/src/main/java/org/apache/hadoop/hbase/TableDescriptors.java index 33ae1d5..c7bfd03 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/TableDescriptors.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/TableDescriptors.java @@ -37,6 +37,14 @@ public interface TableDescriptors { throws IOException; /** + * @param tableName + * @return TableDescriptor for tablename + * @throws IOException + */ + TableDescriptor getDescriptor(final TableName tableName) + throws IOException; + + /** * Get Map of all NamespaceDescriptors for a given namespace. * @return Map of all descriptors. * @throws IOException @@ -54,6 +62,15 @@ public interface TableDescriptors { throws IOException; /** + * Get Map of all TableDescriptors. Populates the descriptor cache as a + * side effect. + * @return Map of all descriptors. + * @throws IOException + */ + Map getAllDescriptors() + throws IOException; + + /** * Add or update descriptor * @param htd Descriptor to set into TableDescriptors * @throws IOException @@ -62,6 +79,14 @@ public interface TableDescriptors { throws IOException; /** + * Add or update descriptor + * @param htd Descriptor to set into TableDescriptors + * @throws IOException + */ + void add(final TableDescriptor htd) + throws IOException; + + /** * @param tablename * @return Instance of table descriptor or null if none found. * @throws IOException diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/TableStateManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/TableStateManager.java deleted file mode 100644 index 70e1af2..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/TableStateManager.java +++ /dev/null @@ -1,115 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. 
See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; - -import java.io.InterruptedIOException; -import java.util.Set; - -/** - * Helper class for table state management for operations running inside - * RegionServer or HMaster. - * Depending on implementation, fetches information from HBase system table, - * local data store, ZooKeeper ensemble or somewhere else. - * Code running on client side (with no coordinated state context) shall instead use - * {@link org.apache.hadoop.hbase.zookeeper.ZKTableStateClientSideReader} - */ -@InterfaceAudience.Private -public interface TableStateManager { - - /** - * Sets the table into desired state. Fails silently if the table is already in this state. - * @param tableName table to process - * @param state new state of this table - * @throws CoordinatedStateException if error happened when trying to set table state - */ - void setTableState(TableName tableName, ZooKeeperProtos.Table.State state) - throws CoordinatedStateException; - - /** - * Sets the specified table into the newState, but only if the table is already in - * one of the possibleCurrentStates (otherwise no operation is performed). - * @param tableName table to process - * @param newState new state for the table - * @param states table should be in one of these states for the operation - * to be performed - * @throws CoordinatedStateException if error happened while performing operation - * @return true if operation succeeded, false otherwise - */ - boolean setTableStateIfInStates(TableName tableName, ZooKeeperProtos.Table.State newState, - ZooKeeperProtos.Table.State... states) - throws CoordinatedStateException; - - /** - * Sets the specified table into the newState, but only if the table is NOT in - * one of the possibleCurrentStates (otherwise no operation is performed). - * @param tableName table to process - * @param newState new state for the table - * @param states table should NOT be in one of these states for the operation - * to be performed - * @throws CoordinatedStateException if error happened while performing operation - * @return true if operation succeeded, false otherwise - */ - boolean setTableStateIfNotInStates(TableName tableName, ZooKeeperProtos.Table.State newState, - ZooKeeperProtos.Table.State... states) - throws CoordinatedStateException; - - /** - * @return true if the table is in any one of the listed states, false otherwise. - */ - boolean isTableState(TableName tableName, ZooKeeperProtos.Table.State... states); - - /** - * Mark table as deleted. Fails silently if the table is not currently marked as disabled. 
- * @param tableName table to be deleted - * @throws CoordinatedStateException if error happened while performing operation - */ - void setDeletedTable(TableName tableName) throws CoordinatedStateException; - - /** - * Checks if table is present. - * - * @param tableName table we're checking - * @return true if the table is present, false otherwise - */ - boolean isTablePresent(TableName tableName); - - /** - * @return set of tables which are in any one of the listed states, empty Set if none - */ - Set getTablesInStates(ZooKeeperProtos.Table.State... states) - throws InterruptedIOException, CoordinatedStateException; - - /** - * If the table is found in the given state the in-memory state is removed. This - * helps in cases where CreateTable is to be retried by the client in case of - * failures. If deletePermanentState is true - the flag kept permanently is - * also reset. - * - * @param tableName table we're working on - * @param states if table isn't in any one of these states, operation aborts - * @param deletePermanentState if true, reset the permanent flag - * @throws CoordinatedStateException if error happened in underlying coordination engine - */ - void checkAndRemoveTableState(TableName tableName, ZooKeeperProtos.Table.State states, - boolean deletePermanentState) - throws CoordinatedStateException; -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java index d26ed1d..09a6659 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java @@ -36,9 +36,10 @@ import org.apache.zookeeper.KeeperException; * currently being archived. *

    * This only works properly if the - * {@link org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner} is also enabled - * (it always should be), since it may take a little time for the ZK notification to - * propagate, in which case we may accidentally delete some files. + * {@link org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner} + * is also enabled (it always should be), since it may take a little time + * for the ZK notification to propagate, in which case we may accidentally + * delete some files. */ @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG) public class LongTermArchivingHFileCleaner extends BaseHFileCleanerDelegate { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableWrapper.java hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableWrapper.java index c583923..eab4a8a 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableWrapper.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableWrapper.java @@ -251,6 +251,7 @@ public class HTableWrapper implements HTableInterface { * @deprecated If any exception is thrown by one of the actions, there is no way to * retrieve the partially executed results. Use {@link #batch(List, Object[])} instead. */ + @Deprecated @Override public Object[] batch(List actions) throws IOException, InterruptedException { @@ -270,6 +271,7 @@ public class HTableWrapper implements HTableInterface { * {@link #batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)} * instead. */ + @Deprecated @Override public Object[] batchCallback(List actions, Batch.Callback callback) throws IOException, InterruptedException { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java index 4d023be..baf2aa6 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/client/TableSnapshotScanner.java @@ -49,10 +49,10 @@ import org.apache.hadoop.hbase.util.FSUtils; *

    * This also allows one to run the scan from an * online or offline hbase cluster. The snapshot files can be exported by using the - * {@link org.apache.hadoop.hbase.snapshot.ExportSnapshot} tool, to a pure-hdfs cluster, - * and this scanner can be used to run the scan directly over the snapshot files. - * The snapshot should not be deleted while there are open scanners reading from snapshot - * files. + * {@link org.apache.hadoop.hbase.snapshot.ExportSnapshot} tool, + * to a pure-hdfs cluster, and this scanner can be used to + * run the scan directly over the snapshot files. The snapshot should not be deleted while there + * are open scanners reading from snapshot files. * *

    * An internal RegionScanner is used to execute the {@link Scan} obtained diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/ConstraintException.java hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/ConstraintException.java index 11924c4..31746b6 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/ConstraintException.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/ConstraintException.java @@ -20,12 +20,13 @@ package org.apache.hadoop.hbase.constraint; import org.apache.hadoop.hbase.classification.InterfaceAudience; /** - * Exception that a user defined constraint throws on failure of a - * {@link org.apache.hadoop.hbase.client.Put}. - *

    Does NOT attempt the - * {@link org.apache.hadoop.hbase.client.Put} multiple times, - * since the constraintshould fail every time for the same - * {@link org.apache.hadoop.hbase.client.Put} (it should be idempotent). + * Exception that a user defined constraint throws on failure of a + * {@link org.apache.hadoop.hbase.client.Put}. + *

    Does NOT attempt the + * {@link org.apache.hadoop.hbase.client.Put} multiple times, + * since the constraint should fail every time for + * the same {@link org.apache.hadoop.hbase.client.Put} (it should be + * idempotent). */ @InterfaceAudience.Private public class ConstraintException extends org.apache.hadoop.hbase.DoNotRetryIOException { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/Constraints.java hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/Constraints.java index 85ef717..a07ecd3 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/Constraints.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/constraint/Constraints.java @@ -34,7 +34,6 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; @@ -121,9 +120,9 @@ public final class Constraints { disable(desc); // remove all the constraint settings - List keys = new ArrayList(); + List keys = new ArrayList(); // loop through all the key, values looking for constraints - for (Map.Entry e : desc + for (Map.Entry e : desc .getValues().entrySet()) { String key = Bytes.toString((e.getKey().get())); String[] className = CONSTRAINT_HTD_ATTR_KEY_PATTERN.split(key); @@ -132,7 +131,7 @@ public final class Constraints { } } // now remove all the keys we found - for (ImmutableBytesWritable key : keys) { + for (Bytes key : keys) { desc.remove(key); } } @@ -562,7 +561,7 @@ public final class Constraints { ClassLoader classloader) throws IOException { List constraints = new ArrayList(); // loop through all the key, values looking for constraints - for (Map.Entry e : desc + for (Map.Entry e : desc .getValues().entrySet()) { // read out the constraint String key = Bytes.toString(e.getKey().get()).trim(); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/BaseCoordinatedStateManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/BaseCoordinatedStateManager.java index f79e5d8..ae36f08 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/BaseCoordinatedStateManager.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/BaseCoordinatedStateManager.java @@ -18,10 +18,8 @@ package org.apache.hadoop.hbase.coordination; import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.CoordinatedStateException; import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.Server; -import org.apache.hadoop.hbase.TableStateManager; /** * Base class for {@link org.apache.hadoop.hbase.CoordinatedStateManager} implementations. @@ -49,9 +47,6 @@ public abstract class BaseCoordinatedStateManager implements CoordinatedStateMan return null; } - @Override - public abstract TableStateManager getTableStateManager() throws InterruptedException, - CoordinatedStateException; /** * Method to retrieve coordination for split log worker */ @@ -60,23 +55,4 @@ public abstract class BaseCoordinatedStateManager implements CoordinatedStateMan * Method to retrieve coordination for split log manager */ public abstract SplitLogManagerCoordination getSplitLogManagerCoordination(); - /** - * Method to retrieve coordination for split transaction. 
- */ - abstract public SplitTransactionCoordination getSplitTransactionCoordination(); - - /** - * Method to retrieve coordination for closing region operations. - */ - public abstract CloseRegionCoordination getCloseRegionCoordination(); - - /** - * Method to retrieve coordination for opening region operations. - */ - public abstract OpenRegionCoordination getOpenRegionCoordination(); - - /** - * Method to retrieve coordination for region merge transaction - */ - public abstract RegionMergeCoordination getRegionMergeCoordination(); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/CloseRegionCoordination.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/CloseRegionCoordination.java deleted file mode 100644 index 037d886..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/CloseRegionCoordination.java +++ /dev/null @@ -1,69 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.coordination; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.protobuf.generated.AdminProtos; -import org.apache.hadoop.hbase.regionserver.HRegion; - -/** - * Coordinated operations for close region handlers. - */ -@InterfaceAudience.Private -public interface CloseRegionCoordination { - - /** - * Called before actual region closing to check that we can do close operation - * on this region. - * @param regionInfo region being closed - * @param crd details about closing operation - * @return true if caller shall proceed and close, false if need to abort closing. - */ - boolean checkClosingState(HRegionInfo regionInfo, CloseRegionDetails crd); - - /** - * Called after region is closed to notify all interesting parties / "register" - * region as finally closed. - * @param region region being closed - * @param sn ServerName on which task runs - * @param crd details about closing operation - */ - void setClosedState(HRegion region, ServerName sn, CloseRegionDetails crd); - - /** - * Construct CloseRegionDetails instance from CloseRegionRequest. - * @return instance of CloseRegionDetails - */ - CloseRegionDetails parseFromProtoRequest(AdminProtos.CloseRegionRequest request); - - /** - * Get details object with params for case when we're closing on - * regionserver side internally (not because of RPC call from master), - * so we don't parse details from protobuf request. - */ - CloseRegionDetails getDetaultDetails(); - - /** - * Marker interface for region closing tasks. Used to carry implementation details in - * encapsulated way through Handlers to the consensus API. 
- */ - static interface CloseRegionDetails { - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/OpenRegionCoordination.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/OpenRegionCoordination.java deleted file mode 100644 index 25b743c..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/OpenRegionCoordination.java +++ /dev/null @@ -1,129 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.coordination; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.master.AssignmentManager; -import org.apache.hadoop.hbase.protobuf.generated.AdminProtos; -import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.regionserver.RegionServerServices; - -import java.io.IOException; - -/** - * Cocoordination operations for opening regions. - */ -@InterfaceAudience.Private -public interface OpenRegionCoordination { - - //--------------------- - // RS-side operations - //--------------------- - /** - * Tries to move regions to OPENED state. - * - * @param r Region we're working on. - * @param ord details about region opening task - * @return whether transition was successful or not - * @throws java.io.IOException - */ - boolean transitionToOpened(HRegion r, OpenRegionDetails ord) throws IOException; - - /** - * Transitions region from offline to opening state. - * @param regionInfo region we're working on. - * @param ord details about opening task. - * @return true if successful, false otherwise - */ - boolean transitionFromOfflineToOpening(HRegionInfo regionInfo, - OpenRegionDetails ord); - - /** - * Heartbeats to prevent timeouts. - * - * @param ord details about opening task. - * @param regionInfo region we're working on. - * @param rsServices instance of RegionServerrServices - * @param context used for logging purposes only - * @return true if successful heartbeat, false otherwise. - */ - boolean tickleOpening(OpenRegionDetails ord, HRegionInfo regionInfo, - RegionServerServices rsServices, String context); - - /** - * Tries transition region from offline to failed open. - * @param rsServices instance of RegionServerServices - * @param hri region we're working on - * @param ord details about region opening task - * @return true if successful, false otherwise - */ - boolean tryTransitionFromOfflineToFailedOpen(RegionServerServices rsServices, - HRegionInfo hri, OpenRegionDetails ord); - - /** - * Tries transition from Opening to Failed open. - * @param hri region we're working on - * @param ord details about region opening task - * @return true if successfu. false otherwise. 
- */ - boolean tryTransitionFromOpeningToFailedOpen(HRegionInfo hri, OpenRegionDetails ord); - - /** - * Construct OpenRegionDetails instance from part of protobuf request. - * @return instance of OpenRegionDetails. - */ - OpenRegionDetails parseFromProtoRequest(AdminProtos.OpenRegionRequest.RegionOpenInfo - regionOpenInfo); - - /** - * Get details object with params for case when we're opening on - * regionserver side with all "default" properties. - */ - OpenRegionDetails getDetailsForNonCoordinatedOpening(); - - //------------------------- - // HMaster-side operations - //------------------------- - - /** - * Commits opening operation on HM side (steps required for "commit" - * are determined by coordination implementation). - * @return true if committed successfully, false otherwise. - */ - public boolean commitOpenOnMasterSide(AssignmentManager assignmentManager, - HRegionInfo regionInfo, - OpenRegionDetails ord); - - /** - * Interface for region opening tasks. Used to carry implementation details in - * encapsulated way through Handlers to the coordination API. - */ - static interface OpenRegionDetails { - /** - * Sets server name on which opening operation is running. - */ - void setServerName(ServerName serverName); - - /** - * @return server name on which opening op is running. - */ - ServerName getServerName(); - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/RegionMergeCoordination.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/RegionMergeCoordination.java deleted file mode 100644 index 8015f4c..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/RegionMergeCoordination.java +++ /dev/null @@ -1,105 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one or more contributor license - * agreements. See the NOTICE file distributed with this work for additional information regarding - * copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance with the License. You may obtain a - * copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable - * law or agreed to in writing, software distributed under the License is distributed on an "AS IS" - * BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License - * for the specific language governing permissions and limitations under the License. - */ - -package org.apache.hadoop.hbase.coordination; - -import java.io.IOException; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.regionserver.RegionServerServices; - -/** - * Coordination operations for region merge transaction. The operation should be coordinated at the - * following stages:
    - * 1. startRegionMergeTransaction - all preparation/initialization for merge region transaction
    - * 2. waitForRegionMergeTransaction - wait until coordination complete all works related - * to merge
    - * 3. confirmRegionMergeTransaction - confirm that the merge could be completed and none of merging - * regions moved somehow
    - * 4. completeRegionMergeTransaction - all steps that are required to complete the transaction. - * Called after PONR (point of no return)
    - */ -@InterfaceAudience.Private -public interface RegionMergeCoordination { - - RegionMergeDetails getDefaultDetails(); - - /** - * Dummy interface for region merge transaction details. - */ - public static interface RegionMergeDetails { - } - - /** - * Start the region merge transaction - * @param region region to be created as offline - * @param serverName server event originates from - * @throws IOException - */ - void startRegionMergeTransaction(HRegionInfo region, ServerName serverName, HRegionInfo a, - HRegionInfo b) throws IOException; - - /** - * Get everything ready for region merge - * @throws IOException - */ - void waitForRegionMergeTransaction(RegionServerServices services, HRegionInfo mergedRegionInfo, - HRegion region_a, HRegion region_b, RegionMergeDetails details) throws IOException; - - /** - * Confirm that the region merge can be performed - * @param merged region - * @param a merging region A - * @param b merging region B - * @param serverName server event originates from - * @param rmd region merge details - * @throws IOException If thrown, transaction failed. - */ - void confirmRegionMergeTransaction(HRegionInfo merged, HRegionInfo a, HRegionInfo b, - ServerName serverName, RegionMergeDetails rmd) throws IOException; - - /** - * @param merged region - * @param a merging region A - * @param b merging region B - * @param serverName server event originates from - * @param rmd region merge details - * @throws IOException - */ - void processRegionMergeRequest(HRegionInfo merged, HRegionInfo a, HRegionInfo b, - ServerName serverName, RegionMergeDetails rmd) throws IOException; - - /** - * Finish off merge transaction - * @param services Used to online/offline regions. - * @param merged region - * @param region_a merging region A - * @param region_b merging region B - * @param rmd region merge details - * @param mergedRegion - * @throws IOException If thrown, transaction failed. Call - * {@link org.apache.hadoop.hbase.regionserver.RegionMergeTransaction#rollback( - * Server, RegionServerServices)} - */ - void completeRegionMergeTransaction(RegionServerServices services, HRegionInfo merged, - HRegion region_a, HRegion region_b, RegionMergeDetails rmd, HRegion mergedRegion) - throws IOException; - - /** - * This method is used during rollback - * @param merged region to be rolled back - */ - void clean(HRegionInfo merged); - -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/SplitLogWorkerCoordination.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/SplitLogWorkerCoordination.java index 164c136..707850d 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/SplitLogWorkerCoordination.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/SplitLogWorkerCoordination.java @@ -34,7 +34,7 @@ import com.google.common.annotations.VisibleForTesting; /** * Coordinated operations for {@link SplitLogWorker} and - * {@link org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler} Important + * {@link org.apache.hadoop.hbase.regionserver.handler.WALSplitterHandler} Important * methods for SplitLogWorker:
    * {@link #isReady()} called from {@link SplitLogWorker#run()} to check whether the coordination is * ready to supply the tasks
    diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/SplitTransactionCoordination.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/SplitTransactionCoordination.java deleted file mode 100644 index bbc8500..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/SplitTransactionCoordination.java +++ /dev/null @@ -1,100 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.hbase.coordination; - -import java.io.IOException; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.regionserver.RegionServerServices; - -/** - * Coordination operations for split transaction. The split operation should be coordinated at the - * following stages: - * 1. start - all preparation/initialization for split transaction should be done there. - * 2. waitForSplitTransaction - the coordination should perform all logic related to split - * transaction and wait till it's finished - * 3. completeSplitTransaction - all steps that are required to complete the transaction. - * Called after PONR (point of no return) - */ -@InterfaceAudience.Private -public interface SplitTransactionCoordination { - - /** - * Dummy interface for split transaction details. - */ - public static interface SplitTransactionDetails { - } - - SplitTransactionDetails getDefaultDetails(); - - - /** - * init coordination for split transaction - * @param parent region to be created as offline - * @param serverName server event originates from - * @param hri_a daughter region - * @param hri_b daughter region - * @throws IOException - */ - void startSplitTransaction(HRegion parent, ServerName serverName, - HRegionInfo hri_a, HRegionInfo hri_b) throws IOException; - - /** - * Wait while coordination process the transaction - * @param services Used to online/offline regions. - * @param parent region - * @param hri_a daughter region - * @param hri_b daughter region - * @param std split transaction details - * @throws IOException - */ - void waitForSplitTransaction(final RegionServerServices services, - HRegion parent, HRegionInfo hri_a, HRegionInfo hri_b, SplitTransactionDetails std) - throws IOException; - - /** - * Finish off split transaction - * @param services Used to online/offline regions. - * @param first daughter region - * @param second daughter region - * @param std split transaction details - * @param parent - * @throws IOException If thrown, transaction failed. Call - * {@link org.apache.hadoop.hbase.regionserver. 
- * SplitTransaction#rollback(Server, RegionServerServices)} - */ - void completeSplitTransaction(RegionServerServices services, HRegion first, - HRegion second, SplitTransactionDetails std, HRegion parent) throws IOException; - - /** - * clean the split transaction - * @param hri node to delete - */ - void clean(final HRegionInfo hri); - - /** - * Required by AssignmentManager - */ - int processTransition(HRegionInfo p, HRegionInfo hri_a, HRegionInfo hri_b, - ServerName sn, SplitTransactionDetails std) throws IOException; -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZKSplitLogManagerCoordination.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZKSplitLogManagerCoordination.java index 1e02632..694ccff 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZKSplitLogManagerCoordination.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZKSplitLogManagerCoordination.java @@ -67,8 +67,8 @@ import org.apache.zookeeper.ZooDefs.Ids; import org.apache.zookeeper.data.Stat; /** - * ZooKeeper based implementation of - * {@link org.apache.hadoop.hbase.master.SplitLogManagerCoordination} + * ZooKeeper based implementation of + * {@link org.apache.hadoop.hbase.master.SplitLogManagerCoordination} */ @InterfaceAudience.Private public class ZKSplitLogManagerCoordination extends ZooKeeperListener implements @@ -904,11 +904,10 @@ public class ZKSplitLogManagerCoordination extends ZooKeeperListener implements /** - * {@link org.apache.hadoop.hbase.master.SplitLogManager} can use - * objects implementing this interface to finish off a partially - * done task by {@link org.apache.hadoop.hbase.regionserver.SplitLogWorker}. - * This provides a serialization point at the end of the task - * processing. Must be restartable and idempotent. + * {@link org.apache.hadoop.hbase.master.SplitLogManager} can use objects implementing this + * interface to finish off a partially done task by + * {@link org.apache.hadoop.hbase.regionserver.SplitLogWorker}. This provides a + * serialization point at the end of the task processing. Must be restartable and idempotent. */ public interface TaskFinisher { /** diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZKSplitTransactionCoordination.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZKSplitTransactionCoordination.java deleted file mode 100644 index e28f079..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZKSplitTransactionCoordination.java +++ /dev/null @@ -1,313 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one or more contributor license - * agreements. See the NOTICE file distributed with this work for additional information regarding - * copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance with the License. You may obtain a - * copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable - * law or agreed to in writing, software distributed under the License is distributed on an "AS IS" - * BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License - * for the specific language governing permissions and limitations under the License. 
- */ - -package org.apache.hadoop.hbase.coordination; - -import static org.apache.hadoop.hbase.executor.EventType.RS_ZK_REGION_SPLIT; -import static org.apache.hadoop.hbase.executor.EventType.RS_ZK_REGION_SPLITTING; -import static org.apache.hadoop.hbase.executor.EventType.RS_ZK_REQUEST_REGION_SPLIT; - -import java.io.IOException; -import java.util.List; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.CoordinatedStateManager; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.RegionTransition; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.coordination.SplitTransactionCoordination; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.regionserver.RegionServerServices; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.KeeperException; -import org.apache.zookeeper.data.Stat; - -public class ZKSplitTransactionCoordination implements SplitTransactionCoordination { - - private CoordinatedStateManager coordinationManager; - private final ZooKeeperWatcher watcher; - - private static final Log LOG = LogFactory.getLog(ZKSplitTransactionCoordination.class); - - public ZKSplitTransactionCoordination(CoordinatedStateManager coordinationProvider, - ZooKeeperWatcher watcher) { - this.coordinationManager = coordinationProvider; - this.watcher = watcher; - } - - /** - * Creates a new ephemeral node in the PENDING_SPLIT state for the specified region. Create it - * ephemeral in case regionserver dies mid-split. - *

    - * Does not transition nodes from other states. If a node already exists for this region, an - * Exception will be thrown. - * @param parent region to be created as offline - * @param serverName server event originates from - * @param hri_a daughter region - * @param hri_b daughter region - * @throws IOException - */ - - @Override - public void startSplitTransaction(HRegion parent, ServerName serverName, HRegionInfo hri_a, - HRegionInfo hri_b) throws IOException { - - HRegionInfo region = parent.getRegionInfo(); - try { - - LOG.debug(watcher.prefix("Creating ephemeral node for " + region.getEncodedName() - + " in PENDING_SPLIT state")); - byte[] payload = HRegionInfo.toDelimitedByteArray(hri_a, hri_b); - RegionTransition rt = - RegionTransition.createRegionTransition(RS_ZK_REQUEST_REGION_SPLIT, - region.getRegionName(), serverName, payload); - String node = ZKAssign.getNodeName(watcher, region.getEncodedName()); - if (!ZKUtil.createEphemeralNodeAndWatch(watcher, node, rt.toByteArray())) { - throw new IOException("Failed create of ephemeral " + node); - } - - } catch (KeeperException e) { - throw new IOException("Failed creating PENDING_SPLIT znode on " - + parent.getRegionNameAsString(), e); - } - - } - - /** - * Transitions an existing ephemeral node for the specified region which is currently in the begin - * state to be in the end state. Master cleans up the final SPLIT znode when it reads it (or if we - * crash, zk will clean it up). - *

- * Does not transition nodes from other states. If for some reason the node could not be
- * transitioned, the method returns -1. If the transition is successful, the version of the node
- * after transition is returned.
- * <p>
- * This method can fail and return false for three different reasons:
- * <ul>
- * <li>Node for this region does not exist</li>
- * <li>Node for this region is not in the begin state</li>
- * <li>After verifying the begin state, update fails because of wrong version (this should never
- * actually happen since an RS only does this transition following a transition to the begin
- * state. If two RS are conflicting, one would fail the original transition to the begin state and
- * not this transition)</li>
- * </ul>
- * <p>
- * Does not set any watches.
- * <p>
    - * This method should only be used by a RegionServer when splitting a region. - * @param parent region to be transitioned to opened - * @param a Daughter a of split - * @param b Daughter b of split - * @param serverName server event originates from - * @param std split transaction details - * @param beginState the expected current state the znode should be - * @param endState the state to be transition to - * @return version of node after transition, -1 if unsuccessful transition - * @throws IOException - */ - - private int transitionSplittingNode(HRegionInfo parent, HRegionInfo a, HRegionInfo b, - ServerName serverName, SplitTransactionDetails std, final EventType beginState, - final EventType endState) throws IOException { - ZkSplitTransactionDetails zstd = (ZkSplitTransactionDetails) std; - byte[] payload = HRegionInfo.toDelimitedByteArray(a, b); - try { - return ZKAssign.transitionNode(watcher, parent, serverName, beginState, endState, - zstd.getZnodeVersion(), payload); - } catch (KeeperException e) { - throw new IOException( - "Failed transition of splitting node " + parent.getRegionNameAsString(), e); - } - } - - /** - * Wait for the splitting node to be transitioned from pending_split to splitting by master. - * That's how we are sure master has processed the event and is good with us to move on. If we - * don't get any update, we periodically transition the node so that master gets the callback. If - * the node is removed or is not in pending_split state any more, we abort the split. - */ - @Override - public void waitForSplitTransaction(final RegionServerServices services, HRegion parent, - HRegionInfo hri_a, HRegionInfo hri_b, SplitTransactionDetails sptd) throws IOException { - ZkSplitTransactionDetails zstd = (ZkSplitTransactionDetails) sptd; - - // After creating the split node, wait for master to transition it - // from PENDING_SPLIT to SPLITTING so that we can move on. We want master - // knows about it and won't transition any region which is splitting. 
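The body of waitForSplitTransaction follows; it is easier to read knowing where it sits in the overall handshake. A minimal driver sketch against the SplitTransactionCoordination interface removed above (the region, daughter and server arguments are placeholders, and rollback on failure is elided):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.coordination.SplitTransactionCoordination;
    import org.apache.hadoop.hbase.regionserver.HRegion;
    import org.apache.hadoop.hbase.regionserver.RegionServerServices;

    final class SplitHandshakeSketch {
      // Approximate order of the removed ZK split handshake, as driven by the region server.
      static void run(SplitTransactionCoordination coord, RegionServerServices services,
          HRegion parent, HRegion daughterA, HRegion daughterB,
          HRegionInfo hriA, HRegionInfo hriB, ServerName serverName) throws IOException {
        SplitTransactionCoordination.SplitTransactionDetails std = coord.getDefaultDetails();
        coord.startSplitTransaction(parent, serverName, hriA, hriB);       // create PENDING_SPLIT znode
        coord.waitForSplitTransaction(services, parent, hriA, hriB, std);  // master moves it to SPLITTING
        // ... create and open the daughters, edit hbase:meta, pass the point of no return ...
        coord.completeSplitTransaction(services, daughterA, daughterB, std, parent); // SPLITTING -> SPLIT
      }
    }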
- try { - int spins = 0; - Stat stat = new Stat(); - ServerName expectedServer = coordinationManager.getServer().getServerName(); - String node = parent.getRegionInfo().getEncodedName(); - while (!(coordinationManager.getServer().isStopped() || services.isStopping())) { - if (spins % 5 == 0) { - LOG.debug("Still waiting for master to process " + "the pending_split for " + node); - SplitTransactionDetails temp = getDefaultDetails(); - transitionSplittingNode(parent.getRegionInfo(), hri_a, hri_b, expectedServer, temp, - RS_ZK_REQUEST_REGION_SPLIT, RS_ZK_REQUEST_REGION_SPLIT); - } - Thread.sleep(100); - spins++; - byte[] data = ZKAssign.getDataNoWatch(watcher, node, stat); - if (data == null) { - throw new IOException("Data is null, splitting node " + node + " no longer exists"); - } - RegionTransition rt = RegionTransition.parseFrom(data); - EventType et = rt.getEventType(); - if (et == RS_ZK_REGION_SPLITTING) { - ServerName serverName = rt.getServerName(); - if (!serverName.equals(expectedServer)) { - throw new IOException("Splitting node " + node + " is for " + serverName + ", not us " - + expectedServer); - } - byte[] payloadOfSplitting = rt.getPayload(); - List splittingRegions = - HRegionInfo.parseDelimitedFrom(payloadOfSplitting, 0, payloadOfSplitting.length); - assert splittingRegions.size() == 2; - HRegionInfo a = splittingRegions.get(0); - HRegionInfo b = splittingRegions.get(1); - if (!(hri_a.equals(a) && hri_b.equals(b))) { - throw new IOException("Splitting node " + node + " is for " + a + ", " + b - + ", not expected daughters: " + hri_a + ", " + hri_b); - } - // Master has processed it. - zstd.setZnodeVersion(stat.getVersion()); - return; - } - if (et != RS_ZK_REQUEST_REGION_SPLIT) { - throw new IOException("Splitting node " + node + " moved out of splitting to " + et); - } - } - // Server is stopping/stopped - throw new IOException("Server is " + (services.isStopping() ? "stopping" : "stopped")); - } catch (Exception e) { - if (e instanceof InterruptedException) { - Thread.currentThread().interrupt(); - } - throw new IOException("Failed getting SPLITTING znode on " + parent.getRegionNameAsString(), - e); - } - } - - /** - * Finish off split transaction, transition the zknode - * @param services Used to online/offline regions. - * @param a daughter region - * @param b daughter region - * @param std split transaction details - * @param parent - * @throws IOException If thrown, transaction failed. Call - * {@link org.apache.hadoop.hbase.regionserver.SplitTransaction#rollback( - * Server, RegionServerServices)} - */ - @Override - public void completeSplitTransaction(final RegionServerServices services, HRegion a, HRegion b, - SplitTransactionDetails std, HRegion parent) throws IOException { - ZkSplitTransactionDetails zstd = (ZkSplitTransactionDetails) std; - // Tell master about split by updating zk. If we fail, abort. - if (coordinationManager.getServer() != null) { - try { - zstd.setZnodeVersion(transitionSplittingNode(parent.getRegionInfo(), a.getRegionInfo(), - b.getRegionInfo(), coordinationManager.getServer().getServerName(), zstd, - RS_ZK_REGION_SPLITTING, RS_ZK_REGION_SPLIT)); - - int spins = 0; - // Now wait for the master to process the split. We know it's done - // when the znode is deleted. The reason we keep tickling the znode is - // that it's possible for the master to miss an event. 
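Because the tail of completeSplitTransaction is hard to read in this flattened form, the same do/while loop is laid out plainly below; it is identical in content to the deleted code that follows, with the enclosing try/catch and InterruptedException handling omitted.

    // Keep re-issuing a no-op SPLIT -> SPLIT transition; once the master consumes and
    // deletes the znode, transitionSplittingNode() returns -1 and the loop exits.
    int spins = 0;
    do {
      if (spins % 10 == 0) {
        LOG.debug("Still waiting on the master to process the split for "
            + parent.getRegionInfo().getEncodedName());
      }
      Thread.sleep(100);
      zstd.setZnodeVersion(transitionSplittingNode(parent.getRegionInfo(), a.getRegionInfo(),
          b.getRegionInfo(), coordinationManager.getServer().getServerName(), zstd,
          RS_ZK_REGION_SPLIT, RS_ZK_REGION_SPLIT));
      spins++;
    } while (zstd.getZnodeVersion() != -1 && !coordinationManager.getServer().isStopped()
        && !services.isStopping());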
- do { - if (spins % 10 == 0) { - LOG.debug("Still waiting on the master to process the split for " - + parent.getRegionInfo().getEncodedName()); - } - Thread.sleep(100); - // When this returns -1 it means the znode doesn't exist - zstd.setZnodeVersion(transitionSplittingNode(parent.getRegionInfo(), a.getRegionInfo(), - b.getRegionInfo(), coordinationManager.getServer().getServerName(), zstd, - RS_ZK_REGION_SPLIT, RS_ZK_REGION_SPLIT)); - spins++; - } while (zstd.getZnodeVersion() != -1 && !coordinationManager.getServer().isStopped() - && !services.isStopping()); - } catch (Exception e) { - if (e instanceof InterruptedException) { - Thread.currentThread().interrupt(); - } - throw new IOException("Failed telling master about split", e); - } - } - - // Leaving here, the splitdir with its dross will be in place but since the - // split was successful, just leave it; it'll be cleaned when parent is - // deleted and cleaned up. - } - - @Override - public void clean(final HRegionInfo hri) { - try { - // Only delete if its in expected state; could have been hijacked. - if (!ZKAssign.deleteNode(coordinationManager.getServer().getZooKeeper(), - hri.getEncodedName(), RS_ZK_REQUEST_REGION_SPLIT, coordinationManager.getServer() - .getServerName())) { - ZKAssign.deleteNode(coordinationManager.getServer().getZooKeeper(), hri.getEncodedName(), - RS_ZK_REGION_SPLITTING, coordinationManager.getServer().getServerName()); - } - } catch (KeeperException.NoNodeException e) { - LOG.info("Failed cleanup zk node of " + hri.getRegionNameAsString(), e); - } catch (KeeperException e) { - coordinationManager.getServer().abort("Failed cleanup of " + hri.getRegionNameAsString(), e); - } - } - - /** - * ZK-based implementation. Has details about whether the state transition should be reflected in - * ZK, as well as expected version of znode. - */ - public static class ZkSplitTransactionDetails implements - SplitTransactionCoordination.SplitTransactionDetails { - private int znodeVersion; - - public ZkSplitTransactionDetails() { - } - - /** - * @return znode current version - */ - public int getZnodeVersion() { - return znodeVersion; - } - - /** - * @param znodeVersion znode new version - */ - public void setZnodeVersion(int znodeVersion) { - this.znodeVersion = znodeVersion; - } - } - - @Override - public SplitTransactionDetails getDefaultDetails() { - ZkSplitTransactionDetails zstd = new ZkSplitTransactionDetails(); - zstd.setZnodeVersion(-1); - return zstd; - } - - @Override - public int processTransition(HRegionInfo p, HRegionInfo hri_a, HRegionInfo hri_b, ServerName sn, - SplitTransactionDetails std) throws IOException { - return transitionSplittingNode(p, hri_a, hri_b, sn, std, RS_ZK_REQUEST_REGION_SPLIT, - RS_ZK_REGION_SPLITTING); - - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkCloseRegionCoordination.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkCloseRegionCoordination.java deleted file mode 100644 index c7583a1..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkCloseRegionCoordination.java +++ /dev/null @@ -1,197 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.coordination; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.CoordinatedStateManager; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.protobuf.generated.AdminProtos; -import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.KeeperException; - -import java.io.IOException; - -/** - * ZK-based implementation of {@link CloseRegionCoordination}. - */ -@InterfaceAudience.Private -public class ZkCloseRegionCoordination implements CloseRegionCoordination { - private static final Log LOG = LogFactory.getLog(ZkCloseRegionCoordination.class); - - private final static int FAILED_VERSION = -1; - - private CoordinatedStateManager csm; - private final ZooKeeperWatcher watcher; - - public ZkCloseRegionCoordination(CoordinatedStateManager csm, ZooKeeperWatcher watcher) { - this.csm = csm; - this.watcher = watcher; - } - - /** - * In ZK-based version we're checking for bad znode state, e.g. if we're - * trying to delete the znode, and it's not ours (version doesn't match). - */ - @Override - public boolean checkClosingState(HRegionInfo regionInfo, CloseRegionDetails crd) { - ZkCloseRegionDetails zkCrd = (ZkCloseRegionDetails) crd; - - try { - return zkCrd.isPublishStatusInZk() && !ZKAssign.checkClosingState(watcher, - regionInfo, ((ZkCloseRegionDetails) crd).getExpectedVersion()); - } catch (KeeperException ke) { - csm.getServer().abort("Unrecoverable exception while checking state with zk " + - regionInfo.getRegionNameAsString() + ", still finishing close", ke); - throw new RuntimeException(ke); - } - } - - /** - * In ZK-based version we do some znodes transitioning. - */ - @Override - public void setClosedState(HRegion region, ServerName sn, CloseRegionDetails crd) { - ZkCloseRegionDetails zkCrd = (ZkCloseRegionDetails) crd; - String name = region.getRegionInfo().getRegionNameAsString(); - - if (zkCrd.isPublishStatusInZk()) { - if (setClosedState(region,sn, zkCrd)) { - LOG.debug("Set closed state in zk for " + name + " on " + sn); - } else { - LOG.debug("Set closed state in zk UNSUCCESSFUL for " + name + " on " + sn); - } - } - } - - /** - * Parse ZK-related fields from request. - */ - @Override - public CloseRegionDetails parseFromProtoRequest(AdminProtos.CloseRegionRequest request) { - ZkCloseRegionCoordination.ZkCloseRegionDetails zkCrd = - new ZkCloseRegionCoordination.ZkCloseRegionDetails(); - zkCrd.setPublishStatusInZk(request.getTransitionInZK()); - int versionOfClosingNode = -1; - if (request.hasVersionOfClosingNode()) { - versionOfClosingNode = request.getVersionOfClosingNode(); - } - zkCrd.setExpectedVersion(versionOfClosingNode); - - return zkCrd; - } - - /** - * No ZK tracking will be performed for that case. 
- * This method should be used when we want to construct CloseRegionDetails, - * but don't want any coordination on that (when it's initiated by regionserver), - * so no znode state transitions will be performed. - */ - @Override - public CloseRegionDetails getDetaultDetails() { - ZkCloseRegionCoordination.ZkCloseRegionDetails zkCrd = - new ZkCloseRegionCoordination.ZkCloseRegionDetails(); - zkCrd.setPublishStatusInZk(false); - zkCrd.setExpectedVersion(FAILED_VERSION); - - return zkCrd; - } - - /** - * Transition ZK node to CLOSED - * @param region HRegion instance being closed - * @param sn ServerName on which task runs - * @param zkCrd details about region closing operation. - * @return If the state is set successfully - */ - private boolean setClosedState(final HRegion region, - ServerName sn, - ZkCloseRegionDetails zkCrd) { - final int expectedVersion = zkCrd.getExpectedVersion(); - - try { - if (ZKAssign.transitionNodeClosed(watcher, region.getRegionInfo(), - sn, expectedVersion) == FAILED_VERSION) { - LOG.warn("Completed the CLOSE of a region but when transitioning from " + - " CLOSING to CLOSED got a version mismatch, someone else clashed " + - "so now unassigning"); - region.close(); - return false; - } - } catch (NullPointerException e) { - // I've seen NPE when table was deleted while close was running in unit tests. - LOG.warn("NPE during close -- catching and continuing...", e); - return false; - } catch (KeeperException e) { - LOG.error("Failed transitioning node from CLOSING to CLOSED", e); - return false; - } catch (IOException e) { - LOG.error("Failed to close region after failing to transition", e); - return false; - } - return true; - } - - /** - * ZK-based implementation. Has details about whether the state transition should be - * reflected in ZK, as well as expected version of znode. - */ - public static class ZkCloseRegionDetails implements CloseRegionCoordination.CloseRegionDetails { - - /** - * True if we are to update zk about the region close; if the close - * was orchestrated by master, then update zk. If the close is being run by - * the regionserver because its going down, don't update zk. - * */ - private boolean publishStatusInZk; - - /** - * The version of znode to compare when RS transitions the znode from - * CLOSING state. 
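Concretely, these two fields were populated in one of two ways, sketched here with placeholder values: a master-orchestrated close publishes its progress in ZK and carries the expected version of the CLOSING znode from the request, while an RS-initiated close (server going down) skips ZK entirely.

    // Master-driven close: report transitions in ZK, guarded by the version taken
    // from the CloseRegionRequest.
    ZkCloseRegionDetails masterDriven = new ZkCloseRegionDetails(true, versionOfClosingNode);

    // RS-initiated close (server shutting down): no znode transitions at all.
    ZkCloseRegionDetails rsInitiated = new ZkCloseRegionDetails(false, -1 /* FAILED_VERSION */);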
- */ - private int expectedVersion = FAILED_VERSION; - - public ZkCloseRegionDetails() { - } - - public ZkCloseRegionDetails(boolean publishStatusInZk, int expectedVersion) { - this.publishStatusInZk = publishStatusInZk; - this.expectedVersion = expectedVersion; - } - - public boolean isPublishStatusInZk() { - return publishStatusInZk; - } - - public void setPublishStatusInZk(boolean publishStatusInZk) { - this.publishStatusInZk = publishStatusInZk; - } - - public int getExpectedVersion() { - return expectedVersion; - } - - public void setExpectedVersion(int expectedVersion) { - this.expectedVersion = expectedVersion; - } - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkCoordinatedStateManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkCoordinatedStateManager.java index 2f739be..3e89be7 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkCoordinatedStateManager.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkCoordinatedStateManager.java @@ -17,31 +17,20 @@ */ package org.apache.hadoop.hbase.coordination; -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.CoordinatedStateException; import org.apache.hadoop.hbase.HBaseInterfaceAudience; import org.apache.hadoop.hbase.Server; -import org.apache.hadoop.hbase.TableStateManager; -import org.apache.hadoop.hbase.zookeeper.ZKTableStateManager; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.KeeperException; /** * ZooKeeper-based implementation of {@link org.apache.hadoop.hbase.CoordinatedStateManager}. */ @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG) public class ZkCoordinatedStateManager extends BaseCoordinatedStateManager { - private static final Log LOG = LogFactory.getLog(ZkCoordinatedStateManager.class); protected Server server; protected ZooKeeperWatcher watcher; - protected SplitTransactionCoordination splitTransactionCoordination; - protected CloseRegionCoordination closeRegionCoordination; protected SplitLogWorkerCoordination splitLogWorkerCoordination; protected SplitLogManagerCoordination splitLogManagerCoordination; - protected OpenRegionCoordination openRegionCoordination; - protected RegionMergeCoordination regionMergeCoordination; @Override public void initialize(Server server) { @@ -49,10 +38,7 @@ public class ZkCoordinatedStateManager extends BaseCoordinatedStateManager { this.watcher = server.getZooKeeper(); splitLogWorkerCoordination = new ZkSplitLogWorkerCoordination(this, watcher); splitLogManagerCoordination = new ZKSplitLogManagerCoordination(this, watcher); - splitTransactionCoordination = new ZKSplitTransactionCoordination(this, watcher); - closeRegionCoordination = new ZkCloseRegionCoordination(this, watcher); - openRegionCoordination = new ZkOpenRegionCoordination(this, watcher); - regionMergeCoordination = new ZkRegionMergeCoordination(this, watcher); + } @Override @@ -61,16 +47,6 @@ public class ZkCoordinatedStateManager extends BaseCoordinatedStateManager { } @Override - public TableStateManager getTableStateManager() throws InterruptedException, - CoordinatedStateException { - try { - return new ZKTableStateManager(server.getZooKeeper()); - } catch (KeeperException e) { - throw new CoordinatedStateException(e); - } - } - - @Override public SplitLogWorkerCoordination getSplitLogWorkerCoordination() { return 
splitLogWorkerCoordination; } @@ -78,24 +54,4 @@ public class ZkCoordinatedStateManager extends BaseCoordinatedStateManager { public SplitLogManagerCoordination getSplitLogManagerCoordination() { return splitLogManagerCoordination; } - - @Override - public SplitTransactionCoordination getSplitTransactionCoordination() { - return splitTransactionCoordination; - } - - @Override - public CloseRegionCoordination getCloseRegionCoordination() { - return closeRegionCoordination; - } - - @Override - public OpenRegionCoordination getOpenRegionCoordination() { - return openRegionCoordination; - } - - @Override - public RegionMergeCoordination getRegionMergeCoordination() { - return regionMergeCoordination; - } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkOpenRegionCoordination.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkOpenRegionCoordination.java deleted file mode 100644 index 812bbe2..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkOpenRegionCoordination.java +++ /dev/null @@ -1,414 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.coordination; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.CoordinatedStateManager; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.master.AssignmentManager; -import org.apache.hadoop.hbase.master.RegionState; -import org.apache.hadoop.hbase.protobuf.generated.AdminProtos; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; -import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.regionserver.RegionServerServices; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.KeeperException; - -import java.io.IOException; - -/** - * ZK-based implementation of {@link OpenRegionCoordination}. 
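For orientation, the region-server side of the removed open-region handshake used the hooks of this class roughly in the order sketched below. regionInfo, region, rsServices and ord are placeholders, the failure transitions (tryTransitionFromOfflineToFailedOpen and tryTransitionFromOpeningToFailedOpen) are omitted, and the context string is illustrative.

    // Rough lifecycle of the removed ZK open handshake, driven by the region server.
    if (openRegionCoordination.transitionFromOfflineToOpening(regionInfo, ord)) {
      // ... long-running region open; periodically refresh OPENING so the master
      // does not time out the region-in-transition:
      openRegionCoordination.tickleOpening(ord, regionInfo, rsServices, "open_region_progress");
      // ... once the region is fully open:
      openRegionCoordination.transitionToOpened(region, ord);
    }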
- */ -@InterfaceAudience.Private -public class ZkOpenRegionCoordination implements OpenRegionCoordination { - private static final Log LOG = LogFactory.getLog(ZkOpenRegionCoordination.class); - - private CoordinatedStateManager coordination; - private final ZooKeeperWatcher watcher; - - public ZkOpenRegionCoordination(CoordinatedStateManager coordination, - ZooKeeperWatcher watcher) { - this.coordination = coordination; - this.watcher = watcher; - } - - //------------------------------- - // Region Server-side operations - //------------------------------- - - /** - * @param r Region we're working on. - * @return whether znode is successfully transitioned to OPENED state. - * @throws java.io.IOException - */ - @Override - public boolean transitionToOpened(final HRegion r, OpenRegionDetails ord) throws IOException { - ZkOpenRegionDetails zkOrd = (ZkOpenRegionDetails) ord; - - boolean result = false; - HRegionInfo hri = r.getRegionInfo(); - final String name = hri.getRegionNameAsString(); - // Finally, Transition ZK node to OPENED - try { - if (ZKAssign.transitionNodeOpened(watcher, hri, - zkOrd.getServerName(), zkOrd.getVersion()) == -1) { - String warnMsg = "Completed the OPEN of region " + name + - " but when transitioning from " + " OPENING to OPENED "; - try { - String node = ZKAssign.getNodeName(watcher, hri.getEncodedName()); - if (ZKUtil.checkExists(watcher, node) < 0) { - // if the znode - coordination.getServer().abort(warnMsg + "the znode disappeared", null); - } else { - LOG.warn(warnMsg + "got a version mismatch, someone else clashed; " + - "so now unassigning -- closing region on server: " + zkOrd.getServerName()); - } - } catch (KeeperException ke) { - coordination.getServer().abort(warnMsg, ke); - } - } else { - LOG.debug("Transitioned " + r.getRegionInfo().getEncodedName() + - " to OPENED in zk on " + zkOrd.getServerName()); - result = true; - } - } catch (KeeperException e) { - LOG.error("Failed transitioning node " + name + - " from OPENING to OPENED -- closing region", e); - } - return result; - } - - /** - * Transition ZK node from OFFLINE to OPENING. - * @param regionInfo region info instance - * @param ord - instance of open region details, for ZK implementation - * will include version Of OfflineNode that needs to be compared - * before changing the node's state from OFFLINE - * @return True if successful transition. - */ - @Override - public boolean transitionFromOfflineToOpening(HRegionInfo regionInfo, - OpenRegionDetails ord) { - ZkOpenRegionDetails zkOrd = (ZkOpenRegionDetails) ord; - - // encoded name is used as znode encoded name in ZK - final String encodedName = regionInfo.getEncodedName(); - - // TODO: should also handle transition from CLOSED? - try { - // Initialize the znode version. - zkOrd.setVersion(ZKAssign.transitionNode(watcher, regionInfo, - zkOrd.getServerName(), EventType.M_ZK_REGION_OFFLINE, - EventType.RS_ZK_REGION_OPENING, zkOrd.getVersionOfOfflineNode())); - } catch (KeeperException e) { - LOG.error("Error transition from OFFLINE to OPENING for region=" + - encodedName, e); - zkOrd.setVersion(-1); - return false; - } - boolean b = isGoodVersion(zkOrd); - if (!b) { - LOG.warn("Failed transition from OFFLINE to OPENING for region=" + - encodedName); - } - return b; - } - - /** - * Update our OPENING state in zookeeper. - * Do this so master doesn't timeout this region-in-transition. - * We may lose the znode ownership during the open. Currently its - * too hard interrupting ongoing region open. 
Just let it complete - * and check we still have the znode after region open. - * - * @param context Some context to add to logs if failure - * @return True if successful transition. - */ - @Override - public boolean tickleOpening(OpenRegionDetails ord, HRegionInfo regionInfo, - RegionServerServices rsServices, final String context) { - ZkOpenRegionDetails zkOrd = (ZkOpenRegionDetails) ord; - if (!isRegionStillOpening(regionInfo, rsServices)) { - LOG.warn("Open region aborted since it isn't opening any more"); - return false; - } - // If previous checks failed... do not try again. - if (!isGoodVersion(zkOrd)) return false; - String encodedName = regionInfo.getEncodedName(); - try { - zkOrd.setVersion(ZKAssign.confirmNodeOpening(watcher, - regionInfo, zkOrd.getServerName(), zkOrd.getVersion())); - } catch (KeeperException e) { - coordination.getServer().abort("Exception refreshing OPENING; region=" + encodedName + - ", context=" + context, e); - zkOrd.setVersion(-1); - return false; - } - boolean b = isGoodVersion(zkOrd); - if (!b) { - LOG.warn("Failed refreshing OPENING; region=" + encodedName + - ", context=" + context); - } - return b; - } - - /** - * Try to transition to open. - * - * This is not guaranteed to succeed, we just do our best. - * - * @param rsServices - * @param hri Region we're working on. - * @param ord Details about region open task - * @return whether znode is successfully transitioned to FAILED_OPEN state. - */ - @Override - public boolean tryTransitionFromOfflineToFailedOpen(RegionServerServices rsServices, - final HRegionInfo hri, - OpenRegionDetails ord) { - ZkOpenRegionDetails zkOrd = (ZkOpenRegionDetails) ord; - boolean result = false; - final String name = hri.getRegionNameAsString(); - try { - LOG.info("Opening of region " + hri + " failed, transitioning" + - " from OFFLINE to FAILED_OPEN in ZK, expecting version " + - zkOrd.getVersionOfOfflineNode()); - if (ZKAssign.transitionNode( - rsServices.getZooKeeper(), hri, - rsServices.getServerName(), - EventType.M_ZK_REGION_OFFLINE, - EventType.RS_ZK_REGION_FAILED_OPEN, - zkOrd.getVersionOfOfflineNode()) == -1) { - LOG.warn("Unable to mark region " + hri + " as FAILED_OPEN. " + - "It's likely that the master already timed out this open " + - "attempt, and thus another RS already has the region."); - } else { - result = true; - } - } catch (KeeperException e) { - LOG.error("Failed transitioning node " + name + " from OFFLINE to FAILED_OPEN", e); - } - return result; - } - - private boolean isGoodVersion(ZkOpenRegionDetails zkOrd) { - return zkOrd.getVersion() != -1; - } - - /** - * This is not guaranteed to succeed, we just do our best. - * @param hri Region we're working on. - * @return whether znode is successfully transitioned to FAILED_OPEN state. - */ - @Override - public boolean tryTransitionFromOpeningToFailedOpen(final HRegionInfo hri, - OpenRegionDetails ord) { - ZkOpenRegionDetails zkOrd = (ZkOpenRegionDetails) ord; - boolean result = false; - final String name = hri.getRegionNameAsString(); - try { - LOG.info("Opening of region " + hri + " failed, transitioning" + - " from OPENING to FAILED_OPEN in ZK, expecting version " + zkOrd.getVersion()); - if (ZKAssign.transitionNode( - watcher, hri, - zkOrd.getServerName(), - EventType.RS_ZK_REGION_OPENING, - EventType.RS_ZK_REGION_FAILED_OPEN, - zkOrd.getVersion()) == -1) { - LOG.warn("Unable to mark region " + hri + " as FAILED_OPEN. 
" + - "It's likely that the master already timed out this open " + - "attempt, and thus another RS already has the region."); - } else { - result = true; - } - } catch (KeeperException e) { - LOG.error("Failed transitioning node " + name + - " from OPENING to FAILED_OPEN", e); - } - return result; - } - - /** - * Parse ZK-related fields from request. - */ - @Override - public OpenRegionCoordination.OpenRegionDetails parseFromProtoRequest( - AdminProtos.OpenRegionRequest.RegionOpenInfo regionOpenInfo) { - ZkOpenRegionCoordination.ZkOpenRegionDetails zkCrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - - int versionOfOfflineNode = -1; - if (regionOpenInfo.hasVersionOfOfflineNode()) { - versionOfOfflineNode = regionOpenInfo.getVersionOfOfflineNode(); - } - zkCrd.setVersionOfOfflineNode(versionOfOfflineNode); - zkCrd.setServerName(coordination.getServer().getServerName()); - - return zkCrd; - } - - /** - * No ZK tracking will be performed for that case. - * This method should be used when we want to construct CloseRegionDetails, - * but don't want any coordination on that (when it's initiated by regionserver), - * so no znode state transitions will be performed. - */ - @Override - public OpenRegionCoordination.OpenRegionDetails getDetailsForNonCoordinatedOpening() { - ZkOpenRegionCoordination.ZkOpenRegionDetails zkCrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkCrd.setVersionOfOfflineNode(-1); - zkCrd.setServerName(coordination.getServer().getServerName()); - - return zkCrd; - } - - //-------------------------- - // HMaster-side operations - //-------------------------- - @Override - public boolean commitOpenOnMasterSide(AssignmentManager assignmentManager, - HRegionInfo regionInfo, - OpenRegionDetails ord) { - boolean committedSuccessfully = true; - - // Code to defend against case where we get SPLIT before region open - // processing completes; temporary till we make SPLITs go via zk -- 0.92. - RegionState regionState = assignmentManager.getRegionStates() - .getRegionTransitionState(regionInfo.getEncodedName()); - boolean openedNodeDeleted = false; - if (regionState != null && regionState.isOpened()) { - openedNodeDeleted = deleteOpenedNode(regionInfo, ord); - if (!openedNodeDeleted) { - LOG.error("Znode of region " + regionInfo.getShortNameToLog() + " could not be deleted."); - } - } else { - LOG.warn("Skipping the onlining of " + regionInfo.getShortNameToLog() + - " because regions is NOT in RIT -- presuming this is because it SPLIT"); - } - if (!openedNodeDeleted) { - if (assignmentManager.getTableStateManager().isTableState(regionInfo.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { - debugLog(regionInfo, "Opened region " - + regionInfo.getShortNameToLog() + " but " - + "this table is disabled, triggering close of region"); - committedSuccessfully = false; - } - } - - return committedSuccessfully; - } - - private boolean deleteOpenedNode(HRegionInfo regionInfo, OpenRegionDetails ord) { - ZkOpenRegionDetails zkOrd = (ZkOpenRegionDetails) ord; - int expectedVersion = zkOrd.getVersion(); - - debugLog(regionInfo, "Handling OPENED of " + - regionInfo.getShortNameToLog() + " from " + zkOrd.getServerName().toString() + - "; deleting unassigned node"); - try { - // delete the opened znode only if the version matches. 
- return ZKAssign.deleteNode(this.coordination.getServer().getZooKeeper(), - regionInfo.getEncodedName(), EventType.RS_ZK_REGION_OPENED, expectedVersion); - } catch(KeeperException.NoNodeException e){ - // Getting no node exception here means that already the region has been opened. - LOG.warn("The znode of the region " + regionInfo.getShortNameToLog() + - " would have already been deleted"); - return false; - } catch (KeeperException e) { - this.coordination.getServer().abort("Error deleting OPENED node in ZK (" + - regionInfo.getRegionNameAsString() + ")", e); - } - return false; - } - - private void debugLog(HRegionInfo region, String string) { - if (region.isMetaTable()) { - LOG.info(string); - } else { - LOG.debug(string); - } - } - - // Additional classes and helper methods - - /** - * ZK-based implementation. Has details about whether the state transition should be - * reflected in ZK, as well as expected version of znode. - */ - public static class ZkOpenRegionDetails implements OpenRegionCoordination.OpenRegionDetails { - - // We get version of our znode at start of open process and monitor it across - // the total open. We'll fail the open if someone hijacks our znode; we can - // tell this has happened if version is not as expected. - private volatile int version = -1; - - //version of the offline node that was set by the master - private volatile int versionOfOfflineNode = -1; - - /** - * Server name the handler is running on. - */ - private ServerName serverName; - - public ZkOpenRegionDetails() { - } - - public ZkOpenRegionDetails(int versionOfOfflineNode) { - this.versionOfOfflineNode = versionOfOfflineNode; - } - - public int getVersionOfOfflineNode() { - return versionOfOfflineNode; - } - - public void setVersionOfOfflineNode(int versionOfOfflineNode) { - this.versionOfOfflineNode = versionOfOfflineNode; - } - - public int getVersion() { - return version; - } - - public void setVersion(int version) { - this.version = version; - } - - @Override - public ServerName getServerName() { - return serverName; - } - - @Override - public void setServerName(ServerName serverName) { - this.serverName = serverName; - } - } - - private boolean isRegionStillOpening(HRegionInfo regionInfo, RegionServerServices rsServices) { - byte[] encodedName = regionInfo.getEncodedNameAsBytes(); - Boolean action = rsServices.getRegionsInTransitionInRS().get(encodedName); - return Boolean.TRUE.equals(action); // true means opening for RIT - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkRegionMergeCoordination.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkRegionMergeCoordination.java deleted file mode 100644 index 1d26cba..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkRegionMergeCoordination.java +++ /dev/null @@ -1,325 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.hbase.coordination; - -import static org.apache.hadoop.hbase.executor.EventType.RS_ZK_REGION_MERGED; -import static org.apache.hadoop.hbase.executor.EventType.RS_ZK_REGION_MERGING; -import static org.apache.hadoop.hbase.executor.EventType.RS_ZK_REQUEST_REGION_MERGE; - -import java.io.IOException; -import java.util.List; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.CoordinatedStateManager; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.RegionTransition; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.regionserver.RegionServerServices; -import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.KeeperException; -import org.apache.zookeeper.data.Stat; - -public class ZkRegionMergeCoordination implements RegionMergeCoordination { - - private CoordinatedStateManager manager; - private final ZooKeeperWatcher watcher; - - private static final Log LOG = LogFactory.getLog(ZkRegionMergeCoordination.class); - - public ZkRegionMergeCoordination(CoordinatedStateManager manager, - ZooKeeperWatcher watcher) { - this.manager = manager; - this.watcher = watcher; - } - - /** - * ZK-based implementation. Has details about whether the state transition should be reflected in - * ZK, as well as expected version of znode. - */ - public static class ZkRegionMergeDetails implements RegionMergeCoordination.RegionMergeDetails { - private int znodeVersion; - - public ZkRegionMergeDetails() { - } - - public int getZnodeVersion() { - return znodeVersion; - } - - public void setZnodeVersion(int znodeVersion) { - this.znodeVersion = znodeVersion; - } - } - - @Override - public RegionMergeDetails getDefaultDetails() { - ZkRegionMergeDetails zstd = new ZkRegionMergeDetails(); - zstd.setZnodeVersion(-1); - return zstd; - } - - /** - * Wait for the merging node to be transitioned from pending_merge - * to merging by master. That's how we are sure master has processed - * the event and is good with us to move on. If we don't get any update, - * we periodically transition the node so that master gets the callback. - * If the node is removed or is not in pending_merge state any more, - * we abort the merge. 
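A detail that makes the wait loop below easier to follow: the merge znode's payload is the three regions written back to back, so the loop parses the payload and checks entries 1 and 2 against the regions actually being merged before trusting the transition. A minimal sketch of that round trip, with placeholder region variables and the HRegionInfo helpers already imported by this file:

    // Payload layout for the merge znode: [merged, a, b] as one delimited byte array.
    byte[] payload = HRegionInfo.toDelimitedByteArray(mergedRegionInfo, hriA, hriB);
    List<HRegionInfo> regions = HRegionInfo.parseDelimitedFrom(payload, 0, payload.length);
    assert regions.size() == 3;  // index 0 = merged region, indexes 1 and 2 = the parents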
- * @throws IOException - */ - - @Override - public void waitForRegionMergeTransaction(RegionServerServices services, - HRegionInfo mergedRegionInfo, HRegion region_a, HRegion region_b, RegionMergeDetails details) - throws IOException { - try { - int spins = 0; - Stat stat = new Stat(); - ServerName expectedServer = manager.getServer().getServerName(); - String node = mergedRegionInfo.getEncodedName(); - ZkRegionMergeDetails zdetails = (ZkRegionMergeDetails) details; - while (!(manager.getServer().isStopped() || services.isStopping())) { - if (spins % 5 == 0) { - LOG.debug("Still waiting for master to process " + "the pending_merge for " + node); - ZkRegionMergeDetails zrmd = (ZkRegionMergeDetails) getDefaultDetails(); - transitionMergingNode(mergedRegionInfo, region_a.getRegionInfo(), - region_b.getRegionInfo(), expectedServer, zrmd, RS_ZK_REQUEST_REGION_MERGE, - RS_ZK_REQUEST_REGION_MERGE); - } - Thread.sleep(100); - spins++; - byte[] data = ZKAssign.getDataNoWatch(watcher, node, stat); - if (data == null) { - throw new IOException("Data is null, merging node " + node + " no longer exists"); - } - RegionTransition rt = RegionTransition.parseFrom(data); - EventType et = rt.getEventType(); - if (et == RS_ZK_REGION_MERGING) { - ServerName serverName = rt.getServerName(); - if (!serverName.equals(expectedServer)) { - throw new IOException("Merging node " + node + " is for " + serverName + ", not us " - + expectedServer); - } - byte[] payloadOfMerging = rt.getPayload(); - List mergingRegions = - HRegionInfo.parseDelimitedFrom(payloadOfMerging, 0, payloadOfMerging.length); - assert mergingRegions.size() == 3; - HRegionInfo a = mergingRegions.get(1); - HRegionInfo b = mergingRegions.get(2); - HRegionInfo hri_a = region_a.getRegionInfo(); - HRegionInfo hri_b = region_b.getRegionInfo(); - if (!(hri_a.equals(a) && hri_b.equals(b))) { - throw new IOException("Merging node " + node + " is for " + a + ", " + b - + ", not expected regions: " + hri_a + ", " + hri_b); - } - // Master has processed it. - zdetails.setZnodeVersion(stat.getVersion()); - return; - } - if (et != RS_ZK_REQUEST_REGION_MERGE) { - throw new IOException("Merging node " + node + " moved out of merging to " + et); - } - } - // Server is stopping/stopped - throw new IOException("Server is " + (services.isStopping() ? "stopping" : "stopped")); - } catch (Exception e) { - if (e instanceof InterruptedException) { - Thread.currentThread().interrupt(); - } - throw new IOException("Failed getting MERGING znode on " - + mergedRegionInfo.getRegionNameAsString(), e); - } - } - - /** - * Creates a new ephemeral node in the PENDING_MERGE state for the merged region. - * Create it ephemeral in case regionserver dies mid-merge. - * - *

    - * Does not transition nodes from other states. If a node already exists for - * this region, a {@link org.apache.zookeeper.KeeperException.NodeExistsException} will be thrown. - * - * @param region region to be created as offline - * @param serverName server event originates from - * @throws IOException - */ - @Override - public void startRegionMergeTransaction(final HRegionInfo region, final ServerName serverName, - final HRegionInfo a, final HRegionInfo b) throws IOException { - LOG.debug(watcher.prefix("Creating ephemeral node for " + region.getEncodedName() - + " in PENDING_MERGE state")); - byte[] payload = HRegionInfo.toDelimitedByteArray(region, a, b); - RegionTransition rt = - RegionTransition.createRegionTransition(RS_ZK_REQUEST_REGION_MERGE, region.getRegionName(), - serverName, payload); - String node = ZKAssign.getNodeName(watcher, region.getEncodedName()); - try { - if (!ZKUtil.createEphemeralNodeAndWatch(watcher, node, rt.toByteArray())) { - throw new IOException("Failed create of ephemeral " + node); - } - } catch (KeeperException e) { - throw new IOException(e); - } - } - - /* - * (non-Javadoc) - * @see - * org.apache.hadoop.hbase.regionserver.coordination.RegionMergeCoordination#clean(org.apache.hadoop - * .hbase.Server, org.apache.hadoop.hbase.HRegionInfo) - */ - @Override - public void clean(final HRegionInfo hri) { - try { - // Only delete if its in expected state; could have been hijacked. - if (!ZKAssign.deleteNode(watcher, hri.getEncodedName(), RS_ZK_REQUEST_REGION_MERGE, manager - .getServer().getServerName())) { - ZKAssign.deleteNode(watcher, hri.getEncodedName(), RS_ZK_REGION_MERGING, manager - .getServer().getServerName()); - } - } catch (KeeperException.NoNodeException e) { - LOG.info("Failed cleanup zk node of " + hri.getRegionNameAsString(), e); - } catch (KeeperException e) { - manager.getServer().abort("Failed cleanup zk node of " + hri.getRegionNameAsString(), e); - } - } - - /* - * ZooKeeper implementation of finishRegionMergeTransaction - */ - @Override - public void completeRegionMergeTransaction(final RegionServerServices services, - HRegionInfo mergedRegionInfo, HRegion region_a, HRegion region_b, RegionMergeDetails rmd, - HRegion mergedRegion) throws IOException { - ZkRegionMergeDetails zrmd = (ZkRegionMergeDetails) rmd; - if (manager.getServer() == null - || manager.getServer().getCoordinatedStateManager() == null) { - return; - } - // Tell master about merge by updating zk. If we fail, abort. - try { - transitionMergingNode(mergedRegionInfo, region_a.getRegionInfo(), region_b.getRegionInfo(), - manager.getServer().getServerName(), rmd, RS_ZK_REGION_MERGING, RS_ZK_REGION_MERGED); - - long startTime = EnvironmentEdgeManager.currentTime(); - int spins = 0; - // Now wait for the master to process the merge. We know it's done - // when the znode is deleted. The reason we keep tickling the znode is - // that it's possible for the master to miss an event. 
- do { - if (spins % 10 == 0) { - LOG.debug("Still waiting on the master to process the merge for " - + mergedRegionInfo.getEncodedName() + ", waited " - + (EnvironmentEdgeManager.currentTime() - startTime) + "ms"); - } - Thread.sleep(100); - // When this returns -1 it means the znode doesn't exist - transitionMergingNode(mergedRegionInfo, region_a.getRegionInfo(), region_b.getRegionInfo(), - manager.getServer().getServerName(), rmd, RS_ZK_REGION_MERGED, RS_ZK_REGION_MERGED); - spins++; - } while (zrmd.getZnodeVersion() != -1 && !manager.getServer().isStopped() - && !services.isStopping()); - } catch (Exception e) { - if (e instanceof InterruptedException) { - Thread.currentThread().interrupt(); - } - throw new IOException("Failed telling master about merge " - + mergedRegionInfo.getEncodedName(), e); - } - // Leaving here, the mergedir with its dross will be in place but since the - // merge was successful, just leave it; it'll be cleaned when region_a is - // cleaned up by CatalogJanitor on master - } - - /* - * Zookeeper implementation of region merge confirmation - */ - @Override - public void confirmRegionMergeTransaction(HRegionInfo merged, HRegionInfo a, HRegionInfo b, - ServerName serverName, RegionMergeDetails rmd) throws IOException { - transitionMergingNode(merged, a, b, serverName, rmd, RS_ZK_REGION_MERGING, - RS_ZK_REGION_MERGING); - } - - /* - * Zookeeper implementation of region merge processing - */ - @Override - public void processRegionMergeRequest(HRegionInfo p, HRegionInfo hri_a, HRegionInfo hri_b, - ServerName sn, RegionMergeDetails rmd) throws IOException { - transitionMergingNode(p, hri_a, hri_b, sn, rmd, EventType.RS_ZK_REQUEST_REGION_MERGE, - EventType.RS_ZK_REGION_MERGING); - } - - /** - * Transitions an existing ephemeral node for the specified region which is - * currently in the begin state to be in the end state. Master cleans up the - * final MERGE znode when it reads it (or if we crash, zk will clean it up). - * - *

- * Does not transition nodes from other states. If for some reason the node
- * could not be transitioned, the method returns -1. If the transition is
- * successful, the version of the node after transition is updated in details.
- *
- * <p>
- * This method can fail and return false for three different reasons:
- * <ul>
- * <li>Node for this region does not exist</li>
- * <li>Node for this region is not in the begin state</li>
- * <li>After verifying the begin state, update fails because of wrong version
- * (this should never actually happen since an RS only does this transition
- * following a transition to the begin state. If two RS are conflicting, one would
- * fail the original transition to the begin state and not this transition)</li>
- * </ul>
- *
- * <p>
- * Does not set any watches.
- *
- * <p>
    - * This method should only be used by a RegionServer when merging two regions. - * - * @param merged region to be transitioned to opened - * @param a merging region A - * @param b merging region B - * @param serverName server event originates from - * @param rmd region merge details - * @param beginState the expected current state the node should be - * @param endState the state to be transition to - * @throws IOException - */ - private void transitionMergingNode(HRegionInfo merged, HRegionInfo a, HRegionInfo b, - ServerName serverName, RegionMergeDetails rmd, final EventType beginState, - final EventType endState) throws IOException { - ZkRegionMergeDetails zrmd = (ZkRegionMergeDetails) rmd; - byte[] payload = HRegionInfo.toDelimitedByteArray(merged, a, b); - try { - zrmd.setZnodeVersion(ZKAssign.transitionNode(watcher, merged, serverName, beginState, - endState, zrmd.getZnodeVersion(), payload)); - } catch (KeeperException e) { - throw new IOException(e); - } - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkSplitLogWorkerCoordination.java hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkSplitLogWorkerCoordination.java index a0addb0..9ea6bd7 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkSplitLogWorkerCoordination.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZkSplitLogWorkerCoordination.java @@ -584,8 +584,7 @@ public class ZkSplitLogWorkerCoordination extends ZooKeeperListener implements */ /** * endTask() can fail and the only way to recover out of it is for the - * {@link org.apache.hadoop.hbase.master.SplitLogManager} to - * timeout the task node. + * {@link org.apache.hadoop.hbase.master.SplitLogManager} to timeout the task node. 
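That is also why the TaskFinisher hook described in the ZKSplitLogManagerCoordination hunk above must be restartable and idempotent: after a timeout the manager may run the finish step again for the same task. A purely hypothetical sketch of that property (the class, marker file and status enum are assumptions, not the actual TaskFinisher signature):

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    final class IdempotentFinishSketch {
      enum Status { DONE, ERR }

      // Finishing twice must be harmless: completion is recorded in a marker file so a
      // re-run after a manager-side timeout simply short-circuits.
      static Status finish(FileSystem fs, Path doneMarker) throws IOException {
        if (fs.exists(doneMarker)) {
          return Status.DONE;           // already finished earlier; retry is a no-op
        }
        // ... archive the processed WAL here ...
        fs.createNewFile(doneMarker);   // record completion for any later retry
        return Status.DONE;
      }
    }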
* @param slt * @param ctr */ diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterAndRegionObserver.java hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterAndRegionObserver.java index 98c0563..85abbf8 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterAndRegionObserver.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterAndRegionObserver.java @@ -19,21 +19,22 @@ package org.apache.hadoop.hbase.coprocessor; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; +import java.io.IOException; +import java.util.List; + +import org.apache.hadoop.hbase.CoprocessorEnvironment; import org.apache.hadoop.hbase.HBaseInterfaceAudience; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.CoprocessorEnvironment; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.master.RegionPlan; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; - -import java.io.IOException; -import java.util.List; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) @InterfaceStability.Evolving @@ -392,6 +393,16 @@ public abstract class BaseMasterAndRegionObserver extends BaseRegionObserver } @Override + public void preListSnapshot(final ObserverContext ctx, + final SnapshotDescription snapshot) throws IOException { + } + + @Override + public void postListSnapshot(final ObserverContext ctx, + final SnapshotDescription snapshot) throws IOException { + } + + @Override public void preCloneSnapshot(final ObserverContext ctx, final SnapshotDescription snapshot, final HTableDescriptor hTableDescriptor) throws IOException { @@ -427,17 +438,6 @@ public abstract class BaseMasterAndRegionObserver extends BaseRegionObserver @Override public void preGetTableDescriptors(ObserverContext ctx, - List tableNamesList, List descriptors) - throws IOException { - } - - @Override - public void postGetTableDescriptors(ObserverContext ctx, - List descriptors) throws IOException { - } - - @Override - public void preGetTableDescriptors(ObserverContext ctx, List tableNamesList, List descriptors, String regex) throws IOException { } @@ -467,4 +467,54 @@ public abstract class BaseMasterAndRegionObserver extends BaseRegionObserver public void postTableFlush(ObserverContext ctx, TableName tableName) throws IOException { } + + @Override + public void preSetUserQuota(final ObserverContext ctx, + final String userName, final Quotas quotas) throws IOException { + } + + @Override + public void postSetUserQuota(final ObserverContext ctx, + final String userName, final Quotas quotas) throws IOException { + } + + @Override + public void preSetUserQuota(final ObserverContext ctx, + final String userName, final TableName tableName, final Quotas quotas) throws IOException { + } + + @Override + public void postSetUserQuota(final ObserverContext ctx, + final String userName, final TableName tableName, final Quotas quotas) throws IOException { + } + + @Override + 
public void preSetUserQuota(final ObserverContext ctx, + final String userName, final String namespace, final Quotas quotas) throws IOException { + } + + @Override + public void postSetUserQuota(final ObserverContext ctx, + final String userName, final String namespace, final Quotas quotas) throws IOException { + } + + @Override + public void preSetTableQuota(final ObserverContext ctx, + final TableName tableName, final Quotas quotas) throws IOException { + } + + @Override + public void postSetTableQuota(final ObserverContext ctx, + final TableName tableName, final Quotas quotas) throws IOException { + } + + @Override + public void preSetNamespaceQuota(final ObserverContext ctx, + final String namespace, final Quotas quotas) throws IOException { + } + + @Override + public void postSetNamespaceQuota(final ObserverContext ctx, + final String namespace, final Quotas quotas) throws IOException { + } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterObserver.java hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterObserver.java index 4748a1b..de1645e 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterObserver.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterObserver.java @@ -19,21 +19,22 @@ package org.apache.hadoop.hbase.coprocessor; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; +import java.io.IOException; +import java.util.List; + +import org.apache.hadoop.hbase.CoprocessorEnvironment; import org.apache.hadoop.hbase.HBaseInterfaceAudience; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.CoprocessorEnvironment; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.master.RegionPlan; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; - -import java.io.IOException; -import java.util.List; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; @InterfaceAudience.LimitedPrivate({HBaseInterfaceAudience.COPROC, HBaseInterfaceAudience.CONFIG}) @InterfaceStability.Evolving @@ -385,6 +386,16 @@ public class BaseMasterObserver implements MasterObserver { } @Override + public void preListSnapshot(final ObserverContext ctx, + final SnapshotDescription snapshot) throws IOException { + } + + @Override + public void postListSnapshot(final ObserverContext ctx, + final SnapshotDescription snapshot) throws IOException { + } + + @Override public void preCloneSnapshot(final ObserverContext ctx, final SnapshotDescription snapshot, final HTableDescriptor hTableDescriptor) throws IOException { @@ -420,18 +431,6 @@ public class BaseMasterObserver implements MasterObserver { @Override public void preGetTableDescriptors(ObserverContext ctx, - List tableNamesList, List descriptors) - throws IOException { - } - - @Override - public void postGetTableDescriptors(ObserverContext ctx, - List descriptors) throws IOException { - } - - - @Override - public void preGetTableDescriptors(ObserverContext ctx, List tableNamesList, List descriptors, String regex) throws 
IOException { } @@ -462,4 +461,53 @@ public class BaseMasterObserver implements MasterObserver { TableName tableName) throws IOException { } + @Override + public void preSetUserQuota(final ObserverContext ctx, + final String userName, final Quotas quotas) throws IOException { + } + + @Override + public void postSetUserQuota(final ObserverContext ctx, + final String userName, final Quotas quotas) throws IOException { + } + + @Override + public void preSetUserQuota(final ObserverContext ctx, + final String userName, final TableName tableName, final Quotas quotas) throws IOException { + } + + @Override + public void postSetUserQuota(final ObserverContext ctx, + final String userName, final TableName tableName, final Quotas quotas) throws IOException { + } + + @Override + public void preSetUserQuota(final ObserverContext ctx, + final String userName, final String namespace, final Quotas quotas) throws IOException { + } + + @Override + public void postSetUserQuota(final ObserverContext ctx, + final String userName, final String namespace, final Quotas quotas) throws IOException { + } + + @Override + public void preSetTableQuota(final ObserverContext ctx, + final TableName tableName, final Quotas quotas) throws IOException { + } + + @Override + public void postSetTableQuota(final ObserverContext ctx, + final TableName tableName, final Quotas quotas) throws IOException { + } + + @Override + public void preSetNamespaceQuota(final ObserverContext ctx, + final String namespace, final Quotas quotas) throws IOException { + } + + @Override + public void postSetNamespaceQuota(final ObserverContext ctx, + final String namespace, final Quotas quotas) throws IOException { + } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java index 2d99754..ab9f709 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java @@ -22,18 +22,19 @@ package org.apache.hadoop.hbase.coprocessor; import java.io.IOException; import java.util.List; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Coprocessor; import org.apache.hadoop.hbase.HBaseInterfaceAudience; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.master.RegionPlan; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; /** * Defines coprocessor hooks for interacting with operations on the @@ -597,6 +598,26 @@ public interface MasterObserver extends Coprocessor { throws IOException; /** + * Called before listSnapshots request has been processed. + * It can't bypass the default action, e.g., ctx.bypass() won't have effect. 
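A minimal usage sketch for the new snapshot hook documented above, assuming a coprocessor that extends the BaseMasterObserver no-op base from this patch; the ObserverContext type parameter (MasterCoprocessorEnvironment), the class name, and the "secret-" prefix check are assumptions for illustration, since the diff omits generics and ships no example.

import java.io.IOException;

import org.apache.hadoop.hbase.coprocessor.BaseMasterObserver;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription;

/** Illustrative observer that vetoes listing of snapshots whose name starts with "secret-". */
public class SnapshotNameFilterObserver extends BaseMasterObserver {
  @Override
  public void preListSnapshot(ObserverContext<MasterCoprocessorEnvironment> ctx,
      SnapshotDescription snapshot) throws IOException {
    // Invoked while a listSnapshots request is processed; bypass() has no effect here
    // (see the hook javadoc above), so the only way to object is to throw.
    if (snapshot.getName().startsWith("secret-")) {
      throw new IOException("Snapshot " + snapshot.getName() + " may not be listed");
    }
  }
}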
+ * @param ctx the environment to interact with the framework and master + * @param snapshot the SnapshotDescriptor of the snapshot to list + * @throws IOException + */ + void preListSnapshot(final ObserverContext ctx, + final SnapshotDescription snapshot) throws IOException; + + /** + * Called after listSnapshots request has been processed. + * It can't bypass the default action, e.g., ctx.bypass() won't have effect. + * @param ctx the environment to interact with the framework and master + * @param snapshot the SnapshotDescriptor of the snapshot to list + * @throws IOException + */ + void postListSnapshot(final ObserverContext ctx, + final SnapshotDescription snapshot) throws IOException; + + /** * Called before a snapshot is cloned. * Called as part of restoreSnapshot RPC call. * It can't bypass the default action, e.g., ctx.bypass() won't have effect. @@ -672,29 +693,6 @@ public interface MasterObserver extends Coprocessor { * @param ctx the environment to interact with the framework and master * @param tableNamesList the list of table names, or null if querying for all * @param descriptors an empty list, can be filled with what to return if bypassing - * @throws IOException - * @deprecated Use preGetTableDescriptors with regex instead. - */ - @Deprecated - void preGetTableDescriptors(ObserverContext ctx, - List tableNamesList, List descriptors) throws IOException; - - /** - * Called after a getTableDescriptors request has been processed. - * @param ctx the environment to interact with the framework and master - * @param descriptors the list of descriptors about to be returned - * @throws IOException - * @deprecated Use postGetTableDescriptors with regex instead. - */ - @Deprecated - void postGetTableDescriptors(ObserverContext ctx, - List descriptors) throws IOException; - - /** - * Called before a getTableDescriptors request has been processed. - * @param ctx the environment to interact with the framework and master - * @param tableNamesList the list of table names, or null if querying for all - * @param descriptors an empty list, can be filled with what to return if bypassing * @param regex regular expression used for filtering the table names * @throws IOException */ @@ -734,6 +732,8 @@ public interface MasterObserver extends Coprocessor { void postGetTableNames(ObserverContext ctx, List descriptors, String regex) throws IOException; + + /** * Called before a new namespace is created by * {@link org.apache.hadoop.hbase.master.HMaster}. @@ -842,4 +842,108 @@ public interface MasterObserver extends Coprocessor { */ void postTableFlush(final ObserverContext ctx, final TableName tableName) throws IOException; + + /** + * Called before the quota for the user is stored. + * @param ctx the environment to interact with the framework and master + * @param userName the name of user + * @param quotas the quota settings + * @throws IOException + */ + void preSetUserQuota(final ObserverContext ctx, + final String userName, final Quotas quotas) throws IOException; + + /** + * Called after the quota for the user is stored. + * @param ctx the environment to interact with the framework and master + * @param userName the name of user + * @param quotas the quota settings + * @throws IOException + */ + void postSetUserQuota(final ObserverContext ctx, + final String userName, final Quotas quotas) throws IOException; + + /** + * Called before the quota for the user on the specified table is stored. 
+ * @param ctx the environment to interact with the framework and master + * @param userName the name of user + * @param tableName the name of the table + * @param quotas the quota settings + * @throws IOException + */ + void preSetUserQuota(final ObserverContext ctx, + final String userName, final TableName tableName, final Quotas quotas) throws IOException; + + /** + * Called after the quota for the user on the specified table is stored. + * @param ctx the environment to interact with the framework and master + * @param userName the name of user + * @param tableName the name of the table + * @param quotas the quota settings + * @throws IOException + */ + void postSetUserQuota(final ObserverContext ctx, + final String userName, final TableName tableName, final Quotas quotas) throws IOException; + + /** + * Called before the quota for the user on the specified namespace is stored. + * @param ctx the environment to interact with the framework and master + * @param userName the name of user + * @param namespace the name of the namespace + * @param quotas the quota settings + * @throws IOException + */ + void preSetUserQuota(final ObserverContext ctx, + final String userName, final String namespace, final Quotas quotas) throws IOException; + + /** + * Called after the quota for the user on the specified namespace is stored. + * @param ctx the environment to interact with the framework and master + * @param userName the name of user + * @param namespace the name of the namespace + * @param quotas the quota settings + * @throws IOException + */ + void postSetUserQuota(final ObserverContext ctx, + final String userName, final String namespace, final Quotas quotas) throws IOException; + + /** + * Called before the quota for the table is stored. + * @param ctx the environment to interact with the framework and master + * @param tableName the name of the table + * @param quotas the quota settings + * @throws IOException + */ + void preSetTableQuota(final ObserverContext ctx, + final TableName tableName, final Quotas quotas) throws IOException; + + /** + * Called after the quota for the table is stored. + * @param ctx the environment to interact with the framework and master + * @param tableName the name of the table + * @param quotas the quota settings + * @throws IOException + */ + void postSetTableQuota(final ObserverContext ctx, + final TableName tableName, final Quotas quotas) throws IOException; + + /** + * Called before the quota for the namespace is stored. + * @param ctx the environment to interact with the framework and master + * @param namespace the name of the namespace + * @param quotas the quota settings + * @throws IOException + */ + void preSetNamespaceQuota(final ObserverContext ctx, + final String namespace, final Quotas quotas) throws IOException; + + /** + * Called after the quota for the namespace is stored. 
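The quota hooks documented here follow the same shape; a rough sketch of an observer that audits them, again assuming BaseMasterObserver as the base and restoring the MasterCoprocessorEnvironment type parameter elided by the diff. The system-table check and log message are illustrative only.

import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.coprocessor.BaseMasterObserver;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas;

/** Illustrative observer that audits quota changes and blocks table quotas on system tables. */
public class QuotaAuditObserver extends BaseMasterObserver {
  private static final Log LOG = LogFactory.getLog(QuotaAuditObserver.class);

  @Override
  public void preSetTableQuota(ObserverContext<MasterCoprocessorEnvironment> ctx,
      TableName tableName, Quotas quotas) throws IOException {
    // Runs before the table quota is stored.
    if (tableName.isSystemTable()) {
      throw new IOException("Refusing to set a quota on system table " + tableName);
    }
  }

  @Override
  public void postSetUserQuota(ObserverContext<MasterCoprocessorEnvironment> ctx,
      String userName, Quotas quotas) throws IOException {
    // Runs after the user-level quota has been stored.
    LOG.info("Quota updated for user " + userName);
  }
}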
+ * @param ctx the environment to interact with the framework and master + * @param namespace the name of the namespace + * @param quotas the quota settings + * @throws IOException + */ + void postSetNamespaceQuota(final ObserverContext ctx, + final String namespace, final Quotas quotas) throws IOException; } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java index a8b20ea..9fede52 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java @@ -119,6 +119,7 @@ public interface RegionObserver extends Coprocessor { * @throws IOException if an error occurred on the coprocessor * @deprecated use {@link #preFlush(ObserverContext, Store, InternalScanner)} instead */ + @Deprecated void preFlush(final ObserverContext c) throws IOException; /** @@ -139,6 +140,7 @@ public interface RegionObserver extends Coprocessor { * @throws IOException if an error occurred on the coprocessor * @deprecated use {@link #preFlush(ObserverContext, Store, InternalScanner)} instead. */ + @Deprecated void postFlush(final ObserverContext c) throws IOException; /** @@ -210,9 +212,9 @@ public interface RegionObserver extends Coprocessor { * options: *

      *
    • Wrap the provided {@link InternalScanner} with a custom implementation that is returned - * from this method. The custom scanner can then inspect - * {@link org.apache.hadoop.hbase.KeyValue}s from the wrapped - * scanner, applying its own policy to what gets written.
    • + * from this method. The custom scanner can then inspect + * {@link org.apache.hadoop.hbase.KeyValue}s from the wrapped scanner, applying its own + * policy to what gets written. *
    • Call {@link org.apache.hadoop.hbase.coprocessor.ObserverContext#bypass()} and provide a * custom implementation for writing of new {@link StoreFile}s. Note: any implementations * bypassing core compaction using this approach must write out new store files themselves or the @@ -237,9 +239,9 @@ public interface RegionObserver extends Coprocessor { * options: *
        *
      • Wrap the provided {@link InternalScanner} with a custom implementation that is returned - * from this method. The custom scanner can then inspect - * {@link org.apache.hadoop.hbase.KeyValue}s from the wrapped - * scanner, applying its own policy to what gets written.
      • + * from this method. The custom scanner can then inspect + * {@link org.apache.hadoop.hbase.KeyValue}s from the wrapped scanner, applying its own + * policy to what gets written. *
      • Call {@link org.apache.hadoop.hbase.coprocessor.ObserverContext#bypass()} and provide a * custom implementation for writing of new {@link StoreFile}s. Note: any implementations * bypassing core compaction using this approach must write out new store files themselves or the @@ -269,8 +271,8 @@ public interface RegionObserver extends Coprocessor { * effect in this hook. * @param c the environment provided by the region server * @param store the store being compacted - * @param scanners the list {@link org.apache.hadoop.hbase.regionserver.StoreFileScanner}s - * to be read from + * @param scanners the list {@link org.apache.hadoop.hbase.regionserver.StoreFileScanner}s + * to be read from * @param scanType the {@link ScanType} indicating whether this is a major or minor compaction * @param earliestPutTs timestamp of the earliest put that was found in any of the involved store * files @@ -294,8 +296,8 @@ public interface RegionObserver extends Coprocessor { * effect in this hook. * @param c the environment provided by the region server * @param store the store being compacted - * @param scanners the list {@link org.apache.hadoop.hbase.regionserver.StoreFileScanner}s - * to be read from + * @param scanners the list {@link org.apache.hadoop.hbase.regionserver.StoreFileScanner}s + * to be read from * @param scanType the {@link ScanType} indicating whether this is a major or minor compaction * @param earliestPutTs timestamp of the earliest put that was found in any of the involved store * files @@ -344,6 +346,7 @@ public interface RegionObserver extends Coprocessor { * @deprecated Use preSplit( * final ObserverContext c, byte[] splitRow) */ + @Deprecated void preSplit(final ObserverContext c) throws IOException; /** @@ -364,6 +367,7 @@ public interface RegionObserver extends Coprocessor { * @throws IOException if an error occurred on the coprocessor * @deprecated Use postCompleteSplit() instead */ + @Deprecated void postSplit(final ObserverContext c, final HRegion l, final HRegion r) throws IOException; diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/executor/EventHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/executor/EventHandler.java index bf1f251..cbc0e56 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/executor/EventHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/executor/EventHandler.java @@ -48,11 +48,6 @@ import org.htrace.TraceScope; * hbase executor, see ExecutorService, has a switch for passing * event type to executor. *
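To make the first option in the compaction-hook list above concrete, here is a hedged sketch of a RegionObserver that wraps the InternalScanner handed to preCompact so it can drop cells before they are rewritten. BaseRegionObserver is used as the no-op base, the "tmp" family filter is purely illustrative, and the next(List, int limit) overload is assumed as in the 0.98/1.0 line; later branches change that signature.

import java.io.IOException;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.ScanType;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.util.Bytes;

/** Illustrative observer: cells of the "tmp" family are not carried into compacted files. */
public class TmpFamilyDroppingObserver extends BaseRegionObserver {
  private static final byte[] TMP = Bytes.toBytes("tmp");

  @Override
  public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment> c,
      Store store, final InternalScanner scanner, ScanType scanType) throws IOException {
    // Return a wrapper; whatever the wrapper leaves in "results" is what gets written.
    return new InternalScanner() {
      @Override
      public boolean next(List<Cell> results) throws IOException {
        return next(results, -1);
      }

      @Override
      public boolean next(List<Cell> results, int limit) throws IOException {
        boolean more = (limit < 0) ? scanner.next(results) : scanner.next(results, limit);
        for (Iterator<Cell> it = results.iterator(); it.hasNext();) {
          if (CellUtil.matchingFamily(it.next(), TMP)) {
            it.remove();
          }
        }
        return more;
      }

      @Override
      public void close() throws IOException {
        scanner.close();
      }
    };
  }
}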

        - * Event listeners can be installed and will be called pre- and post- process if - * this EventHandler is run in a Thread (its a Runnable so if its {@link #run()} - * method gets called). Implement - * {@link EventHandlerListener}s, and registering using - * {@link #setListener(EventHandlerListener)}. * @see ExecutorService */ @InterfaceAudience.Private @@ -70,31 +65,12 @@ public abstract class EventHandler implements Runnable, Comparable { // sequence id for this event private final long seqid; - // Listener to call pre- and post- processing. May be null. - private EventHandlerListener listener; - // Time to wait for events to happen, should be kept short protected int waitingTimeForEvents; private final Span parent; /** - * This interface provides pre- and post-process hooks for events. - */ - public interface EventHandlerListener { - /** - * Called before any event is processed - * @param event The event handler whose process method is about to be called. - */ - void beforeProcess(EventHandler event); - /** - * Called after any event is processed - * @param event The event handler whose process method is about to be called. - */ - void afterProcess(EventHandler event); - } - - /** * Default base class constructor. */ public EventHandler(Server server, EventType eventType) { @@ -124,9 +100,7 @@ public abstract class EventHandler implements Runnable, Comparable { public void run() { TraceScope chunk = Trace.startSpan(this.getClass().getSimpleName(), parent); try { - if (getListener() != null) getListener().beforeProcess(this); process(); - if (getListener() != null) getListener().afterProcess(this); } catch(Throwable t) { handleException(t); } finally { @@ -187,20 +161,6 @@ public abstract class EventHandler implements Runnable, Comparable { return (this.seqid < eh.seqid) ? -1 : 1; } - /** - * @return Current listener or null if none set. - */ - public synchronized EventHandlerListener getListener() { - return listener; - } - - /** - * @param listener Listener to call pre- and post- {@link #process()}. - */ - public synchronized void setListener(EventHandlerListener listener) { - this.listener = listener; - } - @Override public String toString() { return "Event #" + getSeqid() + diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/executor/ExecutorService.java hbase-server/src/main/java/org/apache/hadoop/hbase/executor/ExecutorService.java index 42cca2b..410fb39 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/executor/ExecutorService.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/executor/ExecutorService.java @@ -35,7 +35,6 @@ import java.util.concurrent.atomic.AtomicLong; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.executor.EventHandler.EventHandlerListener; import org.apache.hadoop.hbase.monitoring.ThreadMonitoring; import com.google.common.collect.Lists; @@ -52,10 +51,7 @@ import com.google.common.util.concurrent.ThreadFactoryBuilder; * call {@link #shutdown()}. * *

        In order to use the service created above, call - * {@link #submit(EventHandler)}. Register pre- and post- processing listeners - * by registering your implementation of {@link EventHandler.EventHandlerListener} - * with {@link #registerListener(EventType, EventHandler.EventHandlerListener)}. Be sure - * to deregister your listener when done via {@link #unregisterListener(EventType)}. + * {@link #submit(EventHandler)}. */ @InterfaceAudience.Private public class ExecutorService { @@ -65,10 +61,6 @@ public class ExecutorService { private final ConcurrentHashMap executorMap = new ConcurrentHashMap(); - // listeners that are called before and after an event is processed - private ConcurrentHashMap eventHandlerListeners = - new ConcurrentHashMap(); - // Name of the server hosting this executor service. private final String servername; @@ -91,7 +83,7 @@ public class ExecutorService { throw new RuntimeException("An executor service with the name " + name + " is already running!"); } - Executor hbes = new Executor(name, maxThreads, this.eventHandlerListeners); + Executor hbes = new Executor(name, maxThreads); if (this.executorMap.putIfAbsent(name, hbes) != null) { throw new RuntimeException("An executor service with the name " + name + " is already running (2)!"); @@ -130,7 +122,7 @@ public class ExecutorService { String name = type.getExecutorName(this.servername); if (isExecutorServiceRunning(name)) { LOG.debug("Executor service " + toString() + " already running on " + - this.servername); + this.servername); return; } startExecutorService(name, maxThreads); @@ -149,28 +141,6 @@ public class ExecutorService { } } - /** - * Subscribe to updates before and after processing instances of - * {@link EventType}. Currently only one listener per - * event type. - * @param type Type of event we're registering listener for - * @param listener The listener to run. - */ - public void registerListener(final EventType type, - final EventHandlerListener listener) { - this.eventHandlerListeners.put(type, listener); - } - - /** - * Stop receiving updates before and after processing instances of - * {@link EventType} - * @param type Type of event we're registering listener for - * @return The listener we removed or null if we did not remove it. - */ - public EventHandlerListener unregisterListener(final EventType type) { - return this.eventHandlerListeners.remove(type); - } - public Map getAllExecutorStatuses() { Map ret = Maps.newHashMap(); for (Map.Entry e : executorMap.entrySet()) { @@ -190,15 +160,12 @@ public class ExecutorService { // work queue to use - unbounded queue final BlockingQueue q = new LinkedBlockingQueue(); private final String name; - private final Map eventHandlerListeners; private static final AtomicLong seqids = new AtomicLong(0); private final long id; - protected Executor(String name, int maxThreads, - final Map eventHandlerListeners) { + protected Executor(String name, int maxThreads) { this.id = seqids.incrementAndGet(); this.name = name; - this.eventHandlerListeners = eventHandlerListeners; // create the thread pool executor this.threadPoolExecutor = new TrackingThreadPoolExecutor( maxThreads, maxThreads, @@ -216,11 +183,6 @@ public class ExecutorService { void submit(final EventHandler event) { // If there is a listener for this type, make sure we call the before // and after process methods. 
- EventHandlerListener listener = - this.eventHandlerListeners.get(event.getEventType()); - if (listener != null) { - event.setListener(listener); - } this.threadPoolExecutor.execute(event); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/http/jmx/JMXJsonServlet.java hbase-server/src/main/java/org/apache/hadoop/hbase/http/jmx/JMXJsonServlet.java index b6e97a8..498e213 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/http/jmx/JMXJsonServlet.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/http/jmx/JMXJsonServlet.java @@ -5,9 +5,9 @@ * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at - * + * * http://www.apache.org/licenses/LICENSE-2.0 - * + * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. @@ -24,9 +24,6 @@ import java.lang.management.ManagementFactory; import javax.management.MBeanServer; import javax.management.MalformedObjectNameException; import javax.management.ObjectName; -import javax.management.ReflectionException; -import javax.management.RuntimeErrorException; -import javax.management.RuntimeMBeanException; import javax.management.openmbean.CompositeData; import javax.management.openmbean.TabularData; import javax.servlet.ServletException; @@ -58,20 +55,20 @@ import org.apache.hadoop.hbase.util.JSONBean; * For example http://.../jmx?qry=Hadoop:* will return * all hadoop metrics exposed through JMX. *

        - * The optional get parameter is used to query an specific + * The optional get parameter is used to query a specific * attribute of a JMX bean. The format of the URL is * http://.../jmx?get=MXBeanName::AttributeName *

        - * For example + * For example * * http://../jmx?get=Hadoop:service=NameNode,name=NameNodeInfo::ClusterId * will return the cluster id of the namenode mxbean. *

        - * If the qry or the get parameter is not formatted - * correctly then a 400 BAD REQUEST http response code will be returned. + * If the qry or the get parameter is not formatted + * correctly then a 400 BAD REQUEST http response code will be returned. *

        - * If a resouce such as a mbean or attribute can not be found, - * a 404 SC_NOT_FOUND http response code will be returned. + * If a resource such as an mbean or attribute cannot be found, + * a 404 SC_NOT_FOUND http response code will be returned. *
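For orientation, a small client-side sketch of the query format this servlet documents: ?qry= takes an object-name pattern such as Hadoop:*, ?get=MXBeanName::AttributeName fetches a single attribute, a malformed query yields 400, and an unknown bean or attribute yields 404. The host and port below are placeholders; the NameNode bean is simply the example already used in the javadoc.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JmxJsonQueryExample {
  public static void main(String[] args) throws Exception {
    // Placeholder host:port; the bean/attribute pair mirrors the javadoc example above.
    URL url = new URL(
        "http://localhost:50070/jmx?get=Hadoop:service=NameNode,name=NameNodeInfo::ClusterId");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    int status = conn.getResponseCode();   // 200, 400 (bad query) or 404 (unknown bean/attribute)
    System.out.println("HTTP " + status);
    if (status == HttpURLConnection.HTTP_OK) {
      BufferedReader reader =
          new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
      try {
        for (String line; (line = reader.readLine()) != null;) {
          System.out.println(line);        // JSON body
        }
      } finally {
        reader.close();
      }
    }
  }
}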

        * The return format is JSON and in the form *

        @@ -88,23 +85,23 @@ import org.apache.hadoop.hbase.util.JSONBean; *

        * The servlet attempts to convert the the JMXBeans into JSON. Each * bean's attributes will be converted to a JSON object member. - * + * * If the attribute is a boolean, a number, a string, or an array - * it will be converted to the JSON equivalent. - * + * it will be converted to the JSON equivalent. + * * If the value is a {@link CompositeData} then it will be converted * to a JSON object with the keys as the name of the JSON member and * the value is converted following these same rules. - * + * * If the value is a {@link TabularData} then it will be converted * to an array of the {@link CompositeData} elements that it contains. - * + * * All other objects will be converted to a string and output as such. - * + * * The bean's name and modelerType will be returned for all beans. * * Optional paramater "callback" should be used to deliver JSONP response. - * + * */ public class JMXJsonServlet extends HttpServlet { private static final Log LOG = LogFactory.getLog(JMXJsonServlet.class); @@ -138,7 +135,7 @@ public class JMXJsonServlet extends HttpServlet { /** * Process a GET request for the specified resource. - * + * * @param request * The servlet request we are processing * @param response @@ -156,6 +153,7 @@ public class JMXJsonServlet extends HttpServlet { try { writer = response.getWriter(); beanWriter = this.jsonBeanWriter.open(writer); + // "callback" parameter implies JSONP outpout jsonpcb = request.getParameter(CALLBACK_PARAM); if (jsonpcb != null) { @@ -213,4 +211,4 @@ public class JMXJsonServlet extends HttpServlet { response.setStatus(HttpServletResponse.SC_BAD_REQUEST); } } -} +} \ No newline at end of file diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/io/DataOutputOutputStream.java hbase-server/src/main/java/org/apache/hadoop/hbase/io/DataOutputOutputStream.java deleted file mode 100644 index 3804920..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/io/DataOutputOutputStream.java +++ /dev/null @@ -1,67 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.io; - -import java.io.DataOutput; -import java.io.IOException; -import java.io.OutputStream; - -import org.apache.hadoop.hbase.classification.InterfaceAudience; - -/** - * OutputStream implementation that wraps a DataOutput. - */ -@InterfaceAudience.Private -public class DataOutputOutputStream extends OutputStream { - - private final DataOutput out; - - /** - * Construct an OutputStream from the given DataOutput. If 'out' - * is already an OutputStream, simply returns it. Otherwise, wraps - * it in an OutputStream. 
- * @param out the DataOutput to wrap - * @return an OutputStream instance that outputs to 'out' - */ - public static OutputStream constructOutputStream(DataOutput out) { - if (out instanceof OutputStream) { - return (OutputStream)out; - } else { - return new DataOutputOutputStream(out); - } - } - - private DataOutputOutputStream(DataOutput out) { - this.out = out; - } - - @Override - public void write(int b) throws IOException { - out.writeByte(b); - } - - @Override - public void write(byte[] b, int off, int len) throws IOException { - out.write(b, off, len); - } - - @Override - public void write(byte[] b) throws IOException { - out.write(b); - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java index b7cab0f..7d96920 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/io/FileLink.java @@ -18,11 +18,13 @@ package org.apache.hadoop.hbase.io; +import java.util.ArrayList; import java.util.Collection; import java.io.IOException; import java.io.InputStream; import java.io.FileNotFoundException; +import java.util.List; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; @@ -137,12 +139,12 @@ public class FileLink { } @Override - public int read(byte b[]) throws IOException { + public int read(byte[] b) throws IOException { return read(b, 0, b.length); } @Override - public int read(byte b[], int off, int len) throws IOException { + public int read(byte[] b, int off, int len) throws IOException { int n; try { n = in.read(b, off, len); @@ -290,7 +292,7 @@ public class FileLink { if (pos != 0) in.seek(pos); assert(in.getPos() == pos) : "Link unable to seek to the right position=" + pos; if (LOG.isTraceEnabled()) { - if (currentPath != null) { + if (currentPath == null) { LOG.debug("link open path=" + path); } else { LOG.trace("link switch from path=" + currentPath + " to path=" + path); @@ -422,9 +424,18 @@ public class FileLink { */ protected void setLocations(Path originPath, Path... alternativePaths) { assert this.locations == null : "Link locations already set"; - this.locations = new Path[1 + alternativePaths.length]; - this.locations[0] = originPath; - System.arraycopy(alternativePaths, 0, this.locations, 1, alternativePaths.length); + + List paths = new ArrayList(alternativePaths.length +1); + if (originPath != null) { + paths.add(originPath); + } + + for (int i = 0; i < alternativePaths.length; i++) { + if (alternativePaths[i] != null) { + paths.add(alternativePaths[i]); + } + } + this.locations = paths.toArray(new Path[0]); } /** diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/io/HFileLink.java hbase-server/src/main/java/org/apache/hadoop/hbase/io/HFileLink.java index 2ef59d1..ff33951 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/io/HFileLink.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/io/HFileLink.java @@ -92,25 +92,41 @@ public class HFileLink extends FileLink { private final Path tempPath; /** + * Dead simple hfile link constructor + */ + public HFileLink(final Path originPath, final Path tempPath, + final Path archivePath) { + this.tempPath = tempPath; + this.originPath = originPath; + this.archivePath = archivePath; + + setLocations(originPath, tempPath, archivePath); + } + + /** * @param conf {@link Configuration} from which to extract specific archive locations - * @param path The path of the HFile Link. 
+ * @param hFileLinkPattern The path ending with a HFileLink pattern. (table=region-hfile) * @throws IOException on unexpected error. */ - public HFileLink(Configuration conf, Path path) throws IOException { - this(FSUtils.getRootDir(conf), HFileArchiveUtil.getArchivePath(conf), path); + public static final HFileLink buildFromHFileLinkPattern(Configuration conf, Path hFileLinkPattern) + throws IOException { + return buildFromHFileLinkPattern(FSUtils.getRootDir(conf), + HFileArchiveUtil.getArchivePath(conf), hFileLinkPattern); } /** * @param rootDir Path to the root directory where hbase files are stored * @param archiveDir Path to the hbase archive directory - * @param path The path of the HFile Link. + * @param hFileLinkPattern The path of the HFile Link. */ - public HFileLink(final Path rootDir, final Path archiveDir, final Path path) { - Path hfilePath = getRelativeTablePath(path); - this.tempPath = new Path(new Path(rootDir, HConstants.HBASE_TEMP_DIRECTORY), hfilePath); - this.originPath = new Path(rootDir, hfilePath); - this.archivePath = new Path(archiveDir, hfilePath); - setLocations(originPath, tempPath, archivePath); + public final static HFileLink buildFromHFileLinkPattern(final Path rootDir, + final Path archiveDir, + final Path hFileLinkPattern) { + Path hfilePath = getHFileLinkPatternRelativePath(hFileLinkPattern); + Path tempPath = new Path(new Path(rootDir, HConstants.HBASE_TEMP_DIRECTORY), hfilePath); + Path originPath = new Path(rootDir, hfilePath); + Path archivePath = new Path(archiveDir, hfilePath); + return new HFileLink(originPath, tempPath, archivePath); } /** @@ -122,7 +138,7 @@ public class HFileLink extends FileLink { * @return the relative Path to open the specified table/region/family/hfile link */ public static Path createPath(final TableName table, final String region, - final String family, final String hfile) { + final String family, final String hfile) { if (HFileLink.isHFileLink(hfile)) { return new Path(family, hfile); } @@ -139,9 +155,10 @@ public class HFileLink extends FileLink { * @return Link to the file with the specified table/region/family/hfile location * @throws IOException on unexpected error. */ - public static HFileLink create(final Configuration conf, final TableName table, - final String region, final String family, final String hfile) throws IOException { - return new HFileLink(conf, createPath(table, region, family, hfile)); + public static HFileLink build(final Configuration conf, final TableName table, + final String region, final String family, final String hfile) + throws IOException { + return HFileLink.buildFromHFileLinkPattern(conf, createPath(table, region, family, hfile)); } /** @@ -186,11 +203,11 @@ public class HFileLink extends FileLink { * @return Relative table path * @throws IOException on unexpected error. */ - private static Path getRelativeTablePath(final Path path) { + private static Path getHFileLinkPatternRelativePath(final Path path) { // table=region-hfile Matcher m = REF_OR_HFILE_LINK_PATTERN.matcher(path.getName()); if (!m.matches()) { - throw new IllegalArgumentException(path.getName() + " is not a valid HFileLink name!"); + throw new IllegalArgumentException(path.getName() + " is not a valid HFileLink pattern!"); } // Convert the HFileLink name into a real table/region/cf/hfile path. 
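A short usage sketch for the factory methods introduced above: buildFromHFileLinkPattern replaces the old HFileLink(conf, path) constructor and build replaces create. The table name, encoded region name, family, and hfile names below are placeholders; the link-file name simply follows the table=region-hfile pattern mentioned in the javadoc.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.io.HFileLink;

public class HFileLinkUsageSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    // From explicit table/region/family/hfile components (previously HFileLink.create).
    HFileLink byLocation = HFileLink.build(conf, TableName.valueOf("mytable"),
        "aaaabbbbccccddddeeeeffff00001111", "cf", "0123456789abcdef");

    // From a path whose name follows the "table=region-hfile" link pattern.
    Path linkPath = new Path("/hbase/data/default/clonedtable/aaaabbbbccccddddeeeeffff00001111"
        + "/cf/mytable=aaaabbbbccccddddeeeeffff00001111-0123456789abcdef");
    HFileLink byPattern = HFileLink.buildFromHFileLinkPattern(conf, linkPath);

    // A FileLink resolves to whichever of its locations (origin, temp, archive) currently exists.
    FileSystem fs = FileSystem.get(conf);
    System.out.println(byLocation.getAvailablePath(fs));
    System.out.println(byPattern.getOriginPath());
  }
}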
@@ -255,7 +272,7 @@ public class HFileLink extends FileLink { public static String createHFileLinkName(final HRegionInfo hfileRegionInfo, final String hfileName) { return createHFileLinkName(hfileRegionInfo.getTable(), - hfileRegionInfo.getEncodedName(), hfileName); + hfileRegionInfo.getEncodedName(), hfileName); } /** @@ -397,7 +414,7 @@ public class HFileLink extends FileLink { Path tablePath = regionPath.getParent(); String linkName = createHFileLinkName(FSUtils.getTableName(tablePath), - regionPath.getName(), hfileName); + regionPath.getName(), hfileName); Path linkTableDir = FSUtils.getTableDir(rootDir, linkTableName); Path regionDir = HRegion.getRegionDir(linkTableDir, linkRegionName); return new Path(new Path(regionDir, familyPath.getName()), linkName); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java index bc7d658..05c996f 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/io/HalfStoreFileReader.java @@ -304,10 +304,18 @@ public class HalfStoreFileReader extends StoreFile.Reader { // The equals sign isn't strictly necessary just here to be consistent // with seekTo if (getComparator().compareOnlyKeyPortion(key, splitCell) >= 0) { - return this.delegate.seekBefore(splitCell); + boolean ret = this.delegate.seekBefore(splitCell); + if (ret) { + atEnd = false; + } + return ret; } } - return this.delegate.seekBefore(key); + boolean ret = this.delegate.seekBefore(key); + if (ret) { + atEnd = false; + } + return ret; } }; } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/io/Reference.java hbase-server/src/main/java/org/apache/hadoop/hbase/io/Reference.java index 59943fb..a38e3c1 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/io/Reference.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/io/Reference.java @@ -23,6 +23,7 @@ import java.io.DataInput; import java.io.DataInputStream; import java.io.IOException; import java.io.InputStream; +import java.util.Arrays; import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.classification.InterfaceAudience; @@ -96,9 +97,11 @@ public class Reference { /** * Used by serializations. + * @deprecated need by pb serialization */ @Deprecated - // Make this private when it comes time to let go of this constructor. Needed by pb serialization. + // Make this private when it comes time to let go of this constructor. + // Needed by pb serialization. 
public Reference() { this(null, Range.bottom); } @@ -213,4 +216,22 @@ public class Reference { byte [] toByteArray() throws IOException { return ProtobufUtil.prependPBMagic(convert().toByteArray()); } + + @Override + public int hashCode() { + return Arrays.hashCode(splitkey) + region.hashCode(); + } + + public boolean equals(Object o) { + if (this == o) return true; + if (o == null) return false; + if (!(o instanceof Reference)) return false; + + Reference r = (Reference) o; + if (splitkey != null && r.splitkey == null) return false; + if (splitkey == null && r.splitkey != null) return false; + if (splitkey != null && !Arrays.equals(splitkey, r.splitkey)) return false; + + return region.equals(r.region); + } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java index f938020..1e97f63 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java @@ -55,6 +55,7 @@ import org.apache.hadoop.hbase.fs.HFileSystem; import org.apache.hadoop.hbase.io.FSDataInputStreamWrapper; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; +import org.apache.hadoop.hbase.protobuf.ProtobufMagic; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.BytesBytesPair; @@ -668,7 +669,7 @@ public class HFile { bbpBuilder.setSecond(ByteStringer.wrap(e.getValue())); builder.addMapEntry(bbpBuilder.build()); } - out.write(ProtobufUtil.PB_MAGIC); + out.write(ProtobufMagic.PB_MAGIC); builder.build().writeDelimitedTo(out); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java index 002a706..b096185 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java @@ -216,11 +216,11 @@ public class HFileBlock implements Cacheable { this.uncompressedSizeWithoutHeader = uncompressedSizeWithoutHeader; this.prevBlockOffset = prevBlockOffset; this.buf = buf; - if (fillHeader) - overwriteHeader(); this.offset = offset; this.onDiskDataSizeWithHeader = onDiskDataSizeWithHeader; this.fileContext = fileContext; + if (fillHeader) + overwriteHeader(); this.buf.rewind(); } @@ -322,6 +322,11 @@ public class HFileBlock implements Cacheable { buf.putInt(onDiskSizeWithoutHeader); buf.putInt(uncompressedSizeWithoutHeader); buf.putLong(prevBlockOffset); + if (this.fileContext.isUseHBaseChecksum()) { + buf.put(fileContext.getChecksumType().getCode()); + buf.putInt(fileContext.getBytesPerChecksum()); + buf.putInt(onDiskDataSizeWithHeader); + } } /** @@ -339,10 +344,9 @@ public class HFileBlock implements Cacheable { /** * Returns the buffer this block stores internally. The clients must not * modify the buffer object. This method has to be public because it is - * used in {@link org.apache.hadoop.hbase.util.CompoundBloomFilter} - * to avoid object creation on every Bloom filter lookup, but has to - * be used with caution. Checksum data is not included in the returned - * buffer but header data is. + * used in {@link org.apache.hadoop.hbase.util.CompoundBloomFilter} to avoid object + * creation on every Bloom filter lookup, but has to be used with caution. 
+ * Checksum data is not included in the returned buffer but header data is. * * @return the buffer of this block for read-only operations */ @@ -1176,7 +1180,7 @@ public class HFileBlock implements Cacheable { cacheConf.shouldCacheCompressed(blockType.getCategory()) ? getOnDiskBufferWithHeader() : getUncompressedBufferWithHeader(), - DONT_FILL_HEADER, startOffset, + FILL_HEADER, startOffset, onDiskBytesWithHeader.length + onDiskChecksum.length, newContext); } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java index fb82b7e..9413364 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java @@ -54,9 +54,9 @@ import org.apache.hadoop.util.StringUtils; * ({@link BlockIndexReader}) single-level and multi-level block indexes. * * Examples of how to use the block index writer can be found in - * {@link org.apache.hadoop.hbase.util.CompoundBloomFilterWriter} - * and {@link HFileWriterV2}. Examples of how to use the reader can be - * found in {@link HFileReaderV2} and TestHFileBlockIndex. + * {@link org.apache.hadoop.hbase.util.CompoundBloomFilterWriter} and + * {@link HFileWriterV2}. Examples of how to use the reader can be + * found in {@link HFileReaderV2} and TestHFileBlockIndex. */ @InterfaceAudience.Private public class HFileBlockIndex { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java index d7da5f3..82df5f7 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java @@ -57,18 +57,16 @@ import com.google.common.util.concurrent.ThreadFactoryBuilder; * {@link ConcurrentHashMap} and with a non-blocking eviction thread giving * constant-time {@link #cacheBlock} and {@link #getBlock} operations.

        * - * Contains three levels of block priority to allow for - * scan-resistance and in-memory families - * {@link org.apache.hadoop.hbase.HColumnDescriptor#setInMemory(boolean)} (An - * in-memory column family is a column family that should be served from memory if possible): + * Contains three levels of block priority to allow for scan-resistance and in-memory families + * {@link org.apache.hadoop.hbase.HColumnDescriptor#setInMemory(boolean)} (An in-memory column + * family is a column family that should be served from memory if possible): * single-access, multiple-accesses, and in-memory priority. * A block is added with an in-memory priority flag if - * {@link org.apache.hadoop.hbase.HColumnDescriptor#isInMemory()}, - * otherwise a block becomes a single access - * priority the first time it is read into this block cache. If a block is accessed again while - * in cache, it is marked as a multiple access priority block. This delineation of blocks is used - * to prevent scans from thrashing the cache adding a least-frequently-used - * element to the eviction algorithm.

        + * {@link org.apache.hadoop.hbase.HColumnDescriptor#isInMemory()}, otherwise a block becomes a + * single access priority the first time it is read into this block cache. If a block is + * accessed again while in cache, it is marked as a multiple access priority block. This + * delineation of blocks is used to prevent scans from thrashing the cache adding a + * least-frequently-used element to the eviction algorithm.

        * * Each priority is given its own chunk of the total cache to ensure * fairness during eviction. Each priority will retain close to its maximum diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java index df652f8..902e948 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java @@ -20,8 +20,8 @@ package org.apache.hadoop.hbase.io.hfile.bucket; -import java.util.ArrayList; import java.util.Arrays; +import java.util.LinkedList; import java.util.List; import java.util.Map; import java.util.concurrent.atomic.AtomicLong; @@ -177,13 +177,13 @@ public final class BucketAllocator { private int sizeIndex; BucketSizeInfo(int sizeIndex) { - bucketList = new ArrayList(); - freeBuckets = new ArrayList(); - completelyFreeBuckets = new ArrayList(); + bucketList = new LinkedList(); + freeBuckets = new LinkedList(); + completelyFreeBuckets = new LinkedList(); this.sizeIndex = sizeIndex; } - public void instantiateBucket(Bucket b) { + public synchronized void instantiateBucket(Bucket b) { assert b.isUninstantiated() || b.isCompletelyFree(); b.reconfigure(sizeIndex, bucketSizes, bucketCapacity); bucketList.add(b); @@ -233,7 +233,7 @@ public final class BucketAllocator { return b; } - private void removeBucket(Bucket b) { + private synchronized void removeBucket(Bucket b) { assert b.isCompletelyFree(); bucketList.remove(b); freeBuckets.remove(b); @@ -249,7 +249,7 @@ public final class BucketAllocator { if (b.isCompletelyFree()) completelyFreeBuckets.add(b); } - public IndexStatistics statistics() { + public synchronized IndexStatistics statistics() { long free = 0, used = 0; for (Bucket b : bucketList) { free += b.freeCount(); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java index c094e45..d3b303a 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java @@ -82,8 +82,8 @@ import com.google.common.util.concurrent.ThreadFactoryBuilder; * {@link org.apache.hadoop.hbase.io.hfile.LruBlockCache} * *

        BucketCache can be used as mainly a block cache (see - * {@link org.apache.hadoop.hbase.io.hfile.CombinedBlockCache}), - * combined with LruBlockCache to decrease CMS GC and heap fragmentation. + * {@link org.apache.hadoop.hbase.io.hfile.CombinedBlockCache}), combined with + * LruBlockCache to decrease CMS GC and heap fragmentation. * *
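As a rough illustration of the combined-cache deployment described above: BucketCache is normally enabled through configuration rather than constructed directly. The key names below (hbase.bucketcache.ioengine, hbase.bucketcache.size) are the ones commonly used with this code line but are assumptions here and worth checking against the shipped hbase-default.xml; the sizes are arbitrary.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BucketCacheConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // "offheap" puts the second tier of the CombinedBlockCache off-heap;
    // a "file:/path" style ioengine would back it with an SSD file instead.
    conf.set("hbase.bucketcache.ioengine", "offheap");
    // Interpreted as megabytes when > 1 (smaller values are treated as a fraction in some versions).
    conf.set("hbase.bucketcache.size", "4096");
    System.out.println(conf.get("hbase.bucketcache.ioengine") + " / "
        + conf.get("hbase.bucketcache.size") + " MB");
  }
}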

        It also can be used as a secondary cache (e.g. using a file on ssd/fusionio to store * blocks) to enlarge cache space via @@ -686,6 +686,8 @@ public class BucketCache implements BlockCache, HeapSize { } } + } catch (Throwable t) { + LOG.warn("Failed freeing space", t); } finally { cacheStats.evict(); freeInProgress = false; diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java index e8194a6..3936f10 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java @@ -495,6 +495,10 @@ public class RpcServer implements RpcServerInterface { this.responder.doRespond(this); } } + + public UserGroupInformation getRemoteUser() { + return connection.user; + } } /** Listens on the socket. Creates jobs for the handler threads*/ @@ -2381,6 +2385,7 @@ public class RpcServer implements RpcServerInterface { } } + @Override public RpcScheduler getScheduler() { return scheduler; } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java index b133ed6..ab8b485 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java @@ -73,4 +73,6 @@ public interface RpcServerInterface { */ @VisibleForTesting void refreshAuthManager(PolicyProvider pp); + + RpcScheduler getScheduler(); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/RowCounter.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/RowCounter.java index 0ce64c3..fd9a60c 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/RowCounter.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/RowCounter.java @@ -38,7 +38,7 @@ import org.apache.hadoop.util.ToolRunner; /** * A job with a map to count rows. * Map outputs table rows IF the input row has columns that have content. - * Uses org.apache.hadoop.mapred.lib.IdentityReducer + * Uses a org.apache.hadoop.mapred.lib.IdentityReducer */ @InterfaceAudience.Public @InterfaceStability.Stable diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java index 1065579..fbfd984 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java @@ -103,12 +103,13 @@ implements InputFormat { * Calculates the splits that will serve as input for the map tasks. *

          * Splits are created in number equal to the smallest between numSplits and - * the number of {@link org.apache.hadoop.hbase.regionserver.HRegion}s - * in the table. If the number of splits is smaller than the number of - * {@link org.apache.hadoop.hbase.regionserver.HRegion}s then splits are - * spanned across multiple {@link org.apache.hadoop.hbase.regionserver.HRegion}s - * and are grouped the most evenly possible. In the case splits are uneven the - * bigger splits are placed first in the {@link InputSplit} array. + * the number of {@link org.apache.hadoop.hbase.regionserver.HRegion}s in the table. + * If the number of splits is smaller than the number of + * {@link org.apache.hadoop.hbase.regionserver.HRegion}s then splits are spanned across + * multiple {@link org.apache.hadoop.hbase.regionserver.HRegion}s + * and are grouped the most evenly possible. In the + * case splits are uneven the bigger splits are placed first in the + * {@link InputSplit} array. * * @param job the map task {@link JobConf} * @param numSplits a hint to calculate the number of splits (mapred.map.tasks). diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCounter.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCounter.java index 461ea6d..218c670 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCounter.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCounter.java @@ -26,6 +26,7 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.conf.Configured; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; @@ -46,6 +47,8 @@ import org.apache.hadoop.mapreduce.Reducer; import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat; import org.apache.hadoop.util.GenericOptionsParser; +import org.apache.hadoop.util.Tool; +import org.apache.hadoop.util.ToolRunner; import com.google.common.base.Preconditions; @@ -69,7 +72,7 @@ import com.google.common.base.Preconditions; */ @InterfaceAudience.Public @InterfaceStability.Stable -public class CellCounter { +public class CellCounter extends Configured implements Tool { private static final Log LOG = LogFactory.getLog(CellCounter.class.getName()); @@ -79,6 +82,8 @@ public class CellCounter { */ static final String NAME = "CellCounter"; + private final static String JOB_NAME_CONF_KEY = "mapreduce.job.name"; + /** * Mapper that runs the count. */ @@ -187,7 +192,7 @@ public class CellCounter { Path outputDir = new Path(args[1]); String reportSeparatorString = (args.length > 2) ? args[2]: ":"; conf.set("ReportSeparator", reportSeparatorString); - Job job = new Job(conf, NAME + "_" + tableName); + Job job = Job.getInstance(conf, conf.get(JOB_NAME_CONF_KEY, NAME + "_" + tableName)); job.setJarByClass(CellCounter.class); Scan scan = getConfiguredScanForJob(conf, args); TableMapReduceUtil.initTableMapperJob(tableName, scan, @@ -263,15 +268,10 @@ public class CellCounter { endTime = endTime == 0 ? HConstants.LATEST_TIMESTAMP : endTime; return new long [] {startTime, endTime}; } - /** - * Main entry point. - * - * @param args The command line parameters. - * @throws Exception When running the job fails. 
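Because CellCounter (and, further down, Export) now implements Tool, it can be launched through ToolRunner, which is also what makes the new mapreduce.job.name override usable from the command line or from code. A small sketch; the table name, output directory, and job name are placeholders.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.CellCounter;
import org.apache.hadoop.util.ToolRunner;

public class RunCellCounterSketch {
  public static void main(String[] args) throws Exception {
    // Roughly equivalent to:
    //   hbase org.apache.hadoop.hbase.mapreduce.CellCounter \
    //     -D mapreduce.job.name=nightly-cellcount mytable /tmp/cellcounter-out
    int rc = ToolRunner.run(HBaseConfiguration.create(), new CellCounter(),
        new String[] { "-D", "mapreduce.job.name=nightly-cellcount",
            "mytable", "/tmp/cellcounter-out" });
    System.exit(rc);
  }
}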
- */ - public static void main(String[] args) throws Exception { - Configuration conf = HBaseConfiguration.create(); - String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs(); + + @Override + public int run(String[] args) throws Exception { + String[] otherArgs = new GenericOptionsParser(getConf(), args).getRemainingArgs(); if (otherArgs.length < 2) { System.err.println("ERROR: Wrong number of parameters: " + args.length); System.err.println("Usage: CellCounter "); @@ -285,9 +285,20 @@ public class CellCounter { "string : used to separate the rowId/column family name and qualifier name."); System.err.println(" [^[regex pattern] or [Prefix] parameter can be used to limit the cell counter count " + "operation to a limited subset of rows from the table based on regex or prefix pattern."); - System.exit(-1); + return -1; } - Job job = createSubmittableJob(conf, otherArgs); - System.exit(job.waitForCompletion(true) ? 0 : 1); + Job job = createSubmittableJob(getConf(), otherArgs); + return (job.waitForCompletion(true) ? 0 : 1); } + + /** + * Main entry point. + * @param args The command line parameters. + * @throws Exception When running the job fails. + */ + public static void main(String[] args) throws Exception { + int errCode = ToolRunner.run(HBaseConfiguration.create(), new CellCounter(), args); + System.exit(errCode); + } + } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java index 8d930d1..e88d6df 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java @@ -18,9 +18,6 @@ */ package org.apache.hadoop.hbase.mapreduce; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; - import java.io.IOException; import java.util.HashMap; import java.util.Map; @@ -35,6 +32,8 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.Scan; @@ -90,7 +89,6 @@ public class CopyTable extends Configured implements Tool { } Job job = Job.getInstance(getConf(), getConf().get(JOB_NAME_CONF_KEY, NAME + "_" + tableName)); - job.setJarByClass(CopyTable.class); Scan scan = new Scan(); scan.setCacheBlocks(false); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Export.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Export.java index 67a9f7a..14786ab 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Export.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Export.java @@ -25,6 +25,7 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.conf.Configured; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.client.Result; @@ -41,6 +42,8 @@ import org.apache.hadoop.mapreduce.Job; import 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat; import org.apache.hadoop.util.GenericOptionsParser; +import org.apache.hadoop.util.Tool; +import org.apache.hadoop.util.ToolRunner; /** * Export an HBase table. @@ -49,12 +52,14 @@ import org.apache.hadoop.util.GenericOptionsParser; */ @InterfaceAudience.Public @InterfaceStability.Stable -public class Export { +public class Export extends Configured implements Tool { private static final Log LOG = LogFactory.getLog(Export.class); final static String NAME = "export"; final static String RAW_SCAN = "hbase.mapreduce.include.deleted.rows"; final static String EXPORT_BATCHING = "hbase.export.scanner.batch"; + private final static String JOB_NAME_CONF_KEY = "mapreduce.job.name"; + /** * Sets up the actual job. * @@ -67,7 +72,7 @@ public class Export { throws IOException { String tableName = args[0]; Path outputDir = new Path(args[1]); - Job job = new Job(conf, NAME + "_" + tableName); + Job job = Job.getInstance(conf, conf.get(JOB_NAME_CONF_KEY, NAME + "_" + tableName)); job.setJobName(NAME + "_" + tableName); job.setJarByClass(Export.class); // Set optional scan parameters @@ -163,6 +168,8 @@ public class Export { System.err.println(" -D " + RAW_SCAN + "=true"); System.err.println(" -D " + TableInputFormat.SCAN_ROW_START + "="); System.err.println(" -D " + TableInputFormat.SCAN_ROW_STOP + "="); + System.err.println(" -D " + JOB_NAME_CONF_KEY + + "=jobName - use the specified mapreduce job name for the export"); System.err.println("For performance consider the following properties:\n" + " -Dhbase.client.scanner.caching=100\n" + " -Dmapreduce.map.speculative=false\n" @@ -171,20 +178,25 @@ public class Export { + " -D" + EXPORT_BATCHING + "=10"); } + + @Override + public int run(String[] args) throws Exception { + String[] otherArgs = new GenericOptionsParser(getConf(), args).getRemainingArgs(); + if (otherArgs.length < 2) { + usage("Wrong number of arguments: " + otherArgs.length); + return -1; + } + Job job = createSubmittableJob(getConf(), otherArgs); + return (job.waitForCompletion(true) ? 0 : 1); + } + /** * Main entry point. - * - * @param args The command line parameters. + * @param args The command line parameters. * @throws Exception When running the job fails. */ public static void main(String[] args) throws Exception { - Configuration conf = HBaseConfiguration.create(); - String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs(); - if (otherArgs.length < 2) { - usage("Wrong number of arguments: " + otherArgs.length); - System.exit(-1); - } - Job job = createSubmittableJob(conf, otherArgs); - System.exit(job.waitForCompletion(true)? 
0 : 1); + int errCode = ToolRunner.run(HBaseConfiguration.create(), new Export(), args); + System.exit(errCode); } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java index 402381b..6d6feb1 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java @@ -27,6 +27,7 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Table; @@ -87,7 +88,8 @@ public class HFileOutputFormat extends FileOutputFormat> cls) throws IOException { - Configuration conf = job.getConfiguration(); + /** + * Configure a MapReduce Job to perform an incremental load into the given + * table. This + *
+   * <ul>
+   *   <li>Inspects the table to configure a total order partitioner</li>
+   *   <li>Uploads the partitions file to the cluster and adds it to the DistributedCache</li>
+   *   <li>Sets the number of reduce tasks to match the current number of regions</li>
+   *   <li>Sets the output key/value class to match HFileOutputFormat2's requirements</li>
+   *   <li>Sets the reducer up to perform the appropriate sorting (either KeyValueSortReducer or
+   *     PutSortReducer)</li>
+   * </ul>
          + * The user should be sure to set the map output value class to either KeyValue or Put before + * running this function. + */ + public static void configureIncrementalLoad(Job job, HTableDescriptor tableDescriptor, + RegionLocator regionLocator) throws IOException { + configureIncrementalLoad(job, tableDescriptor, regionLocator, HFileOutputFormat2.class); + } + static void configureIncrementalLoad(Job job, HTableDescriptor tableDescriptor, + RegionLocator regionLocator, Class> cls) throws IOException, + UnsupportedEncodingException { + Configuration conf = job.getConfiguration(); job.setOutputKeyClass(ImmutableBytesWritable.class); job.setOutputValueClass(KeyValue.class); job.setOutputFormatClass(cls); @@ -412,7 +431,7 @@ public class HFileOutputFormat2 KeyValueSerialization.class.getName()); // Use table's region boundaries for TOP split points. - LOG.info("Looking up current regions for table " + table.getName()); + LOG.info("Looking up current regions for table " + regionLocator.getName()); List startKeys = getRegionStartKeys(regionLocator); LOG.info("Configuring " + startKeys.size() + " reduce partitions " + "to match current region count"); @@ -420,14 +439,14 @@ public class HFileOutputFormat2 configurePartitioner(job, startKeys); // Set compression algorithms based on column families - configureCompression(table, conf); - configureBloomType(table, conf); - configureBlockSize(table, conf); - configureDataBlockEncoding(table, conf); + configureCompression(conf, tableDescriptor); + configureBloomType(tableDescriptor, conf); + configureBlockSize(tableDescriptor, conf); + configureDataBlockEncoding(tableDescriptor, conf); TableMapReduceUtil.addDependencyJars(job); TableMapReduceUtil.initCredentials(job); - LOG.info("Incremental table " + table.getName() + " output configured."); + LOG.info("Incremental table " + regionLocator.getName() + " output configured."); } public static void configureIncrementalLoadMap(Job job, Table table) throws IOException { @@ -438,10 +457,11 @@ public class HFileOutputFormat2 job.setOutputFormatClass(HFileOutputFormat2.class); // Set compression algorithms based on column families - configureCompression(table, conf); - configureBloomType(table, conf); - configureBlockSize(table, conf); - configureDataBlockEncoding(table, conf); + configureCompression(conf, table.getTableDescriptor()); + configureBloomType(table.getTableDescriptor(), conf); + configureBlockSize(table.getTableDescriptor(), conf); + HTableDescriptor tableDescriptor = table.getTableDescriptor(); + configureDataBlockEncoding(tableDescriptor, conf); TableMapReduceUtil.addDependencyJars(job); TableMapReduceUtil.initCredentials(job); @@ -590,10 +610,9 @@ public class HFileOutputFormat2 @edu.umd.cs.findbugs.annotations.SuppressWarnings( value="RCN_REDUNDANT_NULLCHECK_OF_NONNULL_VALUE") @VisibleForTesting - static void configureCompression( - Table table, Configuration conf) throws IOException { + static void configureCompression(Configuration conf, HTableDescriptor tableDescriptor) + throws UnsupportedEncodingException { StringBuilder compressionConfigValue = new StringBuilder(); - HTableDescriptor tableDescriptor = table.getTableDescriptor(); if(tableDescriptor == null){ // could happen with mock table instance return; @@ -617,17 +636,16 @@ public class HFileOutputFormat2 /** * Serialize column family to block size map to configuration. * Invoked while configuring the MR job for incremental load. 
- * - * @param table to read the properties from + * @param tableDescriptor to read the properties from * @param conf to persist serialized values into + * * @throws IOException * on failure to read column family descriptors */ @VisibleForTesting - static void configureBlockSize( - Table table, Configuration conf) throws IOException { + static void configureBlockSize(HTableDescriptor tableDescriptor, Configuration conf) + throws UnsupportedEncodingException { StringBuilder blockSizeConfigValue = new StringBuilder(); - HTableDescriptor tableDescriptor = table.getTableDescriptor(); if (tableDescriptor == null) { // could happen with mock table instance return; @@ -651,16 +669,15 @@ public class HFileOutputFormat2 /** * Serialize column family to bloom type map to configuration. * Invoked while configuring the MR job for incremental load. - * - * @param table to read the properties from + * @param tableDescriptor to read the properties from * @param conf to persist serialized values into + * * @throws IOException * on failure to read column family descriptors */ @VisibleForTesting - static void configureBloomType( - Table table, Configuration conf) throws IOException { - HTableDescriptor tableDescriptor = table.getTableDescriptor(); + static void configureBloomType(HTableDescriptor tableDescriptor, Configuration conf) + throws UnsupportedEncodingException { if (tableDescriptor == null) { // could happen with mock table instance return; @@ -694,9 +711,8 @@ public class HFileOutputFormat2 * on failure to read column family descriptors */ @VisibleForTesting - static void configureDataBlockEncoding(Table table, - Configuration conf) throws IOException { - HTableDescriptor tableDescriptor = table.getTableDescriptor(); + static void configureDataBlockEncoding(HTableDescriptor tableDescriptor, + Configuration conf) throws UnsupportedEncodingException { if (tableDescriptor == null) { // could happen with mock table instance return; diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HLogInputFormat.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HLogInputFormat.java index 4ed0672..763d802 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HLogInputFormat.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HLogInputFormat.java @@ -23,7 +23,6 @@ import java.util.List; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.regionserver.wal.HLogKey; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; import org.apache.hadoop.mapreduce.InputFormat; @@ -33,7 +32,8 @@ import org.apache.hadoop.mapreduce.RecordReader; import org.apache.hadoop.mapreduce.TaskAttemptContext; /** - * Simple {@link InputFormat} for {@link WAL} files. + * Simple {@link InputFormat} for {@link org.apache.hadoop.hbase.wal.WAL} + * files. 
* @deprecated use {@link WALInputFormat} */ @Deprecated diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.java index 5ed3185..ec3192e 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/IdentityTableReducer.java @@ -29,8 +29,8 @@ import org.apache.hadoop.io.Writable; /** * Convenience class that simply writes all values (which must be - * {@link org.apache.hadoop.hbase.client.Put} or - * {@link org.apache.hadoop.hbase.client.Delete} instances) + * {@link org.apache.hadoop.hbase.client.Put Put} or + * {@link org.apache.hadoop.hbase.client.Delete Delete} instances) * passed to it out to the configured HBase table. This works in combination * with {@link TableOutputFormat} which actually does the writing to HBase.

          * @@ -45,8 +45,8 @@ import org.apache.hadoop.io.Writable; * * This will also set the proper {@link TableOutputFormat} which is given the * table parameter. The - * {@link org.apache.hadoop.hbase.client.Put} or - * {@link org.apache.hadoop.hbase.client.Delete} define the + * {@link org.apache.hadoop.hbase.client.Put Put} or + * {@link org.apache.hadoop.hbase.client.Delete Delete} define the * row and columns implicitly. */ @InterfaceAudience.Public @@ -60,13 +60,12 @@ extends TableReducer { /** * Writes each given record, consisting of the row key and the given values, * to the configured {@link org.apache.hadoop.mapreduce.OutputFormat}. - * It is emitting the row key and each - * {@link org.apache.hadoop.hbase.client.Put Put} or - * {@link org.apache.hadoop.hbase.client.Delete} as separate pairs. + * It is emitting the row key and each {@link org.apache.hadoop.hbase.client.Put Put} + * or {@link org.apache.hadoop.hbase.client.Delete Delete} as separate pairs. * * @param key The current row key. * @param values The {@link org.apache.hadoop.hbase.client.Put Put} or - * {@link org.apache.hadoop.hbase.client.Delete} list for the given + * {@link org.apache.hadoop.hbase.client.Delete Delete} list for the given * row. * @param context The context of the reduce. * @throws IOException When writing the record fails. diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java index 399d607..bd44518 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Import.java @@ -18,21 +18,10 @@ */ package org.apache.hadoop.hbase.mapreduce; -import java.io.IOException; -import java.lang.reflect.InvocationTargetException; -import java.lang.reflect.Method; -import java.util.ArrayList; -import java.util.Collections; -import java.util.List; -import java.util.Map; -import java.util.TreeMap; -import java.util.UUID; - import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.conf.Configured; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; @@ -41,13 +30,18 @@ import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.ZooKeeperConnectionException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.HBaseAdmin; -import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Mutation; import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.RegionLocator; import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.filter.Filter; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.util.Bytes; @@ -58,15 +52,27 @@ import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; 
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat; import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; import org.apache.hadoop.util.GenericOptionsParser; +import org.apache.hadoop.util.Tool; +import org.apache.hadoop.util.ToolRunner; import org.apache.zookeeper.KeeperException; +import java.io.IOException; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.TreeMap; +import java.util.UUID; + /** * Import data written by {@link Export}. */ @InterfaceAudience.Public @InterfaceStability.Stable -public class Import { +public class Import extends Configured implements Tool { private static final Log LOG = LogFactory.getLog(Import.class); final static String NAME = "import"; public final static String CF_RENAME_PROP = "HBASE_IMPORTER_RENAME_CFS"; @@ -76,6 +82,8 @@ public class Import { public final static String TABLE_NAME = "import.table.name"; public final static String WAL_DURABILITY = "import.wal.durability"; + private final static String JOB_NAME_CONF_KEY = "mapreduce.job.name"; + /** * A mapper that just writes out KeyValues. */ @@ -423,7 +431,7 @@ public class Import { TableName tableName = TableName.valueOf(args[0]); conf.set(TABLE_NAME, tableName.getNameAsString()); Path inputDir = new Path(args[1]); - Job job = new Job(conf, NAME + "_" + tableName); + Job job = Job.getInstance(conf, conf.get(JOB_NAME_CONF_KEY, NAME + "_" + tableName)); job.setJarByClass(Importer.class); FileInputFormat.setInputPaths(job, inputDir); job.setInputFormatClass(SequenceFileInputFormat.class); @@ -441,15 +449,18 @@ public class Import { if (hfileOutPath != null) { job.setMapperClass(KeyValueImporter.class); - HTable table = new HTable(conf, tableName); - job.setReducerClass(KeyValueSortReducer.class); - Path outputDir = new Path(hfileOutPath); - FileOutputFormat.setOutputPath(job, outputDir); - job.setMapOutputKeyClass(ImmutableBytesWritable.class); - job.setMapOutputValueClass(KeyValue.class); - HFileOutputFormat2.configureIncrementalLoad(job, table, table); - TableMapReduceUtil.addDependencyJars(job.getConfiguration(), - com.google.common.base.Preconditions.class); + try (Connection conn = ConnectionFactory.createConnection(conf); + Table table = conn.getTable(tableName); + RegionLocator regionLocator = conn.getRegionLocator(tableName)){ + job.setReducerClass(KeyValueSortReducer.class); + Path outputDir = new Path(hfileOutPath); + FileOutputFormat.setOutputPath(job, outputDir); + job.setMapOutputKeyClass(ImmutableBytesWritable.class); + job.setMapOutputValueClass(KeyValue.class); + HFileOutputFormat2.configureIncrementalLoad(job, table.getTableDescriptor(), regionLocator); + TableMapReduceUtil.addDependencyJars(job.getConfiguration(), + com.google.common.base.Preconditions.class); + } } else { // No reducers. Just write straight to table. Call initTableReducerJob // because it sets up the TableOutputFormat. 
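For reference, the hunk above moves Import's bulk-load setup off the HTable constructor and onto the Connection API. A minimal standalone sketch of that setup pattern (not part of the patch; the table name and output path are placeholders, and the input-format/mapper side of the job is omitted) might look like this:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadJobSetup {
  // Configures the output (HFile) side of a bulk-load job for a hypothetical
  // table "my_table", writing HFiles under /tmp/bulkload-out.
  public static Job createJob(Configuration conf) throws Exception {
    TableName tableName = TableName.valueOf("my_table");
    Job job = Job.getInstance(conf, "bulkload_example");
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(KeyValue.class);
    FileOutputFormat.setOutputPath(job, new Path("/tmp/bulkload-out"));
    // The Connection, Table and RegionLocator are only needed while the job is
    // being configured; they can be closed before the job is submitted.
    try (Connection conn = ConnectionFactory.createConnection(conf);
        Table table = conn.getTable(tableName);
        RegionLocator regionLocator = conn.getRegionLocator(tableName)) {
      HFileOutputFormat2.configureIncrementalLoad(job, table.getTableDescriptor(),
          regionLocator);
    }
    return job;
  }
}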
@@ -482,6 +493,8 @@ public class Import { + " Filter#filterKeyValue(KeyValue) method to determine if the KeyValue should be added;" + " Filter.ReturnCode#INCLUDE and #INCLUDE_AND_NEXT_COL will be considered as including" + " the KeyValue."); + System.err.println(" -D " + JOB_NAME_CONF_KEY + + "=jobName - use the specified mapreduce job name for the import"); System.err.println("For performance consider the following options:\n" + " -Dmapreduce.map.speculative=false\n" + " -Dmapreduce.reduce.speculative=false\n" @@ -515,29 +528,34 @@ public class Import { } } - /** - * Main entry point. - * - * @param args The command line parameters. - * @throws Exception When running the job fails. - */ - public static void main(String[] args) throws Exception { - Configuration conf = HBaseConfiguration.create(); - String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs(); + @Override + public int run(String[] args) throws Exception { + String[] otherArgs = new GenericOptionsParser(getConf(), args).getRemainingArgs(); if (otherArgs.length < 2) { usage("Wrong number of arguments: " + otherArgs.length); - System.exit(-1); + return -1; } String inputVersionString = System.getProperty(ResultSerialization.IMPORT_FORMAT_VER); if (inputVersionString != null) { - conf.set(ResultSerialization.IMPORT_FORMAT_VER, inputVersionString); + getConf().set(ResultSerialization.IMPORT_FORMAT_VER, inputVersionString); } - Job job = createSubmittableJob(conf, otherArgs); + Job job = createSubmittableJob(getConf(), otherArgs); boolean isJobSuccessful = job.waitForCompletion(true); if(isJobSuccessful){ // Flush all the regions of the table - flushRegionsIfNecessary(conf); + flushRegionsIfNecessary(getConf()); } - System.exit(job.waitForCompletion(true) ? 0 : 1); + return (isJobSuccessful ? 0 : 1); + } + + /** + * Main entry point. + * @param args The command line parameters. + * @throws Exception When running the job fails. 
+ */ + public static void main(String[] args) throws Exception { + int errCode = ToolRunner.run(HBaseConfiguration.create(), new Import(), args); + System.exit(errCode); } + } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java index 54e0034..90f2f0e 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java @@ -20,17 +20,13 @@ package org.apache.hadoop.hbase.mapreduce; import static java.lang.String.format; -import java.io.File; -import java.io.IOException; -import java.util.ArrayList; -import java.util.HashSet; -import java.util.Set; +import com.google.common.base.Preconditions; +import com.google.common.base.Splitter; +import com.google.common.collect.Lists; import org.apache.commons.lang.StringUtils; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configured; import org.apache.hadoop.fs.Path; @@ -40,11 +36,14 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotFoundException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; -import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.RegionLocator; +import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.util.Base64; import org.apache.hadoop.hbase.util.Bytes; @@ -59,9 +58,11 @@ import org.apache.hadoop.util.GenericOptionsParser; import org.apache.hadoop.util.Tool; import org.apache.hadoop.util.ToolRunner; -import com.google.common.base.Preconditions; -import com.google.common.base.Splitter; -import com.google.common.collect.Lists; +import java.io.File; +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.Set; /** * Tool to import data from a TSV file. 
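The same Tool/ToolRunner conversion is applied to CellCounter, Export, Import and RowCounter above. The general shape of a converted driver, reduced to a minimal illustrative class (the class name, usage string and job name below are not from the patch), is:

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ExampleDriver extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // ToolRunner has already applied generic options (-D, -conf, ...) to getConf();
    // the remaining arguments are handed to run().
    String[] otherArgs = new GenericOptionsParser(getConf(), args).getRemainingArgs();
    if (otherArgs.length < 1) {
      System.err.println("Usage: ExampleDriver <tablename>");
      return -1;  // return instead of System.exit() so callers can reuse the tool
    }
    // Mirrors the patch's JOB_NAME_CONF_KEY handling: honor mapreduce.job.name if set.
    System.out.println("job name would be: "
        + getConf().get("mapreduce.job.name", "example_" + otherArgs[0]));
    return 0;
  }

  public static void main(String[] args) throws Exception {
    int errCode = ToolRunner.run(HBaseConfiguration.create(), new ExampleDriver(), args);
    System.exit(errCode);
  }
}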
@@ -496,7 +497,8 @@ public class ImportTsv extends Configured implements Tool { throw new TableNotFoundException(errorMsg); } } - try (HTable table = (HTable)connection.getTable(tableName)) { + try (Table table = connection.getTable(tableName); + RegionLocator regionLocator = connection.getRegionLocator(tableName)) { boolean noStrict = conf.getBoolean(NO_STRICT_COL_FAMILY, false); // if no.strict is false then check column family if(!noStrict) { @@ -534,7 +536,8 @@ public class ImportTsv extends Configured implements Tool { job.setMapOutputValueClass(Put.class); job.setCombinerClass(PutCombiner.class); } - HFileOutputFormat2.configureIncrementalLoad(job, table, table); + HFileOutputFormat2.configureIncrementalLoad(job, table.getTableDescriptor(), + regionLocator); } } else { if (!admin.tableExists(tableName)) { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java index b4b6adc..0333e36 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java @@ -20,42 +20,21 @@ package org.apache.hadoop.hbase.mapreduce; import static java.lang.String.format; -import java.io.FileNotFoundException; -import java.io.IOException; -import java.io.InterruptedIOException; -import java.nio.ByteBuffer; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collection; -import java.util.Deque; -import java.util.HashMap; -import java.util.HashSet; -import java.util.LinkedList; -import java.util.List; -import java.util.Map; -import java.util.Map.Entry; -import java.util.Set; -import java.util.TreeMap; -import java.util.concurrent.Callable; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.ExecutorService; -import java.util.concurrent.Future; -import java.util.concurrent.LinkedBlockingQueue; -import java.util.concurrent.ThreadPoolExecutor; -import java.util.concurrent.TimeUnit; +import com.google.common.collect.HashMultimap; +import com.google.common.collect.Multimap; +import com.google.common.collect.Multimaps; +import com.google.common.util.concurrent.ThreadFactoryBuilder; import org.apache.commons.lang.mutable.MutableInt; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configured; -import org.apache.hadoop.fs.permission.FsPermission; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.permission.FsPermission; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; @@ -64,10 +43,14 @@ import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotFoundException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.HBaseAdmin; import 
org.apache.hadoop.hbase.client.HConnection; import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.client.RegionLocator; import org.apache.hadoop.hbase.client.RegionServerCallable; import org.apache.hadoop.hbase.client.RpcRetryingCallerFactory; import org.apache.hadoop.hbase.client.Table; @@ -95,12 +78,30 @@ import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.util.Tool; import org.apache.hadoop.util.ToolRunner; -import com.google.common.collect.HashMultimap; -import com.google.common.collect.Multimap; -import com.google.common.collect.Multimaps; -import com.google.common.util.concurrent.ThreadFactoryBuilder; - +import java.io.FileNotFoundException; +import java.io.IOException; +import java.io.InterruptedIOException; +import java.nio.ByteBuffer; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.Deque; +import java.util.HashMap; +import java.util.HashSet; +import java.util.LinkedList; +import java.util.List; +import java.util.Map; +import java.util.Map.Entry; +import java.util.Set; +import java.util.TreeMap; import java.util.UUID; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Future; +import java.util.concurrent.LinkedBlockingQueue; +import java.util.concurrent.ThreadPoolExecutor; +import java.util.concurrent.TimeUnit; /** * Tool to load the output of HFileOutputFormat into an existing table. @@ -235,12 +236,24 @@ public class LoadIncrementalHFiles extends Configured implements Tool { public void doBulkLoad(Path hfofDir, final HTable table) throws TableNotFoundException, IOException { - final HConnection conn = table.getConnection(); + doBulkLoad(hfofDir, table.getConnection().getAdmin(), table, table.getRegionLocator()); + } + + /** + * Perform a bulk load of the given directory into the given + * pre-existing table. This method is not threadsafe. + * + * @param hfofDir the directory that was provided as the output path + * of a job using HFileOutputFormat + * @param table the table to load into + * @throws TableNotFoundException if table does not yet exist + */ + @SuppressWarnings("deprecation") + public void doBulkLoad(Path hfofDir, final Admin admin, Table table, + RegionLocator regionLocator) throws TableNotFoundException, IOException { - if (!conn.isTableAvailable(table.getName())) { - throw new TableNotFoundException("Table " + - Bytes.toStringBinary(table.getTableName()) + - "is not currently available."); + if (!admin.isTableAvailable(regionLocator.getName())) { + throw new TableNotFoundException("Table " + table.getName() + "is not currently available."); } // initialize thread pools @@ -276,7 +289,7 @@ public class LoadIncrementalHFiles extends Configured implements Tool { String msg = "Unmatched family names found: unmatched family names in HFiles to be bulkloaded: " + unmatchedFamilies + "; valid family names of table " - + Bytes.toString(table.getTableName()) + " are: " + familyNames; + + table.getName() + " are: " + familyNames; LOG.error(msg); throw new IOException(msg); } @@ -300,7 +313,7 @@ public class LoadIncrementalHFiles extends Configured implements Tool { // Assumes that region splits can happen while this occurs. while (!queue.isEmpty()) { // need to reload split keys each iteration. 
- final Pair startEndKeys = table.getStartEndKeys(); + final Pair startEndKeys = regionLocator.getStartEndKeys(); if (count != 0) { LOG.info("Split occured while grouping HFiles, retry attempt " + + count + " with " + queue.size() + " files remaining to group or split"); @@ -323,7 +336,7 @@ public class LoadIncrementalHFiles extends Configured implements Tool { + " hfiles to one family of one region"); } - bulkLoadPhase(table, conn, pool, queue, regionGroups); + bulkLoadPhase(table, admin.getConnection(), pool, queue, regionGroups); // NOTE: The next iteration's split / group could happen in parallel to // atomic bulkloads assuming that there are splits and no merges, and @@ -359,7 +372,7 @@ public class LoadIncrementalHFiles extends Configured implements Tool { * them. Any failures are re-queued for another pass with the * groupOrSplitPhase. */ - protected void bulkLoadPhase(final Table table, final HConnection conn, + protected void bulkLoadPhase(final Table table, final Connection conn, ExecutorService pool, Deque queue, final Multimap regionGroups) throws IOException { // atomically bulk load the groups. @@ -431,7 +444,7 @@ public class LoadIncrementalHFiles extends Configured implements Tool { * @return A Multimap that groups LQI by likely * bulk load region targets. */ - private Multimap groupOrSplitPhase(final HTable table, + private Multimap groupOrSplitPhase(final Table table, ExecutorService pool, Deque queue, final Pair startEndKeys) throws IOException { // need synchronized only within this scope of this @@ -524,7 +537,7 @@ public class LoadIncrementalHFiles extends Configured implements Tool { * @throws IOException */ protected List groupOrSplit(Multimap regionGroups, - final LoadQueueItem item, final HTable table, + final LoadQueueItem item, final Table table, final Pair startEndKeys) throws IOException { final Path hfilePath = item.hfilePath; @@ -569,18 +582,18 @@ public class LoadIncrementalHFiles extends Configured implements Tool { */ if (indexForCallable < 0) { throw new IOException("The first region info for table " - + Bytes.toString(table.getTableName()) + + table.getName() + " cann't be found in hbase:meta.Please use hbck tool to fix it first."); } else if ((indexForCallable == startEndKeys.getFirst().length - 1) && !Bytes.equals(startEndKeys.getSecond()[indexForCallable], HConstants.EMPTY_BYTE_ARRAY)) { throw new IOException("The last region info for table " - + Bytes.toString(table.getTableName()) + + table.getName() + " cann't be found in hbase:meta.Please use hbck tool to fix it first."); } else if (indexForCallable + 1 < startEndKeys.getFirst().length && !(Bytes.compareTo(startEndKeys.getSecond()[indexForCallable], startEndKeys.getFirst()[indexForCallable + 1]) == 0)) { throw new IOException("The endkey of one region for table " - + Bytes.toString(table.getTableName()) + + table.getName() + " is not equal to the startkey of the next region in hbase:meta." 
+ "Please use hbck tool to fix it first."); } @@ -623,7 +636,7 @@ public class LoadIncrementalHFiles extends Configured implements Tool { * @return empty list if success, list of items to retry on recoverable * failure */ - protected List tryAtomicRegionLoad(final HConnection conn, + protected List tryAtomicRegionLoad(final Connection conn, final TableName tableName, final byte[] first, Collection lqis) throws IOException { final List> famPaths = @@ -690,7 +703,8 @@ public class LoadIncrementalHFiles extends Configured implements Tool { try { List toRetry = new ArrayList(); Configuration conf = getConf(); - boolean success = RpcRetryingCallerFactory.instantiate(conf). newCaller() + boolean success = RpcRetryingCallerFactory.instantiate(conf, + null). newCaller() .callWithRetries(svrCallable, Integer.MAX_VALUE); if (!success) { LOG.warn("Attempt to bulk load region containing " diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java index 5c253cb..890cfdd 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableInputFormatBase.java @@ -46,6 +46,9 @@ import org.apache.hadoop.mapreduce.JobContext; import org.apache.hadoop.mapreduce.RecordReader; import org.apache.hadoop.mapreduce.TaskAttemptContext; +import java.util.Map; +import java.util.HashMap; +import java.util.Iterator; /** * A base for {@link MultiTableInputFormat}s. Receives a list of * {@link Scan} instances that define the input tables and @@ -129,75 +132,83 @@ public abstract class MultiTableInputFormatBase extends if (scans.isEmpty()) { throw new IOException("No scans were provided."); } - List splits = new ArrayList(); + Map> tableMaps = new HashMap>(); for (Scan scan : scans) { byte[] tableNameBytes = scan.getAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME); if (tableNameBytes == null) throw new IOException("A scan object did not have a table name"); TableName tableName = TableName.valueOf(tableNameBytes); - Table table = null; - RegionLocator regionLocator = null; - Connection conn = null; - try { - conn = ConnectionFactory.createConnection(context.getConfiguration()); - table = conn.getTable(tableName); - regionLocator = conn.getRegionLocator(tableName); - regionLocator = (RegionLocator) table; - Pair keys = regionLocator.getStartEndKeys(); - if (keys == null || keys.getFirst() == null || - keys.getFirst().length == 0) { - throw new IOException("Expecting at least one region for table : " - + tableName.getNameAsString()); - } - int count = 0; - byte[] startRow = scan.getStartRow(); - byte[] stopRow = scan.getStopRow(); + List scanList = tableMaps.get(tableName); + if (scanList == null) { + scanList = new ArrayList(); + tableMaps.put(tableName, scanList); + } + scanList.add(scan); + } - RegionSizeCalculator sizeCalculator = new RegionSizeCalculator( - regionLocator, conn.getAdmin()); + List splits = new ArrayList(); + Iterator iter = tableMaps.entrySet().iterator(); + while (iter.hasNext()) { + Map.Entry> entry = (Map.Entry>) iter.next(); + TableName tableName = entry.getKey(); + List scanList = entry.getValue(); - for (int i = 0; i < keys.getFirst().length; i++) { - if (!includeRegionInSplit(keys.getFirst()[i], keys.getSecond()[i])) { - continue; + try (Connection conn = ConnectionFactory.createConnection(context.getConfiguration()); + Table table = conn.getTable(tableName); + 
RegionLocator regionLocator = conn.getRegionLocator(tableName)) { + RegionSizeCalculator sizeCalculator = new RegionSizeCalculator( + regionLocator, conn.getAdmin()); + Pair keys = regionLocator.getStartEndKeys(); + for (Scan scan : scanList) { + if (keys == null || keys.getFirst() == null || keys.getFirst().length == 0) { + throw new IOException("Expecting at least one region for table : " + + tableName.getNameAsString()); } - HRegionLocation hregionLocation = regionLocator.getRegionLocation( - keys.getFirst()[i], false); - String regionHostname = hregionLocation.getHostname(); - HRegionInfo regionInfo = hregionLocation.getRegionInfo(); + int count = 0; + + byte[] startRow = scan.getStartRow(); + byte[] stopRow = scan.getStopRow(); + + for (int i = 0; i < keys.getFirst().length; i++) { + if (!includeRegionInSplit(keys.getFirst()[i], keys.getSecond()[i])) { + continue; + } - // determine if the given start and stop keys fall into the range - if ((startRow.length == 0 || keys.getSecond()[i].length == 0 || - Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && - (stopRow.length == 0 || - Bytes.compareTo(stopRow, keys.getFirst()[i]) > 0)) { - byte[] splitStart = - startRow.length == 0 || - Bytes.compareTo(keys.getFirst()[i], startRow) >= 0 ? keys - .getFirst()[i] : startRow; - byte[] splitStop = - (stopRow.length == 0 || Bytes.compareTo(keys.getSecond()[i], - stopRow) <= 0) && keys.getSecond()[i].length > 0 ? keys - .getSecond()[i] : stopRow; + if ((startRow.length == 0 || keys.getSecond()[i].length == 0 || + Bytes.compareTo(startRow, keys.getSecond()[i]) < 0) && + (stopRow.length == 0 || Bytes.compareTo(stopRow, + keys.getFirst()[i]) > 0)) { + byte[] splitStart = startRow.length == 0 || + Bytes.compareTo(keys.getFirst()[i], startRow) >= 0 ? + keys.getFirst()[i] : startRow; + byte[] splitStop = (stopRow.length == 0 || + Bytes.compareTo(keys.getSecond()[i], stopRow) <= 0) && + keys.getSecond()[i].length > 0 ? 
+ keys.getSecond()[i] : stopRow; - long regionSize = sizeCalculator.getRegionSize(regionInfo.getRegionName()); - TableSplit split = - new TableSplit(regionLocator.getName(), - scan, splitStart, splitStop, regionHostname, regionSize); + HRegionLocation hregionLocation = regionLocator.getRegionLocation( + keys.getFirst()[i], false); + String regionHostname = hregionLocation.getHostname(); + HRegionInfo regionInfo = hregionLocation.getRegionInfo(); + long regionSize = sizeCalculator.getRegionSize( + regionInfo.getRegionName()); - splits.add(split); - if (LOG.isDebugEnabled()) - LOG.debug("getSplits: split -> " + (count++) + " -> " + split); + TableSplit split = new TableSplit(table.getName(), + scan, splitStart, splitStop, regionHostname, regionSize); + + splits.add(split); + + if (LOG.isDebugEnabled()) + LOG.debug("getSplits: split -> " + (count++) + " -> " + split); + } } } - } finally { - if (null != table) table.close(); - if (null != regionLocator) regionLocator.close(); - if (null != conn) conn.close(); } } + return splits; } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java index 04450f8..5a506e1 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java @@ -27,6 +27,7 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.conf.Configured; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Scan; @@ -37,6 +38,8 @@ import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.mapreduce.Job; import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat; import org.apache.hadoop.util.GenericOptionsParser; +import org.apache.hadoop.util.Tool; +import org.apache.hadoop.util.ToolRunner; /** * A job with a just a map phase to count rows. Map outputs table rows IF the @@ -44,11 +47,13 @@ import org.apache.hadoop.util.GenericOptionsParser; */ @InterfaceAudience.Public @InterfaceStability.Stable -public class RowCounter { +public class RowCounter extends Configured implements Tool { /** Name of this 'program'. */ static final String NAME = "rowcounter"; + private final static String JOB_NAME_CONF_KEY = "mapreduce.job.name"; + /** * Mapper that runs the count. */ @@ -130,7 +135,7 @@ public class RowCounter { } } - Job job = new Job(conf, NAME + "_" + tableName); + Job job = Job.getInstance(conf, conf.get(JOB_NAME_CONF_KEY, NAME + "_" + tableName)); job.setJarByClass(RowCounter.class); Scan scan = new Scan(); scan.setCacheBlocks(false); @@ -190,23 +195,28 @@ public class RowCounter { + "-Dmapreduce.map.speculative=false"); } - /** - * Main entry point. - * - * @param args The command line parameters. - * @throws Exception When running the job fails. 
- */ - public static void main(String[] args) throws Exception { - Configuration conf = HBaseConfiguration.create(); - String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs(); + @Override + public int run(String[] args) throws Exception { + String[] otherArgs = new GenericOptionsParser(getConf(), args).getRemainingArgs(); if (otherArgs.length < 1) { printUsage("Wrong number of parameters: " + args.length); - System.exit(-1); + return -1; } - Job job = createSubmittableJob(conf, otherArgs); + Job job = createSubmittableJob(getConf(), otherArgs); if (job == null) { - System.exit(-1); + return -1; } - System.exit(job.waitForCompletion(true) ? 0 : 1); + return (job.waitForCompletion(true) ? 0 : 1); } + + /** + * Main entry point. + * @param args The command line parameters. + * @throws Exception When running the job fails. + */ + public static void main(String[] args) throws Exception { + int errCode = ToolRunner.run(HBaseConfiguration.create(), new RowCounter(), args); + System.exit(errCode); + } + } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java index 4123467..d6e814d 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java @@ -31,6 +31,7 @@ import javax.naming.NamingException; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.HConstants; @@ -91,6 +92,16 @@ import org.apache.hadoop.util.StringUtils; public abstract class TableInputFormatBase extends InputFormat { + /** Specify if we enable auto-balance for input in M/R jobs.*/ + public static final String MAPREDUCE_INPUT_AUTOBALANCE = "hbase.mapreduce.input.autobalance"; + /** Specify if ratio for data skew in M/R jobs, it goes well with the enabling hbase.mapreduce + * .input.autobalance property.*/ + public static final String INPUT_AUTOBALANCE_MAXSKEWRATIO = "hbase.mapreduce.input.autobalance" + + ".maxskewratio"; + /** Specify if the row key in table is text (ASCII between 32~126), + * default is true. False means the table is using binary row key*/ + public static final String TABLE_ROW_TEXTKEY = "hbase.table.row.textkey"; + final Log LOG = LogFactory.getLog(TableInputFormatBase.class); /** Holds the details for the internal scanner. @@ -223,7 +234,7 @@ extends InputFormat { } List splits = new ArrayList(keys.getFirst().length); for (int i = 0; i < keys.getFirst().length; i++) { - if ( !includeRegionInSplit(keys.getFirst()[i], keys.getSecond()[i])) { + if (!includeRegionInSplit(keys.getFirst()[i], keys.getSecond()[i])) { continue; } HRegionLocation location = regionLocator.getRegionLocation(keys.getFirst()[i], false); @@ -266,7 +277,26 @@ extends InputFormat { } } } - return splits; + //The default value of "hbase.mapreduce.input.autobalance" is false, which means not enabled. 
+ boolean enableAutoBalance = context.getConfiguration() + .getBoolean(MAPREDUCE_INPUT_AUTOBALANCE, false); + if (enableAutoBalance) { + long totalRegionSize=0; + for (int i = 0; i < splits.size(); i++){ + TableSplit ts = (TableSplit)splits.get(i); + totalRegionSize += ts.getLength(); + } + long averageRegionSize = totalRegionSize / splits.size(); + // the averageRegionSize must be positive. + if (averageRegionSize <= 0) { + LOG.warn("The averageRegionSize is not positive: "+ averageRegionSize + ", " + + "set it to 1."); + averageRegionSize = 1; + } + return calculateRebalancedSplits(splits, context, averageRegionSize); + } else { + return splits; + } } public String reverseDNS(InetAddress ipAddress) throws NamingException, UnknownHostException { @@ -289,6 +319,170 @@ extends InputFormat { } /** + * Calculates the number of MapReduce input splits for the map tasks. The number of + * MapReduce input splits depends on the average region size and the "data skew ratio" user set in + * configuration. + * + * @param list The list of input splits before balance. + * @param context The current job context. + * @param average The average size of all regions . + * @return The list of input splits. + * @throws IOException When creating the list of splits fails. + * @see org.apache.hadoop.mapreduce.InputFormat#getSplits( + * org.apache.hadoop.mapreduce.JobContext) + */ + public List calculateRebalancedSplits(List list, JobContext context, + long average) throws IOException { + List resultList = new ArrayList(); + Configuration conf = context.getConfiguration(); + //The default data skew ratio is 3 + long dataSkewRatio = conf.getLong(INPUT_AUTOBALANCE_MAXSKEWRATIO, 3); + //It determines which mode to use: text key mode or binary key mode. The default is text mode. + boolean isTextKey = context.getConfiguration().getBoolean(TABLE_ROW_TEXTKEY, true); + long dataSkewThreshold = dataSkewRatio * average; + int count = 0; + while (count < list.size()) { + TableSplit ts = (TableSplit)list.get(count); + String regionLocation = ts.getRegionLocation(); + long regionSize = ts.getLength(); + if (regionSize >= dataSkewThreshold) { + // if the current region size is large than the data skew threshold, + // split the region into two MapReduce input splits. + byte[] splitKey = getSplitKey(ts.getStartRow(), ts.getEndRow(), isTextKey); + //Set the size of child TableSplit as 1/2 of the region size. The exact size of the + // MapReduce input splits is not far off. + TableSplit t1 = new TableSplit(table.getName(), ts.getStartRow(), splitKey, regionLocation, + regionSize / 2); + TableSplit t2 = new TableSplit(table.getName(), splitKey, ts.getEndRow(), regionLocation, + regionSize - regionSize / 2); + resultList.add(t1); + resultList.add(t2); + count++; + } else if (regionSize >= average) { + // if the region size between average size and data skew threshold size, + // make this region as one MapReduce input split. + resultList.add(ts); + count++; + } else { + // if the total size of several small continuous regions less than the average region size, + // combine them into one MapReduce input split. 
+ long totalSize = regionSize; + byte[] splitStartKey = ts.getStartRow(); + byte[] splitEndKey = ts.getEndRow(); + count++; + for (; count < list.size(); count++) { + TableSplit nextRegion = (TableSplit)list.get(count); + long nextRegionSize = nextRegion.getLength(); + if (totalSize + nextRegionSize <= dataSkewThreshold) { + totalSize = totalSize + nextRegionSize; + splitEndKey = nextRegion.getEndRow(); + } else { + break; + } + } + TableSplit t = new TableSplit(table.getName(), splitStartKey, splitEndKey, + regionLocation, totalSize); + resultList.add(t); + } + } + return resultList; + } + + /** + * select a split point in the region. The selection of the split point is based on an uniform + * distribution assumption for the keys in a region. + * Here are some examples: + * startKey: aaabcdefg endKey: aaafff split point: aaad + * startKey: 111000 endKey: 1125790 split point: 111b + * startKey: 1110 endKey: 1120 split point: 111_ + * startKey: binary key { 13, -19, 126, 127 }, endKey: binary key { 13, -19, 127, 0 }, + * split point: binary key { 13, -19, 127, -64 } + * Set this function as "public static", make it easier for test. + * + * @param start Start key of the region + * @param end End key of the region + * @param isText It determines to use text key mode or binary key mode + * @return The split point in the region. + */ + public static byte[] getSplitKey(byte[] start, byte[] end, boolean isText) { + byte upperLimitByte; + byte lowerLimitByte; + //Use text mode or binary mode. + if (isText) { + //The range of text char set in ASCII is [32,126], the lower limit is space and the upper + // limit is '~'. + upperLimitByte = '~'; + lowerLimitByte = ' '; + } else { + upperLimitByte = Byte.MAX_VALUE; + lowerLimitByte = Byte.MIN_VALUE; + } + // For special case + // Example 1 : startkey=null, endkey="hhhqqqwww", splitKey="h" + // Example 2 (text key mode): startKey="ffffaaa", endKey=null, splitkey="f~~~~~~" + if (start.length == 0 && end.length == 0){ + return new byte[]{(byte) ((lowerLimitByte + upperLimitByte) / 2)}; + } + if (start.length == 0 && end.length != 0){ + return new byte[]{ end[0] }; + } + if (start.length != 0 && end.length == 0){ + byte[] result =new byte[start.length]; + result[0]=start[0]; + for (int k = 1; k < start.length; k++){ + result[k] = upperLimitByte; + } + return result; + } + // A list to store bytes in split key + List resultBytesList = new ArrayList(); + int maxLength = start.length > end.length ? start.length : end.length; + for (int i = 0; i < maxLength; i++) { + //calculate the midpoint byte between the first difference + //for example: "11ae" and "11chw", the midpoint is "11b" + //another example: "11ae" and "11bhw", the first different byte is 'a' and 'b', + // there is no midpoint between 'a' and 'b', so we need to check the next byte. + if (start[i] == end[i]) { + resultBytesList.add(start[i]); + //For special case like: startKey="aaa", endKey="aaaz", splitKey="aaaM" + if (i + 1 == start.length) { + resultBytesList.add((byte) ((lowerLimitByte + end[i + 1]) / 2)); + break; + } + } else { + //if the two bytes differ by 1, like ['a','b'], We need to check the next byte to find + // the midpoint. + if ((int)end[i] - (int)start[i] == 1) { + //get next byte after the first difference + byte startNextByte = (i + 1 < start.length) ? start[i + 1] : lowerLimitByte; + byte endNextByte = (i + 1 < end.length) ? 
end[i + 1] : lowerLimitByte; + int byteRange = (upperLimitByte - startNextByte) + (endNextByte - lowerLimitByte) + 1; + int halfRange = byteRange / 2; + if ((int)startNextByte + halfRange > (int)upperLimitByte) { + resultBytesList.add(end[i]); + resultBytesList.add((byte) (startNextByte + halfRange - upperLimitByte + + lowerLimitByte)); + } else { + resultBytesList.add(start[i]); + resultBytesList.add((byte) (startNextByte + halfRange)); + } + } else { + //calculate the midpoint key by the fist different byte (normal case), + // like "11ae" and "11chw", the midpoint is "11b" + resultBytesList.add((byte) ((start[i] + end[i]) / 2)); + } + break; + } + } + //transform the List of bytes to byte[] + byte result[] = new byte[resultBytesList.size()]; + for (int k = 0; k < resultBytesList.size(); k++) { + result[k] = (byte) resultBytesList.get(k); + } + return result; + } + + /** * * * Test if the given region is to be included in the InputSplit while splitting @@ -355,7 +549,7 @@ extends InputFormat { @Deprecated protected void setHTable(HTable table) throws IOException { this.table = table; - this.regionLocator = table; + this.regionLocator = table.getRegionLocator(); this.admin = table.getConnection().getAdmin(); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java index cd69a5b..107e7b6 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java @@ -32,7 +32,6 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.Delete; -import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Mutation; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Table; diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java index a487878..06ab5c4 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/WALPlayer.java @@ -17,16 +17,8 @@ */ package org.apache.hadoop.hbase.mapreduce; -import java.io.IOException; -import java.text.ParseException; -import java.text.SimpleDateFormat; -import java.util.Map; -import java.util.TreeMap; - import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.conf.Configured; import org.apache.hadoop.fs.Path; @@ -36,14 +28,19 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.Delete; -import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Mutation; import org.apache.hadoop.hbase.client.Put; +import 
org.apache.hadoop.hbase.client.RegionLocator; +import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; -import org.apache.hadoop.hbase.wal.WALKey; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.wal.WALKey; import org.apache.hadoop.mapreduce.Job; import org.apache.hadoop.mapreduce.Mapper; import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; @@ -52,6 +49,12 @@ import org.apache.hadoop.util.GenericOptionsParser; import org.apache.hadoop.util.Tool; import org.apache.hadoop.util.ToolRunner; +import java.io.IOException; +import java.text.ParseException; +import java.text.SimpleDateFormat; +import java.util.Map; +import java.util.TreeMap; + /** * A tool to replay WAL files as a M/R job. * The WAL can be replayed for a set of tables or all tables, @@ -81,6 +84,8 @@ public class WALPlayer extends Configured implements Tool { Configuration.addDeprecation(HLogInputFormat.END_TIME_KEY, WALInputFormat.END_TIME_KEY); } + private final static String JOB_NAME_CONF_KEY = "mapreduce.job.name"; + /** * A mapper that just writes out KeyValues. * This one can be used together with {@link KeyValueSortReducer} @@ -246,7 +251,7 @@ public class WALPlayer extends Configured implements Tool { } conf.setStrings(TABLES_KEY, tables); conf.setStrings(TABLE_MAP_KEY, tableMap); - Job job = new Job(conf, NAME + "_" + inputDir); + Job job = Job.getInstance(conf, conf.get(JOB_NAME_CONF_KEY, NAME + "_" + inputDir)); job.setJarByClass(WALPlayer.class); FileInputFormat.setInputPaths(job, inputDir); job.setInputFormatClass(WALInputFormat.class); @@ -257,13 +262,17 @@ public class WALPlayer extends Configured implements Tool { if (tables.length != 1) { throw new IOException("Exactly one table must be specified for the bulk export option"); } - HTable table = new HTable(conf, TableName.valueOf(tables[0])); + TableName tableName = TableName.valueOf(tables[0]); job.setMapperClass(WALKeyValueMapper.class); job.setReducerClass(KeyValueSortReducer.class); Path outputDir = new Path(hfileOutPath); FileOutputFormat.setOutputPath(job, outputDir); job.setMapOutputValueClass(KeyValue.class); - HFileOutputFormat2.configureIncrementalLoad(job, table, table); + try (Connection conn = ConnectionFactory.createConnection(conf); + Table table = conn.getTable(tableName); + RegionLocator regionLocator = conn.getRegionLocator(tableName)) { + HFileOutputFormat2.configureIncrementalLoad(job, table.getTableDescriptor(), regionLocator); + } TableMapReduceUtil.addDependencyJars(job.getConfiguration(), com.google.common.base.Preconditions.class); } else { @@ -300,6 +309,8 @@ public class WALPlayer extends Configured implements Tool { System.err.println("Other options: (specify time range to WAL edit to consider)"); System.err.println(" -D" + WALInputFormat.START_TIME_KEY + "=[date|ms]"); System.err.println(" -D" + WALInputFormat.END_TIME_KEY + "=[date|ms]"); + System.err.println(" -D " + JOB_NAME_CONF_KEY + + "=jobName - use the specified mapreduce job name for the wal player"); System.err.println("For performance also consider the following options:\n" + " -Dmapreduce.map.speculative=false\n" + " -Dmapreduce.reduce.speculative=false"); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java index f94dac9..c091312 100644 --- 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/replication/VerifyReplication.java @@ -76,6 +76,8 @@ public class VerifyReplication extends Configured implements Tool { static String families = null; static String peerId = null; + private final static String JOB_NAME_CONF_KEY = "mapreduce.job.name"; + /** * Map-only comparator for 2 tables */ @@ -253,7 +255,7 @@ public class VerifyReplication extends Configured implements Tool { conf.set(NAME + ".peerQuorumAddress", peerQuorumAddress); LOG.info("Peer Quorum Address: " + peerQuorumAddress); - Job job = new Job(conf, NAME + "_" + tableName); + Job job = Job.getInstance(conf, conf.get(JOB_NAME_CONF_KEY, NAME + "_" + tableName)); job.setJarByClass(VerifyReplication.class); Scan scan = new Scan(); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignCallable.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignCallable.java index e21d11a..4513a5d 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignCallable.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignCallable.java @@ -34,18 +34,16 @@ public class AssignCallable implements Callable { private AssignmentManager assignmentManager; private HRegionInfo hri; - private boolean newPlan; public AssignCallable( - AssignmentManager assignmentManager, HRegionInfo hri, boolean newPlan) { + AssignmentManager assignmentManager, HRegionInfo hri) { this.assignmentManager = assignmentManager; - this.newPlan = newPlan; this.hri = hri; } @Override public Object call() throws Exception { - assignmentManager.assign(hri, true, newPlan); + assignmentManager.assign(hri); return null; } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java index e39adc8..2f6679f 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java @@ -19,9 +19,7 @@ package org.apache.hadoop.hbase.master; import java.io.IOException; -import java.io.InterruptedIOException; import java.util.ArrayList; -import java.util.Arrays; import java.util.Collection; import java.util.Collections; import java.util.HashMap; @@ -29,13 +27,13 @@ import java.util.HashSet; import java.util.Iterator; import java.util.List; import java.util.Map; -import java.util.Map.Entry; import java.util.NavigableMap; +import java.util.Random; import java.util.Set; import java.util.TreeMap; +import java.util.concurrent.Callable; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.CopyOnWriteArrayList; -import java.util.concurrent.ThreadFactory; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; @@ -48,30 +46,21 @@ import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.CoordinatedStateException; import org.apache.hadoop.hbase.HBaseIOException; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.MetaTableAccessor; import 
org.apache.hadoop.hbase.NotServingRegionException; import org.apache.hadoop.hbase.RegionLocations; -import org.apache.hadoop.hbase.RegionTransition; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotFoundException; -import org.apache.hadoop.hbase.TableStateManager; import org.apache.hadoop.hbase.client.RegionReplicaUtil; -import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.coordination.BaseCoordinatedStateManager; -import org.apache.hadoop.hbase.coordination.OpenRegionCoordination; -import org.apache.hadoop.hbase.coordination.RegionMergeCoordination; -import org.apache.hadoop.hbase.coordination.SplitTransactionCoordination.SplitTransactionDetails; -import org.apache.hadoop.hbase.coordination.ZkOpenRegionCoordination; -import org.apache.hadoop.hbase.coordination.ZkRegionMergeCoordination; -import org.apache.hadoop.hbase.exceptions.DeserializationException; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.executor.EventHandler; import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.executor.ExecutorService; @@ -81,17 +70,12 @@ import org.apache.hadoop.hbase.ipc.ServerNotRunningYetException; import org.apache.hadoop.hbase.master.RegionState.State; import org.apache.hadoop.hbase.master.balancer.FavoredNodeAssignmentHelper; import org.apache.hadoop.hbase.master.balancer.FavoredNodeLoadBalancer; -import org.apache.hadoop.hbase.master.handler.ClosedRegionHandler; import org.apache.hadoop.hbase.master.handler.DisableTableHandler; import org.apache.hadoop.hbase.master.handler.EnableTableHandler; -import org.apache.hadoop.hbase.master.handler.OpenedRegionHandler; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionStateTransition; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCode; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; -import org.apache.hadoop.hbase.regionserver.RegionAlreadyInTransitionException; import org.apache.hadoop.hbase.regionserver.RegionOpeningState; import org.apache.hadoop.hbase.regionserver.RegionServerStoppedException; -import org.apache.hadoop.hbase.util.ConfigUtil; import org.apache.hadoop.hbase.wal.DefaultWALProvider; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.FSUtils; @@ -99,39 +83,20 @@ import org.apache.hadoop.hbase.util.KeyLocker; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.util.PairOfSameType; import org.apache.hadoop.hbase.util.Threads; -import org.apache.hadoop.hbase.util.Triple; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperListener; import org.apache.hadoop.ipc.RemoteException; -import org.apache.zookeeper.AsyncCallback; import org.apache.zookeeper.KeeperException; -import org.apache.zookeeper.KeeperException.NoNodeException; -import org.apache.zookeeper.KeeperException.NodeExistsException; -import org.apache.zookeeper.data.Stat; import com.google.common.annotations.VisibleForTesting; -import com.google.common.collect.LinkedHashMultimap; /** * Manages and performs region assignment. - *

          - * Monitors ZooKeeper for events related to regions in transition. - *

          - * Handles existing regions in transition during master failover. + * Related communications with regionserver are all done over RPC. */ @InterfaceAudience.Private -public class AssignmentManager extends ZooKeeperListener { +public class AssignmentManager { private static final Log LOG = LogFactory.getLog(AssignmentManager.class); - public static final ServerName HBCK_CODE_SERVERNAME = ServerName.valueOf(HConstants.HBCK_CODE_NAME, - -1, -1L); - - static final String ALREADY_IN_TRANSITION_WAITTIME - = "hbase.assignment.already.intransition.waittime"; - static final int DEFAULT_ALREADY_IN_TRANSITION_WAITTIME = 60000; // 1 minute - protected final Server server; private ServerManager serverManager; @@ -148,6 +113,8 @@ public class AssignmentManager extends ZooKeeperListener { final private KeyLocker locker = new KeyLocker(); + Set replicasToClose = Collections.synchronizedSet(new HashSet()); + /** * Map of regions to reopen after the schema of a table is changed. Key - * encoded region name, value - HRegionInfo @@ -161,15 +128,6 @@ public class AssignmentManager extends ZooKeeperListener { private final int maximumAttempts; /** - * Map of two merging regions from the region to be created. - */ - private final Map> mergingRegions - = new HashMap>(); - - private final Map> splitRegions - = new HashMap>(); - - /** * The sleep time for which the assignment will wait before retrying in case of hbase:meta assignment * failure due to lack of availability of region plan or bad region plan */ @@ -186,21 +144,9 @@ public class AssignmentManager extends ZooKeeperListener { private final ExecutorService executorService; - // For unit tests, keep track of calls to ClosedRegionHandler - private Map closedRegionHandlerCalled = null; - - // For unit tests, keep track of calls to OpenedRegionHandler - private Map openedRegionHandlerCalled = null; - - //Thread pool executor service for timeout monitor + // Thread pool executor service. TODO, consolidate with executorService? private java.util.concurrent.ExecutorService threadPoolExecutorService; - // A bunch of ZK events workers. Each is a single thread executor service - private final java.util.concurrent.ExecutorService zkEventWorkers; - - private List ignoreStatesRSOffline = Arrays.asList( - EventType.RS_ZK_REGION_FAILED_OPEN, EventType.RS_ZK_REGION_CLOSED); - private final RegionStates regionStates; // The threshold to use bulk assigning. 
Using bulk assignment @@ -235,9 +181,6 @@ public class AssignmentManager extends ZooKeeperListener { private final ConcurrentHashMap failedOpenTracker = new ConcurrentHashMap(); - // A flag to indicate if we are using ZK for region assignment - private final boolean useZKForAssignment; - // In case not using ZK for region assignment, region states // are persisted in meta with a state store private final RegionStateStore regionStateStore; @@ -260,15 +203,14 @@ public class AssignmentManager extends ZooKeeperListener { * @param service Executor service * @param metricsMaster metrics manager * @param tableLockManager TableLock manager - * @throws KeeperException * @throws IOException */ public AssignmentManager(Server server, ServerManager serverManager, final LoadBalancer balancer, final ExecutorService service, MetricsMaster metricsMaster, - final TableLockManager tableLockManager) throws KeeperException, - IOException, CoordinatedStateException { - super(server.getZooKeeper()); + final TableLockManager tableLockManager, + final TableStateManager tableStateManager) + throws IOException { this.server = server; this.serverManager = serverManager; this.executorService = service; @@ -280,15 +222,9 @@ public class AssignmentManager extends ZooKeeperListener { this.shouldAssignRegionsWithFavoredNodes = conf.getClass( HConstants.HBASE_MASTER_LOADBALANCER_CLASS, Object.class).equals( FavoredNodeLoadBalancer.class); - try { - if (server.getCoordinatedStateManager() != null) { - this.tableStateManager = server.getCoordinatedStateManager().getTableStateManager(); - } else { - this.tableStateManager = null; - } - } catch (InterruptedException e) { - throw new InterruptedIOException(); - } + + this.tableStateManager = tableStateManager; + // This is the max attempts, not retries, so it should be at least 1. 
this.maximumAttempts = Math.max(1, this.server.getConfiguration().getInt("hbase.assignment.maximum.attempts", 10)); @@ -306,14 +242,8 @@ public class AssignmentManager extends ZooKeeperListener { this.bulkAssignThresholdRegions = conf.getInt("hbase.bulk.assignment.threshold.regions", 7); this.bulkAssignThresholdServers = conf.getInt("hbase.bulk.assignment.threshold.servers", 3); - int workers = conf.getInt("hbase.assignment.zkevent.workers", 20); - ThreadFactory threadFactory = Threads.newDaemonThreadFactory("AM.ZK.Worker"); - zkEventWorkers = Threads.getBoundedCachedThreadPool(workers, 60L, - TimeUnit.SECONDS, threadFactory); - this.tableLockManager = tableLockManager; - this.metricsAssignmentManager = new MetricsAssignmentManager(); - useZKForAssignment = ConfigUtil.useZKForAssignment(conf); + this.tableLockManager = tableLockManager; } /** @@ -409,8 +339,7 @@ public class AssignmentManager extends ZooKeeperListener { if (TableName.META_TABLE_NAME.equals(tableName)) { hris = new MetaTableLocator().getMetaRegions(server.getZooKeeper()); } else { - hris = MetaTableAccessor.getTableRegions(server.getZooKeeper(), - server.getConnection(), tableName, true); + hris = MetaTableAccessor.getTableRegions(server.getConnection(), tableName, true); } Integer pending = 0; @@ -457,10 +386,9 @@ public class AssignmentManager extends ZooKeeperListener { * @throws IOException * @throws KeeperException * @throws InterruptedException - * @throws CoordinatedStateException */ void joinCluster() throws IOException, - KeeperException, InterruptedException, CoordinatedStateException { + KeeperException, InterruptedException { long startTime = System.currentTimeMillis(); // Concurrency note: In the below the accesses on regionsInTransition are // outside of a synchronization block where usually all accesses to RIT are @@ -480,10 +408,6 @@ public class AssignmentManager extends ZooKeeperListener { // previous master process. boolean failover = processDeadServersAndRegionsInTransition(deadServers); - if (!useZKForAssignment) { - // Not use ZK for assignment any more, remove the ZNode - ZKUtil.deleteNodeRecursively(watcher, watcher.assignmentZNode); - } recoverTableInDisablingState(); recoverTableInEnablingState(); LOG.info("Joined the cluster in " + (System.currentTimeMillis() @@ -497,22 +421,11 @@ public class AssignmentManager extends ZooKeeperListener { * startup, will assign all user regions. * @param deadServers * Map of dead servers and their regions. Can be null. - * @throws KeeperException * @throws IOException * @throws InterruptedException */ - boolean processDeadServersAndRegionsInTransition( - final Set deadServers) throws KeeperException, - IOException, InterruptedException, CoordinatedStateException { - List nodes = ZKUtil.listChildrenNoWatch(watcher, - watcher.assignmentZNode); - - if (useZKForAssignment && nodes == null) { - String errorMessage = "Failed to get the children from ZK"; - server.abort(errorMessage, new IOException(errorMessage)); - return true; // Doesn't matter in this case - } - + boolean processDeadServersAndRegionsInTransition(final Set deadServers) + throws IOException, InterruptedException { boolean failover = !serverManager.getDeadServers().isEmpty(); if (failover) { // This may not be a failover actually, especially if meta is on this master. @@ -532,29 +445,17 @@ public class AssignmentManager extends ZooKeeperListener { break; } } - if (!failover && nodes != null) { - // If any one region except meta is in transition, it's a failover. 
- for (String encodedName: nodes) { - RegionState regionState = regionStates.getRegionState(encodedName); - if (regionState != null && !regionState.getRegion().isMetaRegion()) { - LOG.debug("Found " + regionState + " in RITs"); - failover = true; - break; - } - } - } - } - if (!failover && !useZKForAssignment) { - // If any region except meta is in transition on a live server, it's a failover. - Map regionsInTransition = regionStates.getRegionsInTransition(); - if (!regionsInTransition.isEmpty()) { - Set onlineServers = serverManager.getOnlineServers().keySet(); - for (RegionState regionState: regionsInTransition.values()) { - if (!regionState.getRegion().isMetaRegion() - && onlineServers.contains(regionState.getServerName())) { - LOG.debug("Found " + regionState + " in RITs"); - failover = true; - break; + if (!failover) { + // If any region except meta is in transition on a live server, it's a failover. + Map regionsInTransition = regionStates.getRegionsInTransition(); + if (!regionsInTransition.isEmpty()) { + for (RegionState regionState: regionsInTransition.values()) { + if (!regionState.getRegion().isMetaRegion() + && onlineServers.contains(regionState.getServerName())) { + LOG.debug("Found " + regionState + " in RITs"); + failover = true; + break; + } } } } @@ -596,8 +497,8 @@ public class AssignmentManager extends ZooKeeperListener { if (!failover) { disabledOrDisablingOrEnabling = tableStateManager.getTablesInStates( - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING, - ZooKeeperProtos.Table.State.ENABLING); + TableState.State.DISABLED, TableState.State.DISABLING, + TableState.State.ENABLING); // Clean re/start, mark all user regions closed before reassignment allRegions = regionStates.closeAllUserRegions( @@ -607,19 +508,15 @@ public class AssignmentManager extends ZooKeeperListener { // Now region states are restored regionStateStore.start(); - // If we found user regions out on cluster, its a failover. if (failover) { - LOG.info("Found regions out on cluster or in RIT; presuming failover"); - // Process list of dead servers and regions in RIT. - // See HBASE-4580 for more information. - processDeadServersAndRecoverLostRegions(deadServers); - } - - if (!failover && useZKForAssignment) { - // Cleanup any existing ZK nodes and start watching - ZKAssign.deleteAllNodes(watcher); - ZKUtil.listChildrenAndWatchForNewChildren(this.watcher, - this.watcher.assignmentZNode); + if (deadServers != null && !deadServers.isEmpty()) { + for (ServerName serverName: deadServers) { + if (!serverManager.isServerDead(serverName)) { + serverManager.expireServer(serverName); // Let SSH do region re-assign + } + } + } + processRegionsInTransition(regionStates.getRegionsInTransition().values()); } // Now we can safely claim failover cleanup completed and enable @@ -632,251 +529,14 @@ public class AssignmentManager extends ZooKeeperListener { LOG.info("Clean cluster startup. Assigning user regions"); assignAllUserRegions(allRegions); } - return failover; - } - - /** - * If region is up in zk in transition, then do fixup and block and wait until - * the region is assigned and out of transition. Used on startup for - * catalog regions. - * @param hri Region to look for. - * @return True if we processed a region in transition else false if region - * was not up in zk in transition. 
- * @throws InterruptedException - * @throws KeeperException - * @throws IOException - */ - boolean processRegionInTransitionAndBlockUntilAssigned(final HRegionInfo hri) - throws InterruptedException, KeeperException, IOException { - String encodedRegionName = hri.getEncodedName(); - if (!processRegionInTransition(encodedRegionName, hri)) { - return false; // The region is not in transition - } - LOG.debug("Waiting on " + HRegionInfo.prettyPrint(encodedRegionName)); - while (!this.server.isStopped() && - this.regionStates.isRegionInTransition(encodedRegionName)) { - RegionState state = this.regionStates.getRegionTransitionState(encodedRegionName); - if (state == null || !serverManager.isServerOnline(state.getServerName())) { - // The region is not in transition, or not in transition on an online - // server. Doesn't help to block here any more. Caller need to - // verify the region is actually assigned. - break; - } - this.regionStates.waitForUpdate(100); - } - return true; - } - - /** - * Process failover of new master for region encodedRegionName - * up in zookeeper. - * @param encodedRegionName Region to process failover for. - * @param regionInfo If null we'll go get it from meta table. - * @return True if we processed regionInfo as a RIT. - * @throws KeeperException - * @throws IOException - */ - boolean processRegionInTransition(final String encodedRegionName, - final HRegionInfo regionInfo) throws KeeperException, IOException { - // We need a lock here to ensure that we will not put the same region twice - // It has no reason to be a lock shared with the other operations. - // We can do the lock on the region only, instead of a global lock: what we want to ensure - // is that we don't have two threads working on the same region. - Lock lock = locker.acquireLock(encodedRegionName); - try { - Stat stat = new Stat(); - byte [] data = ZKAssign.getDataAndWatch(watcher, encodedRegionName, stat); - if (data == null) return false; - RegionTransition rt; - try { - rt = RegionTransition.parseFrom(data); - } catch (DeserializationException e) { - LOG.warn("Failed parse znode data", e); - return false; - } - HRegionInfo hri = regionInfo; - if (hri == null) { - // The region info is not passed in. We will try to find the region - // from region states map/meta based on the encoded region name. But we - // may not be able to find it. This is valid for online merge that - // the region may have not been created if the merge is not completed. - // Therefore, it is not in meta at master recovery time. 
- hri = regionStates.getRegionInfo(rt.getRegionName()); - EventType et = rt.getEventType(); - if (hri == null && et != EventType.RS_ZK_REGION_MERGING - && et != EventType.RS_ZK_REQUEST_REGION_MERGE) { - LOG.warn("Couldn't find the region in recovering " + rt); - return false; - } - } - - // TODO: This code is tied to ZK anyway, so for now leaving it as is, - // will refactor when whole region assignment will be abstracted from ZK - BaseCoordinatedStateManager cp = - (BaseCoordinatedStateManager) this.server.getCoordinatedStateManager(); - OpenRegionCoordination openRegionCoordination = cp.getOpenRegionCoordination(); - - ZkOpenRegionCoordination.ZkOpenRegionDetails zkOrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkOrd.setVersion(stat.getVersion()); - zkOrd.setServerName(cp.getServer().getServerName()); - - return processRegionsInTransition( - rt, hri, openRegionCoordination, zkOrd); - } finally { - lock.unlock(); - } - } - - /** - * This call is invoked only (1) master assign meta; - * (2) during failover mode startup, zk assignment node processing. - * The locker is set in the caller. It returns true if the region - * is in transition for sure, false otherwise. - * - * It should be private but it is used by some test too. - */ - boolean processRegionsInTransition( - final RegionTransition rt, final HRegionInfo regionInfo, - OpenRegionCoordination coordination, - final OpenRegionCoordination.OpenRegionDetails ord) throws KeeperException { - EventType et = rt.getEventType(); - // Get ServerName. Could not be null. - final ServerName sn = rt.getServerName(); - final byte[] regionName = rt.getRegionName(); - final String encodedName = HRegionInfo.encodeRegionName(regionName); - final String prettyPrintedRegionName = HRegionInfo.prettyPrint(encodedName); - LOG.info("Processing " + prettyPrintedRegionName + " in state: " + et); - - if (regionStates.isRegionInTransition(encodedName) - && (regionInfo.isMetaRegion() || !useZKForAssignment)) { - LOG.info("Processed region " + prettyPrintedRegionName + " in state: " - + et + ", does nothing since the region is already in transition " - + regionStates.getRegionTransitionState(encodedName)); - // Just return - return true; - } - if (!serverManager.isServerOnline(sn)) { - // It was transitioning on a dead server, so it's closed now. - // Force to OFFLINE and put it in transition, but not assign it - // since log splitting for the dead server is not done yet. - LOG.debug("RIT " + encodedName + " in state=" + rt.getEventType() + - " was on deadserver; forcing offline"); - if (regionStates.isRegionOnline(regionInfo)) { - // Meta could still show the region is assigned to the previous - // server. If that server is online, when we reload the meta, the - // region is put back to online, we need to offline it. - regionStates.regionOffline(regionInfo); - sendRegionClosedNotification(regionInfo); - } - // Put it back in transition so that SSH can re-assign it - regionStates.updateRegionState(regionInfo, State.OFFLINE, sn); - - if (regionInfo.isMetaRegion()) { - // If it's meta region, reset the meta location. - // So that master knows the right meta region server. - MetaTableLocator.setMetaLocation(watcher, sn, State.OPEN); - } else { - // No matter the previous server is online or offline, - // we need to reset the last region server of the region. - regionStates.setLastRegionServerOfRegion(sn, encodedName); - // Make sure we know the server is dead. 
- if (!serverManager.isServerDead(sn)) { - serverManager.expireServer(sn); - } - } - return false; - } - switch (et) { - case M_ZK_REGION_CLOSING: - // Insert into RIT & resend the query to the region server: may be the previous master - // died before sending the query the first time. - final RegionState rsClosing = regionStates.updateRegionState(rt, State.CLOSING); - this.executorService.submit( - new EventHandler(server, EventType.M_MASTER_RECOVERY) { - @Override - public void process() throws IOException { - ReentrantLock lock = locker.acquireLock(regionInfo.getEncodedName()); - try { - final int expectedVersion = ((ZkOpenRegionCoordination.ZkOpenRegionDetails) ord) - .getVersion(); - unassign(regionInfo, rsClosing, expectedVersion, null, useZKForAssignment, null); - if (regionStates.isRegionOffline(regionInfo)) { - assign(regionInfo, true); - } - } finally { - lock.unlock(); - } - } - }); - break; - - case RS_ZK_REGION_CLOSED: - case RS_ZK_REGION_FAILED_OPEN: - // Region is closed, insert into RIT and handle it - regionStates.updateRegionState(regionInfo, State.CLOSED, sn); - invokeAssign(regionInfo); - break; - - case M_ZK_REGION_OFFLINE: - // Insert in RIT and resend to the regionserver - regionStates.updateRegionState(rt, State.PENDING_OPEN); - final RegionState rsOffline = regionStates.getRegionState(regionInfo); - this.executorService.submit( - new EventHandler(server, EventType.M_MASTER_RECOVERY) { - @Override - public void process() throws IOException { - ReentrantLock lock = locker.acquireLock(regionInfo.getEncodedName()); - try { - RegionPlan plan = new RegionPlan(regionInfo, null, sn); - addPlan(encodedName, plan); - assign(rsOffline, false, false); - } finally { - lock.unlock(); - } - } - }); - break; - - case RS_ZK_REGION_OPENING: - regionStates.updateRegionState(rt, State.OPENING); - break; - - case RS_ZK_REGION_OPENED: - // Region is opened, insert into RIT and handle it - // This could be done asynchronously, we would need then to acquire the lock in the - // handler. - regionStates.updateRegionState(rt, State.OPEN); - new OpenedRegionHandler(server, this, regionInfo, coordination, ord).process(); - break; - case RS_ZK_REQUEST_REGION_SPLIT: - case RS_ZK_REGION_SPLITTING: - case RS_ZK_REGION_SPLIT: - // Splitting region should be online. We could have skipped it during - // user region rebuilding since we may consider the split is completed. - // Put it in SPLITTING state to avoid complications. - regionStates.regionOnline(regionInfo, sn); - regionStates.updateRegionState(rt, State.SPLITTING); - if (!handleRegionSplitting( - rt, encodedName, prettyPrintedRegionName, sn)) { - deleteSplittingNode(encodedName, sn); - } - break; - case RS_ZK_REQUEST_REGION_MERGE: - case RS_ZK_REGION_MERGING: - case RS_ZK_REGION_MERGED: - if (!handleRegionMerging( - rt, encodedName, prettyPrintedRegionName, sn)) { - deleteMergingNode(encodedName, sn); - } - break; - default: - throw new IllegalStateException("Received region in state:" + et + " is not valid."); + // unassign replicas of the split parents and the merged regions + // the daughter replicas are opened in assignAllUserRegions if it was + // not already opened. + for (HRegionInfo h : replicasToClose) { + unassign(h); } - LOG.info("Processed region " + prettyPrintedRegionName + " in state " - + et + ", on " + (serverManager.isServerOnline(sn) ? 
"" : "dead ") - + "server: " + sn); - return true; + replicasToClose.clear(); + return failover; } /** @@ -889,248 +549,6 @@ public class AssignmentManager extends ZooKeeperListener { } } - /** - * Handles various states an unassigned node can be in. - *

          - * Method is called when a state change is suspected for an unassigned node. - *

          - * This deals with skipped transitions (we got a CLOSED but didn't see CLOSING - * yet). - * @param rt region transition - * @param coordination coordination for opening region - * @param ord details about opening region - */ - void handleRegion(final RegionTransition rt, OpenRegionCoordination coordination, - OpenRegionCoordination.OpenRegionDetails ord) { - if (rt == null) { - LOG.warn("Unexpected NULL input for RegionTransition rt"); - return; - } - final ServerName sn = rt.getServerName(); - // Check if this is a special HBCK transition - if (sn.equals(HBCK_CODE_SERVERNAME)) { - handleHBCK(rt); - return; - } - final long createTime = rt.getCreateTime(); - final byte[] regionName = rt.getRegionName(); - String encodedName = HRegionInfo.encodeRegionName(regionName); - String prettyPrintedRegionName = HRegionInfo.prettyPrint(encodedName); - // Verify this is a known server - if (!serverManager.isServerOnline(sn) - && !ignoreStatesRSOffline.contains(rt.getEventType())) { - LOG.warn("Attempted to handle region transition for server but " + - "it is not online: " + prettyPrintedRegionName + ", " + rt); - return; - } - - RegionState regionState = - regionStates.getRegionState(encodedName); - long startTime = System.currentTimeMillis(); - if (LOG.isDebugEnabled()) { - boolean lateEvent = createTime < (startTime - 15000); - LOG.debug("Handling " + rt.getEventType() + - ", server=" + sn + ", region=" + - (prettyPrintedRegionName == null ? "null" : prettyPrintedRegionName) + - (lateEvent ? ", which is more than 15 seconds late" : "") + - ", current_state=" + regionState); - } - // We don't do anything for this event, - // so separate it out, no need to lock/unlock anything - if (rt.getEventType() == EventType.M_ZK_REGION_OFFLINE) { - return; - } - - // We need a lock on the region as we could update it - Lock lock = locker.acquireLock(encodedName); - try { - RegionState latestState = - regionStates.getRegionState(encodedName); - if ((regionState == null && latestState != null) - || (regionState != null && latestState == null) - || (regionState != null && latestState != null - && latestState.getState() != regionState.getState())) { - LOG.warn("Region state changed from " + regionState + " to " - + latestState + ", while acquiring lock"); - } - long waitedTime = System.currentTimeMillis() - startTime; - if (waitedTime > 5000) { - LOG.warn("Took " + waitedTime + "ms to acquire the lock"); - } - regionState = latestState; - switch (rt.getEventType()) { - case RS_ZK_REQUEST_REGION_SPLIT: - case RS_ZK_REGION_SPLITTING: - case RS_ZK_REGION_SPLIT: - if (!handleRegionSplitting( - rt, encodedName, prettyPrintedRegionName, sn)) { - deleteSplittingNode(encodedName, sn); - } - break; - - case RS_ZK_REQUEST_REGION_MERGE: - case RS_ZK_REGION_MERGING: - case RS_ZK_REGION_MERGED: - // Merged region is a new region, we can't find it in the region states now. - // However, the two merging regions are not new. They should be in state for merging. 
- if (!handleRegionMerging( - rt, encodedName, prettyPrintedRegionName, sn)) { - deleteMergingNode(encodedName, sn); - } - break; - - case M_ZK_REGION_CLOSING: - // Should see CLOSING after we have asked it to CLOSE or additional - // times after already being in state of CLOSING - if (regionState == null - || !regionState.isPendingCloseOrClosingOnServer(sn)) { - LOG.warn("Received CLOSING for " + prettyPrintedRegionName - + " from " + sn + " but the region isn't PENDING_CLOSE/CLOSING here: " - + regionStates.getRegionState(encodedName)); - return; - } - // Transition to CLOSING (or update stamp if already CLOSING) - regionStates.updateRegionState(rt, State.CLOSING); - break; - - case RS_ZK_REGION_CLOSED: - // Should see CLOSED after CLOSING but possible after PENDING_CLOSE - if (regionState == null - || !regionState.isPendingCloseOrClosingOnServer(sn)) { - LOG.warn("Received CLOSED for " + prettyPrintedRegionName - + " from " + sn + " but the region isn't PENDING_CLOSE/CLOSING here: " - + regionStates.getRegionState(encodedName)); - return; - } - // Handle CLOSED by assigning elsewhere or stopping if a disable - // If we got here all is good. Need to update RegionState -- else - // what follows will fail because not in expected state. - new ClosedRegionHandler(server, this, regionState.getRegion()).process(); - updateClosedRegionHandlerTracker(regionState.getRegion()); - break; - - case RS_ZK_REGION_FAILED_OPEN: - if (regionState == null - || !regionState.isPendingOpenOrOpeningOnServer(sn)) { - LOG.warn("Received FAILED_OPEN for " + prettyPrintedRegionName - + " from " + sn + " but the region isn't PENDING_OPEN/OPENING here: " - + regionStates.getRegionState(encodedName)); - return; - } - AtomicInteger failedOpenCount = failedOpenTracker.get(encodedName); - if (failedOpenCount == null) { - failedOpenCount = new AtomicInteger(); - // No need to use putIfAbsent, or extra synchronization since - // this whole handleRegion block is locked on the encoded region - // name, and failedOpenTracker is updated only in this block - failedOpenTracker.put(encodedName, failedOpenCount); - } - if (failedOpenCount.incrementAndGet() >= maximumAttempts) { - regionStates.updateRegionState(rt, State.FAILED_OPEN); - // remove the tracking info to save memory, also reset - // the count for next open initiative - failedOpenTracker.remove(encodedName); - } else { - // Handle this the same as if it were opened and then closed. - regionState = regionStates.updateRegionState(rt, State.CLOSED); - if (regionState != null) { - // When there are more than one region server a new RS is selected as the - // destination and the same is updated in the regionplan. 
(HBASE-5546) - try { - getRegionPlan(regionState.getRegion(), sn, true); - new ClosedRegionHandler(server, this, regionState.getRegion()).process(); - } catch (HBaseIOException e) { - LOG.warn("Failed to get region plan", e); - } - } - } - break; - - case RS_ZK_REGION_OPENING: - // Should see OPENING after we have asked it to OPEN or additional - // times after already being in state of OPENING - if (regionState == null - || !regionState.isPendingOpenOrOpeningOnServer(sn)) { - LOG.warn("Received OPENING for " + prettyPrintedRegionName - + " from " + sn + " but the region isn't PENDING_OPEN/OPENING here: " - + regionStates.getRegionState(encodedName)); - return; - } - // Transition to OPENING (or update stamp if already OPENING) - regionStates.updateRegionState(rt, State.OPENING); - break; - - case RS_ZK_REGION_OPENED: - // Should see OPENED after OPENING but possible after PENDING_OPEN. - if (regionState == null - || !regionState.isPendingOpenOrOpeningOnServer(sn)) { - LOG.warn("Received OPENED for " + prettyPrintedRegionName - + " from " + sn + " but the region isn't PENDING_OPEN/OPENING here: " - + regionStates.getRegionState(encodedName)); - - if (regionState != null) { - // Close it without updating the internal region states, - // so as not to create double assignments in unlucky scenarios - // mentioned in OpenRegionHandler#process - unassign(regionState.getRegion(), null, -1, null, false, sn); - } - return; - } - // Handle OPENED by removing from transition and deleted zk node - regionState = - regionStates.transitionOpenFromPendingOpenOrOpeningOnServer(rt,regionState, sn); - if (regionState != null) { - failedOpenTracker.remove(encodedName); // reset the count, if any - new OpenedRegionHandler( - server, this, regionState.getRegion(), coordination, ord).process(); - updateOpenedRegionHandlerTracker(regionState.getRegion()); - } - break; - - default: - throw new IllegalStateException("Received event is not valid."); - } - } finally { - lock.unlock(); - } - } - - //For unit tests only - boolean wasClosedHandlerCalled(HRegionInfo hri) { - AtomicBoolean b = closedRegionHandlerCalled.get(hri); - //compareAndSet to be sure that unit tests don't see stale values. Means, - //we will return true exactly once unless the handler code resets to true - //this value. - return b == null ? false : b.compareAndSet(true, false); - } - - //For unit tests only - boolean wasOpenedHandlerCalled(HRegionInfo hri) { - AtomicBoolean b = openedRegionHandlerCalled.get(hri); - //compareAndSet to be sure that unit tests don't see stale values. Means, - //we will return true exactly once unless the handler code resets to true - //this value. - return b == null ? false : b.compareAndSet(true, false); - } - - //For unit tests only - void initializeHandlerTrackers() { - closedRegionHandlerCalled = new HashMap(); - openedRegionHandlerCalled = new HashMap(); - } - - void updateClosedRegionHandlerTracker(HRegionInfo hri) { - if (closedRegionHandlerCalled != null) { //only for unit tests this is true - closedRegionHandlerCalled.put(hri, new AtomicBoolean(true)); - } - } - - void updateOpenedRegionHandlerTracker(HRegionInfo hri) { - if (openedRegionHandlerCalled != null) { //only for unit tests this is true - openedRegionHandlerCalled.put(hri, new AtomicBoolean(true)); - } - } - // TODO: processFavoredNodes might throw an exception, for e.g., if the // meta could not be contacted/updated. We need to see how seriously to treat // this problem as. Should we fail the current assignment. 
We should be able @@ -1151,420 +569,85 @@ public class AssignmentManager extends ZooKeeperListener { } /** - * Handle a ZK unassigned node transition triggered by HBCK repair tool. + * Marks the region as online. Removes it from regions in transition and + * updates the in-memory assignment information. *

          - * This is handled in a separate code path because it breaks the normal rules. - * @param rt + * Used when a region has been successfully opened on a region server. + * @param regionInfo + * @param sn */ - @SuppressWarnings("deprecation") - private void handleHBCK(RegionTransition rt) { - String encodedName = HRegionInfo.encodeRegionName(rt.getRegionName()); - LOG.info("Handling HBCK triggered transition=" + rt.getEventType() + - ", server=" + rt.getServerName() + ", region=" + - HRegionInfo.prettyPrint(encodedName)); - RegionState regionState = regionStates.getRegionTransitionState(encodedName); - switch (rt.getEventType()) { - case M_ZK_REGION_OFFLINE: - HRegionInfo regionInfo; - if (regionState != null) { - regionInfo = regionState.getRegion(); - } else { - try { - byte [] name = rt.getRegionName(); - Pair p = MetaTableAccessor.getRegion( - this.server.getConnection(), name); - regionInfo = p.getFirst(); - } catch (IOException e) { - LOG.info("Exception reading hbase:meta doing HBCK repair operation", e); - return; - } - } - LOG.info("HBCK repair is triggering assignment of region=" + - regionInfo.getRegionNameAsString()); - // trigger assign, node is already in OFFLINE so don't need to update ZK - assign(regionInfo, false); - break; - - default: - LOG.warn("Received unexpected region state from HBCK: " + rt.toString()); - break; - } - + void regionOnline(HRegionInfo regionInfo, ServerName sn) { + regionOnline(regionInfo, sn, HConstants.NO_SEQNUM); } - // ZooKeeper events + void regionOnline(HRegionInfo regionInfo, ServerName sn, long openSeqNum) { + numRegionsOpened.incrementAndGet(); + regionStates.regionOnline(regionInfo, sn, openSeqNum); + + // Remove plan if one. + clearRegionPlan(regionInfo); + balancer.regionOnline(regionInfo, sn); - /** - * New unassigned node has been created. - * - *

          This happens when an RS begins the OPENING or CLOSING of a region by - * creating an unassigned node. - * - *

<p>When this happens we must:
- * <ol>
- *   <li>Watch the node for further events</li>
- *   <li>Read and handle the state in the node</li>
- * </ol>
          - */ - @Override - public void nodeCreated(String path) { - handleAssignmentEvent(path); + // Tell our listeners that a region was opened + sendRegionOpenedNotification(regionInfo, sn); } /** - * Existing unassigned node has had data changed. - * - *

          This happens when an RS transitions from OFFLINE to OPENING, or between - * OPENING/OPENED and CLOSING/CLOSED. - * - *

<p>When this happens we must:
- * <ol>
- *   <li>Watch the node for further events</li>
- *   <li>Read and handle the state in the node</li>
- * </ol>
          + * Marks the region as offline. Removes it from regions in transition and + * removes in-memory assignment information. + *

          + * Used when a region has been closed and should remain closed. + * @param regionInfo */ - @Override - public void nodeDataChanged(String path) { - handleAssignmentEvent(path); + public void regionOffline(final HRegionInfo regionInfo) { + regionOffline(regionInfo, null); } + public void offlineDisabledRegion(HRegionInfo regionInfo) { + replicasToClose.remove(regionInfo); + regionOffline(regionInfo); + } - // We don't want to have two events on the same region managed simultaneously. - // For this reason, we need to wait if an event on the same region is currently in progress. - // So we track the region names of the events in progress, and we keep a waiting list. - private final Set regionsInProgress = new HashSet(); - // In a LinkedHashMultimap, the put order is kept when we retrieve the collection back. We need - // this as we want the events to be managed in the same order as we received them. - private final LinkedHashMultimap - zkEventWorkerWaitingList = LinkedHashMultimap.create(); + // Assignment methods /** - * A specific runnable that works only on a region. + * Assigns the specified region. + *

          + * If a RegionPlan is available with a valid destination then it will be used + * to determine what server region is assigned to. If no RegionPlan is + * available, region will be assigned to a random available server. + *

          + * Updates the RegionState and sends the OPEN RPC. + *

          + * This will only succeed if the region is in transition and in a CLOSED or + * OFFLINE state or not in transition, and of course, the + * chosen server is up and running (It may have just crashed!). + * + * @param region server to be assigned */ - private interface RegionRunnable extends Runnable{ - /** - * @return - the name of the region it works on. - */ - String getRegionName(); + public void assign(HRegionInfo region) { + assign(region, false); } /** - * Submit a task, ensuring that there is only one task at a time that working on a given region. - * Order is respected. + * Use care with forceNewPlan. It could cause double assignment. */ - protected void zkEventWorkersSubmit(final RegionRunnable regRunnable) { - - synchronized (regionsInProgress) { - // If we're there is already a task with this region, we add it to the - // waiting list and return. - if (regionsInProgress.contains(regRunnable.getRegionName())) { - synchronized (zkEventWorkerWaitingList){ - zkEventWorkerWaitingList.put(regRunnable.getRegionName(), regRunnable); + public void assign(HRegionInfo region, boolean forceNewPlan) { + if (isDisabledorDisablingRegionInRIT(region)) { + return; + } + String encodedName = region.getEncodedName(); + Lock lock = locker.acquireLock(encodedName); + try { + RegionState state = forceRegionStateToOffline(region, forceNewPlan); + if (state != null) { + if (regionStates.wasRegionOnDeadServer(encodedName)) { + LOG.info("Skip assigning " + region.getRegionNameAsString() + + ", it's host " + regionStates.getLastRegionServerOfRegion(encodedName) + + " is dead but not processed yet"); + return; } - return; - } - - // No event in progress on this region => we can submit a new task immediately. - regionsInProgress.add(regRunnable.getRegionName()); - zkEventWorkers.submit(new Runnable() { - @Override - public void run() { - try { - regRunnable.run(); - } finally { - // now that we have finished, let's see if there is an event for the same region in the - // waiting list. If it's the case, we can now submit it to the pool. - synchronized (regionsInProgress) { - regionsInProgress.remove(regRunnable.getRegionName()); - synchronized (zkEventWorkerWaitingList) { - java.util.Set waiting = zkEventWorkerWaitingList.get( - regRunnable.getRegionName()); - if (!waiting.isEmpty()) { - // We want the first object only. The only way to get it is through an iterator. 
- RegionRunnable toSubmit = waiting.iterator().next(); - zkEventWorkerWaitingList.remove(toSubmit.getRegionName(), toSubmit); - zkEventWorkersSubmit(toSubmit); - } - } - } - } - } - }); - } - } - - @Override - public void nodeDeleted(final String path) { - if (path.startsWith(watcher.assignmentZNode)) { - final String regionName = ZKAssign.getRegionName(watcher, path); - zkEventWorkersSubmit(new RegionRunnable() { - @Override - public String getRegionName() { - return regionName; - } - - @Override - public void run() { - Lock lock = locker.acquireLock(regionName); - try { - RegionState rs = regionStates.getRegionTransitionState(regionName); - if (rs == null) { - rs = regionStates.getRegionState(regionName); - if (rs == null || !rs.isMergingNew()) { - // MergingNew is an offline state - return; - } - } - - HRegionInfo regionInfo = rs.getRegion(); - String regionNameStr = regionInfo.getRegionNameAsString(); - LOG.debug("Znode " + regionNameStr + " deleted, state: " + rs); - - boolean disabled = getTableStateManager().isTableState(regionInfo.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING); - - ServerName serverName = rs.getServerName(); - if (serverManager.isServerOnline(serverName)) { - if (rs.isOnServer(serverName) && (rs.isOpened() || rs.isSplitting())) { - synchronized (regionStates) { - regionOnline(regionInfo, serverName); - if (rs.isSplitting() && splitRegions.containsKey(regionInfo)) { - // Check if the daugter regions are still there, if they are present, offline - // as its the case of a rollback. - HRegionInfo hri_a = splitRegions.get(regionInfo).getFirst(); - HRegionInfo hri_b = splitRegions.get(regionInfo).getSecond(); - if (!regionStates.isRegionInTransition(hri_a.getEncodedName())) { - LOG.warn("Split daughter region not in transition " + hri_a); - } - if (!regionStates.isRegionInTransition(hri_b.getEncodedName())) { - LOG.warn("Split daughter region not in transition" + hri_b); - } - regionOffline(hri_a); - regionOffline(hri_b); - splitRegions.remove(regionInfo); - } - if (disabled) { - // if server is offline, no hurt to unassign again - LOG.info("Opened " + regionNameStr - + "but this table is disabled, triggering close of region"); - unassign(regionInfo); - } - } - } else if (rs.isMergingNew()) { - synchronized (regionStates) { - String p = regionInfo.getEncodedName(); - PairOfSameType regions = mergingRegions.get(p); - if (regions != null) { - onlineMergingRegion(disabled, regions.getFirst(), serverName); - onlineMergingRegion(disabled, regions.getSecond(), serverName); - } - } - } - } - } finally { - lock.unlock(); - } - } - - private void onlineMergingRegion(boolean disabled, - final HRegionInfo hri, final ServerName serverName) { - RegionState regionState = regionStates.getRegionState(hri); - if (regionState != null && regionState.isMerging() - && regionState.isOnServer(serverName)) { - regionOnline(regionState.getRegion(), serverName); - if (disabled) { - unassign(hri); - } - } - } - }); - } - } - - /** - * New unassigned node has been created. - * - *

          This happens when an RS begins the OPENING, SPLITTING or CLOSING of a - * region by creating a znode. - * - *

<p>When this happens we must:
- * <ol>
- *   <li>Watch the node for further children changed events</li>
- *   <li>Watch all new children for changed events</li>
- * </ol>
          - */ - @Override - public void nodeChildrenChanged(String path) { - if (path.equals(watcher.assignmentZNode)) { - zkEventWorkers.submit(new Runnable() { - @Override - public void run() { - try { - // Just make sure we see the changes for the new znodes - List children = - ZKUtil.listChildrenAndWatchForNewChildren( - watcher, watcher.assignmentZNode); - if (children != null) { - Stat stat = new Stat(); - for (String child : children) { - // if region is in transition, we already have a watch - // on it, so no need to watch it again. So, as I know for now, - // this is needed to watch splitting nodes only. - if (!regionStates.isRegionInTransition(child)) { - ZKAssign.getDataAndWatch(watcher, child, stat); - } - } - } - } catch (KeeperException e) { - server.abort("Unexpected ZK exception reading unassigned children", e); - } - } - }); - } - } - - - /** - * Marks the region as online. Removes it from regions in transition and - * updates the in-memory assignment information. - *

          - * Used when a region has been successfully opened on a region server. - * @param regionInfo - * @param sn - */ - void regionOnline(HRegionInfo regionInfo, ServerName sn) { - regionOnline(regionInfo, sn, HConstants.NO_SEQNUM); - } - - void regionOnline(HRegionInfo regionInfo, ServerName sn, long openSeqNum) { - numRegionsOpened.incrementAndGet(); - regionStates.regionOnline(regionInfo, sn, openSeqNum); - - // Remove plan if one. - clearRegionPlan(regionInfo); - balancer.regionOnline(regionInfo, sn); - - // Tell our listeners that a region was opened - sendRegionOpenedNotification(regionInfo, sn); - } - - /** - * Pass the assignment event to a worker for processing. - * Each worker is a single thread executor service. The reason - * for just one thread is to make sure all events for a given - * region are processed in order. - * - * @param path - */ - private void handleAssignmentEvent(final String path) { - if (path.startsWith(watcher.assignmentZNode)) { - final String regionName = ZKAssign.getRegionName(watcher, path); - - zkEventWorkersSubmit(new RegionRunnable() { - @Override - public String getRegionName() { - return regionName; - } - - @Override - public void run() { - try { - Stat stat = new Stat(); - byte [] data = ZKAssign.getDataAndWatch(watcher, path, stat); - if (data == null) return; - - RegionTransition rt = RegionTransition.parseFrom(data); - - // TODO: This code is tied to ZK anyway, so for now leaving it as is, - // will refactor when whole region assignment will be abstracted from ZK - BaseCoordinatedStateManager csm = - (BaseCoordinatedStateManager) server.getCoordinatedStateManager(); - OpenRegionCoordination openRegionCoordination = csm.getOpenRegionCoordination(); - - ZkOpenRegionCoordination.ZkOpenRegionDetails zkOrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkOrd.setVersion(stat.getVersion()); - zkOrd.setServerName(csm.getServer().getServerName()); - - handleRegion(rt, openRegionCoordination, zkOrd); - } catch (KeeperException e) { - server.abort("Unexpected ZK exception reading unassigned node data", e); - } catch (DeserializationException e) { - server.abort("Unexpected exception deserializing node data", e); - } - } - }); - } - } - - /** - * Marks the region as offline. Removes it from regions in transition and - * removes in-memory assignment information. - *

          - * Used when a region has been closed and should remain closed. - * @param regionInfo - */ - public void regionOffline(final HRegionInfo regionInfo) { - regionOffline(regionInfo, null); - } - - public void offlineDisabledRegion(HRegionInfo regionInfo) { - if (useZKForAssignment) { - // Disabling so should not be reassigned, just delete the CLOSED node - LOG.debug("Table being disabled so deleting ZK node and removing from " + - "regions in transition, skipping assignment of region " + - regionInfo.getRegionNameAsString()); - String encodedName = regionInfo.getEncodedName(); - deleteNodeInStates(encodedName, "closed", null, - EventType.RS_ZK_REGION_CLOSED, EventType.M_ZK_REGION_OFFLINE); - } - regionOffline(regionInfo); - } - - // Assignment methods - - /** - * Assigns the specified region. - *

          - * If a RegionPlan is available with a valid destination then it will be used - * to determine what server region is assigned to. If no RegionPlan is - * available, region will be assigned to a random available server. - *

          - * Updates the RegionState and sends the OPEN RPC. - *

          - * This will only succeed if the region is in transition and in a CLOSED or - * OFFLINE state or not in transition (in-memory not zk), and of course, the - * chosen server is up and running (It may have just crashed!). If the - * in-memory checks pass, the zk node is forced to OFFLINE before assigning. - * - * @param region server to be assigned - * @param setOfflineInZK whether ZK node should be created/transitioned to an - * OFFLINE state before assigning the region - */ - public void assign(HRegionInfo region, boolean setOfflineInZK) { - assign(region, setOfflineInZK, false); - } - - /** - * Use care with forceNewPlan. It could cause double assignment. - */ - public void assign(HRegionInfo region, - boolean setOfflineInZK, boolean forceNewPlan) { - if (isDisabledorDisablingRegionInRIT(region)) { - return; - } - String encodedName = region.getEncodedName(); - Lock lock = locker.acquireLock(encodedName); - try { - RegionState state = forceRegionStateToOffline(region, forceNewPlan); - if (state != null) { - if (regionStates.wasRegionOnDeadServer(encodedName)) { - LOG.info("Skip assigning " + region.getRegionNameAsString() - + ", it's host " + regionStates.getLastRegionServerOfRegion(encodedName) - + " is dead but not processed yet"); - return; - } - assign(state, setOfflineInZK && useZKForAssignment, forceNewPlan); + assign(state, forceNewPlan); } } finally { lock.unlock(); @@ -1594,12 +677,8 @@ public class AssignmentManager extends ZooKeeperListener { List failedToOpenRegions = new ArrayList(); Map locks = locker.acquireLocks(encodedNames); try { - AtomicInteger counter = new AtomicInteger(0); - Map offlineNodesVersions = new ConcurrentHashMap(); - OfflineCallback cb = new OfflineCallback( - watcher, destination, counter, offlineNodesVersions); - Map plans = new HashMap(regions.size()); - List states = new ArrayList(regions.size()); + Map plans = new HashMap(regionCount); + List states = new ArrayList(regionCount); for (HRegionInfo region : regions) { String encodedName = region.getEncodedName(); if (!isDisabledorDisablingRegionInRIT(region)) { @@ -1611,8 +690,7 @@ public class AssignmentManager extends ZooKeeperListener { + ", it's host " + regionStates.getLastRegionServerOfRegion(encodedName) + " is dead but not processed yet"); onDeadServer = true; - } else if (!useZKForAssignment - || asyncSetOfflineInZooKeeper(state, cb, destination)) { + } else { RegionPlan plan = new RegionPlan(region, state.getServerName(), destination); plans.put(encodedName, plan); states.add(state); @@ -1621,8 +699,8 @@ public class AssignmentManager extends ZooKeeperListener { } // Reassign if the region wasn't on a dead server if (!onDeadServer) { - LOG.info("failed to force region state to offline or " - + "failed to set it offline in ZK, will reassign later: " + region); + LOG.info("failed to force region state to offline, " + + "will reassign later: " + region); failedToOpenRegions.add(region); // assign individually later } } @@ -1632,21 +710,6 @@ public class AssignmentManager extends ZooKeeperListener { lock.unlock(); } - if (useZKForAssignment) { - // Wait until all unassigned nodes have been put up and watchers set. 
- int total = states.size(); - for (int oldCounter = 0; !server.isStopped();) { - int count = counter.get(); - if (oldCounter != count) { - LOG.debug(destination.toString() + " unassigned znodes=" + count + - " of total=" + total + "; oldCounter=" + oldCounter); - oldCounter = count; - } - if (count >= total) break; - Thread.sleep(5); - } - } - if (server.isStopped()) { return false; } @@ -1655,61 +718,40 @@ public class AssignmentManager extends ZooKeeperListener { // that unnecessary timeout on RIT is reduced. this.addPlans(plans); - List>> regionOpenInfos = - new ArrayList>>(states.size()); + List>> regionOpenInfos = + new ArrayList>>(states.size()); for (RegionState state: states) { HRegionInfo region = state.getRegion(); - String encodedRegionName = region.getEncodedName(); - Integer nodeVersion = offlineNodesVersions.get(encodedRegionName); - if (useZKForAssignment && (nodeVersion == null || nodeVersion == -1)) { - LOG.warn("failed to offline in zookeeper: " + region); - failedToOpenRegions.add(region); // assign individually later - Lock lock = locks.remove(encodedRegionName); - lock.unlock(); - } else { - regionStates.updateRegionState( - region, State.PENDING_OPEN, destination); - List favoredNodes = ServerName.EMPTY_SERVER_LIST; - if (this.shouldAssignRegionsWithFavoredNodes) { - favoredNodes = ((FavoredNodeLoadBalancer)this.balancer).getFavoredNodes(region); - } - regionOpenInfos.add(new Triple>( - region, nodeVersion, favoredNodes)); + regionStates.updateRegionState( + region, State.PENDING_OPEN, destination); + List favoredNodes = ServerName.EMPTY_SERVER_LIST; + if (this.shouldAssignRegionsWithFavoredNodes) { + favoredNodes = ((FavoredNodeLoadBalancer)this.balancer).getFavoredNodes(region); } + regionOpenInfos.add(new Pair>( + region, favoredNodes)); } // Move on to open regions. try { // Send OPEN RPC. If it fails on a IOE or RemoteException, // regions will be assigned individually. + Configuration conf = server.getConfiguration(); long maxWaitTime = System.currentTimeMillis() + - this.server.getConfiguration(). 
- getLong("hbase.regionserver.rpc.startup.waittime", 60000); + conf.getLong("hbase.regionserver.rpc.startup.waittime", 60000); for (int i = 1; i <= maximumAttempts && !server.isStopped(); i++) { try { - // regionOpenInfos is empty if all regions are in failedToOpenRegions list - if (regionOpenInfos.isEmpty()) { - break; - } List regionOpeningStateList = serverManager .sendRegionOpen(destination, regionOpenInfos); - if (regionOpeningStateList == null) { - // Failed getting RPC connection to this server - return false; - } for (int k = 0, n = regionOpeningStateList.size(); k < n; k++) { RegionOpeningState openingState = regionOpeningStateList.get(k); if (openingState != RegionOpeningState.OPENED) { HRegionInfo region = regionOpenInfos.get(k).getFirst(); - if (openingState == RegionOpeningState.ALREADY_OPENED) { - processAlreadyOpenedRegion(region, destination); - } else if (openingState == RegionOpeningState.FAILED_OPENING) { - // Failed opening this region, reassign it later - failedToOpenRegions.add(region); - } else { - LOG.warn("THIS SHOULD NOT HAPPEN: unknown opening state " - + openingState + " in assigning region " + region); - } + LOG.info("Got opening state " + openingState + + ", will reassign later: " + region); + // Failed opening this region, reassign it later + forceRegionStateToOffline(region, true); + failedToOpenRegions.add(region); } } break; @@ -1724,8 +766,10 @@ public class AssignmentManager extends ZooKeeperListener { } else if (e instanceof ServerNotRunningYetException) { long now = System.currentTimeMillis(); if (now < maxWaitTime) { - LOG.debug("Server is not yet up; waiting up to " + - (maxWaitTime - now) + "ms", e); + if (LOG.isDebugEnabled()) { + LOG.debug("Server is not yet up; waiting up to " + + (maxWaitTime - now) + "ms", e); + } Thread.sleep(100); i--; // reset the try count continue; @@ -1745,6 +789,17 @@ public class AssignmentManager extends ZooKeeperListener { Thread.sleep(100); i--; continue; + } else if (e instanceof FailedServerException && i < maximumAttempts) { + // In case the server is in the failed server list, no point to + // retry too soon. 
Retry after the failed_server_expiry time + long sleepTime = 1 + conf.getInt(RpcClient.FAILED_SERVER_EXPIRY_KEY, + RpcClient.FAILED_SERVER_EXPIRY_DEFAULT); + if (LOG.isDebugEnabled()) { + LOG.debug(destination + " is on failed server list; waiting " + + sleepTime + "ms", e); + } + Thread.sleep(sleepTime); + continue; } throw e; } @@ -1753,6 +808,10 @@ public class AssignmentManager extends ZooKeeperListener { // Can be a socket timeout, EOF, NoRouteToHost, etc LOG.info("Unable to communicate with " + destination + " in order to assign regions, ", e); + for (RegionState state: states) { + HRegionInfo region = state.getRegion(); + forceRegionStateToOffline(region, true); + } return false; } } finally { @@ -1786,44 +845,23 @@ public class AssignmentManager extends ZooKeeperListener { * on an unexpected server scenario, for an example) */ private void unassign(final HRegionInfo region, - final RegionState state, final int versionOfClosingNode, - final ServerName dest, final boolean transitionInZK, - final ServerName src) { - ServerName server = src; - if (state != null) { - server = state.getServerName(); - } - long maxWaitTime = -1; + final ServerName server, final ServerName dest) { for (int i = 1; i <= this.maximumAttempts; i++) { if (this.server.isStopped() || this.server.isAborted()) { LOG.debug("Server stopped/aborted; skipping unassign of " + region); return; } - // ClosedRegionhandler can remove the server from this.regions if (!serverManager.isServerOnline(server)) { LOG.debug("Offline " + region.getRegionNameAsString() + ", no need to unassign since it's on a dead server: " + server); - if (transitionInZK) { - // delete the node. if no node exists need not bother. - deleteClosingOrClosedNode(region, server); - } - if (state != null) { - regionOffline(region); - } + regionStates.updateRegionState(region, State.OFFLINE); return; } try { // Send CLOSE RPC - if (serverManager.sendRegionClose(server, region, - versionOfClosingNode, dest, transitionInZK)) { + if (serverManager.sendRegionClose(server, region, dest)) { LOG.debug("Sent CLOSE to " + server + " for region " + region.getRegionNameAsString()); - if (useZKForAssignment && !transitionInZK && state != null) { - // Retry to make sure the region is - // closed so as to avoid double assignment. - unassign(region, state, versionOfClosingNode, - dest, transitionInZK, src); - } return; } // This never happens. Currently regionserver close always return true. 
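The new handling above keeps retrying the OPEN RPC but, when the destination is on the failed-server list, first waits out the failed-server expiry window instead of retrying immediately. A minimal standalone sketch of that backoff rule, assuming the caller supplies the expiry in milliseconds; the exception type below is a local stand-in for HBase's FailedServerException, not the real class:

    import java.util.concurrent.Callable;

    public class FailedServerBackoff {

        /** Local stand-in for the exception thrown when the target is on the failed-server list. */
        static class FailedServerException extends RuntimeException {}

        /**
         * Retry an RPC up to maxAttempts times. When the destination is on the
         * failed-server list, sleep one millisecond past the expiry window so the
         * entry has aged out before the next attempt.
         */
        static <T> T callWithBackoff(Callable<T> rpc, int maxAttempts, long failedServerExpiryMs)
                throws Exception {
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return rpc.call();
                } catch (FailedServerException e) {
                    if (attempt == maxAttempts) {
                        throw e;                           // out of attempts; surface the failure
                    }
                    Thread.sleep(1 + failedServerExpiryMs);
                }
            }
            throw new IllegalStateException("unreachable");
        }
    }

Sleeping one millisecond past the expiry mirrors the "1 + conf.getInt(...)" expression in the patch; anything shorter would usually hit the same cached failure again.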
@@ -1834,72 +872,41 @@ public class AssignmentManager extends ZooKeeperListener { if (t instanceof RemoteException) { t = ((RemoteException)t).unwrapRemoteException(); } - boolean logRetries = true; if (t instanceof NotServingRegionException || t instanceof RegionServerStoppedException || t instanceof ServerNotRunningYetException) { LOG.debug("Offline " + region.getRegionNameAsString() + ", it's not any more on " + server, t); - if (transitionInZK) { - deleteClosingOrClosedNode(region, server); - } - if (state != null) { - regionOffline(region); - } + regionStates.updateRegionState(region, State.OFFLINE); return; - } else if ((t instanceof FailedServerException) || (state != null && - t instanceof RegionAlreadyInTransitionException)) { - long sleepTime = 0; - Configuration conf = this.server.getConfiguration(); - if(t instanceof FailedServerException) { - sleepTime = 1 + conf.getInt(RpcClient.FAILED_SERVER_EXPIRY_KEY, - RpcClient.FAILED_SERVER_EXPIRY_DEFAULT); - } else { - // RS is already processing this region, only need to update the timestamp - LOG.debug("update " + state + " the timestamp."); - state.updateTimestampToNow(); - if (maxWaitTime < 0) { - maxWaitTime = - EnvironmentEdgeManager.currentTime() - + conf.getLong(ALREADY_IN_TRANSITION_WAITTIME, - DEFAULT_ALREADY_IN_TRANSITION_WAITTIME); - } - long now = EnvironmentEdgeManager.currentTime(); - if (now < maxWaitTime) { - LOG.debug("Region is already in transition; " - + "waiting up to " + (maxWaitTime - now) + "ms", t); - sleepTime = 100; - i--; // reset the try count - logRetries = false; - } - } + } else if (t instanceof FailedServerException && i < maximumAttempts) { + // In case the server is in the failed server list, no point to + // retry too soon. Retry after the failed_server_expiry time try { - if (sleepTime > 0) { - Thread.sleep(sleepTime); + Configuration conf = this.server.getConfiguration(); + long sleepTime = 1 + conf.getInt(RpcClient.FAILED_SERVER_EXPIRY_KEY, + RpcClient.FAILED_SERVER_EXPIRY_DEFAULT); + if (LOG.isDebugEnabled()) { + LOG.debug(server + " is on failed server list; waiting " + + sleepTime + "ms", t); } + Thread.sleep(sleepTime); } catch (InterruptedException ie) { LOG.warn("Failed to unassign " + region.getRegionNameAsString() + " since interrupted", ie); + regionStates.updateRegionState(region, State.FAILED_CLOSE); Thread.currentThread().interrupt(); - if (state != null) { - regionStates.updateRegionState(region, State.FAILED_CLOSE); - } return; } } - if (logRetries) { - LOG.info("Server " + server + " returned " + t + " for " - + region.getRegionNameAsString() + ", try=" + i - + " of " + this.maximumAttempts, t); - // Presume retry or server will expire. 
- } + LOG.info("Server " + server + " returned " + t + " for " + + region.getRegionNameAsString() + ", try=" + i + + " of " + this.maximumAttempts, t); } } // Run out of attempts - if (state != null) { - regionStates.updateRegionState(region, State.FAILED_CLOSE); - } + regionStates.updateRegionState(region, State.FAILED_CLOSE); } /** @@ -1913,7 +920,6 @@ public class AssignmentManager extends ZooKeeperListener { state = regionStates.createRegionState(region); } - ServerName sn = state.getServerName(); if (forceNewPlan && LOG.isDebugEnabled()) { LOG.debug("Force region state offline " + state); } @@ -1931,35 +937,16 @@ public class AssignmentManager extends ZooKeeperListener { } case FAILED_CLOSE: case FAILED_OPEN: - unassign(region, state, -1, null, false, null); + regionStates.updateRegionState(region, State.PENDING_CLOSE); + unassign(region, state.getServerName(), null); state = regionStates.getRegionState(region); - if (state.isFailedClose()) { - // If we can't close the region, we can't re-assign - // it so as to avoid possible double assignment/data loss. - LOG.info("Skip assigning " + - region + ", we couldn't close it: " + state); + if (!state.isOffline() && !state.isClosed()) { + // If the region isn't offline, we can't re-assign + // it now. It will be assigned automatically after + // the regionserver reports it's closed. return null; } case OFFLINE: - // This region could have been open on this server - // for a while. If the server is dead and not processed - // yet, we can move on only if the meta shows the - // region is not on this server actually, or on a server - // not dead, or dead and processed already. - // In case not using ZK, we don't need this check because - // we have the latest info in memory, and the caller - // will do another round checking any way. - if (useZKForAssignment - && regionStates.isServerDeadAndNotProcessed(sn) - && wasRegionOnDeadServerByMeta(region, sn)) { - if (!regionStates.isRegionInTransition(region)) { - LOG.info("Updating the state to " + State.OFFLINE + " to allow to be reassigned by SSH"); - regionStates.updateRegionState(region, State.OFFLINE); - } - LOG.info("Skip assigning " + region.getRegionNameAsString() - + ", it is on a dead but not processed yet server: " + sn); - return null; - } case CLOSED: break; default: @@ -1970,53 +957,18 @@ public class AssignmentManager extends ZooKeeperListener { return state; } - @SuppressWarnings("deprecation") - private boolean wasRegionOnDeadServerByMeta( - final HRegionInfo region, final ServerName sn) { - try { - if (region.isMetaRegion()) { - ServerName server = this.server.getMetaTableLocator(). - getMetaRegionLocation(this.server.getZooKeeper()); - return regionStates.isServerDeadAndNotProcessed(server); - } - while (!server.isStopped()) { - try { - this.server.getMetaTableLocator().waitMetaRegionLocation(server.getZooKeeper()); - Result r = MetaTableAccessor.getRegionResult(server.getConnection(), - region.getRegionName()); - if (r == null || r.isEmpty()) return false; - ServerName server = HRegionInfo.getServerName(r); - return regionStates.isServerDeadAndNotProcessed(server); - } catch (IOException ioe) { - LOG.info("Received exception accessing hbase:meta during force assign " - + region.getRegionNameAsString() + ", retrying", ioe); - } - } - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - LOG.info("Interrupted accessing hbase:meta", e); - } - // Call is interrupted or server is stopped. 
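forceRegionStateToOffline() above now resolves to a small state dispatch: FAILED_CLOSE and FAILED_OPEN regions are pushed through an unassign first and only continue if they come back OFFLINE or CLOSED, OFFLINE and CLOSED fall straight through, and other transitional states yield nothing to assign right now. A simplified sketch of that dispatch, using stand-in types rather than the real RegionState/RegionStates API:

    public class ForceOfflineSketch {

        enum State { OFFLINE, CLOSED, FAILED_OPEN, FAILED_CLOSE, OPENING, PENDING_CLOSE }

        /** Stand-in for the unassign-then-recheck step used for FAILED_* regions. */
        interface Regions {
            State unassignAndGetState(String encodedRegionName);
        }

        /**
         * Returns the state to hand to the assign path, or null when the region
         * cannot be (re)assigned right now and should wait for the regionserver's
         * own close report.
         */
        static State forceOffline(String encodedRegionName, State current, Regions regions) {
            switch (current) {
                case FAILED_CLOSE:
                case FAILED_OPEN:
                    // Close it first; if it is still neither OFFLINE nor CLOSED,
                    // let the close report drive the next assignment instead.
                    State after = regions.unassignAndGetState(encodedRegionName);
                    if (after != State.OFFLINE && after != State.CLOSED) {
                        return null;
                    }
                    current = after;
                    // fall through
                case OFFLINE:
                case CLOSED:
                    return current;
                default:
                    // Other transitional states: decline for now (the patch's default
                    // branch is not fully visible in this hunk).
                    return null;
            }
        }
    }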
- return regionStates.isServerDeadAndNotProcessed(sn); - } - /** * Caller must hold lock on the passed state object. * @param state - * @param setOfflineInZK * @param forceNewPlan */ - private void assign(RegionState state, - final boolean setOfflineInZK, final boolean forceNewPlan) { + private void assign(RegionState state, boolean forceNewPlan) { long startTime = EnvironmentEdgeManager.currentTime(); try { Configuration conf = server.getConfiguration(); - RegionState currentState = state; - int versionOfOfflineNode = -1; RegionPlan plan = null; long maxWaitTime = -1; HRegionInfo region = state.getRegion(); - RegionOpeningState regionOpenState; Throwable previousException = null; for (int i = 1; i <= maximumAttempts; i++) { if (server.isStopped() || server.isAborted()) { @@ -2052,45 +1004,25 @@ public class AssignmentManager extends ZooKeeperListener { regionStates.updateRegionState(region, State.FAILED_OPEN); return; } - if (setOfflineInZK && versionOfOfflineNode == -1) { - // get the version of the znode after setting it to OFFLINE. - // versionOfOfflineNode will be -1 if the znode was not set to OFFLINE - versionOfOfflineNode = setOfflineInZooKeeper(currentState, plan.getDestination()); - if (versionOfOfflineNode != -1) { - if (isDisabledorDisablingRegionInRIT(region)) { - return; - } - // In case of assignment from EnableTableHandler table state is ENABLING. Any how - // EnableTableHandler will set ENABLED after assigning all the table regions. If we - // try to set to ENABLED directly then client API may think table is enabled. - // When we have a case such as all the regions are added directly into hbase:meta and we call - // assignRegion then we need to make the table ENABLED. Hence in such case the table - // will not be in ENABLING or ENABLED state. - TableName tableName = region.getTable(); - if (!tableStateManager.isTableState(tableName, - ZooKeeperProtos.Table.State.ENABLED, ZooKeeperProtos.Table.State.ENABLING)) { - LOG.debug("Setting table " + tableName + " to ENABLED state."); - setEnabledTable(tableName); - } - } - } - if (setOfflineInZK && versionOfOfflineNode == -1) { - LOG.info("Unable to set offline in ZooKeeper to assign " + region); - // Setting offline in ZK must have been failed due to ZK racing or some - // exception which may make the server to abort. If it is ZK racing, - // we should retry since we already reset the region state, - // existing (re)assignment will fail anyway. - if (!server.isAborted()) { - continue; - } + // In case of assignment from EnableTableHandler table state is ENABLING. Any how + // EnableTableHandler will set ENABLED after assigning all the table regions. If we + // try to set to ENABLED directly then client API may think table is enabled. + // When we have a case such as all the regions are added directly into hbase:meta and we call + // assignRegion then we need to make the table ENABLED. Hence in such case the table + // will not be in ENABLING or ENABLED state. 
+ TableName tableName = region.getTable(); + if (!tableStateManager.isTableState(tableName, + TableState.State.ENABLED, TableState.State.ENABLING)) { + LOG.debug("Setting table " + tableName + " to ENABLED state."); + setEnabledTable(tableName); } LOG.info("Assigning " + region.getRegionNameAsString() + " to " + plan.getDestination().toString()); // Transition RegionState to PENDING_OPEN - currentState = regionStates.updateRegionState(region, + regionStates.updateRegionState(region, State.PENDING_OPEN, plan.getDestination()); - boolean needNewPlan; + boolean needNewPlan = false; final String assignMsg = "Failed assignment of " + region.getRegionNameAsString() + " to " + plan.getDestination(); try { @@ -2098,23 +1030,8 @@ public class AssignmentManager extends ZooKeeperListener { if (this.shouldAssignRegionsWithFavoredNodes) { favoredNodes = ((FavoredNodeLoadBalancer)this.balancer).getFavoredNodes(region); } - regionOpenState = serverManager.sendRegionOpen( - plan.getDestination(), region, versionOfOfflineNode, favoredNodes); - - if (regionOpenState == RegionOpeningState.FAILED_OPENING) { - // Failed opening this region, looping again on a new server. - needNewPlan = true; - LOG.warn(assignMsg + ", regionserver says 'FAILED_OPENING', " + - " trying to assign elsewhere instead; " + - "try=" + i + " of " + this.maximumAttempts); - } else { - // we're done - if (regionOpenState == RegionOpeningState.ALREADY_OPENED) { - processAlreadyOpenedRegion(region, plan.getDestination()); - } - return; - } - + serverManager.sendRegionOpen(plan.getDestination(), region, favoredNodes); + return; // we're done } catch (Throwable t) { if (t instanceof RemoteException) { t = ((RemoteException) t).unwrapRemoteException(); @@ -2122,44 +1039,34 @@ public class AssignmentManager extends ZooKeeperListener { previousException = t; // Should we wait a little before retrying? If the server is starting it's yes. - // If the region is already in transition, it's yes as well: we want to be sure that - // the region will get opened but we don't want a double assignment. - boolean hold = (t instanceof RegionAlreadyInTransitionException || - t instanceof ServerNotRunningYetException); + boolean hold = (t instanceof ServerNotRunningYetException); // In case socket is timed out and the region server is still online, // the openRegion RPC could have been accepted by the server and // just the response didn't go through. So we will retry to - // open the region on the same server to avoid possible - // double assignment. + // open the region on the same server. 
boolean retry = !hold && (t instanceof java.net.SocketTimeoutException && this.serverManager.isServerOnline(plan.getDestination())); - if (hold) { LOG.warn(assignMsg + ", waiting a little before trying on the same region server " + "try=" + i + " of " + this.maximumAttempts, t); if (maxWaitTime < 0) { - if (t instanceof RegionAlreadyInTransitionException) { - maxWaitTime = EnvironmentEdgeManager.currentTime() - + this.server.getConfiguration().getLong(ALREADY_IN_TRANSITION_WAITTIME, - DEFAULT_ALREADY_IN_TRANSITION_WAITTIME); - } else { - maxWaitTime = EnvironmentEdgeManager.currentTime() - + this.server.getConfiguration().getLong( - "hbase.regionserver.rpc.startup.waittime", 60000); - } + maxWaitTime = EnvironmentEdgeManager.currentTime() + + this.server.getConfiguration().getLong( + "hbase.regionserver.rpc.startup.waittime", 60000); } try { - needNewPlan = false; long now = EnvironmentEdgeManager.currentTime(); if (now < maxWaitTime) { - LOG.debug("Server is not yet up or region is already in transition; " - + "waiting up to " + (maxWaitTime - now) + "ms", t); + if (LOG.isDebugEnabled()) { + LOG.debug("Server is not yet up; waiting up to " + + (maxWaitTime - now) + "ms", t); + } Thread.sleep(100); i--; // reset the try count - } else if (!(t instanceof RegionAlreadyInTransitionException)) { + } else { LOG.debug("Server is not up for a while; try a new one", t); needNewPlan = true; } @@ -2171,9 +1078,10 @@ public class AssignmentManager extends ZooKeeperListener { return; } } else if (retry) { - needNewPlan = false; i--; // we want to retry as many times as needed as long as the RS is not dead. - LOG.warn(assignMsg + ", trying to assign to the same region server due ", t); + if (LOG.isDebugEnabled()) { + LOG.debug(assignMsg + ", trying to assign to the same region server due ", t); + } } else { needNewPlan = true; LOG.warn(assignMsg + ", trying to assign elsewhere instead;" + @@ -2222,8 +1130,7 @@ public class AssignmentManager extends ZooKeeperListener { // Clean out plan we failed execute and one that doesn't look like it'll // succeed anyways; we need a new plan! // Transition back to OFFLINE - currentState = regionStates.updateRegionState(region, State.OFFLINE); - versionOfOfflineNode = -1; + regionStates.updateRegionState(region, State.OFFLINE); plan = newPlan; } else if(plan.getDestination().equals(newPlan.getDestination()) && previousException instanceof FailedServerException) { @@ -2249,21 +1156,10 @@ public class AssignmentManager extends ZooKeeperListener { } } - private void processAlreadyOpenedRegion(HRegionInfo region, ServerName sn) { - // Remove region from in-memory transition and unassigned node from ZK - // While trying to enable the table the regions of the table were - // already enabled. 
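The exception handling above boils down to a three-way decision per failed OPEN attempt: hold and retry the same server while it is still starting up, retry the same server on a socket timeout if it is still online (the request may have landed and only the response was lost), otherwise give up on the current plan. A compact sketch of that decision, with a stand-in exception type:

    import java.net.SocketTimeoutException;

    public class AssignRetryDecision {

        enum Action { HOLD, RETRY_SAME_SERVER, NEW_PLAN }

        /** Local stand-in for ServerNotRunningYetException. */
        static class ServerNotRunningYetException extends Exception {}

        /**
         * Hold while the destination is still starting up (until maxWaitTime),
         * retry the same server on a socket timeout if it is still online,
         * otherwise look for a new destination.
         */
        static Action decide(Throwable t, boolean destinationStillOnline,
                             long now, long maxWaitTime) {
            if (t instanceof ServerNotRunningYetException) {
                return now < maxWaitTime ? Action.HOLD : Action.NEW_PLAN;
            }
            if (t instanceof SocketTimeoutException && destinationStillOnline) {
                // The RPC may well have been accepted; only the response was lost.
                return Action.RETRY_SAME_SERVER;
            }
            return Action.NEW_PLAN;
        }
    }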
- LOG.debug("ALREADY_OPENED " + region.getRegionNameAsString() - + " to " + sn); - String encodedName = region.getEncodedName(); - deleteNodeInStates(encodedName, "offline", sn, EventType.M_ZK_REGION_OFFLINE); - regionStates.regionOnline(region, sn); - } - private boolean isDisabledorDisablingRegionInRIT(final HRegionInfo region) { if (this.tableStateManager.isTableState(region.getTable(), - ZooKeeperProtos.Table.State.DISABLED, - ZooKeeperProtos.Table.State.DISABLING)) { + TableState.State.DISABLED, + TableState.State.DISABLING) || replicasToClose.contains(region)) { LOG.info("Table " + region.getTable() + " is disabled or disabling;" + " skipping assign of " + region.getRegionNameAsString()); offlineDisabledRegion(region); @@ -2273,61 +1169,18 @@ public class AssignmentManager extends ZooKeeperListener { } /** - * Set region as OFFLINED up in zookeeper - * - * @param state - * @return the version of the offline node if setting of the OFFLINE node was - * successful, -1 otherwise. - */ - private int setOfflineInZooKeeper(final RegionState state, final ServerName destination) { - if (!state.isClosed() && !state.isOffline()) { - String msg = "Unexpected state : " + state + " .. Cannot transit it to OFFLINE."; - this.server.abort(msg, new IllegalStateException(msg)); - return -1; - } - regionStates.updateRegionState(state.getRegion(), State.OFFLINE); - int versionOfOfflineNode; - try { - // get the version after setting the znode to OFFLINE - versionOfOfflineNode = ZKAssign.createOrForceNodeOffline(watcher, - state.getRegion(), destination); - if (versionOfOfflineNode == -1) { - LOG.warn("Attempted to create/force node into OFFLINE state before " - + "completing assignment but failed to do so for " + state); - return -1; - } - } catch (KeeperException e) { - server.abort("Unexpected ZK exception creating/setting node OFFLINE", e); - return -1; - } - return versionOfOfflineNode; - } - - /** - * @param region the region to assign - * @return Plan for passed region (If none currently, it creates one or - * if no servers to assign, it returns null). - */ - private RegionPlan getRegionPlan(final HRegionInfo region, - final boolean forceNewPlan) throws HBaseIOException { - return getRegionPlan(region, null, forceNewPlan); - } - - /** * @param region the region to assign - * @param serverToExclude Server to exclude (we know its bad). Pass null if - * all servers are thought to be assignable. * @param forceNewPlan If true, then if an existing plan exists, a new plan * will be generated. * @return Plan for passed region (If none currently, it creates one or * if no servers to assign, it returns null). 
*/ private RegionPlan getRegionPlan(final HRegionInfo region, - final ServerName serverToExclude, final boolean forceNewPlan) throws HBaseIOException { + final boolean forceNewPlan) throws HBaseIOException { // Pickup existing plan or make a new one final String encodedName = region.getEncodedName(); final List destServers = - serverManager.createDestinationServersList(serverToExclude); + serverManager.createDestinationServersList(); if (destServers.isEmpty()){ LOG.warn("Can't move " + encodedName + @@ -2373,15 +1226,19 @@ public class AssignmentManager extends ZooKeeperListener { LOG.warn("Can't find a destination for " + encodedName); return null; } - LOG.debug("No previous transition plan found (or ignoring " + - "an existing plan) for " + region.getRegionNameAsString() + - "; generated random plan=" + randomPlan + "; " + destServers.size() + - " (online=" + serverManager.getOnlineServers().size() + - ") available servers, forceNewPlan=" + forceNewPlan); - return randomPlan; + if (LOG.isDebugEnabled()) { + LOG.debug("No previous transition plan found (or ignoring " + + "an existing plan) for " + region.getRegionNameAsString() + + "; generated random plan=" + randomPlan + "; " + destServers.size() + + " (online=" + serverManager.getOnlineServers().size() + + ") available servers, forceNewPlan=" + forceNewPlan); } - LOG.debug("Using pre-existing plan for " + - region.getRegionNameAsString() + "; plan=" + existingPlan); + return randomPlan; + } + if (LOG.isDebugEnabled()) { + LOG.debug("Using pre-existing plan for " + + region.getRegionNameAsString() + "; plan=" + existingPlan); + } return existingPlan; } @@ -2411,7 +1268,7 @@ public class AssignmentManager extends ZooKeeperListener { * @param region server to be unassigned */ public void unassign(HRegionInfo region) { - unassign(region, false); + unassign(region, null); } @@ -2427,33 +1284,30 @@ public class AssignmentManager extends ZooKeeperListener { * If a RegionPlan is already set, it will remain. * * @param region server to be unassigned - * @param force if region should be closed even if already closing + * @param dest the destination server of the region */ - public void unassign(HRegionInfo region, boolean force, ServerName dest) { + public void unassign(HRegionInfo region, ServerName dest) { // TODO: Method needs refactoring. Ugly buried returns throughout. Beware! LOG.debug("Starting unassign of " + region.getRegionNameAsString() + " (offlining), current state: " + regionStates.getRegionState(region)); String encodedName = region.getEncodedName(); // Grab the state of this region and synchronize on it - int versionOfClosingNode = -1; // We need a lock here as we're going to do a put later and we don't want multiple states // creation ReentrantLock lock = locker.acquireLock(encodedName); RegionState state = regionStates.getRegionTransitionState(encodedName); - boolean reassign = true; try { - if (state == null) { - // Region is not in transition. - // We can unassign it only if it's not SPLIT/MERGED. - state = regionStates.getRegionState(encodedName); - if (state != null && state.isUnassignable()) { - LOG.info("Attempting to unassign " + state + ", ignored"); - // Offline region will be reassigned below - return; - } - // Create the znode in CLOSING state - try { + if (state == null || state.isFailedClose()) { + if (state == null) { + // Region is not in transition. + // We can unassign it only if it's not SPLIT/MERGED. 
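With the serverToExclude variant gone, getRegionPlan() reduces to roughly: reuse a cached plan unless a new one is forced or the cached destination is no longer among the candidate servers, otherwise pick a fresh destination and cache it (the full reuse condition is not visible in this hunk). A rough sketch of that lookup; the uniform random pick stands in for the balancer's randomAssignment() call and the string maps stand in for RegionPlan bookkeeping:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ThreadLocalRandom;

    public class RegionPlanSketch {

        /**
         * Returns the destination server for a region: the cached plan when it still
         * points at a live candidate, otherwise a fresh pick that replaces the cache
         * entry. Returns null when there is no server to assign to.
         */
        static String getPlan(String encodedRegionName, boolean forceNewPlan,
                              Map<String, String> plans, List<String> destServers) {
            if (destServers.isEmpty()) {
                return null;                                   // nowhere to put the region
            }
            synchronized (plans) {
                String existing = plans.get(encodedRegionName);
                if (!forceNewPlan && existing != null && destServers.contains(existing)) {
                    return existing;                           // reuse the pre-existing plan
                }
                // No usable plan, or a new one was forced: pick a destination.
                // The uniform pick stands in for the balancer's randomAssignment().
                String random =
                    destServers.get(ThreadLocalRandom.current().nextInt(destServers.size()));
                plans.put(encodedRegionName, random);
                return random;
            }
        }
    }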
+ state = regionStates.getRegionState(encodedName); + if (state != null && state.isUnassignable()) { + LOG.info("Attempting to unassign " + state + ", ignored"); + // Offline region will be reassigned below + return; + } if (state == null || state.getServerName() == null) { // We don't know where the region is, offline it. // No need to send CLOSE RPC @@ -2462,125 +1316,32 @@ public class AssignmentManager extends ZooKeeperListener { regionOffline(region); return; } - if (useZKForAssignment) { - versionOfClosingNode = ZKAssign.createNodeClosing( - watcher, region, state.getServerName()); - if (versionOfClosingNode == -1) { - LOG.info("Attempting to unassign " + - region.getRegionNameAsString() + " but ZK closing node " - + "can't be created."); - reassign = false; // not unassigned at all - return; - } - } - } catch (KeeperException e) { - if (e instanceof NodeExistsException) { - // Handle race between master initiated close and regionserver - // orchestrated splitting. See if existing node is in a - // SPLITTING or SPLIT state. If so, the regionserver started - // an op on node before we could get our CLOSING in. Deal. - NodeExistsException nee = (NodeExistsException)e; - String path = nee.getPath(); - try { - if (isSplitOrSplittingOrMergedOrMerging(path)) { - LOG.debug(path + " is SPLIT or SPLITTING or MERGED or MERGING; " + - "skipping unassign because region no longer exists -- its split or merge"); - reassign = false; // no need to reassign for split/merged region - return; - } - } catch (KeeperException.NoNodeException ke) { - LOG.warn("Failed getData on SPLITTING/SPLIT at " + path + - "; presuming split and that the region to unassign, " + - encodedName + ", no longer exists -- confirm", ke); - return; - } catch (KeeperException ke) { - LOG.error("Unexpected zk state", ke); - } catch (DeserializationException de) { - LOG.error("Failed parse", de); - } - } - // If we get here, don't understand whats going on -- abort. - server.abort("Unexpected ZK exception creating node CLOSING", e); - reassign = false; // heading out already - return; } - state = regionStates.updateRegionState(region, State.PENDING_CLOSE); + state = regionStates.updateRegionState( + region, State.PENDING_CLOSE); } else if (state.isFailedOpen()) { // The region is not open yet regionOffline(region); return; - } else if (force && state.isPendingCloseOrClosing()) { - LOG.debug("Attempting to unassign " + region.getRegionNameAsString() + - " which is already " + state.getState() + - " but forcing to send a CLOSE RPC again "); - if (state.isFailedClose()) { - state = regionStates.updateRegionState(region, State.PENDING_CLOSE); - } - state.updateTimestampToNow(); } else { LOG.debug("Attempting to unassign " + region.getRegionNameAsString() + " but it is " + - "already in transition (" + state.getState() + ", force=" + force + ")"); + "already in transition (" + state.getState()); return; } - unassign(region, state, versionOfClosingNode, dest, useZKForAssignment, null); + unassign(region, state.getServerName(), dest); } finally { lock.unlock(); // Region is expected to be reassigned afterwards - if (reassign && regionStates.isRegionOffline(region)) { - assign(region, true); + if (!replicasToClose.contains(region) + && regionStates.isRegionInState(region, State.OFFLINE)) { + assign(region); } } } - public void unassign(HRegionInfo region, boolean force){ - unassign(region, force, null); - } - - /** - * @param region regioninfo of znode to be deleted. 
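The ZK-less unassign() path above is essentially: take the per-region lock, move an open region to PENDING_CLOSE, send the CLOSE RPC, and reassign once the region ends up OFFLINE. A thin sketch of that sequence under a per-region lock, with all region bookkeeping behind a stand-in interface:

    import java.util.concurrent.locks.ReentrantLock;

    public class UnassignSketch {

        enum State { OPEN, PENDING_CLOSE, OFFLINE, FAILED_OPEN }

        /** Stand-ins for RegionStates bookkeeping and the master-to-regionserver RPCs. */
        interface Regions {
            State getState(String region);
            void setState(String region, State s);
            void sendClose(String region, String destination);  // CLOSE RPC stand-in
            void assign(String region);                          // reassign stand-in
        }

        /** Close the region and reassign it, optionally toward a preferred destination. */
        static void unassign(String region, String dest, ReentrantLock lock, Regions regions) {
            lock.lock();
            try {
                State s = regions.getState(region);
                if (s == State.FAILED_OPEN) {
                    regions.setState(region, State.OFFLINE);  // never opened; nothing to close
                    return;
                }
                if (s != State.OPEN) {
                    return;  // already in transition; let that run its course
                }
                regions.setState(region, State.PENDING_CLOSE);
                regions.sendClose(region, dest);
            } finally {
                lock.unlock();
                // Region is expected to be reassigned afterwards.
                if (regions.getState(region) == State.OFFLINE) {
                    regions.assign(region);
                }
            }
        }
    }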
- */ - public void deleteClosingOrClosedNode(HRegionInfo region, ServerName sn) { - String encodedName = region.getEncodedName(); - deleteNodeInStates(encodedName, "closing", sn, EventType.M_ZK_REGION_CLOSING, - EventType.RS_ZK_REGION_CLOSED); - } - - /** - * @param path - * @return True if znode is in SPLIT or SPLITTING or MERGED or MERGING state. - * @throws KeeperException Can happen if the znode went away in meantime. - * @throws DeserializationException - */ - private boolean isSplitOrSplittingOrMergedOrMerging(final String path) - throws KeeperException, DeserializationException { - boolean result = false; - // This may fail if the SPLIT or SPLITTING or MERGED or MERGING znode gets - // cleaned up before we can get data from it. - byte [] data = ZKAssign.getData(watcher, path); - if (data == null) { - LOG.info("Node " + path + " is gone"); - return false; - } - RegionTransition rt = RegionTransition.parseFrom(data); - switch (rt.getEventType()) { - case RS_ZK_REQUEST_REGION_SPLIT: - case RS_ZK_REGION_SPLIT: - case RS_ZK_REGION_SPLITTING: - case RS_ZK_REQUEST_REGION_MERGE: - case RS_ZK_REGION_MERGED: - case RS_ZK_REGION_MERGING: - result = true; - break; - default: - LOG.info("Node " + path + " is in " + rt.getEventType()); - break; - } - return result; - } - /** * Used by unit tests. Return the number of regions opened so far in the life * of the master. Increases by one every time the master opens a region @@ -2619,14 +1380,10 @@ public class AssignmentManager extends ZooKeeperListener { *

          * Assumes that hbase:meta is currently closed and is not being actively served by
          * any RegionServer.
          -

          - * Forcibly unsets the current meta region location in ZooKeeper and assigns - * hbase:meta to a random RegionServer. - * @throws KeeperException */ public void assignMeta() throws KeeperException { - this.server.getMetaTableLocator().deleteMetaLocation(this.watcher); - assign(HRegionInfo.FIRST_META_REGIONINFO, true); + regionStates.updateRegionState(HRegionInfo.FIRST_META_REGIONINFO, State.OFFLINE); + assign(HRegionInfo.FIRST_META_REGIONINFO); } /** @@ -2704,7 +1461,7 @@ public class AssignmentManager extends ZooKeeperListener { " region(s) to " + servers + " server(s)"); } for (Map.Entry> plan: bulkPlan.entrySet()) { - if (!assign(plan.getKey(), plan.getValue())) { + if (!assign(plan.getKey(), plan.getValue()) && !server.isStopped()) { for (HRegionInfo region: plan.getValue()) { if (!regionStates.isRegionOnline(region)) { invokeAssign(region); @@ -2752,7 +1509,7 @@ public class AssignmentManager extends ZooKeeperListener { for (HRegionInfo hri : regionsFromMetaScan) { TableName tableName = hri.getTable(); if (!tableStateManager.isTableState(tableName, - ZooKeeperProtos.Table.State.ENABLED)) { + TableState.State.ENABLED)) { setEnabledTable(tableName); } } @@ -2789,30 +1546,6 @@ public class AssignmentManager extends ZooKeeperListener { } /** - * Wait until no regions in transition. - * @param timeout How long to wait. - * @return True if nothing in regions in transition. - * @throws InterruptedException - */ - boolean waitUntilNoRegionsInTransition(final long timeout) - throws InterruptedException { - // Blocks until there are no regions in transition. It is possible that - // there - // are regions in transition immediately after this returns but guarantees - // that if it returns without an exception that there was a period of time - // with no regions in transition from the point-of-view of the in-memory - // state of the Master. - final long endTime = System.currentTimeMillis() + timeout; - - while (!this.server.isStopped() && regionStates.isRegionsInTransition() - && endTime > System.currentTimeMillis()) { - regionStates.waitForUpdate(100); - } - - return !regionStates.isRegionsInTransition(); - } - - /** * Rebuild the list of user regions and assignment information. *

          * Returns a set of servers that are not found to be online that hosted @@ -2821,14 +1554,14 @@ public class AssignmentManager extends ZooKeeperListener { * @throws IOException */ Set rebuildUserRegions() throws - IOException, KeeperException, CoordinatedStateException { + IOException, KeeperException { Set disabledOrEnablingTables = tableStateManager.getTablesInStates( - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.ENABLING); + TableState.State.DISABLED, TableState.State.ENABLING); Set disabledOrDisablingOrEnabling = tableStateManager.getTablesInStates( - ZooKeeperProtos.Table.State.DISABLED, - ZooKeeperProtos.Table.State.DISABLING, - ZooKeeperProtos.Table.State.ENABLING); + TableState.State.DISABLED, + TableState.State.DISABLING, + TableState.State.ENABLING); // Region assignment from META List results = MetaTableAccessor.fullScanOfMeta(server.getConnection()); @@ -2842,6 +1575,19 @@ public class AssignmentManager extends ZooKeeperListener { LOG.debug("null result from meta - ignoring but this is strange."); continue; } + // keep a track of replicas to close. These were the replicas of the originally + // unmerged regions. The master might have closed them before but it mightn't + // maybe because it crashed. + PairOfSameType p = MetaTableAccessor.getMergeRegions(result); + if (p.getFirst() != null && p.getSecond() != null) { + int numReplicas = ((MasterServices)server).getTableDescriptors().get(p.getFirst(). + getTable()).getRegionReplication(); + for (HRegionInfo merge : p) { + for (int i = 1; i < numReplicas; i++) { + replicasToClose.add(RegionReplicaUtil.getRegionInfoForReplica(merge, i)); + } + } + } RegionLocations rl = MetaTableAccessor.getRegionLocations(result); if (rl == null) continue; HRegionLocation[] locations = rl.getRegionLocations(); @@ -2851,6 +1597,14 @@ public class AssignmentManager extends ZooKeeperListener { if (regionInfo == null) continue; int replicaId = regionInfo.getReplicaId(); State state = RegionStateStore.getRegionState(result, replicaId); + // keep a track of replicas to close. These were the replicas of the split parents + // from the previous life of the master. 
The master should have closed them before + // but it couldn't maybe because it crashed + if (replicaId == 0 && state.equals(State.SPLIT)) { + for (HRegionLocation h : locations) { + replicasToClose.add(h.getRegionInfo()); + } + } ServerName lastHost = hrl.getServerName(); ServerName regionLocation = RegionStateStore.getRegionServer(result, replicaId); regionStates.createRegionState(regionInfo, state, regionLocation, lastHost); @@ -2862,22 +1616,17 @@ public class AssignmentManager extends ZooKeeperListener { if (!onlineServers.contains(regionLocation)) { // Region is located on a server that isn't online offlineServers.add(regionLocation); - if (useZKForAssignment) { - regionStates.regionOffline(regionInfo); - } } else if (!disabledOrEnablingTables.contains(tableName)) { // Region is being served and on an active server // add only if region not in disabled or enabling table regionStates.regionOnline(regionInfo, regionLocation); balancer.regionOnline(regionInfo, regionLocation); - } else if (useZKForAssignment) { - regionStates.regionOffline(regionInfo); } // need to enable the table if not disabled or disabling or enabling // this will be used in rolling restarts if (!disabledOrDisablingOrEnabling.contains(tableName) && !getTableStateManager().isTableState(tableName, - ZooKeeperProtos.Table.State.ENABLED)) { + TableState.State.ENABLED)) { setEnabledTable(tableName); } } @@ -2894,9 +1643,9 @@ public class AssignmentManager extends ZooKeeperListener { * @throws IOException */ private void recoverTableInDisablingState() - throws KeeperException, IOException, CoordinatedStateException { + throws KeeperException, IOException { Set disablingTables = - tableStateManager.getTablesInStates(ZooKeeperProtos.Table.State.DISABLING); + tableStateManager.getTablesInStates(TableState.State.DISABLING); if (disablingTables.size() != 0) { for (TableName tableName : disablingTables) { // Recover by calling DisableTableHandler @@ -2918,9 +1667,9 @@ public class AssignmentManager extends ZooKeeperListener { * @throws IOException */ private void recoverTableInEnablingState() - throws KeeperException, IOException, CoordinatedStateException { + throws KeeperException, IOException { Set enablingTables = tableStateManager. - getTablesInStates(ZooKeeperProtos.Table.State.ENABLING); + getTablesInStates(TableState.State.ENABLING); if (enablingTables.size() != 0) { for (TableName tableName : enablingTables) { // Recover by calling EnableTableHandler @@ -2943,54 +1692,19 @@ public class AssignmentManager extends ZooKeeperListener { } /** - * Processes list of dead servers from result of hbase:meta scan and regions in RIT - *

          - * This is used for failover to recover the lost regions that belonged to
          - * RegionServers which failed while there was no active master or regions
          - * that were in RIT.
          -

          - * - * - * @param deadServers - * The list of dead servers which failed while there was no active - * master. Can be null. - * @throws IOException - * @throws KeeperException + * Processes list of regions in transition at startup */ - private void processDeadServersAndRecoverLostRegions( - Set deadServers) throws IOException, KeeperException { - if (deadServers != null && !deadServers.isEmpty()) { - for (ServerName serverName: deadServers) { - if (!serverManager.isServerDead(serverName)) { - serverManager.expireServer(serverName); // Let SSH do region re-assign - } - } - } - - List nodes = useZKForAssignment ? - ZKUtil.listChildrenAndWatchForNewChildren(watcher, watcher.assignmentZNode) - : ZKUtil.listChildrenNoWatch(watcher, watcher.assignmentZNode); - if (nodes != null && !nodes.isEmpty()) { - for (String encodedRegionName : nodes) { - processRegionInTransition(encodedRegionName, null); - } - } else if (!useZKForAssignment) { - processRegionInTransitionZkLess(); - } - } - - void processRegionInTransitionZkLess() { - // We need to send RPC call again for PENDING_OPEN/PENDING_CLOSE regions + void processRegionsInTransition(Collection regionStates) { + // We need to send RPC call again for PENDING_OPEN/PENDING_CLOSE regions // in case the RPC call is not sent out yet before the master was shut down // since we update the state before we send the RPC call. We can't update // the state after the RPC call. Otherwise, we don't know what's happened // to the region if the master dies right after the RPC call is out. - Map rits = regionStates.getRegionsInTransition(); - for (RegionState regionState: rits.values()) { + for (RegionState regionState: regionStates) { if (!serverManager.isServerOnline(regionState.getServerName())) { continue; // SSH will handle it } - State state = regionState.getState(); + RegionState.State state = regionState.getState(); LOG.info("Processing " + regionState); switch (state) { case CLOSED: @@ -3034,34 +1748,45 @@ public class AssignmentManager extends ZooKeeperListener { if (shouldAssignRegionsWithFavoredNodes) { favoredNodes = ((FavoredNodeLoadBalancer)balancer).getFavoredNodes(hri); } - RegionOpeningState regionOpenState = serverManager.sendRegionOpen( - serverName, hri, -1, favoredNodes); - - if (regionOpenState == RegionOpeningState.FAILED_OPENING) { - // Failed opening this region, this means the target server didn't get - // the original region open RPC, so re-assign it with a new plan - LOG.debug("Got failed_opening in retry sendRegionOpen for " - + regionState + ", re-assign it"); - invokeAssign(hri, true); - } - return; // Done. + serverManager.sendRegionOpen(serverName, hri, favoredNodes); + return; // we're done } catch (Throwable t) { if (t instanceof RemoteException) { t = ((RemoteException) t).unwrapRemoteException(); } - // In case SocketTimeoutException/FailedServerException, retry - if (t instanceof java.net.SocketTimeoutException - || t instanceof FailedServerException) { - Threads.sleep(100); - continue; + if (t instanceof FailedServerException && i < maximumAttempts) { + // In case the server is in the failed server list, no point to + // retry too soon. 
Retry after the failed_server_expiry time + try { + Configuration conf = this.server.getConfiguration(); + long sleepTime = 1 + conf.getInt(RpcClient.FAILED_SERVER_EXPIRY_KEY, + RpcClient.FAILED_SERVER_EXPIRY_DEFAULT); + if (LOG.isDebugEnabled()) { + LOG.debug(serverName + " is on failed server list; waiting " + + sleepTime + "ms", t); + } + Thread.sleep(sleepTime); + continue; + } catch (InterruptedException ie) { + LOG.warn("Failed to assign " + + hri.getRegionNameAsString() + " since interrupted", ie); + regionStates.updateRegionState(hri, State.FAILED_OPEN); + Thread.currentThread().interrupt(); + return; + } } - // For other exceptions, re-assign it - LOG.debug("Got exception in retry sendRegionOpen for " - + regionState + ", re-assign it", t); - invokeAssign(hri); - return; // Done. + if (serverManager.isServerOnline(serverName) + && t instanceof java.net.SocketTimeoutException) { + i--; // reset the try count + } else { + LOG.info("Got exception in retrying sendRegionOpen for " + + regionState + "; try=" + i + " of " + maximumAttempts, t); + } + Threads.sleep(100); } } + // Run out of attempts + regionStates.updateRegionState(hri, State.FAILED_OPEN); } finally { lock.unlock(); } @@ -3091,35 +1816,45 @@ public class AssignmentManager extends ZooKeeperListener { if (!regionState.equals(regionStates.getRegionState(hri))) { return; // Region is not in the expected state any more } - if (!serverManager.sendRegionClose(serverName, hri, -1, null, false)) { - // This means the region is still on the target server - LOG.debug("Got false in retry sendRegionClose for " - + regionState + ", re-close it"); - invokeUnAssign(hri); - } + serverManager.sendRegionClose(serverName, hri, null); return; // Done. } catch (Throwable t) { if (t instanceof RemoteException) { t = ((RemoteException) t).unwrapRemoteException(); } - // In case SocketTimeoutException/FailedServerException, retry - if (t instanceof java.net.SocketTimeoutException - || t instanceof FailedServerException) { - Threads.sleep(100); - continue; + if (t instanceof FailedServerException && i < maximumAttempts) { + // In case the server is in the failed server list, no point to + // retry too soon. Retry after the failed_server_expiry time + try { + Configuration conf = this.server.getConfiguration(); + long sleepTime = 1 + conf.getInt(RpcClient.FAILED_SERVER_EXPIRY_KEY, + RpcClient.FAILED_SERVER_EXPIRY_DEFAULT); + if (LOG.isDebugEnabled()) { + LOG.debug(serverName + " is on failed server list; waiting " + + sleepTime + "ms", t); + } + Thread.sleep(sleepTime); + continue; + } catch (InterruptedException ie) { + LOG.warn("Failed to unassign " + + hri.getRegionNameAsString() + " since interrupted", ie); + regionStates.updateRegionState(hri, RegionState.State.FAILED_CLOSE); + Thread.currentThread().interrupt(); + return; + } } - if (!(t instanceof NotServingRegionException - || t instanceof RegionAlreadyInTransitionException)) { - // NotServingRegionException/RegionAlreadyInTransitionException - // means the target server got the original region close request. - // For other exceptions, re-close it - LOG.debug("Got exception in retry sendRegionClose for " - + regionState + ", re-close it", t); - invokeUnAssign(hri); + if (serverManager.isServerOnline(serverName) + && t instanceof java.net.SocketTimeoutException) { + i--; // reset the try count + } else { + LOG.info("Got exception in retrying sendRegionClose for " + + regionState + "; try=" + i + " of " + maximumAttempts, t); } - return; // Done. 
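retrySendRegionOpen and retrySendRegionClose now share the same loop shape: up to maximumAttempts tries, a failed-server wait when the target is blacklisted, a free retry (the attempt counter is decremented) on a socket timeout while the server is still online, and a FAILED_OPEN or FAILED_CLOSE state once attempts run out. A generic sketch of that loop, with local stand-ins for the exception types:

    import java.net.SocketTimeoutException;
    import java.util.concurrent.Callable;

    public class RetryRpcSketch {

        /** Local stand-in for the failed-server-list exception. */
        static class FailedServerException extends Exception {}

        /**
         * Runs the RPC up to maxAttempts times. Returns true on success, false when
         * the attempts are exhausted (the caller then marks the region FAILED_*).
         */
        static boolean retryRpc(Callable<Void> rpc, int maxAttempts, long failedServerExpiryMs,
                                java.util.function.BooleanSupplier serverStillOnline)
                throws InterruptedException {
            for (int i = 1; i <= maxAttempts; i++) {
                try {
                    rpc.call();
                    return true;                       // done
                } catch (FailedServerException e) {
                    if (i < maxAttempts) {
                        Thread.sleep(1 + failedServerExpiryMs);  // wait out the failed-server window
                    }
                } catch (SocketTimeoutException e) {
                    if (serverStillOnline.getAsBoolean()) {
                        i--;                           // response lost, not the server: retry "for free"
                    }
                    Thread.sleep(100);
                } catch (Exception e) {
                    Thread.sleep(100);                 // other errors: brief pause, count the attempt
                }
            }
            return false;                              // run out of attempts
        }
    }

Returning false rather than throwing keeps the "run out of attempts" branch explicit, which is roughly how the patch marks the region FAILED_* after the loop.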
+ Threads.sleep(100); } } + // Run out of attempts + regionStates.updateRegionState(hri, State.FAILED_CLOSE); } finally { lock.unlock(); } @@ -3161,7 +1896,7 @@ public class AssignmentManager extends ZooKeeperListener { /** * @param region Region whose plan we are to clear. */ - void clearRegionPlan(final HRegionInfo region) { + private void clearRegionPlan(final HRegionInfo region) { synchronized (this.regionPlans) { this.regionPlans.remove(region.getEncodedName()); } @@ -3208,11 +1943,7 @@ public class AssignmentManager extends ZooKeeperListener { } void invokeAssign(HRegionInfo regionInfo) { - invokeAssign(regionInfo, true); - } - - void invokeAssign(HRegionInfo regionInfo, boolean newPlan) { - threadPoolExecutorService.submit(new AssignCallable(this, regionInfo, newPlan)); + threadPoolExecutorService.submit(new AssignCallable(this, regionInfo)); } void invokeUnAssign(HRegionInfo regionInfo) { @@ -3225,43 +1956,26 @@ public class AssignmentManager extends ZooKeeperListener { /** * Check if the shutdown server carries the specific region. - * We have a bunch of places that store region location - * Those values aren't consistent. There is a delay of notification. - * The location from zookeeper unassigned node has the most recent data; - * but the node could be deleted after the region is opened by AM. - * The AM's info could be old when OpenedRegionHandler - * processing hasn't finished yet when server shutdown occurs. * @return whether the serverName currently hosts the region */ private boolean isCarryingRegion(ServerName serverName, HRegionInfo hri) { - RegionTransition rt = null; - try { - byte [] data = ZKAssign.getData(watcher, hri.getEncodedName()); - // This call can legitimately come by null - rt = data == null? null: RegionTransition.parseFrom(data); - } catch (KeeperException e) { - server.abort("Exception reading unassigned node for region=" + hri.getEncodedName(), e); - } catch (DeserializationException e) { - server.abort("Exception parsing unassigned node for region=" + hri.getEncodedName(), e); + RegionState regionState = regionStates.getRegionTransitionState(hri); + ServerName transitionAddr = regionState != null? regionState.getServerName(): null; + if (transitionAddr != null) { + boolean matchTransitionAddr = transitionAddr.equals(serverName); + LOG.debug("Checking region=" + hri.getRegionNameAsString() + + ", transitioning on server=" + matchTransitionAddr + + " server being checked: " + serverName + + ", matches=" + matchTransitionAddr); + return matchTransitionAddr; } - ServerName addressFromZK = rt != null? rt.getServerName(): null; - if (addressFromZK != null) { - // if we get something from ZK, we will use the data - boolean matchZK = addressFromZK.equals(serverName); - LOG.debug("Checking region=" + hri.getRegionNameAsString() + ", zk server=" + addressFromZK + - " current=" + serverName + ", matches=" + matchZK); - return matchZK; - } - - ServerName addressFromAM = regionStates.getRegionServerOfRegion(hri); - boolean matchAM = (addressFromAM != null && - addressFromAM.equals(serverName)); - LOG.debug("based on AM, current region=" + hri.getRegionNameAsString() + - " is on server=" + (addressFromAM != null ? 
addressFromAM : "null") + - " server being checked: " + serverName); - - return matchAM; + ServerName assignedAddr = regionStates.getRegionServerOfRegion(hri); + boolean matchAssignedAddr = serverName.equals(assignedAddr); + LOG.debug("based on AM, current region=" + hri.getRegionNameAsString() + + " is on server=" + assignedAddr + ", server being checked: " + + serverName); + return matchAssignedAddr; } /** @@ -3283,8 +1997,8 @@ public class AssignmentManager extends ZooKeeperListener { } } } - List regions = regionStates.serverOffline(watcher, sn); - for (Iterator it = regions.iterator(); it.hasNext(); ) { + List rits = regionStates.serverOffline(sn); + for (Iterator it = rits.iterator(); it.hasNext(); ) { HRegionInfo hri = it.next(); String encodedName = hri.getEncodedName(); @@ -3295,20 +2009,14 @@ public class AssignmentManager extends ZooKeeperListener { regionStates.getRegionTransitionState(encodedName); if (regionState == null || (regionState.getServerName() != null && !regionState.isOnServer(sn)) - || !(regionState.isFailedClose() || regionState.isOffline() - || regionState.isPendingOpenOrOpening())) { + || !RegionStates.isOneOfStates(regionState, State.PENDING_OPEN, + State.OPENING, State.FAILED_OPEN, State.FAILED_CLOSE, State.OFFLINE)) { LOG.info("Skip " + regionState + " since it is not opening/failed_close" + " on the dead server any more: " + sn); it.remove(); } else { - try { - // Delete the ZNode if exists - ZKAssign.deleteNodeFailSilent(watcher, hri); - } catch (KeeperException ke) { - server.abort("Unexpected ZK exception deleting node " + hri, ke); - } if (tableStateManager.isTableState(hri.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { + TableState.State.DISABLED, TableState.State.DISABLING)) { regionStates.regionOffline(hri); it.remove(); continue; @@ -3320,7 +2028,7 @@ public class AssignmentManager extends ZooKeeperListener { lock.unlock(); } } - return regions; + return rits; } /** @@ -3330,7 +2038,7 @@ public class AssignmentManager extends ZooKeeperListener { HRegionInfo hri = plan.getRegionInfo(); TableName tableName = hri.getTable(); if (tableStateManager.isTableState(tableName, - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { + TableState.State.DISABLED, TableState.State.DISABLING)) { LOG.info("Ignored moving region of disabling/disabled table " + tableName); return; @@ -3349,36 +2057,23 @@ public class AssignmentManager extends ZooKeeperListener { synchronized (this.regionPlans) { this.regionPlans.put(plan.getRegionName(), plan); } - unassign(hri, false, plan.getDestination()); + unassign(hri, plan.getDestination()); } finally { lock.unlock(); } } public void stop() { - shutdown(); // Stop executor service, etc - } - - /** - * Shutdown the threadpool executor service - */ - public void shutdown() { - // It's an immediate shutdown, so we're clearing the remaining tasks. 
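isCarryingRegion() above no longer consults ZooKeeper at all: the in-memory region-in-transition entry is checked first, and only if there is none does the last recorded assignment decide. A minimal sketch of that two-step lookup, with plain maps standing in for RegionStates:

    import java.util.Map;

    public class CarryingRegionSketch {

        /**
         * Returns true if serverName is believed to host the region: prefer the
         * server recorded on the region-in-transition entry, otherwise fall back
         * to the last known assignment.
         */
        static boolean isCarryingRegion(String serverName, String encodedRegion,
                                        Map<String, String> regionsInTransition, // region -> transitioning server
                                        Map<String, String> assignments) {       // region -> assigned server
            String transitionAddr = regionsInTransition.get(encodedRegion);
            if (transitionAddr != null) {
                return transitionAddr.equals(serverName);
            }
            String assignedAddr = assignments.get(encodedRegion);
            return serverName.equals(assignedAddr);
        }
    }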
- synchronized (zkEventWorkerWaitingList){ - zkEventWorkerWaitingList.clear(); - } - // Shutdown the threadpool executor service threadPoolExecutorService.shutdownNow(); - zkEventWorkers.shutdownNow(); regionStateStore.stop(); } protected void setEnabledTable(TableName tableName) { try { this.tableStateManager.setTableState(tableName, - ZooKeeperProtos.Table.State.ENABLED); - } catch (CoordinatedStateException e) { + TableState.State.ENABLED); + } catch (IOException e) { // here we can abort as it is the start up flow String errorMsg = "Unable to ensure that the table " + tableName + " will be" + " enabled because of a ZooKeeper issue"; @@ -3387,67 +2082,20 @@ public class AssignmentManager extends ZooKeeperListener { } } - /** - * Set region as OFFLINED up in zookeeper asynchronously. - * @param state - * @return True if we succeeded, false otherwise (State was incorrect or failed - * updating zk). - */ - private boolean asyncSetOfflineInZooKeeper(final RegionState state, - final AsyncCallback.StringCallback cb, final ServerName destination) { - if (!state.isClosed() && !state.isOffline()) { - this.server.abort("Unexpected state trying to OFFLINE; " + state, - new IllegalStateException()); - return false; - } - regionStates.updateRegionState(state.getRegion(), State.OFFLINE); - try { - ZKAssign.asyncCreateNodeOffline(watcher, state.getRegion(), - destination, cb, state); - } catch (KeeperException e) { - if (e instanceof NodeExistsException) { - LOG.warn("Node for " + state.getRegion() + " already exists"); - } else { - server.abort("Unexpected ZK exception creating/setting node OFFLINE", e); - } - return false; + private String onRegionFailedOpen(final RegionState current, + final HRegionInfo hri, final ServerName serverName) { + // The region must be opening on this server. + // If current state is failed_open on the same server, + // it could be a reportRegionTransition RPC retry. + if (current == null || !current.isOpeningOrFailedOpenOnServer(serverName)) { + return hri.getShortNameToLog() + " is not opening on " + serverName; } - return true; - } - private boolean deleteNodeInStates(String encodedName, - String desc, ServerName sn, EventType... types) { - try { - for (EventType et: types) { - if (ZKAssign.deleteNode(watcher, encodedName, et, sn)) { - return true; - } - } - LOG.info("Failed to delete the " + desc + " node for " - + encodedName + ". 
The node type may not match"); - } catch (NoNodeException e) { - if (LOG.isDebugEnabled()) { - LOG.debug("The " + desc + " node for " + encodedName + " already deleted"); - } - } catch (KeeperException ke) { - server.abort("Unexpected ZK exception deleting " + desc - + " node for the region " + encodedName, ke); + // Just return in case of retrying + if (current.isFailedOpen()) { + return null; } - return false; - } - - private void deleteMergingNode(String encodedName, ServerName sn) { - deleteNodeInStates(encodedName, "merging", sn, EventType.RS_ZK_REGION_MERGING, - EventType.RS_ZK_REQUEST_REGION_MERGE, EventType.RS_ZK_REGION_MERGED); - } - private void deleteSplittingNode(String encodedName, ServerName sn) { - deleteNodeInStates(encodedName, "splitting", sn, EventType.RS_ZK_REGION_SPLITTING, - EventType.RS_ZK_REQUEST_REGION_SPLIT, EventType.RS_ZK_REGION_SPLIT); - } - - private void onRegionFailedOpen( - final HRegionInfo hri, final ServerName sn) { String encodedName = hri.getEncodedName(); AtomicInteger failedOpenCount = failedOpenTracker.get(encodedName); if (failedOpenCount == null) { @@ -3477,398 +2125,507 @@ public class AssignmentManager extends ZooKeeperListener { // When there are more than one region server a new RS is selected as the // destination and the same is updated in the region plan. (HBASE-5546) if (getTableStateManager().isTableState(hri.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { + TableState.State.DISABLED, TableState.State.DISABLING) || + replicasToClose.contains(hri)) { offlineDisabledRegion(hri); - return; + return null; } - // ZK Node is in CLOSED state, assign it. - regionStates.updateRegionState(hri, RegionState.State.CLOSED); + regionStates.updateRegionState(hri, RegionState.State.CLOSED); // This below has to do w/ online enable/disable of a table removeClosedRegion(hri); try { - getRegionPlan(hri, sn, true); + getRegionPlan(hri, true); } catch (HBaseIOException e) { LOG.warn("Failed to get region plan", e); } - invokeAssign(hri, false); + invokeAssign(hri); } } + // Null means no error + return null; } - private void onRegionOpen( - final HRegionInfo hri, final ServerName sn, long openSeqNum) { - regionOnline(hri, sn, openSeqNum); - if (useZKForAssignment) { - try { - // Delete the ZNode if exists - ZKAssign.deleteNodeFailSilent(watcher, hri); - } catch (KeeperException ke) { - server.abort("Unexpected ZK exception deleting node " + hri, ke); - } + private String onRegionOpen(final RegionState current, final HRegionInfo hri, + final ServerName serverName, final RegionStateTransition transition) { + // The region must be opening on this server. + // If current state is already opened on the same server, + // it could be a reportRegionTransition RPC retry. + if (current == null || !current.isOpeningOrOpenedOnServer(serverName)) { + return hri.getShortNameToLog() + " is not opening on " + serverName; + } + + // Just return in case of retrying + if (current.isOpened()) { + return null; + } + + long openSeqNum = transition.hasOpenSeqNum() + ? 
transition.getOpenSeqNum() : HConstants.NO_SEQNUM; + if (openSeqNum < 0) { + return "Newly opened region has invalid open seq num " + openSeqNum; } + regionOnline(hri, serverName, openSeqNum); // reset the count, if any failedOpenTracker.remove(hri.getEncodedName()); if (getTableStateManager().isTableState(hri.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { + TableState.State.DISABLED, TableState.State.DISABLING)) { invokeUnAssign(hri); } + return null; } - private void onRegionClosed(final HRegionInfo hri) { - if (getTableStateManager().isTableState(hri.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { + private String onRegionClosed(final RegionState current, + final HRegionInfo hri, final ServerName serverName) { + // Region will be usually assigned right after closed. When a RPC retry comes + // in, the region may already have moved away from closed state. However, on the + // region server side, we don't care much about the response for this transition. + // We only make sure master has got and processed this report, either + // successfully or not. So this is fine, not a problem at all. + if (current == null || !current.isClosingOrClosedOnServer(serverName)) { + return hri.getShortNameToLog() + " is not closing on " + serverName; + } + + // Just return in case of retrying + if (current.isClosed()) { + return null; + } + + if (getTableStateManager().isTableState(hri.getTable(), TableState.State.DISABLED, + TableState.State.DISABLING) || replicasToClose.contains(hri)) { offlineDisabledRegion(hri); - return; + return null; } + regionStates.updateRegionState(hri, RegionState.State.CLOSED); sendRegionClosedNotification(hri); // This below has to do w/ online enable/disable of a table removeClosedRegion(hri); - invokeAssign(hri, false); + invokeAssign(hri); + return null; } - private String onRegionSplit(ServerName sn, TransitionCode code, - HRegionInfo p, HRegionInfo a, HRegionInfo b) { - RegionState rs_p = regionStates.getRegionState(p); + private String onRegionReadyToSplit(final RegionState current, final HRegionInfo hri, + final ServerName serverName, final RegionStateTransition transition) { + // The region must be opened on this server. + // If current state is already splitting on the same server, + // it could be a reportRegionTransition RPC retry. + if (current == null || !current.isSplittingOrOpenedOnServer(serverName)) { + return hri.getShortNameToLog() + " is not opening on " + serverName; + } + + // Just return in case of retrying + if (current.isSplitting()) { + return null; + } + + final HRegionInfo a = HRegionInfo.convert(transition.getRegionInfo(1)); + final HRegionInfo b = HRegionInfo.convert(transition.getRegionInfo(2)); RegionState rs_a = regionStates.getRegionState(a); RegionState rs_b = regionStates.getRegionState(b); - if (!(rs_p.isOpenOrSplittingOnServer(sn) - && (rs_a == null || rs_a.isOpenOrSplittingNewOnServer(sn)) - && (rs_b == null || rs_b.isOpenOrSplittingNewOnServer(sn)))) { - return "Not in state good for split"; + if (rs_a != null || rs_b != null) { + return "Some daughter is already existing. " + + "a=" + rs_a + ", b=" + rs_b; + } + + // Server holding is not updated at this stage. + // It is done after PONR. 
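The reportRegionTransition handlers above all follow one pattern: reject the report unless the region is in a compatible state on the reporting server, treat a repeat of the terminal state as a harmless RPC retry, then apply the change (for OPENED, after checking the open sequence number). A simplified sketch of the OPENED case; the enum and the return-null-or-error-string convention mirror the patch, everything else is a stand-in:

    public class OpenReportSketch {

        enum State { OPENING, OPENED, CLOSED, OFFLINE }

        /**
         * Validates an OPENED report. Returns null when the report is accepted
         * (or is a harmless retry), otherwise an error message for the reporter.
         */
        static String onRegionOpen(State current, String currentServer, String reportingServer,
                                   long openSeqNum) {
            // The region must be opening (or already opened) on the reporting server.
            if (current == null || currentServer == null || !currentServer.equals(reportingServer)
                    || (current != State.OPENING && current != State.OPENED)) {
                return "region is not opening on " + reportingServer;
            }
            if (current == State.OPENED) {
                return null;  // duplicate report from an RPC retry; nothing to do
            }
            if (openSeqNum < 0) {
                return "newly opened region has invalid open seq num " + openSeqNum;
            }
            // ... mark the region online at openSeqNum (omitted) ...
            return null;
        }
    }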
+ regionStates.updateRegionState(hri, State.SPLITTING); + regionStates.createRegionState( + a, State.SPLITTING_NEW, serverName, null); + regionStates.createRegionState( + b, State.SPLITTING_NEW, serverName, null); + return null; + } + + private String onRegionSplitPONR(final RegionState current, final HRegionInfo hri, + final ServerName serverName, final RegionStateTransition transition) { + // The region must be splitting on this server, and the daughters must be in + // splitting_new state. To check RPC retry, we use server holding info. + if (current == null || !current.isSplittingOnServer(serverName)) { + return hri.getShortNameToLog() + " is not splitting on " + serverName; } - regionStates.updateRegionState(a, State.SPLITTING_NEW, sn); - regionStates.updateRegionState(b, State.SPLITTING_NEW, sn); - regionStates.updateRegionState(p, State.SPLITTING); + final HRegionInfo a = HRegionInfo.convert(transition.getRegionInfo(1)); + final HRegionInfo b = HRegionInfo.convert(transition.getRegionInfo(2)); + RegionState rs_a = regionStates.getRegionState(a); + RegionState rs_b = regionStates.getRegionState(b); - if (code == TransitionCode.SPLIT) { - if (TEST_SKIP_SPLIT_HANDLING) { - return "Skipping split message, TEST_SKIP_SPLIT_HANDLING is set"; - } - regionOffline(p, State.SPLIT); - regionOnline(a, sn, 1); - regionOnline(b, sn, 1); - - // User could disable the table before master knows the new region. - if (getTableStateManager().isTableState(p.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { - invokeUnAssign(a); - invokeUnAssign(b); - } - } else if (code == TransitionCode.SPLIT_PONR) { - try { - regionStates.splitRegion(p, a, b, sn); - } catch (IOException ioe) { - LOG.info("Failed to record split region " + p.getShortNameToLog()); - return "Failed to record the splitting in meta"; - } - } else if (code == TransitionCode.SPLIT_REVERTED) { - regionOnline(p, sn); - regionOffline(a); - regionOffline(b); - - if (getTableStateManager().isTableState(p.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { - invokeUnAssign(p); - } + // Master could have restarted and lost the new region + // states, if so, they must be lost together + if (rs_a == null && rs_b == null) { + rs_a = regionStates.createRegionState( + a, State.SPLITTING_NEW, serverName, null); + rs_b = regionStates.createRegionState( + b, State.SPLITTING_NEW, serverName, null); + } + + if (rs_a == null || !rs_a.isSplittingNewOnServer(serverName) + || rs_b == null || !rs_b.isSplittingNewOnServer(serverName)) { + return "Some daughter is not known to be splitting on " + serverName + + ", a=" + rs_a + ", b=" + rs_b; + } + + // Just return in case of retrying + if (!regionStates.isRegionOnServer(hri, serverName)) { + return null; + } + + try { + regionStates.splitRegion(hri, a, b, serverName); + } catch (IOException ioe) { + LOG.info("Failed to record split region " + hri.getShortNameToLog()); + return "Failed to record the splitting in meta"; } return null; } - private String onRegionMerge(ServerName sn, TransitionCode code, - HRegionInfo p, HRegionInfo a, HRegionInfo b) { - RegionState rs_p = regionStates.getRegionState(p); + private String onRegionSplit(final RegionState current, final HRegionInfo hri, + final ServerName serverName, final RegionStateTransition transition) { + // The region must be splitting on this server, and the daughters must be in + // splitting_new state. 
+ // If current state is already split on the same server, + // it could be a reportRegionTransition RPC retry. + if (current == null || !current.isSplittingOrSplitOnServer(serverName)) { + return hri.getShortNameToLog() + " is not splitting on " + serverName; + } + + // Just return in case of retrying + if (current.isSplit()) { + return null; + } + + final HRegionInfo a = HRegionInfo.convert(transition.getRegionInfo(1)); + final HRegionInfo b = HRegionInfo.convert(transition.getRegionInfo(2)); RegionState rs_a = regionStates.getRegionState(a); RegionState rs_b = regionStates.getRegionState(b); - if (!(rs_a.isOpenOrMergingOnServer(sn) && rs_b.isOpenOrMergingOnServer(sn) - && (rs_p == null || rs_p.isOpenOrMergingNewOnServer(sn)))) { - return "Not in state good for merge"; - } - - regionStates.updateRegionState(a, State.MERGING); - regionStates.updateRegionState(b, State.MERGING); - regionStates.updateRegionState(p, State.MERGING_NEW, sn); - - String encodedName = p.getEncodedName(); - if (code == TransitionCode.READY_TO_MERGE) { - mergingRegions.put(encodedName, - new PairOfSameType(a, b)); - } else if (code == TransitionCode.MERGED) { - mergingRegions.remove(encodedName); - regionOffline(a, State.MERGED); - regionOffline(b, State.MERGED); - regionOnline(p, sn, 1); - - // User could disable the table before master knows the new region. - if (getTableStateManager().isTableState(p.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { - invokeUnAssign(p); - } - } else if (code == TransitionCode.MERGE_PONR) { - try { - regionStates.mergeRegions(p, a, b, sn); - } catch (IOException ioe) { - LOG.info("Failed to record merged region " + p.getShortNameToLog()); - return "Failed to record the merging in meta"; - } + if (rs_a == null || !rs_a.isSplittingNewOnServer(serverName) + || rs_b == null || !rs_b.isSplittingNewOnServer(serverName)) { + return "Some daughter is not known to be splitting on " + serverName + + ", a=" + rs_a + ", b=" + rs_b; + } + + if (TEST_SKIP_SPLIT_HANDLING) { + return "Skipping split message, TEST_SKIP_SPLIT_HANDLING is set"; + } + regionOffline(hri, State.SPLIT); + regionOnline(a, serverName, 1); + regionOnline(b, serverName, 1); + + // User could disable the table before master knows the new region. + if (getTableStateManager().isTableState(hri.getTable(), + TableState.State.DISABLED, TableState.State.DISABLING)) { + invokeUnAssign(a); + invokeUnAssign(b); } else { - mergingRegions.remove(encodedName); - regionOnline(a, sn); - regionOnline(b, sn); - regionOffline(p); - - if (getTableStateManager().isTableState(p.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { - invokeUnAssign(a); - invokeUnAssign(b); - } + Callable splitReplicasCallable = new Callable() { + @Override + public Object call() { + doSplittingOfReplicas(hri, a, b); + return null; + } + }; + threadPoolExecutorService.submit(splitReplicasCallable); } return null; } - /** - * A helper to handle region merging transition event. - * It transitions merging regions to MERGING state. - */ - private boolean handleRegionMerging(final RegionTransition rt, final String encodedName, - final String prettyPrintedRegionName, final ServerName sn) { - if (!serverManager.isServerOnline(sn)) { - LOG.warn("Dropped merging! 
ServerName=" + sn + " unknown."); - return false; + private String onRegionSplitReverted(final RegionState current, final HRegionInfo hri, + final ServerName serverName, final RegionStateTransition transition) { + // The region must be splitting on this server, and the daughters must be in + // splitting_new state. + // If the region is in open state, it could be an RPC retry. + if (current == null || !current.isSplittingOrOpenedOnServer(serverName)) { + return hri.getShortNameToLog() + " is not splitting on " + serverName; } - byte [] payloadOfMerging = rt.getPayload(); - List mergingRegions; - try { - mergingRegions = HRegionInfo.parseDelimitedFrom( - payloadOfMerging, 0, payloadOfMerging.length); - } catch (IOException e) { - LOG.error("Dropped merging! Failed reading " + rt.getEventType() - + " payload for " + prettyPrintedRegionName); - return false; - } - assert mergingRegions.size() == 3; - HRegionInfo p = mergingRegions.get(0); - HRegionInfo hri_a = mergingRegions.get(1); - HRegionInfo hri_b = mergingRegions.get(2); - RegionState rs_p = regionStates.getRegionState(p); - RegionState rs_a = regionStates.getRegionState(hri_a); - RegionState rs_b = regionStates.getRegionState(hri_b); + // Just return in case of retrying + if (current.isOpened()) { + return null; + } - if (!((rs_a == null || rs_a.isOpenOrMergingOnServer(sn)) - && (rs_b == null || rs_b.isOpenOrMergingOnServer(sn)) - && (rs_p == null || rs_p.isOpenOrMergingNewOnServer(sn)))) { - LOG.warn("Dropped merging! Not in state good for MERGING; rs_p=" - + rs_p + ", rs_a=" + rs_a + ", rs_b=" + rs_b); - return false; + final HRegionInfo a = HRegionInfo.convert(transition.getRegionInfo(1)); + final HRegionInfo b = HRegionInfo.convert(transition.getRegionInfo(2)); + RegionState rs_a = regionStates.getRegionState(a); + RegionState rs_b = regionStates.getRegionState(b); + if (rs_a == null || !rs_a.isSplittingNewOnServer(serverName) + || rs_b == null || !rs_b.isSplittingNewOnServer(serverName)) { + return "Some daughter is not known to be splitting on " + serverName + + ", a=" + rs_a + ", b=" + rs_b; } - EventType et = rt.getEventType(); - if (et == EventType.RS_ZK_REQUEST_REGION_MERGE) { - try { - RegionMergeCoordination.RegionMergeDetails std = - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getRegionMergeCoordination().getDefaultDetails(); - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getRegionMergeCoordination().processRegionMergeRequest(p, hri_a, hri_b, sn, std); - if (((ZkRegionMergeCoordination.ZkRegionMergeDetails) std).getZnodeVersion() == -1) { - byte[] data = ZKAssign.getData(watcher, encodedName); - EventType currentType = null; - if (data != null) { - RegionTransition newRt = RegionTransition.parseFrom(data); - currentType = newRt.getEventType(); - } - if (currentType == null || (currentType != EventType.RS_ZK_REGION_MERGED - && currentType != EventType.RS_ZK_REGION_MERGING)) { - LOG.warn("Failed to transition pending_merge node " - + encodedName + " to merging, it's now " + currentType); - return false; - } - } - } catch (Exception e) { - LOG.warn("Failed to transition pending_merge node " - + encodedName + " to merging", e); - return false; - } + regionOnline(hri, serverName); + regionOffline(a); + regionOffline(b); + if (getTableStateManager().isTableState(hri.getTable(), + TableState.State.DISABLED, TableState.State.DISABLING)) { + invokeUnAssign(hri); } + return null; + } - synchronized (regionStates) { - regionStates.updateRegionState(hri_a, State.MERGING); - 
regionStates.updateRegionState(hri_b, State.MERGING); - regionStates.updateRegionState(p, State.MERGING_NEW, sn); + private String onRegionReadyToMerge(final RegionState current, final HRegionInfo hri, + final ServerName serverName, final RegionStateTransition transition) { + // The region must be new, and the daughters must be open on this server. + // If the region is in merge_new state, it could be an RPC retry. + if (current != null && !current.isMergingNewOnServer(serverName)) { + return "Merging daughter region already exists, p=" + current; + } - if (et != EventType.RS_ZK_REGION_MERGED) { - this.mergingRegions.put(encodedName, - new PairOfSameType(hri_a, hri_b)); - } else { - this.mergingRegions.remove(encodedName); - regionOffline(hri_a, State.MERGED); - regionOffline(hri_b, State.MERGED); - regionOnline(p, sn); - } + // Just return in case of retrying + if (current != null) { + return null; } - if (et == EventType.RS_ZK_REGION_MERGED) { - LOG.debug("Handling MERGED event for " + encodedName + "; deleting node"); - // Remove region from ZK - try { - boolean successful = false; - while (!successful) { - // It's possible that the RS tickles in between the reading of the - // znode and the deleting, so it's safe to retry. - successful = ZKAssign.deleteNode(watcher, encodedName, - EventType.RS_ZK_REGION_MERGED, sn); - } - } catch (KeeperException e) { - if (e instanceof NoNodeException) { - String znodePath = ZKUtil.joinZNode(watcher.splitLogZNode, encodedName); - LOG.debug("The znode " + znodePath + " does not exist. May be deleted already."); - } else { - server.abort("Error deleting MERGED node " + encodedName, e); - } - } - LOG.info("Handled MERGED event; merged=" + p.getRegionNameAsString() - + ", region_a=" + hri_a.getRegionNameAsString() + ", region_b=" - + hri_b.getRegionNameAsString() + ", on " + sn); - - // User could disable the table before master knows the new region. - if (tableStateManager.isTableState(p.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { - unassign(p); + final HRegionInfo a = HRegionInfo.convert(transition.getRegionInfo(1)); + final HRegionInfo b = HRegionInfo.convert(transition.getRegionInfo(2)); + Set encodedNames = new HashSet(2); + encodedNames.add(a.getEncodedName()); + encodedNames.add(b.getEncodedName()); + Map locks = locker.acquireLocks(encodedNames); + try { + RegionState rs_a = regionStates.getRegionState(a); + RegionState rs_b = regionStates.getRegionState(b); + if (rs_a == null || !rs_a.isOpenedOnServer(serverName) + || rs_b == null || !rs_b.isOpenedOnServer(serverName)) { + return "Some daughter is not in a state to merge on " + serverName + + ", a=" + rs_a + ", b=" + rs_b; + } + + regionStates.updateRegionState(a, State.MERGING); + regionStates.updateRegionState(b, State.MERGING); + regionStates.createRegionState( + hri, State.MERGING_NEW, serverName, null); + return null; + } finally { + for (Lock lock: locks.values()) { + lock.unlock(); } } - return true; } - /** - * A helper to handle region splitting transition event. - */ - private boolean handleRegionSplitting(final RegionTransition rt, final String encodedName, - final String prettyPrintedRegionName, final ServerName sn) { - if (!serverManager.isServerOnline(sn)) { - LOG.warn("Dropped splitting! 
ServerName=" + sn + " unknown."); - return false; + private String onRegionMergePONR(final RegionState current, final HRegionInfo hri, + final ServerName serverName, final RegionStateTransition transition) { + // The region must be in merging_new state, and the daughters must be + // merging. To check RPC retry, we use server holding info. + if (current != null && !current.isMergingNewOnServer(serverName)) { + return hri.getShortNameToLog() + " is not merging on " + serverName; + } + + final HRegionInfo a = HRegionInfo.convert(transition.getRegionInfo(1)); + final HRegionInfo b = HRegionInfo.convert(transition.getRegionInfo(2)); + RegionState rs_a = regionStates.getRegionState(a); + RegionState rs_b = regionStates.getRegionState(b); + if (rs_a == null || !rs_a.isMergingOnServer(serverName) + || rs_b == null || !rs_b.isMergingOnServer(serverName)) { + return "Some daughter is not known to be merging on " + serverName + + ", a=" + rs_a + ", b=" + rs_b; } - byte [] payloadOfSplitting = rt.getPayload(); - List splittingRegions; + + // Master could have restarted and lost the new region state + if (current == null) { + regionStates.createRegionState( + hri, State.MERGING_NEW, serverName, null); + } + + // Just return in case of retrying + if (regionStates.isRegionOnServer(hri, serverName)) { + return null; + } + try { - splittingRegions = HRegionInfo.parseDelimitedFrom( - payloadOfSplitting, 0, payloadOfSplitting.length); - } catch (IOException e) { - LOG.error("Dropped splitting! Failed reading " + rt.getEventType() - + " payload for " + prettyPrintedRegionName); - return false; + regionStates.mergeRegions(hri, a, b, serverName); + } catch (IOException ioe) { + LOG.info("Failed to record merged region " + hri.getShortNameToLog()); + return "Failed to record the merging in meta"; } - assert splittingRegions.size() == 2; - HRegionInfo hri_a = splittingRegions.get(0); - HRegionInfo hri_b = splittingRegions.get(1); + return null; + } - RegionState rs_p = regionStates.getRegionState(encodedName); - RegionState rs_a = regionStates.getRegionState(hri_a); - RegionState rs_b = regionStates.getRegionState(hri_b); + private String onRegionMerged(final RegionState current, final HRegionInfo hri, + final ServerName serverName, final RegionStateTransition transition) { + // The region must be in merging_new state, and the daughters must be + // merging on this server. + // If current state is already opened on the same server, + // it could be a reportRegionTransition RPC retry. + if (current == null || !current.isMergingNewOrOpenedOnServer(serverName)) { + return hri.getShortNameToLog() + " is not merging on " + serverName; + } - if (!((rs_p == null || rs_p.isOpenOrSplittingOnServer(sn)) - && (rs_a == null || rs_a.isOpenOrSplittingNewOnServer(sn)) - && (rs_b == null || rs_b.isOpenOrSplittingNewOnServer(sn)))) { - LOG.warn("Dropped splitting! 
Not in state good for SPLITTING; rs_p=" - + rs_p + ", rs_a=" + rs_a + ", rs_b=" + rs_b); - return false; + // Just return in case of retrying + if (current.isOpened()) { + return null; } - if (rs_p == null) { - // Splitting region should be online - rs_p = regionStates.updateRegionState(rt, State.OPEN); - if (rs_p == null) { - LOG.warn("Received splitting for region " + prettyPrintedRegionName - + " from server " + sn + " but it doesn't exist anymore," - + " probably already processed its split"); - return false; - } - regionStates.regionOnline(rs_p.getRegion(), sn); + final HRegionInfo a = HRegionInfo.convert(transition.getRegionInfo(1)); + final HRegionInfo b = HRegionInfo.convert(transition.getRegionInfo(2)); + RegionState rs_a = regionStates.getRegionState(a); + RegionState rs_b = regionStates.getRegionState(b); + if (rs_a == null || !rs_a.isMergingOnServer(serverName) + || rs_b == null || !rs_b.isMergingOnServer(serverName)) { + return "Some daughter is not known to be merging on " + serverName + + ", a=" + rs_a + ", b=" + rs_b; } - HRegionInfo p = rs_p.getRegion(); - EventType et = rt.getEventType(); - if (et == EventType.RS_ZK_REQUEST_REGION_SPLIT) { - try { - SplitTransactionDetails std = - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getSplitTransactionCoordination().getDefaultDetails(); - if (((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getSplitTransactionCoordination().processTransition(p, hri_a, hri_b, sn, std) == -1) { - byte[] data = ZKAssign.getData(watcher, encodedName); - EventType currentType = null; - if (data != null) { - RegionTransition newRt = RegionTransition.parseFrom(data); - currentType = newRt.getEventType(); - } - if (currentType == null - || (currentType != EventType.RS_ZK_REGION_SPLIT && currentType != EventType.RS_ZK_REGION_SPLITTING)) { - LOG.warn("Failed to transition pending_split node " + encodedName - + " to splitting, it's now " + currentType); - return false; - } + regionOffline(a, State.MERGED); + regionOffline(b, State.MERGED); + regionOnline(hri, serverName, 1); + + // User could disable the table before master knows the new region. + if (getTableStateManager().isTableState(hri.getTable(), + TableState.State.DISABLED, TableState.State.DISABLING)) { + invokeUnAssign(hri); + } else { + Callable mergeReplicasCallable = new Callable() { + @Override + public Object call() { + doMergingOfReplicas(hri, a, b); + return null; } - } catch (Exception e) { - LOG.warn("Failed to transition pending_split node " + encodedName + " to splitting", e); - return false; - } + }; + threadPoolExecutorService.submit(mergeReplicasCallable); } + return null; + } - synchronized (regionStates) { - splitRegions.put(p, new PairOfSameType(hri_a, hri_b)); - regionStates.updateRegionState(hri_a, State.SPLITTING_NEW, sn); - regionStates.updateRegionState(hri_b, State.SPLITTING_NEW, sn); - regionStates.updateRegionState(rt, State.SPLITTING); + private String onRegionMergeReverted(final RegionState current, final HRegionInfo hri, + final ServerName serverName, final RegionStateTransition transition) { + // The region must be in merging_new state, and the daughters must be + // merging on this server. + // If the region is in offline state, it could be an RPC retry. + if (current == null || !current.isMergingNewOrOfflineOnServer(serverName)) { + return hri.getShortNameToLog() + " is not merging on " + serverName; + } - // The below is for testing ONLY! 
We can't do fault injection easily, so - // resort to this kinda uglyness -- St.Ack 02/25/2011. - if (TEST_SKIP_SPLIT_HANDLING) { - LOG.warn("Skipping split message, TEST_SKIP_SPLIT_HANDLING is set"); - return true; // return true so that the splitting node stays - } + // Just return in case of retrying + if (current.isOffline()) { + return null; + } - if (et == EventType.RS_ZK_REGION_SPLIT) { - regionOffline(p, State.SPLIT); - regionOnline(hri_a, sn); - regionOnline(hri_b, sn); - splitRegions.remove(p); - } + final HRegionInfo a = HRegionInfo.convert(transition.getRegionInfo(1)); + final HRegionInfo b = HRegionInfo.convert(transition.getRegionInfo(2)); + RegionState rs_a = regionStates.getRegionState(a); + RegionState rs_b = regionStates.getRegionState(b); + if (rs_a == null || !rs_a.isMergingOnServer(serverName) + || rs_b == null || !rs_b.isMergingOnServer(serverName)) { + return "Some daughter is not known to be merging on " + serverName + + ", a=" + rs_a + ", b=" + rs_b; } - if (et == EventType.RS_ZK_REGION_SPLIT) { - LOG.debug("Handling SPLIT event for " + encodedName + "; deleting node"); - // Remove region from ZK - try { - boolean successful = false; - while (!successful) { - // It's possible that the RS tickles in between the reading of the - // znode and the deleting, so it's safe to retry. - successful = ZKAssign.deleteNode(watcher, encodedName, - EventType.RS_ZK_REGION_SPLIT, sn); - } - } catch (KeeperException e) { - if (e instanceof NoNodeException) { - String znodePath = ZKUtil.joinZNode(watcher.splitLogZNode, encodedName); - LOG.debug("The znode " + znodePath + " does not exist. May be deleted already."); - } else { - server.abort("Error deleting SPLIT node " + encodedName, e); + regionOnline(a, serverName); + regionOnline(b, serverName); + regionOffline(hri); + + if (getTableStateManager().isTableState(hri.getTable(), + TableState.State.DISABLED, TableState.State.DISABLING)) { + invokeUnAssign(a); + invokeUnAssign(b); + } + return null; + } + + private void doMergingOfReplicas(HRegionInfo mergedHri, final HRegionInfo hri_a, + final HRegionInfo hri_b) { + // Close replicas for the original unmerged regions. create/assign new replicas + // for the merged parent. + List unmergedRegions = new ArrayList(); + unmergedRegions.add(hri_a); + unmergedRegions.add(hri_b); + Map> map = regionStates.getRegionAssignments(unmergedRegions); + Collection> c = map.values(); + for (List l : c) { + for (HRegionInfo h : l) { + if (!RegionReplicaUtil.isDefaultReplica(h)) { + LOG.debug("Unassigning un-merged replica " + h); + unassign(h); } } - LOG.info("Handled SPLIT event; parent=" + p.getRegionNameAsString() - + ", daughter a=" + hri_a.getRegionNameAsString() + ", daughter b=" - + hri_b.getRegionNameAsString() + ", on " + sn); - - // User could disable the table before master knows the new region. - if (tableStateManager.isTableState(p.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { - unassign(hri_a); - unassign(hri_b); - } } - return true; + int numReplicas = 1; + try { + numReplicas = ((MasterServices)server).getTableDescriptors().get(mergedHri.getTable()). + getRegionReplication(); + } catch (IOException e) { + LOG.warn("Couldn't get the replication attribute of the table " + mergedHri.getTable() + + " due to " + e.getMessage() + ". 
The assignment of replicas for the merged region " + + "will not be done"); + } + List regions = new ArrayList(); + for (int i = 1; i < numReplicas; i++) { + regions.add(RegionReplicaUtil.getRegionInfoForReplica(mergedHri, i)); + } + try { + assign(regions); + } catch (IOException ioe) { + LOG.warn("Couldn't assign all replica(s) of region " + mergedHri + " because of " + + ioe.getMessage()); + } catch (InterruptedException ie) { + LOG.warn("Couldn't assign all replica(s) of region " + mergedHri+ " because of " + + ie.getMessage()); + } + } + + private void doSplittingOfReplicas(final HRegionInfo parentHri, final HRegionInfo hri_a, + final HRegionInfo hri_b) { + // create new regions for the replica, and assign them to match with the + // current replica assignments. If replica1 of parent is assigned to RS1, + // the replica1s of daughters will be on the same machine + int numReplicas = 1; + try { + numReplicas = ((MasterServices)server).getTableDescriptors().get(parentHri.getTable()). + getRegionReplication(); + } catch (IOException e) { + LOG.warn("Couldn't get the replication attribute of the table " + parentHri.getTable() + + " due to " + e.getMessage() + ". The assignment of daughter replicas " + + "replicas will not be done"); + } + // unassign the old replicas + List parentRegion = new ArrayList(); + parentRegion.add(parentHri); + Map> currentAssign = + regionStates.getRegionAssignments(parentRegion); + Collection> c = currentAssign.values(); + for (List l : c) { + for (HRegionInfo h : l) { + if (!RegionReplicaUtil.isDefaultReplica(h)) { + LOG.debug("Unassigning parent's replica " + h); + unassign(h); + } + } + } + // assign daughter replicas + Map map = new HashMap(); + for (int i = 1; i < numReplicas; i++) { + prepareDaughterReplicaForAssignment(hri_a, parentHri, i, map); + prepareDaughterReplicaForAssignment(hri_b, parentHri, i, map); + } + try { + assign(map); + } catch (IOException e) { + LOG.warn("Caught exception " + e + " while trying to assign replica(s) of daughter(s)"); + } catch (InterruptedException e) { + LOG.warn("Caught exception " + e + " while trying to assign replica(s) of daughter(s)"); + } + } + + private void prepareDaughterReplicaForAssignment(HRegionInfo daughterHri, HRegionInfo parentHri, + int replicaId, Map map) { + HRegionInfo parentReplica = RegionReplicaUtil.getRegionInfoForReplica(parentHri, replicaId); + HRegionInfo daughterReplica = RegionReplicaUtil.getRegionInfoForReplica(daughterHri, + replicaId); + LOG.debug("Created replica region for daughter " + daughterReplica); + ServerName sn; + if ((sn = regionStates.getRegionServerOfRegion(parentReplica)) != null) { + map.put(daughterReplica, sn); + } else { + List servers = serverManager.getOnlineServersList(); + sn = servers.get((new Random(System.currentTimeMillis())).nextInt(servers.size())); + map.put(daughterReplica, sn); + } + } + + public Set getReplicasToClose() { + return replicasToClose; } /** @@ -3885,6 +2642,25 @@ public class AssignmentManager extends ZooKeeperListener { // Tell our listeners that a region was closed sendRegionClosedNotification(regionInfo); + // also note that all the replicas of the primary should be closed + if (state != null && state.equals(State.SPLIT)) { + Collection c = new ArrayList(1); + c.add(regionInfo); + Map> map = regionStates.getRegionAssignments(c); + Collection> allReplicas = map.values(); + for (List list : allReplicas) { + replicasToClose.addAll(list); + } + } + else if (state != null && state.equals(State.MERGED)) { + Collection c = new ArrayList(1); + 
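        // (Editorial comment, not part of the patch) This MERGED branch mirrors the SPLIT branch
        // above: it looks up every replica currently assigned for this primary and adds it to
        // replicasToClose so the now-stale replicas are closed as well; the two branches could
        // arguably share a small helper.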
c.add(regionInfo); + Map> map = regionStates.getRegionAssignments(c); + Collection> allReplicas = map.values(); + for (List list : allReplicas) { + replicasToClose.addAll(list); + } + } } private void sendRegionOpenedNotification(final HRegionInfo regionInfo, @@ -3952,76 +2728,61 @@ public class AssignmentManager extends ZooKeeperListener { final RegionStateTransition transition) { TransitionCode code = transition.getTransitionCode(); HRegionInfo hri = HRegionInfo.convert(transition.getRegionInfo(0)); - RegionState current = regionStates.getRegionState(hri); - if (LOG.isDebugEnabled()) { - LOG.debug("Got transition " + code + " for " - + (current != null ? current.toString() : hri.getShortNameToLog()) - + " from " + serverName); - } - String errorMsg = null; - switch (code) { - case OPENED: - if (current != null && current.isOpened() && current.isOnServer(serverName)) { - LOG.info("Region " + hri.getShortNameToLog() + " is already " + current.getState() + " on " - + serverName); + Lock lock = locker.acquireLock(hri.getEncodedName()); + try { + RegionState current = regionStates.getRegionState(hri); + if (LOG.isDebugEnabled()) { + LOG.debug("Got transition " + code + " for " + + (current != null ? current.toString() : hri.getShortNameToLog()) + + " from " + serverName); + } + String errorMsg = null; + switch (code) { + case OPENED: + errorMsg = onRegionOpen(current, hri, serverName, transition); + break; + case FAILED_OPEN: + errorMsg = onRegionFailedOpen(current, hri, serverName); + break; + case CLOSED: + errorMsg = onRegionClosed(current, hri, serverName); + break; + case READY_TO_SPLIT: + errorMsg = onRegionReadyToSplit(current, hri, serverName, transition); + break; + case SPLIT_PONR: + errorMsg = onRegionSplitPONR(current, hri, serverName, transition); + break; + case SPLIT: + errorMsg = onRegionSplit(current, hri, serverName, transition); + break; + case SPLIT_REVERTED: + errorMsg = onRegionSplitReverted(current, hri, serverName, transition); + break; + case READY_TO_MERGE: + errorMsg = onRegionReadyToMerge(current, hri, serverName, transition); + break; + case MERGE_PONR: + errorMsg = onRegionMergePONR(current, hri, serverName, transition); + break; + case MERGED: + errorMsg = onRegionMerged(current, hri, serverName, transition); + break; + case MERGE_REVERTED: + errorMsg = onRegionMergeReverted(current, hri, serverName, transition); break; - } - case FAILED_OPEN: - if (current == null - || !current.isPendingOpenOrOpeningOnServer(serverName)) { - errorMsg = hri.getShortNameToLog() - + " is not pending open on " + serverName; - } else if (code == TransitionCode.FAILED_OPEN) { - onRegionFailedOpen(hri, serverName); - } else { - long openSeqNum = HConstants.NO_SEQNUM; - if (transition.hasOpenSeqNum()) { - openSeqNum = transition.getOpenSeqNum(); - } - if (openSeqNum < 0) { - errorMsg = "Newly opened region has invalid open seq num " + openSeqNum; - } else { - onRegionOpen(hri, serverName, openSeqNum); - } - } - break; - case CLOSED: - if (current == null - || !current.isPendingCloseOrClosingOnServer(serverName)) { - errorMsg = hri.getShortNameToLog() - + " is not pending close on " + serverName; - } else { - onRegionClosed(hri); + default: + errorMsg = "Unexpected transition code " + code; } - break; - - case READY_TO_SPLIT: - case SPLIT_PONR: - case SPLIT: - case SPLIT_REVERTED: - errorMsg = onRegionSplit(serverName, code, hri, - HRegionInfo.convert(transition.getRegionInfo(1)), - HRegionInfo.convert(transition.getRegionInfo(2))); - break; - - case READY_TO_MERGE: - case MERGE_PONR: - 
case MERGED: - case MERGE_REVERTED: - errorMsg = onRegionMerge(serverName, code, hri, - HRegionInfo.convert(transition.getRegionInfo(1)), - HRegionInfo.convert(transition.getRegionInfo(2))); - break; - - default: - errorMsg = "Unexpected transition code " + code; - } - if (errorMsg != null) { - LOG.error("Failed to transtion region from " + current + " to " - + code + " by " + serverName + ": " + errorMsg); + if (errorMsg != null) { + LOG.info("Could not transition region from " + current + " on " + + code + " by " + serverName + ": " + errorMsg); + } + return errorMsg; + } finally { + lock.unlock(); } - return errorMsg; } /** diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/BulkReOpen.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/BulkReOpen.java index 4fb4856..606dce4 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/BulkReOpen.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/BulkReOpen.java @@ -123,7 +123,7 @@ public class BulkReOpen extends BulkAssigner { if (regionStates.isRegionInTransition(region)) { continue; } - assignmentManager.unassign(region, false); + assignmentManager.unassign(region); while (regionStates.isRegionInTransition(region) && !server.isStopped()) { regionStates.waitForUpdate(100); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java index 25c405c..9f71b90 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java @@ -166,7 +166,7 @@ public class CatalogJanitor extends Chore { // Run full scan of hbase:meta catalog table passing in our custom visitor with // the start row - MetaScanner.metaScan(server.getConfiguration(), this.connection, visitor, tableName); + MetaScanner.metaScan(this.connection, visitor, tableName); return new Triple, Map>( count.get(), mergedRegions, splitParents); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java index 5fffcaa..6e7024c 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java @@ -253,8 +253,6 @@ public class ClusterStatusPublisher extends Chore { @Override public void connect(Configuration conf) throws IOException { - NetworkInterface ni = NetworkInterface.getByInetAddress(Addressing.getIpAddress()); - String mcAddress = conf.get(HConstants.STATUS_MULTICAST_ADDRESS, HConstants.DEFAULT_STATUS_MULTICAST_ADDRESS); int port = conf.getInt(HConstants.STATUS_MULTICAST_PORT, @@ -269,13 +267,19 @@ public class ClusterStatusPublisher extends Chore { } final InetSocketAddress isa = new InetSocketAddress(mcAddress, port); - InternetProtocolFamily family = InternetProtocolFamily.IPv4; + + InternetProtocolFamily family; + InetAddress localAddress; if (ina instanceof Inet6Address) { + localAddress = Addressing.getIp6Address(); family = InternetProtocolFamily.IPv6; + }else{ + localAddress = Addressing.getIp4Address(); + family = InternetProtocolFamily.IPv4; } + NetworkInterface ni = NetworkInterface.getByInetAddress(localAddress); Bootstrap b = new Bootstrap(); - b.group(group) .channelFactory(new HBaseDatagramChannelFactory(NioDatagramChannel.class, family)) 
.option(ChannelOption.SO_REUSEADDR, true) diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java index 738be01..0bb02f2 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java @@ -60,7 +60,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.MasterNotRunningException; -import org.apache.hadoop.hbase.MetaMigrationConvertingToPB; import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.NamespaceNotFoundException; @@ -78,6 +77,7 @@ import org.apache.hadoop.hbase.client.MetaScanner; import org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitor; import org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitorBase; import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.executor.ExecutorType; @@ -85,7 +85,6 @@ import org.apache.hadoop.hbase.ipc.RequestContext; import org.apache.hadoop.hbase.ipc.RpcServer; import org.apache.hadoop.hbase.ipc.ServerNotRunningYetException; import org.apache.hadoop.hbase.master.MasterRpcServices.BalanceSwitchMode; -import org.apache.hadoop.hbase.master.RegionState.State; import org.apache.hadoop.hbase.master.balancer.BalancerChore; import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer; import org.apache.hadoop.hbase.master.balancer.ClusterStatusChore; @@ -109,8 +108,8 @@ import org.apache.hadoop.hbase.monitoring.TaskMonitor; import org.apache.hadoop.hbase.procedure.MasterProcedureManagerHost; import org.apache.hadoop.hbase.procedure.flush.MasterFlushTableProcedureManager; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionServerInfo; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.SplitLogTask.RecoveryMode; +import org.apache.hadoop.hbase.quotas.MasterQuotaManager; import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.RSRpcServices; import org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost; @@ -120,7 +119,6 @@ import org.apache.hadoop.hbase.security.UserProvider; import org.apache.hadoop.hbase.util.Addressing; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.CompressionTest; -import org.apache.hadoop.hbase.util.ConfigUtil; import org.apache.hadoop.hbase.util.EncryptionTest; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.HFileArchiveUtil; @@ -128,6 +126,7 @@ import org.apache.hadoop.hbase.util.HasThread; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.util.Threads; import org.apache.hadoop.hbase.util.VersionInfo; +import org.apache.hadoop.hbase.util.ZKDataMigrator; import org.apache.hadoop.hbase.zookeeper.DrainingServerTracker; import org.apache.hadoop.hbase.zookeeper.LoadBalancerTracker; import org.apache.hadoop.hbase.zookeeper.MasterAddressTracker; @@ -207,8 +206,7 @@ public class HMaster extends HRegionServer implements MasterServices, Server { + " consider submitting a bug report including a thread dump of this 
process."); if (haltOnTimeout) { LOG.error("Zombie Master exiting. Thread dump to stdout"); - org.apache.hadoop.util.ReflectionUtils.printThreadInfo( - new PrintWriter(System.out), "Zombie HMaster"); + Threads.printThreadInfo(System.out, "Zombie HMaster"); System.exit(-1); } } @@ -294,6 +292,11 @@ public class HMaster extends HRegionServer implements MasterServices, Server { // monitor for distributed procedures MasterProcedureManagerHost mpmHost; + private MasterQuotaManager quotaManager; + + // handle table states + private TableStateManager tableStateManager; + /** flag used in test cases in order to simulate RS failures during master initialization */ private volatile boolean initializationBeforeMetaAssignment = false; @@ -327,12 +330,11 @@ public class HMaster extends HRegionServer implements MasterServices, Server { * #finishActiveMasterInitialization(MonitoredTask) after * the master becomes the active one. * - * @throws InterruptedException * @throws KeeperException * @throws IOException */ public HMaster(final Configuration conf, CoordinatedStateManager csm) - throws IOException, KeeperException, InterruptedException { + throws IOException, KeeperException { super(conf, csm); this.rsFatals = new MemoryBoundedLogMessageBuffer( conf.getLong("hbase.master.buffer.for.rs.fatals", 1*1024*1024)); @@ -522,8 +524,7 @@ public class HMaster extends HRegionServer implements MasterServices, Server { this.loadBalancerTracker.start(); this.assignmentManager = new AssignmentManager(this, serverManager, this.balancer, this.service, this.metricsMaster, - this.tableLockManager); - zooKeeper.registerListenerFirst(assignmentManager); + this.tableLockManager, tableStateManager); this.regionServerTracker = new RegionServerTracker(zooKeeper, this, this.serverManager); @@ -550,6 +551,14 @@ public class HMaster extends HRegionServer implements MasterServices, Server { this.mpmHost.register(new MasterFlushTableProcedureManager()); this.mpmHost.loadProcedures(conf); this.mpmHost.initialize(this, this.metricsMaster); + + // migrating existent table state from zk + for (Map.Entry entry : ZKDataMigrator + .queryForTableStates(getZooKeeper()).entrySet()) { + LOG.info("Converting state from zk to new states:" + entry); + tableStateManager.setTableState(entry.getKey(), entry.getValue()); + } + ZKUtil.deleteChildrenRecursively(getZooKeeper(), getZooKeeper().tableZNode); } /** @@ -610,6 +619,9 @@ public class HMaster extends HRegionServer implements MasterServices, Server { // Invalidate all write locks held previously this.tableLockManager.reapWriteLocks(); + this.tableStateManager = new TableStateManager(this); + this.tableStateManager.start(); + status.setStatus("Initializing ZK system trackers"); initializeZKBasedSystemTrackers(); @@ -692,13 +704,6 @@ public class HMaster extends HRegionServer implements MasterServices, Server { this.serverManager.processDeadServer(tmpServer, true); } - // Update meta with new PB serialization if required. i.e migrate all HRI to PB serialization - // in meta. This must happen before we assign all user regions or else the assignment will - // fail. 
- if (this.conf.getBoolean("hbase.MetaMigrationConvertingToPB", true)) { - MetaMigrationConvertingToPB.updateMetaIfNecessary(this); - } - // Fix up assignment manager status status.setStatus("Starting assignment manager"); this.assignmentManager.joinCluster(); @@ -719,6 +724,9 @@ public class HMaster extends HRegionServer implements MasterServices, Server { status.setStatus("Starting namespace manager"); initNamespace(); + status.setStatus("Starting quota manager"); + initQuotaManager(); + if (this.cpHost != null) { try { this.cpHost.preMasterInitialization(); @@ -778,51 +786,30 @@ public class HMaster extends HRegionServer implements MasterServices, Server { int assigned = 0; long timeout = this.conf.getLong("hbase.catalog.verification.timeout", 1000); status.setStatus("Assigning hbase:meta region"); + // Get current meta state from zk. - RegionStates regionStates = assignmentManager.getRegionStates(); RegionState metaState = MetaTableLocator.getMetaRegionState(getZooKeeper()); - ServerName currentMetaServer = metaState.getServerName(); - if (!ConfigUtil.useZKForAssignment(conf)) { - regionStates.createRegionState(HRegionInfo.FIRST_META_REGIONINFO, metaState.getState(), - currentMetaServer, null); - } else { - regionStates.createRegionState(HRegionInfo.FIRST_META_REGIONINFO); - } - boolean rit = this.assignmentManager. - processRegionInTransitionAndBlockUntilAssigned(HRegionInfo.FIRST_META_REGIONINFO); - boolean metaRegionLocation = metaTableLocator.verifyMetaRegionLocation( - this.getConnection(), this.getZooKeeper(), timeout); - if (!metaRegionLocation || !metaState.isOpened()) { - // Meta location is not verified. It should be in transition, or offline. - // We will wait for it to be assigned in enableSSHandWaitForMeta below. - assigned++; - if (!ConfigUtil.useZKForAssignment(conf)) { - assignMetaZkLess(regionStates, metaState, timeout, previouslyFailedMetaRSs); - } else if (!rit) { - // Assign meta since not already in transition + + RegionStates regionStates = assignmentManager.getRegionStates(); + regionStates.createRegionState(HRegionInfo.FIRST_META_REGIONINFO, + metaState.getState(), metaState.getServerName(), null); + + if (!metaState.isOpened() || !metaTableLocator.verifyMetaRegionLocation( + this.getConnection(), this.getZooKeeper(), timeout)) { + ServerName currentMetaServer = metaState.getServerName(); + if (serverManager.isServerOnline(currentMetaServer)) { + LOG.info("Meta was in transition on " + currentMetaServer); + assignmentManager.processRegionsInTransition(Arrays.asList(metaState)); + } else { if (currentMetaServer != null) { - // If the meta server is not known to be dead or online, - // just split the meta log, and don't expire it since this - // could be a full cluster restart. Otherwise, we will think - // this is a failover and lose previous region locations. - // If it is really a failover case, AM will find out in rebuilding - // user regions. Otherwise, we are good since all logs are split - // or known to be replayed before user regions are assigned. - if (serverManager.isServerOnline(currentMetaServer)) { - LOG.info("Forcing expire of " + currentMetaServer); - serverManager.expireServer(currentMetaServer); - } splitMetaLogBeforeAssignment(currentMetaServer); + regionStates.logSplit(HRegionInfo.FIRST_META_REGIONINFO); previouslyFailedMetaRSs.add(currentMetaServer); } + LOG.info("Re-assigning hbase:meta, it was on " + currentMetaServer); assignmentManager.assignMeta(); } - } else { - // Region already assigned. We didn't assign it. Add to in-memory state. 
- regionStates.updateRegionState( - HRegionInfo.FIRST_META_REGIONINFO, State.OPEN, currentMetaServer); - this.assignmentManager.regionOnline( - HRegionInfo.FIRST_META_REGIONINFO, currentMetaServer); + assigned++; } enableMeta(TableName.META_TABLE_NAME); @@ -840,35 +827,22 @@ public class HMaster extends HRegionServer implements MasterServices, Server { // No need to wait for meta is assigned = 0 when meta is just verified. enableServerShutdownHandler(assigned != 0); - LOG.info("hbase:meta assigned=" + assigned + ", rit=" + rit + - ", location=" + metaTableLocator.getMetaRegionLocation(this.getZooKeeper())); + LOG.info("hbase:meta assigned=" + assigned + ", location=" + + metaTableLocator.getMetaRegionLocation(this.getZooKeeper())); status.setStatus("META assigned."); } - private void assignMetaZkLess(RegionStates regionStates, RegionState regionState, long timeout, - Set previouslyFailedRs) throws IOException, KeeperException { - ServerName currentServer = regionState.getServerName(); - if (serverManager.isServerOnline(currentServer)) { - LOG.info("Meta was in transition on " + currentServer); - assignmentManager.processRegionInTransitionZkLess(); - } else { - if (currentServer != null) { - splitMetaLogBeforeAssignment(currentServer); - regionStates.logSplit(HRegionInfo.FIRST_META_REGIONINFO); - previouslyFailedRs.add(currentServer); - } - LOG.info("Re-assigning hbase:meta, it was on " + currentServer); - regionStates.updateRegionState(HRegionInfo.FIRST_META_REGIONINFO, State.OFFLINE); - assignmentManager.assignMeta(); - } - } - void initNamespace() throws IOException { //create namespace manager tableNamespaceManager = new TableNamespaceManager(this); tableNamespaceManager.start(); } + void initQuotaManager() throws IOException { + quotaManager = new MasterQuotaManager(this); + quotaManager.start(); + } + boolean isCatalogJanitorEnabled() { return catalogJanitorChore != null ? catalogJanitorChore.getEnabled() : false; @@ -900,15 +874,12 @@ public class HMaster extends HRegionServer implements MasterServices, Server { if (waitForMeta) { metaTableLocator.waitMetaRegionLocation(this.getZooKeeper()); - // Above check waits for general meta availability but this does not - // guarantee that the transition has completed - this.assignmentManager.waitForAssignment(HRegionInfo.FIRST_META_REGIONINFO); } } private void enableMeta(TableName metaTableName) { - if (!this.assignmentManager.getTableStateManager().isTableState(metaTableName, - ZooKeeperProtos.Table.State.ENABLED)) { + if (!this.tableStateManager.isTableState(metaTableName, + TableState.State.ENABLED)) { this.assignmentManager.setEnabledTable(metaTableName); } } @@ -947,6 +918,11 @@ public class HMaster extends HRegionServer implements MasterServices, Server { return this.fileSystemManager; } + @Override + public TableStateManager getTableStateManager() { + return tableStateManager; + } + /* * Start up all services. If any of these threads gets an unhandled exception * then they just die with a logged message. 
This should be fine because @@ -1020,6 +996,7 @@ public class HMaster extends HRegionServer implements MasterServices, Server { // Clean up and close up shop if (this.logCleaner!= null) this.logCleaner.interrupt(); if (this.hfileCleaner != null) this.hfileCleaner.interrupt(); + if (this.quotaManager != null) this.quotaManager.stop(); if (this.activeMasterManager != null) this.activeMasterManager.stop(); if (this.serverManager != null) this.serverManager.stop(); if (this.assignmentManager != null) this.assignmentManager.stop(); @@ -1650,7 +1627,7 @@ public class HMaster extends HRegionServer implements MasterServices, Server { } }; - MetaScanner.metaScan(conf, visitor, tableName, rowKey, 1); + MetaScanner.metaScan(clusterConnection, visitor, tableName, rowKey, 1); return result.get(); } @@ -1679,7 +1656,7 @@ public class HMaster extends HRegionServer implements MasterServices, Server { throw new TableNotFoundException(tableName); } if (!getAssignmentManager().getTableStateManager(). - isTableState(tableName, ZooKeeperProtos.Table.State.DISABLED)) { + isTableState(tableName, TableState.State.DISABLED)) { throw new TableNotDisabledException(tableName); } } @@ -1814,6 +1791,11 @@ public class HMaster extends HRegionServer implements MasterServices, Server { } @Override + public MasterQuotaManager getMasterQuotaManager() { + return quotaManager; + } + + @Override public ServerName getServerName() { return this.serverName; } @@ -1924,7 +1906,7 @@ public class HMaster extends HRegionServer implements MasterServices, Server { } public void assignRegion(HRegionInfo hri) { - assignmentManager.assign(hri, true); + assignmentManager.assign(hri); } /** @@ -2141,9 +2123,7 @@ public class HMaster extends HRegionServer implements MasterServices, Server { boolean bypass = false; if (cpHost != null) { - bypass = cpHost.preGetTableDescriptors(tableNameList, descriptors); - // method required for AccessController. - bypass |= cpHost.preGetTableDescriptors(tableNameList, descriptors, regex); + bypass = cpHost.preGetTableDescriptors(tableNameList, descriptors, regex); } if (!bypass) { @@ -2176,8 +2156,6 @@ public class HMaster extends HRegionServer implements MasterServices, Server { } if (cpHost != null) { - cpHost.postGetTableDescriptors(descriptors); - // method required for AccessController. 
cpHost.postGetTableDescriptors(tableNameList, descriptors, regex); } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java index 14c2568..2997172 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterCoprocessorHost.java @@ -19,15 +19,26 @@ package org.apache.hadoop.hbase.master; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.*; -import org.apache.hadoop.hbase.coprocessor.*; -import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; - import java.io.IOException; import java.util.List; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.Coprocessor; +import org.apache.hadoop.hbase.HColumnDescriptor; +import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.NamespaceDescriptor; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; +import org.apache.hadoop.hbase.coprocessor.CoprocessorService; +import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment; +import org.apache.hadoop.hbase.coprocessor.MasterObserver; +import org.apache.hadoop.hbase.coprocessor.ObserverContext; +import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; + /** * Provides the coprocessor framework and environment for master oriented * operations. {@link HMaster} interacts with the loaded coprocessors @@ -760,6 +771,26 @@ public class MasterCoprocessorHost }); } + public void preListSnapshot(final SnapshotDescription snapshot) throws IOException { + execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { + @Override + public void call(MasterObserver observer, ObserverContext ctx) + throws IOException { + observer.preListSnapshot(ctx, snapshot); + } + }); + } + + public void postListSnapshot(final SnapshotDescription snapshot) throws IOException { + execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { + @Override + public void call(MasterObserver observer, ObserverContext ctx) + throws IOException { + observer.postListSnapshot(ctx, snapshot); + } + }); + } + public void preCloneSnapshot(final SnapshotDescription snapshot, final HTableDescriptor hTableDescriptor) throws IOException { execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { @@ -824,30 +855,6 @@ public class MasterCoprocessorHost }); } - @Deprecated - public boolean preGetTableDescriptors(final List tableNamesList, - final List descriptors) throws IOException { - return execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { - @Override - public void call(MasterObserver oserver, ObserverContext ctx) - throws IOException { - oserver.preGetTableDescriptors(ctx, tableNamesList, descriptors); - } - }); - } - - @Deprecated - public void postGetTableDescriptors(final List descriptors) - throws IOException { - execOperation(coprocessors.isEmpty() ? 
null : new CoprocessorOperation() { - @Override - public void call(MasterObserver oserver, ObserverContext ctx) - throws IOException { - oserver.postGetTableDescriptors(ctx, descriptors); - } - }); - } - public boolean preGetTableDescriptors(final List tableNamesList, final List descriptors, final String regex) throws IOException { return execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { @@ -912,6 +919,110 @@ public class MasterCoprocessorHost }); } + public void preSetUserQuota(final String user, final Quotas quotas) throws IOException { + execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { + @Override + public void call(MasterObserver oserver, ObserverContext ctx) + throws IOException { + oserver.preSetUserQuota(ctx, user, quotas); + } + }); + } + + public void postSetUserQuota(final String user, final Quotas quotas) throws IOException { + execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { + @Override + public void call(MasterObserver oserver, ObserverContext ctx) + throws IOException { + oserver.postSetUserQuota(ctx, user, quotas); + } + }); + } + + public void preSetUserQuota(final String user, final TableName table, final Quotas quotas) + throws IOException { + execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { + @Override + public void call(MasterObserver oserver, ObserverContext ctx) + throws IOException { + oserver.preSetUserQuota(ctx, user, table, quotas); + } + }); + } + + public void postSetUserQuota(final String user, final TableName table, final Quotas quotas) + throws IOException { + execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { + @Override + public void call(MasterObserver oserver, ObserverContext ctx) + throws IOException { + oserver.postSetUserQuota(ctx, user, table, quotas); + } + }); + } + + public void preSetUserQuota(final String user, final String namespace, final Quotas quotas) + throws IOException { + execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { + @Override + public void call(MasterObserver oserver, ObserverContext ctx) + throws IOException { + oserver.preSetUserQuota(ctx, user, namespace, quotas); + } + }); + } + + public void postSetUserQuota(final String user, final String namespace, final Quotas quotas) + throws IOException { + execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { + @Override + public void call(MasterObserver oserver, ObserverContext ctx) + throws IOException { + oserver.postSetUserQuota(ctx, user, namespace, quotas); + } + }); + } + + public void preSetTableQuota(final TableName table, final Quotas quotas) throws IOException { + execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { + @Override + public void call(MasterObserver oserver, ObserverContext ctx) + throws IOException { + oserver.preSetTableQuota(ctx, table, quotas); + } + }); + } + + public void postSetTableQuota(final TableName table, final Quotas quotas) throws IOException { + execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { + @Override + public void call(MasterObserver oserver, ObserverContext ctx) + throws IOException { + oserver.postSetTableQuota(ctx, table, quotas); + } + }); + } + + public void preSetNamespaceQuota(final String namespace, final Quotas quotas) throws IOException { + execOperation(coprocessors.isEmpty() ? 
null : new CoprocessorOperation() { + @Override + public void call(MasterObserver oserver, ObserverContext ctx) + throws IOException { + oserver.preSetNamespaceQuota(ctx, namespace, quotas); + } + }); + } + + public void postSetNamespaceQuota(final String namespace, final Quotas quotas) throws IOException{ + execOperation(coprocessors.isEmpty() ? null : new CoprocessorOperation() { + @Override + public void call(MasterObserver oserver, ObserverContext ctx) + throws IOException { + oserver.postSetNamespaceQuota(ctx, namespace, quotas); + } + }); + } + private static abstract class CoprocessorOperation extends ObserverContext { public CoprocessorOperation() { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java index 7650b94..fcfa07f 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java @@ -36,16 +36,17 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.PathFilter; import org.apache.hadoop.hbase.ClusterId; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.InvalidFamilyOperationException; -import org.apache.hadoop.hbase.RemoteExceptionHandler; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.TableDescriptor; +import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.backup.HFileArchiver; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.fs.HFileSystem; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.SplitLogTask.RecoveryMode; @@ -56,6 +57,7 @@ import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.FSTableDescriptors; import org.apache.hadoop.hbase.util.FSUtils; +import org.apache.hadoop.ipc.RemoteException; /** * This class abstracts a bunch of operations the HMaster needs to interact with @@ -237,6 +239,11 @@ public class MasterFileSystem { return serverNames; } for (FileStatus status : logFolders) { + FileStatus[] curLogFiles = FSUtils.listStatus(this.fs, status.getPath(), null); + if (curLogFiles == null || curLogFiles.length == 0) { + // Empty log folder. No recovery needed + continue; + } final ServerName serverName = DefaultWALProvider.getServerNameFromWALDirectoryName( status.getPath()); if (null == serverName) { @@ -459,12 +466,12 @@ public class MasterFileSystem { } // Create tableinfo-s for hbase:meta if not already there. - + // assume, created table descriptor is for enabling table // meta table is a system table, so descriptors are predefined, // we should get them from registry. 
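    // (Editorial note, not part of the patch) The bootstrap code below now writes a TableDescriptor
    // that carries an explicit table state (ENABLING) for hbase:meta, consistent with this patch's
    // move of table state out of the ZooKeeper table znodes and into TableState/TableStateManager.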
FSTableDescriptors fsd = new FSTableDescriptors(c, fs, rd); fsd.createTableDescriptor( - new HTableDescriptor(fsd.get(TableName.META_TABLE_NAME))); + new TableDescriptor(fsd.get(TableName.META_TABLE_NAME), TableState.State.ENABLING)); return rd; } @@ -510,7 +517,8 @@ public class MasterFileSystem { setInfoFamilyCachingForMeta(metaDescriptor, true); HRegion.closeHRegion(meta); } catch (IOException e) { - e = RemoteExceptionHandler.checkIOException(e); + e = e instanceof RemoteException ? + ((RemoteException)e).unwrapRemoteException() : e; LOG.error("bootstrap", e); throw e; } @@ -519,8 +527,7 @@ public class MasterFileSystem { /** * Enable in memory caching for hbase:meta */ - public static void setInfoFamilyCachingForMeta(final HTableDescriptor metaDescriptor, - final boolean b) { + public static void setInfoFamilyCachingForMeta(HTableDescriptor metaDescriptor, final boolean b) { for (HColumnDescriptor hcd: metaDescriptor.getColumnFamilies()) { if (Bytes.equals(hcd.getName(), HConstants.CATALOG_FAMILY)) { hcd.setBlockCacheEnabled(b); @@ -529,6 +536,7 @@ public class MasterFileSystem { } } + public void deleteRegion(HRegionInfo region) throws IOException { HFileArchiver.archiveRegion(conf, fs, region); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java index 710d060..6930bf3 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java @@ -22,22 +22,23 @@ import java.io.IOException; import java.net.InetAddress; import java.util.ArrayList; import java.util.List; -import java.util.Map; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.PleaseHoldException; import org.apache.hadoop.hbase.ServerLoad; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.TableNotFoundException; import org.apache.hadoop.hbase.UnknownRegionException; -import org.apache.hadoop.hbase.MetaTableAccessor; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.exceptions.MergeRegionException; import org.apache.hadoop.hbase.exceptions.UnknownProtocolException; import org.apache.hadoop.hbase.ipc.RpcServer.BlockingServiceAndInterface; @@ -46,11 +47,14 @@ import org.apache.hadoop.hbase.procedure.MasterProcedureManager; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.RequestConverter; import org.apache.hadoop.hbase.protobuf.ResponseConverter; -import org.apache.hadoop.hbase.protobuf.generated.*; +import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; +import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos; +import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription; import 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionSpecifier.RegionSpecifierType; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.AddColumnRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.AddColumnResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.AssignRegionRequest; @@ -124,6 +128,8 @@ import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.RunCatalogScanReq import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.RunCatalogScanResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetBalancerRunningRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetBalancerRunningResponse; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ShutdownRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.ShutdownResponse; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SnapshotRequest; @@ -167,7 +173,7 @@ import com.google.protobuf.ServiceException; @InterfaceAudience.Private @SuppressWarnings("deprecation") public class MasterRpcServices extends RSRpcServices - implements MasterService.BlockingInterface, RegionServerStatusService.BlockingInterface { + implements MasterService.BlockingInterface, RegionServerStatusService.BlockingInterface { protected static final Log LOG = LogFactory.getLog(MasterRpcServices.class.getName()); private final HMaster master; @@ -263,8 +269,8 @@ public class MasterRpcServices extends RSRpcServices } catch (IOException ioe) { throw new ServiceException(ioe); } - byte[] regionName = request.getRegionName().toByteArray(); - long seqId = master.serverManager.getLastFlushedSequenceId(regionName); + byte[] encodedRegionName = request.getRegionName().toByteArray(); + long seqId = master.serverManager.getLastFlushedSequenceId(encodedRegionName); return ResponseConverter.buildGetLastFlushedSequenceIdResponse(seqId); } @@ -359,7 +365,7 @@ public class MasterRpcServices extends RSRpcServices } LOG.info(master.getClientIdAuditPrefix() + " assign " + regionInfo.getRegionNameAsString()); - master.assignmentManager.assign(regionInfo, true, true); + master.assignmentManager.assign(regionInfo, true); if (master.cpHost != null) { master.cpHost.postAssign(regionInfo); } @@ -519,6 +525,10 @@ public class MasterRpcServices extends RSRpcServices HRegionInfo regionInfoA = regionStateA.getRegion(); HRegionInfo regionInfoB = regionStateB.getRegion(); + if (regionInfoA.getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID || + regionInfoB.getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID) { + throw new ServiceException(new MergeRegionException("Can't merge non-default replicas")); + } if (regionInfoA.compareTo(regionInfoB) == 0) { throw new ServiceException(new MergeRegionException( "Unable to merge a region to itself " + regionInfoA + ", " + regionInfoB)); @@ -812,7 +822,7 @@ public class MasterRpcServices extends RSRpcServices public GetTableNamesResponse getTableNames(RpcController controller, GetTableNamesRequest req) throws ServiceException { try { - master.checkInitialized(); + master.checkServiceStarted(); final String regex = req.hasRegex() ? req.getRegex() : null; final String namespace = req.hasNamespace() ? 
req.getNamespace() : null; @@ -833,6 +843,25 @@ public class MasterRpcServices extends RSRpcServices } @Override + public MasterProtos.GetTableStateResponse getTableState(RpcController controller, + MasterProtos.GetTableStateRequest request) throws ServiceException { + try { + master.checkServiceStarted(); + TableName tableName = ProtobufUtil.toTableName(request.getTableName()); + TableState.State state = master.getTableStateManager() + .getTableState(tableName); + if (state == null) + throw new TableNotFoundException(tableName); + MasterProtos.GetTableStateResponse.Builder builder = + MasterProtos.GetTableStateResponse.newBuilder(); + builder.setTableState(new TableState(tableName, state).convert()); + return builder.build(); + } catch (IOException e) { + throw new ServiceException(e); + } + } + + @Override public IsCatalogJanitorEnabledResponse isCatalogJanitorEnabled(RpcController c, IsCatalogJanitorEnabledRequest req) throws ServiceException { return IsCatalogJanitorEnabledResponse.newBuilder().setValue( @@ -1196,12 +1225,7 @@ public class MasterRpcServices extends RSRpcServices } LOG.debug(master.getClientIdAuditPrefix() + " unassign " + hri.getRegionNameAsString() + " in current location if it is online and reassign.force=" + force); - master.assignmentManager.unassign(hri, force); - if (master.assignmentManager.getRegionStates().isRegionOffline(hri)) { - LOG.debug("Region " + hri.getRegionNameAsString() - + " is not online on any region server, reassigning it."); - master.assignRegion(hri); - } + master.assignmentManager.unassign(hri); if (master.cpHost != null) { master.cpHost.postUnassign(hri, force); } @@ -1240,4 +1264,15 @@ public class MasterRpcServices extends RSRpcServices throw new ServiceException(ioe); } } + + @Override + public SetQuotaResponse setQuota(RpcController c, SetQuotaRequest req) + throws ServiceException { + try { + master.checkInitialized(); + return master.getMasterQuotaManager().setQuota(req); + } catch (Exception e) { + throw new ServiceException(e); + } + } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java index 627b3c5..7733256 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java @@ -32,6 +32,7 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotDisabledException; import org.apache.hadoop.hbase.TableNotFoundException; import org.apache.hadoop.hbase.executor.ExecutorService; +import org.apache.hadoop.hbase.quotas.MasterQuotaManager; import com.google.protobuf.Service; @@ -66,11 +67,21 @@ public interface MasterServices extends Server { TableLockManager getTableLockManager(); /** + * @return Master's instance of {@link TableStateManager} + */ + TableStateManager getTableStateManager(); + + /** * @return Master's instance of {@link MasterCoprocessorHost} */ MasterCoprocessorHost getMasterCoprocessorHost(); /** + * @return Master's instance of {@link MasterQuotaManager} + */ + MasterQuotaManager getMasterQuotaManager(); + + /** * Check table is modifiable; i.e. exists and is offline. * @param tableName Name of table to check. 
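// The setQuota endpoint above hands the request to MasterQuotaManager. A hedged sketch of the
// matching client call, assuming the QuotaSettingsFactory/ThrottleType/Admin#setQuota client API
// from the same quota feature; the user name and the limit are made up.
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class ThrottleExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Builds the SetQuotaRequest that MasterRpcServices.setQuota() forwards to MasterQuotaManager.
      admin.setQuota(QuotaSettingsFactory.throttleUser("bob", ThrottleType.REQUEST_NUMBER,
          100, TimeUnit.SECONDS));
    }
  }
}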
* @throws TableNotDisabledException diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/OfflineCallback.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/OfflineCallback.java deleted file mode 100644 index 205377c..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/OfflineCallback.java +++ /dev/null @@ -1,114 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.master; - -import java.util.Map; -import java.util.concurrent.atomic.AtomicInteger; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.AsyncCallback.StringCallback; -import org.apache.zookeeper.KeeperException; -import org.apache.zookeeper.ZooKeeper; -import org.apache.zookeeper.data.Stat; - -/** - * Callback handler for creating unassigned offline znodes - * used during bulk assign, async setting region to offline. - */ -@InterfaceAudience.Private -public class OfflineCallback implements StringCallback { - private static final Log LOG = LogFactory.getLog(OfflineCallback.class); - private final ExistCallback callBack; - private final ZooKeeperWatcher zkw; - private final ServerName destination; - private final AtomicInteger counter; - - OfflineCallback(final ZooKeeperWatcher zkw, - final ServerName destination, final AtomicInteger counter, - final Map offlineNodesVersions) { - this.callBack = new ExistCallback( - destination, counter, offlineNodesVersions); - this.destination = destination; - this.counter = counter; - this.zkw = zkw; - } - - @Override - public void processResult(int rc, String path, Object ctx, String name) { - if (rc == KeeperException.Code.NODEEXISTS.intValue()) { - LOG.warn("Node for " + path + " already exists"); - } else if (rc != 0) { - // This is result code. If non-zero, need to resubmit. - LOG.warn("rc != 0 for " + path + " -- retryable connectionloss -- " + - "FIX see http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A2"); - this.counter.addAndGet(1); - return; - } - - if (LOG.isDebugEnabled()) { - LOG.debug("rs=" + ctx + ", server=" + destination); - } - // Async exists to set a watcher so we'll get triggered when - // unassigned node changes. - ZooKeeper zk = this.zkw.getRecoverableZooKeeper().getZooKeeper(); - zk.exists(path, this.zkw, callBack, ctx); - } - - /** - * Callback handler for the exists call that sets watcher on unassigned znodes. - * Used during bulk assign on startup. 
- */ - static class ExistCallback implements StatCallback { - private static final Log LOG = LogFactory.getLog(ExistCallback.class); - private final Map offlineNodesVersions; - private final AtomicInteger counter; - private ServerName destination; - - ExistCallback(final ServerName destination, - final AtomicInteger counter, - final Map offlineNodesVersions) { - this.offlineNodesVersions = offlineNodesVersions; - this.destination = destination; - this.counter = counter; - } - - @Override - public void processResult(int rc, String path, Object ctx, Stat stat) { - if (rc != 0) { - // This is result code. If non-zero, need to resubmit. - LOG.warn("rc != 0 for " + path + " -- retryable connectionloss -- " + - "FIX see http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A2"); - this.counter.addAndGet(1); - return; - } - - if (LOG.isDebugEnabled()) { - LOG.debug("rs=" + ctx + ", server=" + destination); - } - HRegionInfo region = ((RegionState)ctx).getRegion(); - offlineNodesVersions.put( - region.getEncodedName(), Integer.valueOf(stat.getVersion())); - this.counter.addAndGet(1); - } - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStateStore.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStateStore.java index 823b180..8f7d0f3 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStateStore.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStateStore.java @@ -28,18 +28,17 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; +import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.master.RegionState.State; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.RegionServerServices; import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.ConfigUtil; import org.apache.hadoop.hbase.util.MultiHConnection; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; import org.apache.zookeeper.KeeperException; @@ -58,10 +57,8 @@ public class RegionStateStore { protected static final char META_REPLICA_ID_DELIMITER = '_'; private volatile HRegion metaRegion; - private MultiHConnection multiHConnection; private volatile boolean initialized; - - private final boolean noPersistence; + private MultiHConnection multiHConnection; private final Server server; /** @@ -133,29 +130,24 @@ public class RegionStateStore { } RegionStateStore(final Server server) { - Configuration conf = server.getConfiguration(); - // No need to persist if using ZK but not migrating - noPersistence = ConfigUtil.useZKForAssignment(conf) - && !conf.getBoolean("hbase.assignment.usezk.migrating", false); this.server = server; initialized = false; } void start() throws IOException { - if (!noPersistence) { - if (server instanceof RegionServerServices) { - metaRegion = ((RegionServerServices)server).getFromOnlineRegions( - HRegionInfo.FIRST_META_REGIONINFO.getEncodedName()); - } - if (metaRegion == null) { - Configuration conf = server.getConfiguration(); - // Config to determine the no of HConnections to META. 
- // A single HConnection should be sufficient in most cases. Only if - // you are doing lot of writes (>1M) to META, - // increasing this value might improve the write throughput. - multiHConnection = - new MultiHConnection(conf, conf.getInt("hbase.regionstatestore.meta.connection", 1)); - } + if (server instanceof RegionServerServices) { + metaRegion = ((RegionServerServices)server).getFromOnlineRegions( + HRegionInfo.FIRST_META_REGIONINFO.getEncodedName()); + } + // When meta is not colocated on master + if (metaRegion == null) { + Configuration conf = server.getConfiguration(); + // Config to determine the no of HConnections to META. + // A single HConnection should be sufficient in most cases. Only if + // you are doing lot of writes (>1M) to META, + // increasing this value might improve the write throughput. + multiHConnection = + new MultiHConnection(conf, conf.getInt("hbase.regionstatestore.meta.connection", 1)); } initialized = true; } @@ -169,33 +161,30 @@ public class RegionStateStore { void updateRegionState(long openSeqNum, RegionState newState, RegionState oldState) { - - if (noPersistence) { - return; - } - - HRegionInfo hri = newState.getRegion(); try { - // update meta before checking for initialization. - // meta state stored in zk. - if (hri.isMetaRegion()) { - // persist meta state in MetaTableLocator (which in turn is zk storage currently) - try { - MetaTableLocator.setMetaLocation(server.getZooKeeper(), - newState.getServerName(), newState.getState()); - return; // Done - } catch (KeeperException e) { - throw new IOException("Failed to update meta ZNode", e); + HRegionInfo hri = newState.getRegion(); + + // update meta before checking for initialization. + // meta state stored in zk. + if (hri.isMetaRegion()) { + // persist meta state in MetaTableLocator (which in turn is zk storage currently) + try { + MetaTableLocator.setMetaLocation(server.getZooKeeper(), + newState.getServerName(), newState.getState()); + return; // Done + } catch (KeeperException e) { + throw new IOException("Failed to update meta ZNode", e); + } + } + + if (!initialized + || !shouldPersistStateChange(hri, newState, oldState)) { + return; } - } - - if (!initialized || !shouldPersistStateChange(hri, newState, oldState)) { - return; - } - ServerName oldServer = oldState != null ? oldState.getServerName() : null; - ServerName serverName = newState.getServerName(); - State state = newState.getState(); + ServerName oldServer = oldState != null ? 
oldState.getServerName() : null; + ServerName serverName = newState.getServerName(); + State state = newState.getState(); int replicaId = hri.getReplicaId(); Put put = new Put(MetaTableAccessor.getMetaKeyForRegion(hri)); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java index 46d21c7..cd524b5 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java @@ -29,30 +29,24 @@ import java.util.Map; import java.util.Set; import java.util.TreeMap; +import com.google.common.annotations.VisibleForTesting; +import com.google.common.base.Preconditions; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.RegionTransition; +import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerLoad; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.TableStateManager; import org.apache.hadoop.hbase.client.RegionReplicaUtil; -import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.master.RegionState.State; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.KeeperException; - -import com.google.common.annotations.VisibleForTesting; -import com.google.common.base.Preconditions; /** * Region state accountant. It holds the states of all regions in the memory. @@ -341,7 +335,7 @@ public class RegionStates { } lastAssignments.put(encodedName, lastHost); regionAssignments.put(hri, lastHost); - } else if (!regionState.isUnassignable()) { + } else if (!isOneOfStates(regionState, State.MERGED, State.SPLIT, State.OFFLINE)) { regionsInTransition.put(encodedName, regionState); } if (lastHost != null && newState != State.SPLIT) { @@ -366,39 +360,6 @@ public class RegionStates { /** * Update a region state. It will be put in transition if not already there. - * - * If we can't find the region info based on the region name in - * the transition, log a warning and return null. 
- */ - public RegionState updateRegionState( - final RegionTransition transition, final State state) { - byte [] regionName = transition.getRegionName(); - HRegionInfo regionInfo = getRegionInfo(regionName); - if (regionInfo == null) { - String prettyRegionName = HRegionInfo.prettyPrint( - HRegionInfo.encodeRegionName(regionName)); - LOG.warn("Failed to find region " + prettyRegionName - + " in updating its state to " + state - + " based on region transition " + transition); - return null; - } - return updateRegionState(regionInfo, state, - transition.getServerName()); - } - - /** - * Transition a region state to OPEN from OPENING/PENDING_OPEN - */ - public synchronized RegionState transitionOpenFromPendingOpenOrOpeningOnServer( - final RegionTransition transition, final RegionState fromState, final ServerName sn) { - if(fromState.isPendingOpenOrOpeningOnServer(sn)){ - return updateRegionState(transition, State.OPEN); - } - return null; - } - - /** - * Update a region state. It will be put in transition if not already there. */ public RegionState updateRegionState( final HRegionInfo hri, final State state, final ServerName serverName) { @@ -433,7 +394,11 @@ public class RegionStates { regionsInTransition.remove(encodedName); ServerName oldServerName = regionAssignments.put(hri, serverName); if (!serverName.equals(oldServerName)) { - LOG.info("Onlined " + hri.getShortNameToLog() + " on " + serverName); + if (LOG.isDebugEnabled()) { + LOG.debug("Onlined " + hri.getShortNameToLog() + " on " + serverName + " " + hri); + } else { + LOG.debug("Onlined " + hri.getShortNameToLog() + " on " + serverName); + } addToServerHoldings(serverName, hri); addToReplicaMapping(hri); if (oldServerName == null) { @@ -565,7 +530,7 @@ public class RegionStates { if (oldServerName != null && serverHoldings.containsKey(oldServerName)) { if (newState == State.MERGED || newState == State.SPLIT || hri.isMetaRegion() || tableStateManager.isTableState(hri.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { + TableState.State.DISABLED, TableState.State.DISABLING)) { // Offline the region only if it's merged/split, or the table is disabled/disabling. // Otherwise, offline it from this server only when it is online on a different server. LOG.info("Offlined " + hri.getShortNameToLog() + " from " + oldServerName); @@ -583,8 +548,7 @@ public class RegionStates { /** * A server is offline, all regions on it are dead. */ - public synchronized List serverOffline( - final ZooKeeperWatcher watcher, final ServerName sn) { + public synchronized List serverOffline(final ServerName sn) { // Offline all regions on this server not already in transition. List rits = new ArrayList(); Set assignedRegions = serverHoldings.get(sn); @@ -600,13 +564,7 @@ public class RegionStates { regionsToOffline.add(region); } else if (isRegionInState(region, State.SPLITTING, State.MERGING)) { LOG.debug("Offline splitting/merging region " + getRegionState(region)); - try { - // Delete the ZNode if exists - ZKAssign.deleteNodeFailSilent(watcher, region); - regionsToOffline.add(region); - } catch (KeeperException ke) { - server.abort("Unexpected ZK exception deleting node " + region, ke); - } + regionsToOffline.add(region); } } @@ -629,7 +587,8 @@ public class RegionStates { // Offline state is also kind of pending open if the region is in // transition. 
The region could be in failed_close state too if we have // tried several times to open it while this region server is not reachable) - if (state.isPendingOpenOrOpening() || state.isFailedClose() || state.isOffline()) { + if (isOneOfStates(state, State.OPENING, State.PENDING_OPEN, + State.FAILED_OPEN, State.FAILED_CLOSE, State.OFFLINE)) { LOG.info("Found region in " + state + " to be reassigned by SSH for " + sn); rits.add(hri); } else { @@ -792,6 +751,12 @@ public class RegionStates { lastAssignments.put(encodedName, serverName); } + synchronized boolean isRegionOnServer( + final HRegionInfo hri, final ServerName serverName) { + Set regions = serverHoldings.get(serverName); + return regions == null ? false : regions.contains(hri); + } + void splitRegion(HRegionInfo p, HRegionInfo a, HRegionInfo b, ServerName sn) throws IOException { regionStateStore.splitRegion(p, a, b, sn); @@ -998,8 +963,8 @@ public class RegionStates { * Update a region state. It will be put in transition if not already there. */ private RegionState updateRegionState(final HRegionInfo hri, - final State state, final ServerName serverName, long openSeqNum) { - if (state == State.FAILED_CLOSE || state == State.FAILED_OPEN) { + final RegionState.State state, final ServerName serverName, long openSeqNum) { + if (state == RegionState.State.FAILED_CLOSE || state == RegionState.State.FAILED_OPEN) { LOG.warn("Failed to open/close " + hri.getShortNameToLog() + " on " + serverName + ", set to " + state); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java index da4b827..796cc8a 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java @@ -62,7 +62,7 @@ import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.SplitLogTask.R import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.RegionOpeningState; import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.Triple; +import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.zookeeper.ZKUtil; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.zookeeper.KeeperException; @@ -93,7 +93,6 @@ import com.google.protobuf.ServiceException; * and has completed the handling. */ @InterfaceAudience.Private -@SuppressWarnings("deprecation") public class ServerManager { public static final String WAIT_ON_REGIONSERVERS_MAXTOSTART = "hbase.master.wait.on.regionservers.maxtostart"; @@ -156,7 +155,7 @@ public class ServerManager { * handler is not enabled, is queued up. *
          * So this is a set of region servers known to be dead but not submitted to - * ServerShutdownHander for processing yet. + * ServerShutdownHandler for processing yet. */ private Set queuedDeadServers = new HashSet(); @@ -255,7 +254,8 @@ public class ServerManager { private void updateLastFlushedSequenceIds(ServerName sn, ServerLoad hsl) { Map regionsLoad = hsl.getRegionsLoad(); for (Entry entry : regionsLoad.entrySet()) { - Long existingValue = flushedSequenceIdByRegion.get(entry.getKey()); + byte[] encodedRegionName = Bytes.toBytes(HRegionInfo.encodeRegionName(entry.getKey())); + Long existingValue = flushedSequenceIdByRegion.get(encodedRegionName); long l = entry.getValue().getCompleteSequenceId(); if (existingValue != null) { if (l != -1 && l < existingValue) { @@ -265,11 +265,10 @@ public class ServerManager { existingValue + ") for region " + Bytes.toString(entry.getKey()) + " Ignoring."); - continue; // Don't let smaller sequence ids override greater - // sequence ids. + continue; // Don't let smaller sequence ids override greater sequence ids. } } - flushedSequenceIdByRegion.put(entry.getKey(), l); + flushedSequenceIdByRegion.put(encodedRegionName, l); } } @@ -295,7 +294,7 @@ public class ServerManager { * Check is a server of same host and port already exists, * if not, or the existed one got a smaller start code, record it. * - * @param sn the server to check and record + * @param serverName the server to check and record * @param sl the server load on the server * @return true if the server is recorded, otherwise, false */ @@ -408,10 +407,10 @@ public class ServerManager { this.rsAdmins.remove(serverName); } - public long getLastFlushedSequenceId(byte[] regionName) { - long seqId = -1; - if (flushedSequenceIdByRegion.containsKey(regionName)) { - seqId = flushedSequenceIdByRegion.get(regionName); + public long getLastFlushedSequenceId(byte[] encodedRegionName) { + long seqId = -1L; + if (flushedSequenceIdByRegion.containsKey(encodedRegionName)) { + seqId = flushedSequenceIdByRegion.get(encodedRegionName); } return seqId; } @@ -681,21 +680,18 @@ public class ServerManager { *
          * @param server server to open a region * @param region region to open - * @param versionOfOfflineNode that needs to be present in the offline node - * when RS tries to change the state from OFFLINE to other states. * @param favoredNodes */ public RegionOpeningState sendRegionOpen(final ServerName server, - HRegionInfo region, int versionOfOfflineNode, List favoredNodes) + HRegionInfo region, List favoredNodes) throws IOException { AdminService.BlockingInterface admin = getRsAdmin(server); if (admin == null) { - LOG.warn("Attempting to send OPEN RPC to server " + server.toString() + + throw new IOException("Attempting to send OPEN RPC to server " + server.toString() + " failed because no RPC connection found to this server"); - return RegionOpeningState.FAILED_OPENING; } - OpenRegionRequest request = RequestConverter.buildOpenRegionRequest(server, - region, versionOfOfflineNode, favoredNodes, + OpenRegionRequest request = RequestConverter.buildOpenRegionRequest(server, + region, favoredNodes, (RecoveryMode.LOG_REPLAY == this.services.getMasterFileSystem().getLogRecoveryMode())); try { OpenRegionResponse response = admin.openRegion(null, request); @@ -715,13 +711,12 @@ public class ServerManager { * @return a list of region opening states */ public List sendRegionOpen(ServerName server, - List>> regionOpenInfos) + List>> regionOpenInfos) throws IOException { AdminService.BlockingInterface admin = getRsAdmin(server); if (admin == null) { - LOG.warn("Attempting to send OPEN RPC to server " + server.toString() + + throw new IOException("Attempting to send OPEN RPC to server " + server.toString() + " failed because no RPC connection found to this server"); - return null; } OpenRegionRequest request = RequestConverter.buildOpenRegionRequest(server, regionOpenInfos, @@ -741,15 +736,11 @@ public class ServerManager { * have the specified region or the region is being split. * @param server server to open a region * @param region region to open - * @param versionOfClosingNode - * the version of znode to compare when RS transitions the znode from - * CLOSING state. * @param dest - if the region is moved to another server, the destination server. null otherwise. 
- * @return true if server acknowledged close, false if not * @throws IOException */ public boolean sendRegionClose(ServerName server, HRegionInfo region, - int versionOfClosingNode, ServerName dest, boolean transitionInZK) throws IOException { + ServerName dest) throws IOException { if (server == null) throw new NullPointerException("Passed server is null"); AdminService.BlockingInterface admin = getRsAdmin(server); if (admin == null) { @@ -759,12 +750,12 @@ public class ServerManager { " failed because no RPC connection found to this server"); } return ProtobufUtil.closeRegion(admin, server, region.getRegionName(), - versionOfClosingNode, dest, transitionInZK); + dest); } public boolean sendRegionClose(ServerName server, - HRegionInfo region, int versionOfClosingNode) throws IOException { - return sendRegionClose(server, region, versionOfClosingNode, null, true); + HRegionInfo region) throws IOException { + return sendRegionClose(server, region, null); } /** diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java index 23ef6a5..bc798cd 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java @@ -46,6 +46,7 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.PathFilter; import org.apache.hadoop.hbase.Chore; +import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; @@ -74,8 +75,8 @@ import com.google.common.annotations.VisibleForTesting; * timeoutMonitor thread. If a task's progress is slow then * {@link SplitLogManagerCoordination#checkTasks} will take away the * task from the owner {@link org.apache.hadoop.hbase.regionserver.SplitLogWorker} - * and the task will be up for grabs again. When the task is done then it is deleted - * by SplitLogManager. + * and the task will be up for grabs again. When the task is done then it is + * deleted by SplitLogManager. * *
          Clients call {@link #splitLogDistributed(Path)} to split a region server's * log files. The caller thread waits in this method until all the log files @@ -587,9 +588,9 @@ public class SplitLogManager { * @return whether log is replaying */ public boolean isLogReplaying() { - if (server.getCoordinatedStateManager() == null) return false; - return ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getSplitLogManagerCoordination().isReplaying(); + CoordinatedStateManager m = server.getCoordinatedStateManager(); + if (m == null) return false; + return ((BaseCoordinatedStateManager)m).getSplitLogManagerCoordination().isReplaying(); } /** diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java index c22294b..31d3fab 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableNamespaceManager.java @@ -45,6 +45,7 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.constraint.ConstraintException; import org.apache.hadoop.hbase.master.handler.CreateTableHandler; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; @@ -92,11 +93,11 @@ public class TableNamespaceManager { // So that it should be initialized later on lazily. long startTime = EnvironmentEdgeManager.currentTime(); int timeout = conf.getInt(NS_INIT_TIMEOUT, DEFAULT_NS_INIT_TIMEOUT); - while (!isTableAssigned()) { + while (!(isTableAssigned() && isTableEnabled())) { if (EnvironmentEdgeManager.currentTime() - startTime + 100 > timeout) { // We can't do anything if ns is not online. throw new IOException("Timedout " + timeout + "ms waiting for namespace table to " + - "be assigned"); + "be assigned and enabled: " + getTableState()); } Thread.sleep(100); } @@ -256,7 +257,7 @@ public class TableNamespaceManager { } // Now check if the table is assigned, if not then fail fast - if (isTableAssigned()) { + if (isTableAssigned() && isTableEnabled()) { try { nsTable = this.masterServices.getConnection().getTable(TableName.NAMESPACE_TABLE_NAME); zkNamespaceManager = new ZKNamespaceManager(masterServices.getZooKeeper()); @@ -272,7 +273,7 @@ public class TableNamespaceManager { ResultScanner scanner = nsTable.getScanner(HTableDescriptor.NAMESPACE_FAMILY_INFO_BYTES); try { for (Result result : scanner) { - byte[] val = CellUtil.cloneValue(result.getColumnLatest( + byte[] val = CellUtil.cloneValue(result.getColumnLatestCell( HTableDescriptor.NAMESPACE_FAMILY_INFO_BYTES, HTableDescriptor.NAMESPACE_COL_DESC_BYTES)); NamespaceDescriptor ns = @@ -296,8 +297,16 @@ public class TableNamespaceManager { return false; } + private TableState.State getTableState() throws IOException { + return masterServices.getTableStateManager().getTableState(TableName.NAMESPACE_TABLE_NAME); + } + + private boolean isTableEnabled() throws IOException { + return getTableState().equals(TableState.State.ENABLED); + } + private boolean isTableAssigned() { - return !masterServices.getAssignmentManager().getRegionStates(). 
- getRegionsOfTable(TableName.NAMESPACE_TABLE_NAME).isEmpty(); + return !masterServices.getAssignmentManager() + .getRegionStates().getRegionsOfTable(TableName.NAMESPACE_TABLE_NAME).isEmpty(); } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableStateManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableStateManager.java new file mode 100644 index 0000000..04cc17c --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableStateManager.java @@ -0,0 +1,216 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.master; + +import java.io.IOException; +import java.util.Map; +import java.util.Set; + +import com.google.common.collect.Maps; +import com.google.common.collect.Sets; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.TableDescriptor; +import org.apache.hadoop.hbase.TableDescriptors; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.TableNotFoundException; +import org.apache.hadoop.hbase.client.TableState; + +/** + * This is a helper class used to manage table states. + * States persisted in tableinfo and cached internally. + */ +@InterfaceAudience.Private +public class TableStateManager { + private static final Log LOG = LogFactory.getLog(TableStateManager.class); + private final TableDescriptors descriptors; + + private final Map tableStates = Maps.newConcurrentMap(); + + public TableStateManager(MasterServices master) { + this.descriptors = master.getTableDescriptors(); + } + + public void start() throws IOException { + Map all = descriptors.getAllDescriptors(); + for (TableDescriptor table : all.values()) { + TableName tableName = table.getHTableDescriptor().getTableName(); + if (LOG.isDebugEnabled()) { + LOG.debug("Adding table state: " + tableName + + ": " + table.getTableState()); + } + tableStates.put(tableName, table.getTableState()); + } + } + + /** + * Set table state to provided. + * Caller should lock table on write. + * @param tableName table to change state for + * @param newState new state + * @throws IOException + */ + public void setTableState(TableName tableName, TableState.State newState) throws IOException { + synchronized (tableStates) { + TableDescriptor descriptor = readDescriptor(tableName); + if (descriptor == null) { + throw new TableNotFoundException(tableName); + } + if (descriptor.getTableState() != newState) { + writeDescriptor( + new TableDescriptor(descriptor.getHTableDescriptor(), newState)); + } + } + } + + /** + * Set table state to provided but only if table in specified states + * Caller should lock table on write. 
+ * @param tableName table to change state for + * @param newState new state + * @param states states to check against + * @throws IOException + */ + public boolean setTableStateIfInStates(TableName tableName, + TableState.State newState, + TableState.State... states) + throws IOException { + synchronized (tableStates) { + TableDescriptor descriptor = readDescriptor(tableName); + if (descriptor == null) { + throw new TableNotFoundException(tableName); + } + if (TableState.isInStates(descriptor.getTableState(), states)) { + writeDescriptor( + new TableDescriptor(descriptor.getHTableDescriptor(), newState)); + return true; + } else { + return false; + } + } + } + + + /** + * Set table state to provided but only if table not in specified states + * Caller should lock table on write. + * @param tableName table to change state for + * @param newState new state + * @param states states to check against + * @throws IOException + */ + public boolean setTableStateIfNotInStates(TableName tableName, + TableState.State newState, + TableState.State... states) + throws IOException { + synchronized (tableStates) { + TableDescriptor descriptor = readDescriptor(tableName); + if (descriptor == null) { + throw new TableNotFoundException(tableName); + } + if (!TableState.isInStates(descriptor.getTableState(), states)) { + writeDescriptor( + new TableDescriptor(descriptor.getHTableDescriptor(), newState)); + return true; + } else { + return false; + } + } + } + + public boolean isTableState(TableName tableName, TableState.State... states) { + TableState.State tableState = null; + try { + tableState = getTableState(tableName); + } catch (IOException e) { + LOG.error("Unable to get table state, probably table not exists"); + return false; + } + return tableState != null && TableState.isInStates(tableState, states); + } + + public void setDeletedTable(TableName tableName) throws IOException { + TableState.State remove = tableStates.remove(tableName); + if (remove == null) { + LOG.warn("Moving table " + tableName + " state to deleted but was " + + "already deleted"); + } + } + + public boolean isTablePresent(TableName tableName) throws IOException { + return getTableState(tableName) != null; + } + + /** + * Return all tables in given states. + * + * @param states filter by states + * @return tables in given states + * @throws IOException + */ + public Set getTablesInStates(TableState.State... states) throws IOException { + Set rv = Sets.newHashSet(); + for (Map.Entry entry : tableStates.entrySet()) { + if (TableState.isInStates(entry.getValue(), states)) + rv.add(entry.getKey()); + } + return rv; + } + + public TableState.State getTableState(TableName tableName) throws IOException { + TableState.State tableState = tableStates.get(tableName); + if (tableState == null) { + TableDescriptor descriptor = readDescriptor(tableName); + if (descriptor != null) + tableState = descriptor.getTableState(); + } + return tableState; + } + + /** + * Write descriptor in place, update cache of states. + * Write lock should be hold by caller. 
+ * + * @param descriptor what to write + */ + private void writeDescriptor(TableDescriptor descriptor) throws IOException { + TableName tableName = descriptor.getHTableDescriptor().getTableName(); + TableState.State state = descriptor.getTableState(); + descriptors.add(descriptor); + LOG.debug("Table " + tableName + " written descriptor for state " + state); + tableStates.put(tableName, state); + LOG.debug("Table " + tableName + " updated state to " + state); + } + + /** + * Read current descriptor for table, update cache of states. + * + * @param table descriptor to read + * @return descriptor + * @throws IOException + */ + private TableDescriptor readDescriptor(TableName tableName) throws IOException { + TableDescriptor descriptor = descriptors.getDescriptor(tableName); + if (descriptor == null) + tableStates.remove(tableName); + else + tableStates.put(tableName, descriptor.getTableState()); + return descriptor; + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/UnAssignCallable.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/UnAssignCallable.java index ce669c4..ccff6f0 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/UnAssignCallable.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/UnAssignCallable.java @@ -41,7 +41,7 @@ public class UnAssignCallable implements Callable { @Override public Object call() throws Exception { - assignmentManager.unassign(hri, true); + assignmentManager.unassign(hri); return null; } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java index 7aca88e..d8dfbd0 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java @@ -43,6 +43,7 @@ import org.apache.hadoop.hbase.HBaseIOException; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.RegionLoad; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.RegionReplicaUtil; import org.apache.hadoop.hbase.conf.ConfigurationObserver; import org.apache.hadoop.hbase.master.LoadBalancer; @@ -50,6 +51,7 @@ import org.apache.hadoop.hbase.master.MasterServices; import org.apache.hadoop.hbase.master.RackManager; import org.apache.hadoop.hbase.master.RegionPlan; import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.Cluster.Action.Type; +import org.apache.hadoop.hbase.security.access.AccessControlLists; import org.apache.hadoop.util.StringUtils; import com.google.common.base.Joiner; @@ -60,8 +62,8 @@ import com.google.common.collect.Sets; /** * The base class for load balancers. It provides the the functions used to by * {@link org.apache.hadoop.hbase.master.AssignmentManager} to assign regions - * in the edge cases. It doesn't provide an implementation of the actual - * balancing algorithm. + * in the edge cases. It doesn't provide an implementation of the + * actual balancing algorithm. * */ public abstract class BaseLoadBalancer implements LoadBalancer { @@ -793,6 +795,12 @@ public abstract class BaseLoadBalancer implements LoadBalancer { private static final Random RANDOM = new Random(System.currentTimeMillis()); private static final Log LOG = LogFactory.getLog(BaseLoadBalancer.class); + // Regions of these tables are put on the master by default. 
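// A minimal sketch of how master-side code can drive the TableStateManager introduced above,
// reached through the MasterServices#getTableStateManager() accessor added in this change;
// the enable-preparation scenario itself is hypothetical.
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.TableState;
import org.apache.hadoop.hbase.master.MasterServices;
import org.apache.hadoop.hbase.master.TableStateManager;

public class TableStateExample {
  // Skips tables that are already enabled or being enabled, then marks the table ENABLING.
  static boolean prepareEnable(MasterServices master, TableName tableName) throws IOException {
    TableStateManager tsm = master.getTableStateManager();
    if (tsm.isTableState(tableName, TableState.State.ENABLED, TableState.State.ENABLING)) {
      return false; // nothing to do
    }
    // Moves DISABLED -> ENABLING only if the table is currently DISABLED; the manager
    // synchronizes the read-check-write against other state writers.
    return tsm.setTableStateIfInStates(tableName, TableState.State.ENABLING,
        TableState.State.DISABLED);
  }
}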
+ private static final String[] DEFAULT_TABLES_ON_MASTER = + new String[] {AccessControlLists.ACL_TABLE_NAME.getNameAsString(), + TableName.NAMESPACE_TABLE_NAME.getNameAsString(), + TableName.META_TABLE_NAME.getNameAsString()}; + public static final String TABLES_ON_MASTER = "hbase.balancer.tablesOnMaster"; @@ -802,12 +810,19 @@ public abstract class BaseLoadBalancer implements LoadBalancer { protected ServerName masterServerName; protected MasterServices services; + /** + * By default, regions of some small system tables such as meta, + * namespace, and acl are assigned to the active master. If you don't + * want to assign any region to the active master, you need to + * configure "hbase.balancer.tablesOnMaster" to "none". + */ protected static String[] getTablesOnMaster(Configuration conf) { String valueString = conf.get(TABLES_ON_MASTER); - if (valueString != null) { - valueString = valueString.trim(); + if (valueString == null) { + return DEFAULT_TABLES_ON_MASTER; } - if (valueString == null || valueString.equalsIgnoreCase("none")) { + valueString = valueString.trim(); + if (valueString.equalsIgnoreCase("none")) { return null; } return StringUtils.getStrings(valueString); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/FavoredNodeLoadBalancer.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/FavoredNodeLoadBalancer.java index 3560447..6db82a5 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/FavoredNodeLoadBalancer.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/FavoredNodeLoadBalancer.java @@ -41,16 +41,17 @@ import org.apache.hadoop.hbase.master.balancer.FavoredNodesPlan.Position; import org.apache.hadoop.hbase.util.Pair; /** - * An implementation of the {@link org.apache.hadoop.hbase.master.LoadBalancer} - * that assigns favored nodes for each region. There is a Primary RegionServer - * that hosts the region, and then there is Secondary and Tertiary RegionServers. - * Currently, the favored nodes information is used in creating HDFS files - the Primary - * RegionServer passes the primary, secondary, tertiary node addresses as hints to the - * DistributedFileSystem API for creating files on the filesystem. These nodes are treated - * as hints by the HDFS to place the blocks of the file. This alleviates the problem to - * do with reading from remote nodes (since we can make the Secondary RegionServer as the - * new Primary RegionServer) after a region is recovered. This should help provide - * consistent read latencies for the regions even when their primary region servers die. + * An implementation of the {@link org.apache.hadoop.hbase.master.LoadBalancer} that + * assigns favored nodes for each region. There is a Primary RegionServer that hosts + * the region, and then there is Secondary and Tertiary RegionServers. Currently, the + * favored nodes information is used in creating HDFS files - the Primary RegionServer + * passes the primary, secondary, tertiary node addresses as hints to the + * DistributedFileSystem API for creating files on the filesystem. These nodes are + * treated as hints by the HDFS to place the blocks of the file. This alleviates the + * problem to do with reading from remote nodes (since we can make the Secondary + * RegionServer as the new Primary RegionServer) after a region is recovered. This + * should help provide consistent read latencies for the regions even when their + * primary region servers die. 
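// A small sketch of the hbase.balancer.tablesOnMaster knob documented above: left unset, the
// balancer keeps meta, namespace and acl regions on the active master; "none" opts out. Only the
// key and the "none" semantics come from this hunk, the surrounding configuration is assumed.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class TablesOnMasterConfig {
  public static Configuration noRegionsOnMaster() {
    Configuration conf = HBaseConfiguration.create();
    // With "none", BaseLoadBalancer.getTablesOnMaster() returns null and the active master
    // hosts no regions; leaving the key unset falls back to the meta/namespace/acl default.
    conf.set("hbase.balancer.tablesOnMaster", "none");
    return conf;
  }
}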
* */ @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG) diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java index 9d8c7cb..e58f855 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java @@ -229,10 +229,19 @@ public class StochasticLoadBalancer extends BaseLoadBalancer { clusterState.remove(masterServerName); } + // On clusters with lots of HFileLinks or lots of reference files, + // instantiating the storefile infos can be quite expensive. + // Allow turning this feature off if the locality cost is not going to + // be used in any computations. + RegionLocationFinder finder = null; + if (this.localityCost != null && this.localityCost.getMultiplier() > 0) { + finder = this.regionFinder; + } + //The clusterState that is given to this method contains the state //of all the regions in the table(s) (that's true today) // Keep track of servers to iterate through them. - Cluster cluster = new Cluster(clusterState, loads, regionFinder, rackManager); + Cluster cluster = new Cluster(clusterState, loads, finder, rackManager); if (!needsBalance(cluster)) { return null; } @@ -1017,7 +1026,9 @@ public class StochasticLoadBalancer extends BaseLoadBalancer { } if (index < 0) { - cost += 1; + if (regionLocations.length > 0) { + cost += 1; + } } else { cost += (double) index / (double) regionLocations.length; } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java index 6e2f4fd..294131e 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/cleaner/CleanerChore.java @@ -28,9 +28,9 @@ import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.Chore; -import org.apache.hadoop.hbase.RemoteExceptionHandler; import org.apache.hadoop.hbase.Stoppable; import org.apache.hadoop.hbase.util.FSUtils; +import org.apache.hadoop.ipc.RemoteException; import com.google.common.annotations.VisibleForTesting; import com.google.common.collect.ImmutableSet; @@ -123,7 +123,8 @@ public abstract class CleanerChore extends Chore FileStatus[] files = FSUtils.listStatus(this.fs, this.oldFileDir); checkAndDeleteEntries(files); } catch (IOException e) { - e = RemoteExceptionHandler.checkIOException(e); + e = e instanceof RemoteException ? + ((RemoteException)e).unwrapRemoteException() : e; LOG.warn("Error while cleaning the logs", e); } } @@ -182,7 +183,8 @@ public abstract class CleanerChore extends Chore // if the directory still has children, we can't delete it, so we are done if (!allChildrenDeleted) return false; } catch (IOException e) { - e = RemoteExceptionHandler.checkIOException(e); + e = e instanceof RemoteException ? 
+ ((RemoteException)e).unwrapRemoteException() : e; LOG.warn("Error while listing directory: " + dir, e); // couldn't list directory, so don't try to delete, and don't return success return false; @@ -224,7 +226,7 @@ public abstract class CleanerChore extends Chore Iterable deletableValidFiles = validFiles; // check each of the cleaners for the valid files for (T cleaner : cleanersChain) { - if (cleaner.isStopped() || this.stopper.isStopped()) { + if (cleaner.isStopped() || this.getStopper().isStopped()) { LOG.warn("A file cleaner" + this.getName() + " is stopped, won't delete any more files in:" + this.oldFileDir); return false; @@ -261,7 +263,8 @@ public abstract class CleanerChore extends Chore + ", but couldn't. Run cleaner chain and attempt to delete on next pass."); } } catch (IOException e) { - e = RemoteExceptionHandler.checkIOException(e); + e = e instanceof RemoteException ? + ((RemoteException)e).unwrapRemoteException() : e; LOG.warn("Error while deleting: " + filePath, e); } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ClosedRegionHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ClosedRegionHandler.java deleted file mode 100644 index ac61dc0..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ClosedRegionHandler.java +++ /dev/null @@ -1,107 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.master.handler; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.Server; -import org.apache.hadoop.hbase.executor.EventHandler; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.master.AssignmentManager; -import org.apache.hadoop.hbase.master.RegionState; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; - -/** - * Handles CLOSED region event on Master. - *

- * If table is being disabled, deletes ZK unassigned node and removes from
- * regions in transition.
- * <p>
          - * Otherwise, assigns the region to another server. - */ -@InterfaceAudience.Private -public class ClosedRegionHandler extends EventHandler implements TotesHRegionInfo { - private static final Log LOG = LogFactory.getLog(ClosedRegionHandler.class); - private final AssignmentManager assignmentManager; - private final HRegionInfo regionInfo; - private final ClosedPriority priority; - - private enum ClosedPriority { - META (1), - USER (2); - - private final int value; - ClosedPriority(int value) { - this.value = value; - } - public int getValue() { - return value; - } - }; - - public ClosedRegionHandler(Server server, AssignmentManager assignmentManager, - HRegionInfo regionInfo) { - super(server, EventType.RS_ZK_REGION_CLOSED); - this.assignmentManager = assignmentManager; - this.regionInfo = regionInfo; - if(regionInfo.isMetaRegion()) { - priority = ClosedPriority.META; - } else { - priority = ClosedPriority.USER; - } - } - - @Override - public int getPriority() { - return priority.getValue(); - } - - @Override - public HRegionInfo getHRegionInfo() { - return this.regionInfo; - } - - @Override - public String toString() { - String name = "UnknownServerName"; - if(server != null && server.getServerName() != null) { - name = server.getServerName().toString(); - } - return getClass().getSimpleName() + "-" + name + "-" + getSeqid(); - } - - @Override - public void process() { - LOG.debug("Handling CLOSED event for " + regionInfo.getEncodedName()); - // Check if this table is being disabled or not - if (this.assignmentManager.getTableStateManager().isTableState(this.regionInfo.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { - assignmentManager.offlineDisabledRegion(regionInfo); - return; - } - // ZK Node is in CLOSED state, assign it. 
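The handler being removed here processed CLOSED events for hbase:meta ahead of user-table events by way of a small priority enum (META = 1, USER = 2). A rough, self-contained sketch of that ordering idea, using plain java.util rather than the HBase executor:

import java.util.Comparator;
import java.util.PriorityQueue;

// Illustration only: drain events lowest-priority-value first, so hbase:meta
// work is handled before ordinary user-table work.
public class PriorityDrain {
  static final class Event {
    final String name;
    final int priority; // 1 = META, 2 = USER, mirroring the enum above
    Event(String name, int priority) { this.name = name; this.priority = priority; }
  }

  public static void main(String[] args) {
    PriorityQueue<Event> queue =
        new PriorityQueue<>(Comparator.comparingInt((Event e) -> e.priority));
    queue.add(new Event("CLOSED user region", 2));
    queue.add(new Event("CLOSED hbase:meta region", 1));
    while (!queue.isEmpty()) {
      System.out.println(queue.poll().name); // meta event printed first
    }
  }
}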
- assignmentManager.getRegionStates().updateRegionState( - regionInfo, RegionState.State.CLOSED); - // This below has to do w/ online enable/disable of a table - assignmentManager.removeClosedRegion(regionInfo); - assignmentManager.assign(regionInfo, true); - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/CreateTableHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/CreateTableHandler.java index 5466090..adf1004 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/CreateTableHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/CreateTableHandler.java @@ -31,14 +31,16 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.CoordinatedStateException; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException; import org.apache.hadoop.hbase.Server; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.TableExistsException; +import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.RegionReplicaUtil; -import org.apache.hadoop.hbase.MetaTableAccessor; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.executor.EventHandler; import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.ipc.RequestContext; @@ -49,12 +51,12 @@ import org.apache.hadoop.hbase.master.MasterFileSystem; import org.apache.hadoop.hbase.master.MasterServices; import org.apache.hadoop.hbase.master.TableLockManager; import org.apache.hadoop.hbase.master.TableLockManager.TableLock; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.security.UserProvider; import org.apache.hadoop.hbase.util.FSTableDescriptors; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.ModifyRegionUtils; +import org.apache.hadoop.hbase.util.ServerRegionReplicaUtil; /** * Handler to create a table. @@ -120,8 +122,6 @@ public class CreateTableHandler extends EventHandler { if (MetaTableAccessor.tableExists(this.server.getConnection(), tableName)) { throw new TableExistsException(tableName); } - - checkAndSetEnablingTable(assignmentManager, tableName); success = true; } finally { if (!success) { @@ -131,47 +131,6 @@ public class CreateTableHandler extends EventHandler { return this; } - static void checkAndSetEnablingTable(final AssignmentManager assignmentManager, - final TableName tableName) throws IOException { - // If we have multiple client threads trying to create the table at the - // same time, given the async nature of the operation, the table - // could be in a state where hbase:meta table hasn't been updated yet in - // the process() function. - // Use enabling state to tell if there is already a request for the same - // table in progress. This will introduce a new zookeeper call. Given - // createTable isn't a frequent operation, that should be ok. - // TODO: now that we have table locks, re-evaluate above -- table locks are not enough. - // We could have cleared the hbase.rootdir and not zk. How can we detect this case? - // Having to clean zk AND hdfs is awkward. 
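The block removed below used the ENABLING state as a check-and-set guard so that only one of several concurrent createTable requests for the same name could proceed; with table state now kept next to the descriptor, the handler relies on the hbase:meta existence check and the table lock instead. A minimal, self-contained sketch of that guard idea, with an in-memory set standing in for the coordination store (this is not the TableStateManager API):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustration only: the first of several concurrent create requests for the
// same table name "wins" the ENABLING slot; later callers see it as existing.
public class EnablingGuard {
  private final Set<String> enabling = ConcurrentHashMap.newKeySet();

  /** Returns true only for the single caller that wins the ENABLING slot. */
  public boolean markEnabling(String table) {
    return enabling.add(table); // atomic check-and-set
  }

  public static void main(String[] args) {
    EnablingGuard guard = new EnablingGuard();
    System.out.println(guard.markEnabling("t1")); // true  -> this create proceeds
    System.out.println(guard.markEnabling("t1")); // false -> TableExistsException in the handler
  }
}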
- try { - if (!assignmentManager.getTableStateManager().setTableStateIfNotInStates(tableName, - ZooKeeperProtos.Table.State.ENABLING, - ZooKeeperProtos.Table.State.ENABLING, - ZooKeeperProtos.Table.State.ENABLED)) { - throw new TableExistsException(tableName); - } - } catch (CoordinatedStateException e) { - throw new IOException("Unable to ensure that the table will be" + - " enabling because of a ZooKeeper issue", e); - } - } - - static void removeEnablingTable(final AssignmentManager assignmentManager, - final TableName tableName) { - // Try deleting the enabling node in case of error - // If this does not happen then if the client tries to create the table - // again with the same Active master - // It will block the creation saying TableAlreadyExists. - try { - assignmentManager.getTableStateManager().checkAndRemoveTableState(tableName, - ZooKeeperProtos.Table.State.ENABLING, false); - } catch (CoordinatedStateException e) { - // Keeper exception should not happen here - LOG.error("Got a keeper exception while removing the ENABLING table znode " - + tableName, e); - } - } - @Override public String toString() { String name = "UnknownServerName"; @@ -215,12 +174,8 @@ public class CreateTableHandler extends EventHandler { */ protected void completed(final Throwable exception) { releaseTableLock(); - String msg = exception == null ? null : exception.getMessage(); LOG.info("Table, " + this.hTableDescriptor.getTableName() + ", creation " + - msg == null ? "successful" : "failed. " + msg); - if (exception != null) { - removeEnablingTable(this.assignmentManager, this.hTableDescriptor.getTableName()); - } + (exception == null ? "successful" : "failed. " + exception)); } /** @@ -243,9 +198,12 @@ public class CreateTableHandler extends EventHandler { FileSystem fs = fileSystemManager.getFileSystem(); // 1. Create Table Descriptor + // using a copy of descriptor, table will be created enabling first + TableDescriptor underConstruction = new TableDescriptor( + this.hTableDescriptor, TableState.State.ENABLING); Path tempTableDir = FSUtils.getTableDir(tempdir, tableName); new FSTableDescriptors(this.conf).createTableDescriptorForTableDirectory( - tempTableDir, this.hTableDescriptor, false); + tempTableDir, underConstruction, false); Path tableDir = FSUtils.getTableDir(fileSystemManager.getRootDir(), tableName); // 2. Create Regions @@ -262,27 +220,27 @@ public class CreateTableHandler extends EventHandler { // 5. Add replicas if needed regionInfos = addReplicas(hTableDescriptor, regionInfos); - // 6. Trigger immediate assignment of the regions in round-robin fashion + // 6. Setup replication for region replicas if needed + if (hTableDescriptor.getRegionReplication() > 1) { + ServerRegionReplicaUtil.setupRegionReplicaReplication(conf); + } + + // 7. Trigger immediate assignment of the regions in round-robin fashion ModifyRegionUtils.assignRegions(assignmentManager, regionInfos); } - // 7. Set table enabled flag up in zk. - try { - assignmentManager.getTableStateManager().setTableState(tableName, - ZooKeeperProtos.Table.State.ENABLED); - } catch (CoordinatedStateException e) { - throw new IOException("Unable to ensure that " + tableName + " will be" + - " enabled because of a ZooKeeper issue", e); - } + // 8. Enable table + assignmentManager.getTableStateManager().setTableState(tableName, + TableState.State.ENABLED); - // 8. Update the tabledescriptor cache. + // 9. Update the tabledescriptor cache. 
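The rewritten create path writes the descriptor with state ENABLING into the temp directory, builds the regions there, and only flips the table to ENABLED once assignment has been triggered. A rough sketch of the underlying stage-then-publish pattern, using plain java.nio.file rather than MasterFileSystem/FSTableDescriptors (the directory and file names here are illustrative):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

// Illustration only: stage the table layout under a temp directory, then
// publish it with a single rename so readers never see a half-created table.
public class StagedCreate {
  public static void main(String[] args) throws IOException {
    Path tmpRoot = Files.createTempDirectory("hbase-tmp");
    Path rootDir = Files.createTempDirectory("hbase-root");

    Path tempTableDir = Files.createDirectories(tmpRoot.resolve("ns").resolve("t1"));
    // Descriptor written while the table is still "under construction".
    Files.write(tempTableDir.resolve(".tableinfo"),
        "state=ENABLING".getBytes(StandardCharsets.UTF_8));

    // Publish: one atomic move from the temp area into the root directory.
    Path tableDir = rootDir.resolve("ns").resolve("t1");
    Files.createDirectories(tableDir.getParent());
    Files.move(tempTableDir, tableDir, StandardCopyOption.ATOMIC_MOVE);

    // Only after the move (and region assignment) would the state flip to ENABLED.
    System.out.println("published: " + Files.readAllLines(tableDir.resolve(".tableinfo")));
  }
}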
((HMaster) this.server).getTableDescriptors().get(tableName); } /** * Create any replicas for the regions (the default replicas that was * already created is passed to the method) - * @param hTableDescriptor + * @param hTableDescriptor descriptor to use * @param regions default replicas * @return the combined list of default and non-default replicas */ diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DeleteTableHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DeleteTableHandler.java index 00bbcb8..1ed0f85 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DeleteTableHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DeleteTableHandler.java @@ -58,7 +58,7 @@ public class DeleteTableHandler extends TableEventHandler { @Override protected void prepareWithTableLock() throws IOException { // The next call fails if no such table. - hTableDescriptor = getTableDescriptor(); + hTableDescriptor = getTableDescriptor().getHTableDescriptor(); } protected void waitRegionInTransition(final List regions) @@ -102,62 +102,66 @@ public class DeleteTableHandler extends TableEventHandler { // 1. Wait because of region in transition waitRegionInTransition(regions); - try { // 2. Remove table from hbase:meta and HDFS - removeTableData(regions); - } finally { - // 3. Update table descriptor cache - LOG.debug("Removing '" + tableName + "' descriptor."); - this.masterServices.getTableDescriptors().remove(tableName); - - AssignmentManager am = this.masterServices.getAssignmentManager(); - - // 4. Clean up regions of the table in RegionStates. - LOG.debug("Removing '" + tableName + "' from region states."); - am.getRegionStates().tableDeleted(tableName); - - // 5. If entry for this table in zk, and up in AssignmentManager, remove it. - LOG.debug("Marking '" + tableName + "' as deleted."); - am.getTableStateManager().setDeletedTable(tableName); - } + removeTableData(regions); if (cpHost != null) { cpHost.postDeleteTableHandler(this.tableName); } } + private void cleanupTableState() throws IOException { + // 3. Update table descriptor cache + LOG.debug("Removing '" + tableName + "' descriptor."); + this.masterServices.getTableDescriptors().remove(tableName); + + AssignmentManager am = this.masterServices.getAssignmentManager(); + + // 4. Clean up regions of the table in RegionStates. + LOG.debug("Removing '" + tableName + "' from region states."); + am.getRegionStates().tableDeleted(tableName); + + // 5. If entry for this table states, remove it. + LOG.debug("Marking '" + tableName + "' as deleted."); + am.getTableStateManager().setDeletedTable(tableName); + } + /** * Removes the table from hbase:meta and archives the HDFS files. */ protected void removeTableData(final List regions) - throws IOException, CoordinatedStateException { - // 1. Remove regions from META - LOG.debug("Deleting regions from META"); - MetaTableAccessor.deleteRegions(this.server.getConnection(), regions); - - // ----------------------------------------------------------------------- - // NOTE: At this point we still have data on disk, but nothing in hbase:meta - // if the rename below fails, hbck will report an inconsistency. - // ----------------------------------------------------------------------- - - // 2. Move the table in /hbase/.tmp - MasterFileSystem mfs = this.masterServices.getMasterFileSystem(); - Path tempTableDir = mfs.moveTableToTemp(tableName); - - // 3. 
Archive regions from FS (temp directory) - FileSystem fs = mfs.getFileSystem(); - for (HRegionInfo hri : regions) { - LOG.debug("Archiving region " + hri.getRegionNameAsString() + " from FS"); - HFileArchiver.archiveRegion(fs, mfs.getRootDir(), - tempTableDir, HRegion.getRegionDir(tempTableDir, hri.getEncodedName())); - } + throws IOException, CoordinatedStateException { + try { + // 1. Remove regions from META + LOG.debug("Deleting regions from META"); + MetaTableAccessor.deleteRegions(this.server.getConnection(), regions); + + // ----------------------------------------------------------------------- + // NOTE: At this point we still have data on disk, but nothing in hbase:meta + // if the rename below fails, hbck will report an inconsistency. + // ----------------------------------------------------------------------- + + // 2. Move the table in /hbase/.tmp + MasterFileSystem mfs = this.masterServices.getMasterFileSystem(); + Path tempTableDir = mfs.moveTableToTemp(tableName); + + // 3. Archive regions from FS (temp directory) + FileSystem fs = mfs.getFileSystem(); + for (HRegionInfo hri : regions) { + LOG.debug("Archiving region " + hri.getRegionNameAsString() + " from FS"); + HFileArchiver.archiveRegion(fs, mfs.getRootDir(), + tempTableDir, HRegion.getRegionDir(tempTableDir, hri.getEncodedName())); + } - // 4. Delete table directory from FS (temp directory) - if (!fs.delete(tempTableDir, true)) { - LOG.error("Couldn't delete " + tempTableDir); - } + // 4. Delete table directory from FS (temp directory) + if (!fs.delete(tempTableDir, true)) { + LOG.error("Couldn't delete " + tempTableDir); + } - LOG.debug("Table '" + tableName + "' archived!"); + LOG.debug("Table '" + tableName + "' archived!"); + } finally { + cleanupTableState(); + } } @Override diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DisableTableHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DisableTableHandler.java index 80eac6c..ee97616 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DisableTableHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DisableTableHandler.java @@ -25,13 +25,13 @@ import java.util.concurrent.ExecutorService; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.CoordinatedStateException; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.TableNotEnabledException; import org.apache.hadoop.hbase.TableNotFoundException; import org.apache.hadoop.hbase.MetaTableAccessor; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.constraint.ConstraintException; import org.apache.hadoop.hbase.executor.EventHandler; import org.apache.hadoop.hbase.executor.EventType; @@ -39,11 +39,10 @@ import org.apache.hadoop.hbase.master.AssignmentManager; import org.apache.hadoop.hbase.master.BulkAssigner; import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.master.MasterCoprocessorHost; +import org.apache.hadoop.hbase.master.RegionState; import org.apache.hadoop.hbase.master.RegionStates; import org.apache.hadoop.hbase.master.TableLockManager; -import org.apache.hadoop.hbase.master.RegionState.State; import org.apache.hadoop.hbase.master.TableLockManager.TableLock; -import 
org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.htrace.Trace; /** @@ -91,16 +90,11 @@ public class DisableTableHandler extends EventHandler { // DISABLED or ENABLED. //TODO: reevaluate this since we have table locks now if (!skipTableStateCheck) { - try { - if (!this.assignmentManager.getTableStateManager().setTableStateIfInStates( - this.tableName, ZooKeeperProtos.Table.State.DISABLING, - ZooKeeperProtos.Table.State.ENABLED)) { - LOG.info("Table " + tableName + " isn't enabled; skipping disable"); - throw new TableNotEnabledException(this.tableName); - } - } catch (CoordinatedStateException e) { - throw new IOException("Unable to ensure that the table will be" + - " disabling because of a coordination engine issue", e); + if (!this.assignmentManager.getTableStateManager().setTableStateIfInStates( + this.tableName, TableState.State.DISABLING, + TableState.State.ENABLED)) { + LOG.info("Table " + tableName + " isn't enabled; skipping disable"); + throw new TableNotEnabledException(this.tableName); } } success = true; @@ -138,8 +132,6 @@ public class DisableTableHandler extends EventHandler { } } catch (IOException e) { LOG.error("Error trying to disable table " + this.tableName, e); - } catch (CoordinatedStateException e) { - LOG.error("Error trying to disable table " + this.tableName, e); } finally { releaseTableLock(); } @@ -155,10 +147,10 @@ public class DisableTableHandler extends EventHandler { } } - private void handleDisableTable() throws IOException, CoordinatedStateException { + private void handleDisableTable() throws IOException { // Set table disabling flag up in zk. this.assignmentManager.getTableStateManager().setTableState(this.tableName, - ZooKeeperProtos.Table.State.DISABLING); + TableState.State.DISABLING); boolean done = false; while (true) { // Get list of online regions that are of this table. Regions that are @@ -187,7 +179,7 @@ public class DisableTableHandler extends EventHandler { } // Flip the table to disabled if success. 
if (done) this.assignmentManager.getTableStateManager().setTableState(this.tableName, - ZooKeeperProtos.Table.State.DISABLED); + TableState.State.DISABLED); LOG.info("Disabled table, " + this.tableName + ", is done=" + done); } @@ -207,13 +199,13 @@ public class DisableTableHandler extends EventHandler { RegionStates regionStates = assignmentManager.getRegionStates(); for (HRegionInfo region: regions) { if (regionStates.isRegionInTransition(region) - && !regionStates.isRegionInState(region, State.FAILED_CLOSE)) { + && !regionStates.isRegionInState(region, RegionState.State.FAILED_CLOSE)) { continue; } final HRegionInfo hri = region; pool.execute(Trace.wrap("DisableTableHandler.BulkDisabler",new Runnable() { public void run() { - assignmentManager.unassign(hri, true); + assignmentManager.unassign(hri); } })); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/EnableTableHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/EnableTableHandler.java index feb5b64..280e3e4 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/EnableTableHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/EnableTableHandler.java @@ -27,7 +27,6 @@ import java.util.Map; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.CoordinatedStateException; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.Server; @@ -35,6 +34,7 @@ import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableNotDisabledException; import org.apache.hadoop.hbase.TableNotFoundException; import org.apache.hadoop.hbase.MetaTableAccessor; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.executor.EventHandler; import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.master.AssignmentManager; @@ -47,7 +47,6 @@ import org.apache.hadoop.hbase.master.RegionStates; import org.apache.hadoop.hbase.master.ServerManager; import org.apache.hadoop.hbase.master.TableLockManager; import org.apache.hadoop.hbase.master.TableLockManager.TableLock; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; @@ -97,15 +96,7 @@ public class EnableTableHandler extends EventHandler { if (!this.skipTableStateCheck) { throw new TableNotFoundException(tableName); } - try { - this.assignmentManager.getTableStateManager().checkAndRemoveTableState(tableName, - ZooKeeperProtos.Table.State.ENABLING, true); - throw new TableNotFoundException(tableName); - } catch (CoordinatedStateException e) { - // TODO : Use HBCK to clear such nodes - LOG.warn("Failed to delete the ENABLING node for the table " + tableName - + ". The table will remain unusable. Run HBCK to manually fix the problem."); - } + this.assignmentManager.getTableStateManager().setDeletedTable(tableName); } // There could be multiple client requests trying to disable or enable @@ -113,16 +104,11 @@ public class EnableTableHandler extends EventHandler { // After that, no other requests can be accepted until the table reaches // DISABLED or ENABLED. 
if (!skipTableStateCheck) { - try { - if (!this.assignmentManager.getTableStateManager().setTableStateIfInStates( - this.tableName, ZooKeeperProtos.Table.State.ENABLING, - ZooKeeperProtos.Table.State.DISABLED)) { - LOG.info("Table " + tableName + " isn't disabled; skipping enable"); - throw new TableNotDisabledException(this.tableName); - } - } catch (CoordinatedStateException e) { - throw new IOException("Unable to ensure that the table will be" + - " enabling because of a coordination engine issue", e); + if (!this.assignmentManager.getTableStateManager().setTableStateIfInStates( + this.tableName, TableState.State.ENABLING, + TableState.State.DISABLED)) { + LOG.info("Table " + tableName + " isn't disabled; skipping enable"); + throw new TableNotDisabledException(this.tableName); } } success = true; @@ -157,11 +143,7 @@ public class EnableTableHandler extends EventHandler { if (cpHost != null) { cpHost.postEnableTableHandler(this.tableName); } - } catch (IOException e) { - LOG.error("Error trying to enable the table " + this.tableName, e); - } catch (CoordinatedStateException e) { - LOG.error("Error trying to enable the table " + this.tableName, e); - } catch (InterruptedException e) { + } catch (IOException | InterruptedException e) { LOG.error("Error trying to enable the table " + this.tableName, e); } finally { releaseTableLock(); @@ -178,14 +160,13 @@ public class EnableTableHandler extends EventHandler { } } - private void handleEnableTable() throws IOException, CoordinatedStateException, + private void handleEnableTable() throws IOException, InterruptedException { // I could check table is disabling and if so, not enable but require // that user first finish disabling but that might be obnoxious. - // Set table enabling flag up in zk. this.assignmentManager.getTableStateManager().setTableState(this.tableName, - ZooKeeperProtos.Table.State.ENABLING); + TableState.State.ENABLING); boolean done = false; ServerManager serverManager = ((HMaster)this.server).getServerManager(); // Get the regions of this table. We're done when all listed @@ -196,7 +177,7 @@ public class EnableTableHandler extends EventHandler { server.getZooKeeper()); } else { tableRegionsAndLocations = MetaTableAccessor.getTableRegionsAndLocations( - server.getZooKeeper(), server.getConnection(), tableName, true); + server.getConnection(), tableName, true); } int countOfRegionsInTable = tableRegionsAndLocations.size(); @@ -243,7 +224,7 @@ public class EnableTableHandler extends EventHandler { if (done) { // Flip the table to enabled. this.assignmentManager.getTableStateManager().setTableState( - this.tableName, ZooKeeperProtos.Table.State.ENABLED); + this.tableName, TableState.State.ENABLED); LOG.info("Table '" + this.tableName + "' was successfully enabled. Status: done=" + done); } else { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/LogReplayHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/LogReplayHandler.java index 18e564a..008a04e 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/LogReplayHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/LogReplayHandler.java @@ -34,9 +34,9 @@ import org.apache.hadoop.hbase.master.MasterServices; * Handle logReplay work from SSH. Having a separate handler is not to block SSH in re-assigning * regions from dead servers. 
Otherwise, available SSH handlers could be blocked by logReplay work * (from {@link org.apache.hadoop.hbase.master.MasterFileSystem#splitLog(ServerName)}). - * During logReplay, if a receiving RS(say A) fails again, regions on A won't be able to be - * assigned to another live RS which causes the log replay unable to complete because WAL edits - * replay depends on receiving RS to be live + * During logReplay, if a receiving RS(say A) fails again, regions on A won't be able + * to be assigned to another live RS which causes the log replay unable to complete + * because WAL edits replay depends on receiving RS to be live */ @InterfaceAudience.Private public class LogReplayHandler extends EventHandler { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java index 0e72496..23e41d2 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/MetaServerShutdownHandler.java @@ -89,12 +89,6 @@ public class MetaServerShutdownHandler extends ServerShutdownHandler { // timeout if (am.isCarryingMeta(serverName)) { LOG.info("Server " + serverName + " was carrying META. Trying to assign."); - am.regionOffline(HRegionInfo.FIRST_META_REGIONINFO); - verifyAndAssignMetaWithRetries(); - } else if (!server.getMetaTableLocator().isLocationAvailable(this.server.getZooKeeper())) { - // the meta location as per master is null. This could happen in case when meta assignment - // in previous run failed, while meta znode has been updated to null. We should try to - // assign the meta again. verifyAndAssignMetaWithRetries(); } else { LOG.info("META has been assigned to otherwhere, skip assigning."); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ModifyTableHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ModifyTableHandler.java index baa8513..b35de6a 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ModifyTableHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ModifyTableHandler.java @@ -37,12 +37,12 @@ import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.master.MasterCoprocessorHost; import org.apache.hadoop.hbase.master.MasterFileSystem; import org.apache.hadoop.hbase.master.MasterServices; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.apache.hadoop.hbase.util.Bytes; @InterfaceAudience.Private @@ -65,8 +65,9 @@ public class ModifyTableHandler extends TableEventHandler { // Check operation is possible on the table in its current state // Also checks whether the table exists if (masterServices.getAssignmentManager().getTableStateManager() - .isTableState(this.htd.getTableName(), ZooKeeperProtos.Table.State.ENABLED) - && this.htd.getRegionReplication() != getTableDescriptor().getRegionReplication()) { + .isTableState(this.htd.getTableName(), TableState.State.ENABLED) + && this.htd.getRegionReplication() != getTableDescriptor() + .getHTableDescriptor().getRegionReplication()) { throw new 
IOException("REGION_REPLICATION change is not supported for enabled tables"); } } @@ -79,11 +80,14 @@ public class ModifyTableHandler extends TableEventHandler { cpHost.preModifyTableHandler(this.tableName, this.htd); } // Update descriptor - HTableDescriptor oldHtd = getTableDescriptor(); - this.masterServices.getTableDescriptors().add(this.htd); - deleteFamilyFromFS(hris, oldHtd.getFamiliesKeys()); - removeReplicaColumnsIfNeeded(this.htd.getRegionReplication(), oldHtd.getRegionReplication(), - htd.getTableName()); + HTableDescriptor oldDescriptor = + this.masterServices.getTableDescriptors().get(this.tableName); + this.masterServices.getTableDescriptors().add(htd); + deleteFamilyFromFS(hris, oldDescriptor.getFamiliesKeys()); + removeReplicaColumnsIfNeeded( + this.htd.getRegionReplication(), + oldDescriptor.getRegionReplication(), + this.htd.getTableName()); if (cpHost != null) { cpHost.postModifyTableHandler(this.tableName, this.htd); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/OpenedRegionHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/OpenedRegionHandler.java deleted file mode 100644 index a8747a7..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/OpenedRegionHandler.java +++ /dev/null @@ -1,103 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.master.handler; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.NamespaceDescriptor; -import org.apache.hadoop.hbase.Server; -import org.apache.hadoop.hbase.coordination.OpenRegionCoordination; -import org.apache.hadoop.hbase.executor.EventHandler; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.master.AssignmentManager; - -/** - * Handles OPENED region event on Master. 
- */ -@InterfaceAudience.Private -public class OpenedRegionHandler extends EventHandler implements TotesHRegionInfo { - private static final Log LOG = LogFactory.getLog(OpenedRegionHandler.class); - private final AssignmentManager assignmentManager; - private final HRegionInfo regionInfo; - private final OpenedPriority priority; - - private OpenRegionCoordination coordination; - private OpenRegionCoordination.OpenRegionDetails ord; - - private enum OpenedPriority { - META (1), - SYSTEM (2), - USER (3); - - private final int value; - OpenedPriority(int value) { - this.value = value; - } - public int getValue() { - return value; - } - }; - - public OpenedRegionHandler(Server server, - AssignmentManager assignmentManager, HRegionInfo regionInfo, - OpenRegionCoordination coordination, - OpenRegionCoordination.OpenRegionDetails ord) { - super(server, EventType.RS_ZK_REGION_OPENED); - this.assignmentManager = assignmentManager; - this.regionInfo = regionInfo; - this.coordination = coordination; - this.ord = ord; - if(regionInfo.isMetaRegion()) { - priority = OpenedPriority.META; - } else if(regionInfo.getTable() - .getNamespaceAsString().equals(NamespaceDescriptor.SYSTEM_NAMESPACE_NAME_STR)) { - priority = OpenedPriority.SYSTEM; - } else { - priority = OpenedPriority.USER; - } - } - - @Override - public int getPriority() { - return priority.getValue(); - } - - @Override - public HRegionInfo getHRegionInfo() { - return this.regionInfo; - } - - @Override - public String toString() { - String name = "UnknownServerName"; - if(server != null && server.getServerName() != null) { - name = server.getServerName().toString(); - } - return getClass().getSimpleName() + "-" + name + "-" + getSeqid(); - } - - @Override - public void process() { - if (!coordination.commitOpenOnMasterSide(assignmentManager,regionInfo, ord)) { - assignmentManager.unassign(regionInfo); - } - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java index 907d5ca..5b7b27b 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/ServerShutdownHandler.java @@ -27,12 +27,12 @@ import java.util.concurrent.locks.Lock; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.executor.EventHandler; import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.master.AssignmentManager; @@ -40,15 +40,10 @@ import org.apache.hadoop.hbase.master.DeadServer; import org.apache.hadoop.hbase.master.MasterFileSystem; import org.apache.hadoop.hbase.master.MasterServices; import org.apache.hadoop.hbase.master.RegionState; -import org.apache.hadoop.hbase.master.RegionState.State; import org.apache.hadoop.hbase.master.RegionStates; import org.apache.hadoop.hbase.master.ServerManager; import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import 
org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.SplitLogTask.RecoveryMode; -import org.apache.hadoop.hbase.util.ConfigUtil; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.zookeeper.KeeperException; /** * Process server shutdown. @@ -158,36 +153,21 @@ public class ServerShutdownHandler extends EventHandler { // {@link SplitTransaction}. We'd also have to be figure another way for // doing the below hbase:meta daughters fixup. Set hris = null; - while (!this.server.isStopped()) { - try { - server.getMetaTableLocator().waitMetaRegionLocation(server.getZooKeeper()); - if (BaseLoadBalancer.tablesOnMaster(server.getConfiguration())) { - while (!this.server.isStopped() && serverManager.countOfRegionServers() < 2) { - // Wait till at least another regionserver is up besides the active master - // so that we don't assign all regions to the active master. - // This is best of efforts, because newly joined regionserver - // could crash right after that. - Thread.sleep(100); - } - } - // Skip getting user regions if the server is stopped. - if (!this.server.isStopped()) { - if (ConfigUtil.useZKForAssignment(server.getConfiguration())) { - hris = MetaTableAccessor.getServerUserRegions(this.server.getConnection(), - this.serverName).keySet(); - } else { - // Not using ZK for assignment, regionStates has everything we want - hris = am.getRegionStates().getServerRegions(serverName); - } + try { + server.getMetaTableLocator().waitMetaRegionLocation(server.getZooKeeper()); + if (BaseLoadBalancer.tablesOnMaster(server.getConfiguration())) { + while (!this.server.isStopped() && serverManager.countOfRegionServers() < 2) { + // Wait till at least another regionserver is up besides the active master + // so that we don't assign all regions to the active master. + // This is best of efforts, because newly joined regionserver + // could crash right after that. 
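With the ZK-assignment branch gone, the shutdown handler still parks until at least one regionserver besides the active master has reported in, so the dead server's regions are not all piled onto the master. A sketch of that best-effort polling loop, written as a hypothetical helper rather than the ServerShutdownHandler API:

import java.util.function.BooleanSupplier;
import java.util.function.IntSupplier;

// Illustration only: poll until a minimum number of live servers is available
// or the caller is stopped.
public final class WaitForServers {
  static boolean await(IntSupplier liveServers, BooleanSupplier stopped, int min)
      throws InterruptedException {
    while (!stopped.getAsBoolean() && liveServers.getAsInt() < min) {
      Thread.sleep(100); // best effort: a newly joined server may still crash
    }
    return !stopped.getAsBoolean();
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // Simulate a second server joining after ~300 ms.
    boolean ok = await(() -> System.currentTimeMillis() - start > 300 ? 2 : 1,
        () -> false, 2);
    System.out.println("enough servers: " + ok);
  }
}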
+ Thread.sleep(100); } - break; - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - throw (InterruptedIOException)new InterruptedIOException().initCause(e); - } catch (IOException ioe) { - LOG.info("Received exception accessing hbase:meta during server shutdown of " + - serverName + ", retrying hbase:meta read", ioe); } + hris = am.getRegionStates().getServerRegions(serverName); + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + throw (InterruptedIOException)new InterruptedIOException().initCause(e); } if (this.server.isStopped()) { throw new IOException("Server is stopped"); @@ -243,7 +223,7 @@ public class ServerShutdownHandler extends EventHandler { Lock lock = am.acquireRegionLock(encodedName); try { RegionState rit = regionStates.getRegionTransitionState(hri); - if (processDeadRegion(hri, am)) { + if (processDeadRegion(hri, am)) { ServerName addressFromAM = regionStates.getRegionServerOfRegion(hri); if (addressFromAM != null && !addressFromAM.equals(this.serverName)) { // If this region is in transition on the dead server, it must be @@ -258,31 +238,24 @@ public class ServerShutdownHandler extends EventHandler { LOG.info("Skip assigning region in transition on other server" + rit); continue; } - try{ - //clean zk node - LOG.info("Reassigning region with rs = " + rit + " and deleting zk node if exists"); - ZKAssign.deleteNodeFailSilent(services.getZooKeeper(), hri); - regionStates.updateRegionState(hri, State.OFFLINE); - } catch (KeeperException ke) { - this.server.abort("Unexpected ZK exception deleting unassigned node " + hri, ke); - return; - } + LOG.info("Reassigning region with rs = " + rit); + regionStates.updateRegionState(hri, RegionState.State.OFFLINE); } else if (regionStates.isRegionInState( - hri, State.SPLITTING_NEW, State.MERGING_NEW)) { - regionStates.updateRegionState(hri, State.OFFLINE); + hri, RegionState.State.SPLITTING_NEW, RegionState.State.MERGING_NEW)) { + regionStates.updateRegionState(hri, RegionState.State.OFFLINE); } toAssignRegions.add(hri); } else if (rit != null) { - if ((rit.isPendingCloseOrClosing() || rit.isOffline()) + if ((rit.isClosing() || rit.isFailedClose() || rit.isOffline()) && am.getTableStateManager().isTableState(hri.getTable(), - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING)) { + TableState.State.DISABLED, TableState.State.DISABLING) || + am.getReplicasToClose().contains(hri)) { // If the table was partially disabled and the RS went down, we should clear the RIT // and remove the node for the region. // The rit that we use may be stale in case the table was in DISABLING state // but though we did assign we will not be clearing the znode in CLOSING state. // Doing this will have no harm. See HBASE-5927 - regionStates.updateRegionState(hri, State.OFFLINE); - am.deleteClosingOrClosedNode(hri, rit.getServerName()); + regionStates.updateRegionState(hri, RegionState.State.OFFLINE); am.offlineDisabledRegion(hri); } else { LOG.warn("THIS SHOULD NOT HAPPEN: unexpected region in transition " @@ -364,7 +337,7 @@ public class ServerShutdownHandler extends EventHandler { } // If table is not disabled but the region is offlined, boolean disabled = assignmentManager.getTableStateManager().isTableState(hri.getTable(), - ZooKeeperProtos.Table.State.DISABLED); + TableState.State.DISABLED); if (disabled){ LOG.info("The table " + hri.getTable() + " was disabled. 
Hence not proceeding."); @@ -377,7 +350,7 @@ public class ServerShutdownHandler extends EventHandler { return false; } boolean disabling = assignmentManager.getTableStateManager().isTableState(hri.getTable(), - ZooKeeperProtos.Table.State.DISABLING); + TableState.State.DISABLING); if (disabling) { LOG.info("The table " + hri.getTable() + " is disabled. Hence not assigning region" + hri.getEncodedName()); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableAddFamilyHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableAddFamilyHandler.java index 1397b29..b5c03f8 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableAddFamilyHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableAddFamilyHandler.java @@ -22,10 +22,10 @@ import java.io.IOException; import java.util.List; import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.InvalidFamilyOperationException; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.executor.EventType; @@ -50,8 +50,8 @@ public class TableAddFamilyHandler extends TableEventHandler { @Override protected void prepareWithTableLock() throws IOException { super.prepareWithTableLock(); - HTableDescriptor htd = getTableDescriptor(); - if (htd.hasFamily(familyDesc.getName())) { + TableDescriptor htd = getTableDescriptor(); + if (htd.getHTableDescriptor().hasFamily(familyDesc.getName())) { throw new InvalidFamilyOperationException("Family '" + familyDesc.getNameAsString() + "' already exists so cannot be added"); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableDeleteFamilyHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableDeleteFamilyHandler.java index 285d36d..7b5c5c5 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableDeleteFamilyHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableDeleteFamilyHandler.java @@ -50,7 +50,7 @@ public class TableDeleteFamilyHandler extends TableEventHandler { @Override protected void prepareWithTableLock() throws IOException { super.prepareWithTableLock(); - HTableDescriptor htd = getTableDescriptor(); + HTableDescriptor htd = getTableDescriptor().getHTableDescriptor(); this.familyName = hasColumnFamily(htd, familyName); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java index 1b141fc..af3d302 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableEventHandler.java @@ -36,19 +36,20 @@ import org.apache.hadoop.hbase.InvalidFamilyOperationException; import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotDisabledException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.Connection; import 
org.apache.hadoop.hbase.client.RegionLocator; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.executor.EventHandler; import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.master.BulkReOpen; import org.apache.hadoop.hbase.master.MasterServices; import org.apache.hadoop.hbase.master.TableLockManager.TableLock; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import com.google.common.collect.Lists; import com.google.common.collect.Maps; @@ -130,13 +131,12 @@ public abstract class TableEventHandler extends EventHandler { if (TableName.META_TABLE_NAME.equals(tableName)) { hris = new MetaTableLocator().getMetaRegions(server.getZooKeeper()); } else { - hris = MetaTableAccessor.getTableRegions(server.getZooKeeper(), - server.getConnection(), tableName); + hris = MetaTableAccessor.getTableRegions(server.getConnection(), tableName); } handleTableOperation(hris); if (eventType.isOnlineSchemaChangeSupported() && this.masterServices. getAssignmentManager().getTableStateManager().isTableState( - tableName, ZooKeeperProtos.Table.State.ENABLED)) { + tableName, TableState.State.ENABLED)) { if (reOpenAllRegions(hris)) { LOG.info("Completed table operation " + eventType + " on table " + tableName); @@ -236,10 +236,10 @@ public abstract class TableEventHandler extends EventHandler { * @throws FileNotFoundException * @throws IOException */ - public HTableDescriptor getTableDescriptor() + public TableDescriptor getTableDescriptor() throws FileNotFoundException, IOException { - HTableDescriptor htd = - this.masterServices.getTableDescriptors().get(tableName); + TableDescriptor htd = + this.masterServices.getTableDescriptors().getDescriptor(tableName); if (htd == null) { throw new IOException("HTableDescriptor missing for " + tableName); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableModifyFamilyHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableModifyFamilyHandler.java index 8ce4df6..e7e3a14 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableModifyFamilyHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TableModifyFamilyHandler.java @@ -49,7 +49,7 @@ public class TableModifyFamilyHandler extends TableEventHandler { @Override protected void prepareWithTableLock() throws IOException { super.prepareWithTableLock(); - HTableDescriptor htd = getTableDescriptor(); + HTableDescriptor htd = getTableDescriptor().getHTableDescriptor(); hasColumnFamily(htd, familyDesc.getName()); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TruncateTableHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TruncateTableHandler.java index c264824..a124bf6 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TruncateTableHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/TruncateTableHandler.java @@ -29,14 +29,15 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.CoordinatedStateException; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.Server; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.MetaTableAccessor; +import 
org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.master.AssignmentManager; import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.master.MasterCoprocessorHost; import org.apache.hadoop.hbase.master.MasterFileSystem; import org.apache.hadoop.hbase.master.MasterServices; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.apache.hadoop.hbase.util.FSTableDescriptors; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.ModifyRegionUtils; @@ -93,53 +94,44 @@ public class TruncateTableHandler extends DeleteTableHandler { AssignmentManager assignmentManager = this.masterServices.getAssignmentManager(); - // 1. Set table znode - CreateTableHandler.checkAndSetEnablingTable(assignmentManager, tableName); - try { - // 1. Create Table Descriptor - Path tempTableDir = FSUtils.getTableDir(tempdir, this.tableName); - new FSTableDescriptors(server.getConfiguration()) - .createTableDescriptorForTableDirectory(tempTableDir, this.hTableDescriptor, false); - Path tableDir = FSUtils.getTableDir(mfs.getRootDir(), this.tableName); - - HRegionInfo[] newRegions; - if (this.preserveSplits) { - newRegions = regions.toArray(new HRegionInfo[regions.size()]); - LOG.info("Truncate will preserve " + newRegions.length + " regions"); - } else { - newRegions = new HRegionInfo[1]; - newRegions[0] = new HRegionInfo(this.tableName, null, null); - LOG.info("Truncate will not preserve the regions"); - } - - // 2. Create Regions - List regionInfos = ModifyRegionUtils.createRegions( - masterServices.getConfiguration(), tempdir, - this.hTableDescriptor, newRegions, null); - - // 3. Move Table temp directory to the hbase root location - if (!fs.rename(tempTableDir, tableDir)) { - throw new IOException("Unable to move table from temp=" + tempTableDir + - " to hbase root=" + tableDir); - } - - // 4. Add regions to META - MetaTableAccessor.addRegionsToMeta(masterServices.getConnection(), regionInfos); - - // 5. Trigger immediate assignment of the regions in round-robin fashion - ModifyRegionUtils.assignRegions(assignmentManager, regionInfos); - - // 6. Set table enabled flag up in zk. - try { - assignmentManager.getTableStateManager().setTableState(tableName, - ZooKeeperProtos.Table.State.ENABLED); - } catch (CoordinatedStateException e) { - throw new IOException("Unable to ensure that " + tableName + " will be" + - " enabled because of a ZooKeeper issue", e); - } - } catch (IOException e) { - CreateTableHandler.removeEnablingTable(assignmentManager, tableName); - throw e; + // 1. Create Table Descriptor + TableDescriptor underConstruction = new TableDescriptor( + this.hTableDescriptor, TableState.State.ENABLING); + Path tempTableDir = FSUtils.getTableDir(tempdir, this.tableName); + new FSTableDescriptors(server.getConfiguration()) + .createTableDescriptorForTableDirectory(tempTableDir, underConstruction, false); + Path tableDir = FSUtils.getTableDir(mfs.getRootDir(), this.tableName); + + HRegionInfo[] newRegions; + if (this.preserveSplits) { + newRegions = regions.toArray(new HRegionInfo[regions.size()]); + LOG.info("Truncate will preserve " + newRegions.length + " regions"); + } else { + newRegions = new HRegionInfo[1]; + newRegions[0] = new HRegionInfo(this.tableName, null, null); + LOG.info("Truncate will not preserve the regions"); } + + // 2. Create Regions + List regionInfos = ModifyRegionUtils.createRegions( + masterServices.getConfiguration(), tempdir, + this.hTableDescriptor, newRegions, null); + + // 3. 
Move Table temp directory to the hbase root location + if (!fs.rename(tempTableDir, tableDir)) { + throw new IOException("Unable to move table from temp=" + tempTableDir + + " to hbase root=" + tableDir); + } + + // 4. Add regions to META + MetaTableAccessor.addRegionsToMeta(masterServices.getConnection(), + regionInfos); + + // 5. Trigger immediate assignment of the regions in round-robin fashion + ModifyRegionUtils.assignRegions(assignmentManager, regionInfos); + + // 6. Set table enabled flag up in zk. + assignmentManager.getTableStateManager().setTableState(tableName, + TableState.State.ENABLED); } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/MasterSnapshotVerifier.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/MasterSnapshotVerifier.java index 2d7fbb7..b21f4e7 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/MasterSnapshotVerifier.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/MasterSnapshotVerifier.java @@ -98,8 +98,8 @@ public final class MasterSnapshotVerifier { /** * Verify that the snapshot in the directory is a valid snapshot * @param snapshotDir snapshot directory to check - * @param snapshotServers {@link org.apache.hadoop.hbase.ServerName} - * of the servers that are involved in the snapshot + * @param snapshotServers {@link org.apache.hadoop.hbase.ServerName} of the servers + * that are involved in the snapshot * @throws CorruptedSnapshotException if the snapshot is invalid * @throws IOException if there is an unexpected connection issue to the filesystem */ @@ -155,8 +155,7 @@ public final class MasterSnapshotVerifier { if (TableName.META_TABLE_NAME.equals(tableName)) { regions = new MetaTableLocator().getMetaRegions(services.getZooKeeper()); } else { - regions = MetaTableAccessor.getTableRegions(services.getZooKeeper(), - services.getConnection(), tableName); + regions = MetaTableAccessor.getTableRegions(services.getConnection(), tableName); } // Remove the non-default regions RegionReplicaUtil.removeNonDefaultRegions(regions); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java index 44435a2..b7a891d 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java @@ -31,21 +31,23 @@ import java.util.concurrent.ThreadPoolExecutor; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FSDataInputStream; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseInterfaceAudience; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.Stoppable; import org.apache.hadoop.hbase.MetaTableAccessor; +import org.apache.hadoop.hbase.Stoppable; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.client.TableState; import 
org.apache.hadoop.hbase.errorhandling.ForeignException; import org.apache.hadoop.hbase.executor.ExecutorService; +import org.apache.hadoop.hbase.ipc.RequestContext; import org.apache.hadoop.hbase.master.AssignmentManager; import org.apache.hadoop.hbase.master.MasterCoprocessorHost; import org.apache.hadoop.hbase.master.MasterFileSystem; @@ -63,7 +65,8 @@ import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ProcedureDescription; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription.Type; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; +import org.apache.hadoop.hbase.security.AccessDeniedException; +import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.snapshot.ClientSnapshotDescriptionUtils; import org.apache.hadoop.hbase.snapshot.HBaseSnapshotException; import org.apache.hadoop.hbase.snapshot.RestoreSnapshotException; @@ -211,6 +214,7 @@ public class SnapshotManager extends MasterProcedureManager implements Stoppable // ignore all the snapshots in progress FileStatus[] snapshots = fs.listStatus(snapshotDir, new SnapshotDescriptionUtils.CompletedSnaphotDirectoriesFilter(fs)); + MasterCoprocessorHost cpHost = master.getMasterCoprocessorHost(); // loop through all the completed snapshots for (FileStatus snapshot : snapshots) { Path info = new Path(snapshot.getPath(), SnapshotDescriptionUtils.SNAPSHOTINFO_FILE); @@ -223,7 +227,22 @@ public class SnapshotManager extends MasterProcedureManager implements Stoppable try { in = fs.open(info); SnapshotDescription desc = SnapshotDescription.parseFrom(in); + if (cpHost != null) { + try { + cpHost.preListSnapshot(desc); + } catch (AccessDeniedException e) { + LOG.warn("Current user does not have access to " + desc.getName() + " snapshot. " + + "Either you should be owner of this snapshot or admin user."); + // Skip this and try for next snapshot + continue; + } + } snapshotDescs.add(desc); + + // call coproc post hook + if (cpHost != null) { + cpHost.postListSnapshot(desc); + } } catch (IOException e) { LOG.warn("Found a corrupted snapshot " + snapshot.getPath(), e); } finally { @@ -258,26 +277,28 @@ public class SnapshotManager extends MasterProcedureManager implements Stoppable * @throws IOException For filesystem IOExceptions */ public void deleteSnapshot(SnapshotDescription snapshot) throws SnapshotDoesNotExistException, IOException { - - // call coproc pre hook - MasterCoprocessorHost cpHost = master.getMasterCoprocessorHost(); - if (cpHost != null) { - cpHost.preDeleteSnapshot(snapshot); - } - // check to see if it is completed if (!isSnapshotCompleted(snapshot)) { throw new SnapshotDoesNotExistException(snapshot); } String snapshotName = snapshot.getName(); - LOG.debug("Deleting snapshot: " + snapshotName); // first create the snapshot description and check to see if it exists - MasterFileSystem fs = master.getMasterFileSystem(); + FileSystem fs = master.getMasterFileSystem().getFileSystem(); Path snapshotDir = SnapshotDescriptionUtils.getCompletedSnapshotDir(snapshotName, rootDir); + // Get snapshot info from file system. 
The one passed as parameter is a "fake" snapshotInfo with + // just the "name" and it does not contains the "real" snapshot information + snapshot = SnapshotDescriptionUtils.readSnapshotInfo(fs, snapshotDir); + // call coproc pre hook + MasterCoprocessorHost cpHost = master.getMasterCoprocessorHost(); + if (cpHost != null) { + cpHost.preDeleteSnapshot(snapshot); + } + + LOG.debug("Deleting snapshot: " + snapshotName); // delete the existing snapshot - if (!fs.getFileSystem().delete(snapshotDir, true)) { + if (!fs.delete(snapshotDir, true)) { throw new HBaseSnapshotException("Failed to delete snapshot directory: " + snapshotDir); } @@ -541,13 +562,16 @@ public class SnapshotManager extends MasterProcedureManager implements Stoppable throw new SnapshotCreationException("Table '" + snapshot.getTable() + "' doesn't exist, can't take snapshot.", snapshot); } - + SnapshotDescription.Builder builder = snapshot.toBuilder(); // if not specified, set the snapshot format if (!snapshot.hasVersion()) { - snapshot = snapshot.toBuilder() - .setVersion(SnapshotDescriptionUtils.SNAPSHOT_LAYOUT_VERSION) - .build(); + builder.setVersion(SnapshotDescriptionUtils.SNAPSHOT_LAYOUT_VERSION); + } + User user = RequestContext.getRequestUser(); + if (User.isHBaseSecurityEnabled(master.getConfiguration()) && user != null) { + builder.setOwner(user.getShortName()); } + snapshot = builder.build(); // call pre coproc hook MasterCoprocessorHost cpHost = master.getMasterCoprocessorHost(); @@ -559,14 +583,14 @@ public class SnapshotManager extends MasterProcedureManager implements Stoppable TableName snapshotTable = TableName.valueOf(snapshot.getTable()); AssignmentManager assignmentMgr = master.getAssignmentManager(); if (assignmentMgr.getTableStateManager().isTableState(snapshotTable, - ZooKeeperProtos.Table.State.ENABLED)) { + TableState.State.ENABLED)) { LOG.debug("Table enabled, starting distributed snapshot."); snapshotEnabledTable(snapshot); LOG.debug("Started snapshot: " + ClientSnapshotDescriptionUtils.toString(snapshot)); } // For disabled table, snapshot is created by the master else if (assignmentMgr.getTableStateManager().isTableState(snapshotTable, - ZooKeeperProtos.Table.State.DISABLED)) { + TableState.State.DISABLED)) { LOG.debug("Table is disabled, running snapshot entirely on master."); snapshotDisabledTable(snapshot); LOG.debug("Started snapshot: " + ClientSnapshotDescriptionUtils.toString(snapshot)); @@ -681,10 +705,12 @@ public class SnapshotManager extends MasterProcedureManager implements Stoppable throw new SnapshotDoesNotExistException(reqSnapshot); } - // read snapshot information - SnapshotDescription fsSnapshot = SnapshotDescriptionUtils.readSnapshotInfo(fs, snapshotDir); + // Get snapshot info from file system. The reqSnapshot is a "fake" snapshotInfo with + // just the snapshot "name" and table name to restore. It does not contains the "real" snapshot + // information. 
+ SnapshotDescription snapshot = SnapshotDescriptionUtils.readSnapshotInfo(fs, snapshotDir); SnapshotManifest manifest = SnapshotManifest.open(master.getConfiguration(), fs, - snapshotDir, fsSnapshot); + snapshotDir, snapshot); HTableDescriptor snapshotTableDesc = manifest.getTableDescriptor(); TableName tableName = TableName.valueOf(reqSnapshot.getTable()); @@ -696,10 +722,10 @@ public class SnapshotManager extends MasterProcedureManager implements Stoppable // Execute the restore/clone operation if (MetaTableAccessor.tableExists(master.getConnection(), tableName)) { - if (master.getAssignmentManager().getTableStateManager().isTableState( - TableName.valueOf(fsSnapshot.getTable()), ZooKeeperProtos.Table.State.ENABLED)) { + if (master.getTableStateManager().isTableState( + TableName.valueOf(snapshot.getTable()), TableState.State.ENABLED)) { throw new UnsupportedOperationException("Table '" + - TableName.valueOf(fsSnapshot.getTable()) + "' must be disabled in order to " + + TableName.valueOf(snapshot.getTable()) + "' must be disabled in order to " + "perform a restore operation" + "."); } @@ -708,8 +734,8 @@ public class SnapshotManager extends MasterProcedureManager implements Stoppable if (cpHost != null) { cpHost.preRestoreSnapshot(reqSnapshot, snapshotTableDesc); } - restoreSnapshot(fsSnapshot, snapshotTableDesc); - LOG.info("Restore snapshot=" + fsSnapshot.getName() + " as table=" + tableName); + restoreSnapshot(snapshot, snapshotTableDesc); + LOG.info("Restore snapshot=" + snapshot.getName() + " as table=" + tableName); if (cpHost != null) { cpHost.postRestoreSnapshot(reqSnapshot, snapshotTableDesc); @@ -719,8 +745,8 @@ public class SnapshotManager extends MasterProcedureManager implements Stoppable if (cpHost != null) { cpHost.preCloneSnapshot(reqSnapshot, htd); } - cloneSnapshot(fsSnapshot, htd); - LOG.info("Clone snapshot=" + fsSnapshot.getName() + " as table=" + tableName); + cloneSnapshot(snapshot, htd); + LOG.info("Clone snapshot=" + snapshot.getName() + " as table=" + tableName); if (cpHost != null) { cpHost.postCloneSnapshot(reqSnapshot, htd); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/TakeSnapshotHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/TakeSnapshotHandler.java index 5ac9cbc..5fd4aaa 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/TakeSnapshotHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/TakeSnapshotHandler.java @@ -174,7 +174,7 @@ public abstract class TakeSnapshotHandler extends EventHandler implements Snapsh server.getZooKeeper()); } else { regionsAndLocations = MetaTableAccessor.getTableRegionsAndLocations( - server.getZooKeeper(), server.getConnection(), snapshotTable, false); + server.getConnection(), snapshotTable, false); } // run the snapshot diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/migration/NamespaceUpgrade.java hbase-server/src/main/java/org/apache/hadoop/hbase/migration/NamespaceUpgrade.java deleted file mode 100644 index d1bd167..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/migration/NamespaceUpgrade.java +++ /dev/null @@ -1,573 +0,0 @@ -/** - * The Apache Software Foundation - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. 
The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.migration; - -import java.io.IOException; -import java.util.Arrays; -import java.util.Comparator; -import java.util.List; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FSDataInputStream; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.fs.PathFilter; -import org.apache.hadoop.hbase.Cell; -import org.apache.hadoop.hbase.CellUtil; -import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.NamespaceDescriptor; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.MetaTableAccessor; -import org.apache.hadoop.hbase.client.Delete; -import org.apache.hadoop.hbase.client.Get; -import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.regionserver.HRegionFileSystem; -import org.apache.hadoop.hbase.wal.WAL; -import org.apache.hadoop.hbase.wal.WALFactory; -import org.apache.hadoop.hbase.security.access.AccessControlLists; -import org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.FSTableDescriptors; -import org.apache.hadoop.hbase.util.FSUtils; -import org.apache.hadoop.util.Tool; - -import com.google.common.collect.Lists; -import com.google.common.primitives.Ints; - -/** - * Upgrades old 0.94 filesystem layout to namespace layout - * Does the following: - * - * - creates system namespace directory and move .META. table there - * renaming .META. table to hbase:meta, - * this in turn would require to re-encode the region directory name - * - *

          The pre-0.96 paths and dir names are hardcoded in here. - */ -public class NamespaceUpgrade implements Tool { - private static final Log LOG = LogFactory.getLog(NamespaceUpgrade.class); - - private Configuration conf; - - private FileSystem fs; - - private Path rootDir; - private Path sysNsDir; - private Path defNsDir; - private Path baseDirs[]; - private Path backupDir; - // First move everything to this tmp .data dir in case there is a table named 'data' - private static final String TMP_DATA_DIR = ".data"; - // Old dir names to migrate. - private static final String DOT_LOGS = ".logs"; - private static final String DOT_OLD_LOGS = ".oldlogs"; - private static final String DOT_CORRUPT = ".corrupt"; - private static final String DOT_SPLITLOG = "splitlog"; - private static final String DOT_ARCHIVE = ".archive"; - - // The old default directory of hbase.dynamic.jars.dir(0.94.12 release). - private static final String DOT_LIB_DIR = ".lib"; - - private static final String OLD_ACL = "_acl_"; - /** Directories that are not HBase table directories */ - static final List NON_USER_TABLE_DIRS = Arrays.asList(new String[] { - DOT_LOGS, - DOT_OLD_LOGS, - DOT_CORRUPT, - DOT_SPLITLOG, - HConstants.HBCK_SIDELINEDIR_NAME, - DOT_ARCHIVE, - HConstants.SNAPSHOT_DIR_NAME, - HConstants.HBASE_TEMP_DIRECTORY, - TMP_DATA_DIR, - OLD_ACL, - DOT_LIB_DIR}); - - public NamespaceUpgrade() throws IOException { - super(); - } - - public void init() throws IOException { - this.rootDir = FSUtils.getRootDir(conf); - FSUtils.setFsDefault(getConf(), rootDir); - this.fs = FileSystem.get(conf); - Path tmpDataDir = new Path(rootDir, TMP_DATA_DIR); - sysNsDir = new Path(tmpDataDir, NamespaceDescriptor.SYSTEM_NAMESPACE_NAME_STR); - defNsDir = new Path(tmpDataDir, NamespaceDescriptor.DEFAULT_NAMESPACE_NAME_STR); - baseDirs = new Path[]{rootDir, - new Path(rootDir, HConstants.HFILE_ARCHIVE_DIRECTORY), - new Path(rootDir, HConstants.HBASE_TEMP_DIRECTORY)}; - backupDir = new Path(rootDir, HConstants.MIGRATION_NAME); - } - - - public void upgradeTableDirs() throws IOException, DeserializationException { - // if new version is written then upgrade is done - if (verifyNSUpgrade(fs, rootDir)) { - return; - } - - makeNamespaceDirs(); - - migrateTables(); - - migrateSnapshots(); - - migrateDotDirs(); - - migrateMeta(); - - migrateACL(); - - deleteRoot(); - - FSUtils.setVersion(fs, rootDir); - } - - /** - * Remove the -ROOT- dir. No longer of use. - * @throws IOException - */ - public void deleteRoot() throws IOException { - Path rootDir = new Path(this.rootDir, "-ROOT-"); - if (this.fs.exists(rootDir)) { - if (!this.fs.delete(rootDir, true)) LOG.info("Failed remove of " + rootDir); - LOG.info("Deleted " + rootDir); - } - } - - /** - * Rename all the dot dirs -- .data, .archive, etc. -- as data, archive, etc.; i.e. minus the dot. - * @throws IOException - */ - public void migrateDotDirs() throws IOException { - // Dot dirs to rename. Leave the tmp dir named '.tmp' and snapshots as .hbase-snapshot. 
- final Path archiveDir = new Path(rootDir, HConstants.HFILE_ARCHIVE_DIRECTORY); - Path [][] dirs = new Path[][] { - new Path [] {new Path(rootDir, DOT_CORRUPT), new Path(rootDir, HConstants.CORRUPT_DIR_NAME)}, - new Path [] {new Path(rootDir, DOT_LOGS), new Path(rootDir, HConstants.HREGION_LOGDIR_NAME)}, - new Path [] {new Path(rootDir, DOT_OLD_LOGS), - new Path(rootDir, HConstants.HREGION_OLDLOGDIR_NAME)}, - new Path [] {new Path(rootDir, TMP_DATA_DIR), - new Path(rootDir, HConstants.BASE_NAMESPACE_DIR)}, - new Path[] { new Path(rootDir, DOT_LIB_DIR), - new Path(rootDir, HConstants.LIB_DIR)}}; - for (Path [] dir: dirs) { - Path src = dir[0]; - Path tgt = dir[1]; - if (!this.fs.exists(src)) { - LOG.info("Does not exist: " + src); - continue; - } - rename(src, tgt); - } - // Do the .archive dir. Need to move its subdirs to the default ns dir under data dir... so - // from '.archive/foo', to 'archive/data/default/foo'. - Path oldArchiveDir = new Path(rootDir, DOT_ARCHIVE); - if (this.fs.exists(oldArchiveDir)) { - // This is a pain doing two nn calls but portable over h1 and h2. - mkdirs(archiveDir); - Path archiveDataDir = new Path(archiveDir, HConstants.BASE_NAMESPACE_DIR); - mkdirs(archiveDataDir); - rename(oldArchiveDir, new Path(archiveDataDir, - NamespaceDescriptor.DEFAULT_NAMESPACE_NAME_STR)); - } - // Update the system and user namespace dirs removing the dot in front of .data. - Path dataDir = new Path(rootDir, HConstants.BASE_NAMESPACE_DIR); - sysNsDir = new Path(dataDir, NamespaceDescriptor.SYSTEM_NAMESPACE_NAME_STR); - defNsDir = new Path(dataDir, NamespaceDescriptor.DEFAULT_NAMESPACE_NAME_STR); - } - - private void mkdirs(final Path p) throws IOException { - if (!this.fs.mkdirs(p)) throw new IOException("Failed make of " + p); - } - - private void rename(final Path src, final Path tgt) throws IOException { - if (!fs.rename(src, tgt)) { - throw new IOException("Failed move " + src + " to " + tgt); - } - } - - /** - * Create the system and default namespaces dirs - * @throws IOException - */ - public void makeNamespaceDirs() throws IOException { - if (!fs.exists(sysNsDir)) { - if (!fs.mkdirs(sysNsDir)) { - throw new IOException("Failed to create system namespace dir: " + sysNsDir); - } - } - if (!fs.exists(defNsDir)) { - if (!fs.mkdirs(defNsDir)) { - throw new IOException("Failed to create default namespace dir: " + defNsDir); - } - } - } - - /** - * Migrate all tables into respective namespaces, either default or system. We put them into - * a temporary location, '.data', in case a user table is name 'data'. In a later method we will - * move stuff from .data to data. - * @throws IOException - */ - public void migrateTables() throws IOException { - List sysTables = Lists.newArrayList("-ROOT-",".META.", ".META"); - - // Migrate tables including archive and tmp - for (Path baseDir: baseDirs) { - if (!fs.exists(baseDir)) continue; - List oldTableDirs = FSUtils.getLocalTableDirs(fs, baseDir); - for (Path oldTableDir: oldTableDirs) { - if (NON_USER_TABLE_DIRS.contains(oldTableDir.getName())) continue; - if (sysTables.contains(oldTableDir.getName())) continue; - // Make the new directory under the ns to which we will move the table. 
- Path nsDir = new Path(this.defNsDir, - TableName.valueOf(oldTableDir.getName()).getQualifierAsString()); - LOG.info("Moving " + oldTableDir + " to " + nsDir); - if (!fs.exists(nsDir.getParent())) { - if (!fs.mkdirs(nsDir.getParent())) { - throw new IOException("Failed to create namespace dir "+nsDir.getParent()); - } - } - if (sysTables.indexOf(oldTableDir.getName()) < 0) { - LOG.info("Migrating table " + oldTableDir.getName() + " to " + nsDir); - if (!fs.rename(oldTableDir, nsDir)) { - throw new IOException("Failed to move "+oldTableDir+" to namespace dir "+nsDir); - } - } - } - } - } - - public void migrateSnapshots() throws IOException { - //migrate snapshot dir - Path oldSnapshotDir = new Path(rootDir, HConstants.OLD_SNAPSHOT_DIR_NAME); - Path newSnapshotDir = new Path(rootDir, HConstants.SNAPSHOT_DIR_NAME); - if (fs.exists(oldSnapshotDir)) { - boolean foundOldSnapshotDir = false; - // Logic to verify old snapshot dir culled from SnapshotManager - // ignore all the snapshots in progress - FileStatus[] snapshots = fs.listStatus(oldSnapshotDir, - new SnapshotDescriptionUtils.CompletedSnaphotDirectoriesFilter(fs)); - // loop through all the completed snapshots - for (FileStatus snapshot : snapshots) { - Path info = new Path(snapshot.getPath(), SnapshotDescriptionUtils.SNAPSHOTINFO_FILE); - // if the snapshot is bad - if (fs.exists(info)) { - foundOldSnapshotDir = true; - break; - } - } - if(foundOldSnapshotDir) { - LOG.info("Migrating snapshot dir"); - if (!fs.rename(oldSnapshotDir, newSnapshotDir)) { - throw new IOException("Failed to move old snapshot dir "+ - oldSnapshotDir+" to new "+newSnapshotDir); - } - } - } - } - - public void migrateMeta() throws IOException { - Path newMetaDir = new Path(this.sysNsDir, TableName.META_TABLE_NAME.getQualifierAsString()); - Path newMetaRegionDir = - new Path(newMetaDir, HRegionInfo.FIRST_META_REGIONINFO.getEncodedName()); - Path oldMetaDir = new Path(rootDir, ".META."); - if (fs.exists(oldMetaDir)) { - LOG.info("Migrating meta table " + oldMetaDir.getName() + " to " + newMetaDir); - if (!fs.rename(oldMetaDir, newMetaDir)) { - throw new IOException("Failed to migrate meta table " - + oldMetaDir.getName() + " to " + newMetaDir); - } - } else { - // on windows NTFS, meta's name is .META (note the missing dot at the end) - oldMetaDir = new Path(rootDir, ".META"); - if (fs.exists(oldMetaDir)) { - LOG.info("Migrating meta table " + oldMetaDir.getName() + " to " + newMetaDir); - if (!fs.rename(oldMetaDir, newMetaDir)) { - throw new IOException("Failed to migrate meta table " - + oldMetaDir.getName() + " to " + newMetaDir); - } - } - } - - // Since meta table name has changed rename meta region dir from it's old encoding to new one - Path oldMetaRegionDir = HRegion.getRegionDir(rootDir, - new Path(newMetaDir, "1028785192").toString()); - if (fs.exists(oldMetaRegionDir)) { - LOG.info("Migrating meta region " + oldMetaRegionDir + " to " + newMetaRegionDir); - if (!fs.rename(oldMetaRegionDir, newMetaRegionDir)) { - throw new IOException("Failed to migrate meta region " - + oldMetaRegionDir + " to " + newMetaRegionDir); - } - } - // Remove .tableinfo files as they refer to ".META.". - // They will be recreated by master on startup. 
- removeTableInfoInPre96Format(TableName.META_TABLE_NAME); - - Path oldRootDir = new Path(rootDir, "-ROOT-"); - if(!fs.rename(oldRootDir, backupDir)) { - throw new IllegalStateException("Failed to old data: "+oldRootDir+" to "+backupDir); - } - } - - /** - * Removes .tableinfo files that are laid in pre-96 format (i.e., the tableinfo files are under - * table directory). - * @param tableName - * @throws IOException - */ - private void removeTableInfoInPre96Format(TableName tableName) throws IOException { - Path tableDir = FSUtils.getTableDir(rootDir, tableName); - FileStatus[] status = FSUtils.listStatus(fs, tableDir, TABLEINFO_PATHFILTER); - if (status == null) return; - for (FileStatus fStatus : status) { - FSUtils.delete(fs, fStatus.getPath(), false); - } - } - - public void migrateACL() throws IOException { - - TableName oldTableName = TableName.valueOf(OLD_ACL); - Path oldTablePath = new Path(rootDir, oldTableName.getNameAsString()); - - if(!fs.exists(oldTablePath)) { - return; - } - - LOG.info("Migrating ACL table"); - - TableName newTableName = AccessControlLists.ACL_TABLE_NAME; - Path newTablePath = FSUtils.getTableDir(rootDir, newTableName); - HTableDescriptor oldDesc = - readTableDescriptor(fs, getCurrentTableInfoStatus(fs, oldTablePath)); - - if(FSTableDescriptors.getTableInfoPath(fs, newTablePath) == null) { - LOG.info("Creating new tableDesc for ACL"); - HTableDescriptor newDesc = new HTableDescriptor(oldDesc); - newDesc.setName(newTableName); - new FSTableDescriptors(this.conf).createTableDescriptorForTableDirectory( - newTablePath, newDesc, true); - } - - - ServerName fakeServer = ServerName.valueOf("nsupgrade", 96, 123); - final WALFactory walFactory = new WALFactory(conf, null, fakeServer.toString()); - WAL metawal = walFactory.getMetaWAL(HRegionInfo.FIRST_META_REGIONINFO.getEncodedNameAsBytes()); - FSTableDescriptors fst = new FSTableDescriptors(conf); - HRegion meta = HRegion.openHRegion(rootDir, HRegionInfo.FIRST_META_REGIONINFO, - fst.get(TableName.META_TABLE_NAME), metawal, conf); - HRegion region = null; - try { - for(Path regionDir : FSUtils.getRegionDirs(fs, oldTablePath)) { - LOG.info("Migrating ACL region "+regionDir.getName()); - HRegionInfo oldRegionInfo = HRegionFileSystem.loadRegionInfoFileContent(fs, regionDir); - HRegionInfo newRegionInfo = - new HRegionInfo(newTableName, - oldRegionInfo.getStartKey(), - oldRegionInfo.getEndKey(), - oldRegionInfo.isSplit(), - oldRegionInfo.getRegionId()); - newRegionInfo.setOffline(oldRegionInfo.isOffline()); - region = - new HRegion( - HRegionFileSystem.openRegionFromFileSystem(conf, fs, oldTablePath, - oldRegionInfo, false), - metawal, - conf, - oldDesc, - null); - region.initialize(); - updateAcls(region); - // closing the region would flush it so we don't need an explicit flush to save - // acl changes. 
- region.close(); - - //Create new region dir - Path newRegionDir = new Path(newTablePath, newRegionInfo.getEncodedName()); - if(!fs.exists(newRegionDir)) { - if(!fs.mkdirs(newRegionDir)) { - throw new IllegalStateException("Failed to create new region dir: " + newRegionDir); - } - } - - //create new region info file, delete in case one exists - HRegionFileSystem.openRegionFromFileSystem(conf, fs, newTablePath, newRegionInfo, false); - - //migrate region contents - for(FileStatus file : fs.listStatus(regionDir, new FSUtils.UserTableDirFilter(fs))) { - if(file.getPath().getName().equals(HRegionFileSystem.REGION_INFO_FILE)) - continue; - if(!fs.rename(file.getPath(), newRegionDir)) { - throw new IllegalStateException("Failed to move file "+file.getPath()+" to " + - newRegionDir); - } - } - meta.put(MetaTableAccessor.makePutFromRegionInfo(newRegionInfo)); - meta.delete(MetaTableAccessor.makeDeleteFromRegionInfo(oldRegionInfo)); - } - } finally { - meta.flushcache(); - meta.waitForFlushesAndCompactions(); - meta.close(); - walFactory.close(); - if(region != null) { - region.close(); - } - } - if(!fs.rename(oldTablePath, backupDir)) { - throw new IllegalStateException("Failed to old data: "+oldTablePath+" to "+backupDir); - } - } - - /** - * Deletes the old _acl_ entry, and inserts a new one using namespace. - * @param region - * @throws IOException - */ - void updateAcls(HRegion region) throws IOException { - byte[] rowKey = Bytes.toBytes(NamespaceUpgrade.OLD_ACL); - // get the old _acl_ entry, if present. - Get g = new Get(rowKey); - Result r = region.get(g); - if (r != null && r.size() > 0) { - // create a put for new _acl_ entry with rowkey as hbase:acl - Put p = new Put(AccessControlLists.ACL_GLOBAL_NAME); - for (Cell c : r.rawCells()) { - p.addImmutable(CellUtil.cloneFamily(c), CellUtil.cloneQualifier(c), CellUtil.cloneValue(c)); - } - region.put(p); - // delete the old entry - Delete del = new Delete(rowKey); - region.delete(del); - } - - // delete the old entry for '-ROOT-' - rowKey = Bytes.toBytes(TableName.OLD_ROOT_STR); - Delete del = new Delete(rowKey); - region.delete(del); - - // rename .META. to hbase:meta - rowKey = Bytes.toBytes(TableName.OLD_META_STR); - g = new Get(rowKey); - r = region.get(g); - if (r != null && r.size() > 0) { - // create a put for new .META. 
entry with rowkey as hbase:meta - Put p = new Put(TableName.META_TABLE_NAME.getName()); - for (Cell c : r.rawCells()) { - p.addImmutable(CellUtil.cloneFamily(c), CellUtil.cloneQualifier(c), CellUtil.cloneValue(c)); - } - region.put(p); - // delete the old entry - del = new Delete(rowKey); - region.delete(del); - } - } - - //Culled from FSTableDescriptors - private static HTableDescriptor readTableDescriptor(FileSystem fs, - FileStatus status) throws IOException { - int len = Ints.checkedCast(status.getLen()); - byte [] content = new byte[len]; - FSDataInputStream fsDataInputStream = fs.open(status.getPath()); - try { - fsDataInputStream.readFully(content); - } finally { - fsDataInputStream.close(); - } - HTableDescriptor htd = null; - try { - htd = HTableDescriptor.parseFrom(content); - } catch (DeserializationException e) { - throw new IOException("content=" + Bytes.toShort(content), e); - } - return htd; - } - - private static final PathFilter TABLEINFO_PATHFILTER = new PathFilter() { - @Override - public boolean accept(Path p) { - // Accept any file that starts with TABLEINFO_NAME - return p.getName().startsWith(".tableinfo"); - } - }; - - static final Comparator TABLEINFO_FILESTATUS_COMPARATOR = - new Comparator() { - @Override - public int compare(FileStatus left, FileStatus right) { - return right.compareTo(left); - }}; - - // logic culled from FSTableDescriptors - static FileStatus getCurrentTableInfoStatus(FileSystem fs, Path dir) - throws IOException { - FileStatus [] status = FSUtils.listStatus(fs, dir, TABLEINFO_PATHFILTER); - if (status == null || status.length < 1) return null; - FileStatus mostCurrent = null; - for (FileStatus file : status) { - if (mostCurrent == null || TABLEINFO_FILESTATUS_COMPARATOR.compare(file, mostCurrent) < 0) { - mostCurrent = file; - } - } - return mostCurrent; - } - - public static boolean verifyNSUpgrade(FileSystem fs, Path rootDir) - throws IOException { - try { - return FSUtils.getVersion(fs, rootDir).equals(HConstants.FILE_SYSTEM_VERSION); - } catch (DeserializationException e) { - throw new IOException("Failed to verify namespace upgrade", e); - } - } - - - @Override - public int run(String[] args) throws Exception { - if (args.length < 1 || !args[0].equals("--upgrade")) { - System.out.println("Usage: --upgrade"); - return 0; - } - init(); - upgradeTableDirs(); - return 0; - } - - @Override - public void setConf(Configuration conf) { - this.conf = conf; - } - - @Override - public Configuration getConf() { - return conf; - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/migration/UpgradeTo96.java hbase-server/src/main/java/org/apache/hadoop/hbase/migration/UpgradeTo96.java deleted file mode 100644 index fc11823..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/migration/UpgradeTo96.java +++ /dev/null @@ -1,262 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.migration; - -import java.io.IOException; -import java.util.List; - -import org.apache.commons.cli.CommandLine; -import org.apache.commons.cli.CommandLineParser; -import org.apache.commons.cli.GnuParser; -import org.apache.commons.cli.HelpFormatter; -import org.apache.commons.cli.Option; -import org.apache.commons.cli.Options; -import org.apache.commons.cli.ParseException; -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.conf.Configured; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.Abortable; -import org.apache.hadoop.hbase.HBaseConfiguration; -import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.wal.WALFactory; -import org.apache.hadoop.hbase.wal.WALSplitter; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.FSUtils; -import org.apache.hadoop.hbase.util.HFileV1Detector; -import org.apache.hadoop.hbase.util.ZKDataMigrator; -import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.hadoop.util.Tool; -import org.apache.hadoop.util.ToolRunner; - -public class UpgradeTo96 extends Configured implements Tool { - private static final Log LOG = LogFactory.getLog(UpgradeTo96.class); - - private Options options = new Options(); - /** - * whether to do overall upgrade (namespace and znodes) - */ - private boolean upgrade; - /** - * whether to check for HFileV1 - */ - private boolean checkForHFileV1; - /** - * Path of directory to check for HFileV1 - */ - private String dirToCheckForHFileV1; - - UpgradeTo96() { - setOptions(); - } - - private void setOptions() { - options.addOption("h", "help", false, "Help"); - options.addOption(new Option("check", false, "Run upgrade check; looks for HFileV1 " - + " under ${hbase.rootdir} or provided 'dir' directory.")); - options.addOption(new Option("execute", false, "Run upgrade; zk and hdfs must be up, hbase down")); - Option pathOption = new Option("dir", true, - "Relative path of dir to check for HFileV1s."); - pathOption.setRequired(false); - options.addOption(pathOption); - } - - private boolean parseOption(String[] args) throws ParseException { - if (args.length == 0) return false; // no args shows help. 
- - CommandLineParser parser = new GnuParser(); - CommandLine cmd = parser.parse(options, args); - if (cmd.hasOption("h")) { - return false; - } - if (cmd.hasOption("execute")) upgrade = true; - if (cmd.hasOption("check")) checkForHFileV1 = true; - if (checkForHFileV1 && cmd.hasOption("dir")) { - this.dirToCheckForHFileV1 = cmd.getOptionValue("dir"); - } - return true; - } - - private void printUsage() { - HelpFormatter formatter = new HelpFormatter(); - formatter.printHelp("$bin/hbase upgrade -check [-dir DIR]|-execute", options); - System.out.println("Read http://hbase.apache.org/book.html#upgrade0.96 before attempting upgrade"); - System.out.println(); - System.out.println("Example usage:"); - System.out.println(); - System.out.println("Run upgrade check; looks for HFileV1s under ${hbase.rootdir}:"); - System.out.println(" $ bin/hbase upgrade -check"); - System.out.println(); - System.out.println("Run the upgrade: "); - System.out.println(" $ bin/hbase upgrade -execute"); - } - - @Override - public int run(String[] args) throws Exception { - if (!parseOption(args)) { - printUsage(); - return -1; - } - if (checkForHFileV1) { - int res = doHFileV1Check(); - if (res == 0) LOG.info("No HFileV1 found."); - else { - LOG.warn("There are some HFileV1, or corrupt files (files with incorrect major version)."); - } - return res; - } - // if the user wants to upgrade, check for any HBase live process. - // If yes, prompt the user to stop them - else if (upgrade) { - if (isAnyHBaseProcessAlive()) { - LOG.error("Some HBase processes are still alive, or znodes not expired yet. " - + "Please stop them before upgrade or try after some time."); - throw new IOException("Some HBase processes are still alive, or znodes not expired yet"); - } - return executeUpgrade(); - } - return -1; - } - - private boolean isAnyHBaseProcessAlive() throws IOException { - ZooKeeperWatcher zkw = null; - try { - zkw = new ZooKeeperWatcher(getConf(), "Check Live Processes.", new Abortable() { - private boolean aborted = false; - - @Override - public void abort(String why, Throwable e) { - LOG.warn("Got aborted with reason: " + why + ", and error: " + e); - this.aborted = true; - } - - @Override - public boolean isAborted() { - return this.aborted; - } - - }); - boolean liveProcessesExists = false; - if (ZKUtil.checkExists(zkw, zkw.baseZNode) == -1) { - return false; - } - if (ZKUtil.checkExists(zkw, zkw.backupMasterAddressesZNode) != -1) { - List backupMasters = ZKUtil - .listChildrenNoWatch(zkw, zkw.backupMasterAddressesZNode); - if (!backupMasters.isEmpty()) { - LOG.warn("Backup master(s) " + backupMasters - + " are alive or backup-master znodes not expired."); - liveProcessesExists = true; - } - } - if (ZKUtil.checkExists(zkw, zkw.rsZNode) != -1) { - List regionServers = ZKUtil.listChildrenNoWatch(zkw, zkw.rsZNode); - if (!regionServers.isEmpty()) { - LOG.warn("Region server(s) " + regionServers + " are alive or rs znodes not expired."); - liveProcessesExists = true; - } - } - if (ZKUtil.checkExists(zkw, zkw.getMasterAddressZNode()) != -1) { - byte[] data = ZKUtil.getData(zkw, zkw.getMasterAddressZNode()); - if (data != null && !Bytes.equals(data, HConstants.EMPTY_BYTE_ARRAY)) { - LOG.warn("Active master at address " + Bytes.toString(data) - + " is still alive or master znode not expired."); - liveProcessesExists = true; - } - } - return liveProcessesExists; - } catch (Exception e) { - LOG.error("Got exception while checking live hbase processes", e); - throw new IOException(e); - } finally { - if (zkw != null) { - 
zkw.close(); - } - } - } - - private int doHFileV1Check() throws Exception { - String[] args = null; - if (dirToCheckForHFileV1 != null) args = new String[] { "-p" + dirToCheckForHFileV1 }; - return ToolRunner.run(getConf(), new HFileV1Detector(), args); - } - - /** - * Executes the upgrade process. It involves: - *

- *   • Upgrading Namespace
- *   • Upgrading Znodes
- *   • Log splitting
          - * @throws Exception - */ - private int executeUpgrade() throws Exception { - executeTool("Namespace upgrade", new NamespaceUpgrade(), - new String[] { "--upgrade" }, 0); - executeTool("Znode upgrade", new ZKDataMigrator(), null, 0); - doOfflineLogSplitting(); - return 0; - } - - private void executeTool(String toolMessage, Tool tool, String[] args, int expectedResult) - throws Exception { - LOG.info("Starting " + toolMessage); - int res = ToolRunner.run(getConf(), tool, new String[] { "--upgrade" }); - if (res != expectedResult) { - LOG.error(toolMessage + "returned " + res + ", expected " + expectedResult); - throw new Exception("Unexpected return code from " + toolMessage); - } - LOG.info("Successfully completed " + toolMessage); - } - - /** - * Performs log splitting for all regionserver directories. - * @throws Exception - */ - private void doOfflineLogSplitting() throws Exception { - LOG.info("Starting Log splitting"); - final Path rootDir = FSUtils.getRootDir(getConf()); - final Path oldLogDir = new Path(rootDir, HConstants.HREGION_OLDLOGDIR_NAME); - // since this is the singleton, we needn't close it. - final WALFactory factory = WALFactory.getInstance(getConf()); - FileSystem fs = FSUtils.getCurrentFileSystem(getConf()); - Path logDir = new Path(rootDir, HConstants.HREGION_LOGDIR_NAME); - FileStatus[] regionServerLogDirs = FSUtils.listStatus(fs, logDir); - if (regionServerLogDirs == null || regionServerLogDirs.length == 0) { - LOG.info("No log directories to split, returning"); - return; - } - try { - for (FileStatus regionServerLogDir : regionServerLogDirs) { - // split its log dir, if exists - WALSplitter.split(rootDir, regionServerLogDir.getPath(), oldLogDir, fs, getConf(), factory); - } - LOG.info("Successfully completed Log splitting"); - } catch (Exception e) { - LOG.error("Got exception while doing Log splitting ", e); - throw e; - } - } - - public static void main(String[] args) throws Exception { - System.exit(ToolRunner.run(HBaseConfiguration.create(), new UpgradeTo96(), args)); - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/MasterProcedureManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/MasterProcedureManager.java index e5257e5..8f866f6 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/MasterProcedureManager.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/MasterProcedureManager.java @@ -46,8 +46,8 @@ import org.apache.zookeeper.KeeperException; * * A globally barriered procedure is identified by its signature (usually it is the name of the * procedure znode). During the initialization phase, the initialize methods are called by both -* {@link org.apache.hadoop.hbase.master.HMaster} and -* {@link org.apache.hadoop.hbase.regionserver.HRegionServer} witch create the procedure znode +* {@link org.apache.hadoop.hbase.master.HMaster} +* and {@link org.apache.hadoop.hbase.regionserver.HRegionServer} which create the procedure znode * and register the listeners. A procedure can be triggered by its signature and an instant name * (encapsulated in a {@link ProcedureDescription} object). When the servers are shutdown, * the stop methods on both classes are called to clean up the data associated with the procedure. 
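A minimal client-side usage sketch of triggering such a globally barriered procedure by its signature plus an instance name (illustrative; it assumes the Admin#execProcedure client API and the "flush-table-proc" signature registered by the flush procedure manager below — the table name and empty property map are placeholders):

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushTableProcedureExample {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // The signature selects the registered MasterProcedureManager ("flush-table-proc" here);
          // the instance name identifies this particular run (the table to flush).
          Map<String, String> props = new HashMap<String, String>(); // no extra procedure properties
          admin.execProcedure("flush-table-proc", "TestTable", props);
        }
      }
    }

The master looks up the manager whose signature matches and runs the barriered procedure on the region servers hosting the named table.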
diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/flush/MasterFlushTableProcedureManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/flush/MasterFlushTableProcedureManager.java index 6a48eb6..e72da2a 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/flush/MasterFlushTableProcedureManager.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/flush/MasterFlushTableProcedureManager.java @@ -24,7 +24,6 @@ import java.util.List; import java.util.Map; import java.util.Set; import java.util.concurrent.ThreadPoolExecutor; - import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; @@ -133,7 +132,7 @@ public class MasterFlushTableProcedureManager extends MasterProcedureManager { master.getZooKeeper()); } else { regionsAndLocations = MetaTableAccessor.getTableRegionsAndLocations( - master.getZooKeeper(), master.getConnection(), tableName, false); + master.getConnection(), tableName, false); } Set regionServers = new HashSet(regionsAndLocations.size()); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java index d68d247..d6a120b 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtbufUtil.java @@ -59,7 +59,7 @@ public class ReplicationProtbufUtil { public static void replicateWALEntry(final AdminService.BlockingInterface admin, final Entry[] entries) throws IOException { Pair p = - buildReplicateWALEntryRequest(entries); + buildReplicateWALEntryRequest(entries, null); PayloadCarryingRpcController controller = new PayloadCarryingRpcController(p.getSecond()); try { admin.replicateWALEntry(controller, p.getFirst()); @@ -77,6 +77,19 @@ public class ReplicationProtbufUtil { */ public static Pair buildReplicateWALEntryRequest(final Entry[] entries) { + return buildReplicateWALEntryRequest(entries, null); + } + + /** + * Create a new ReplicateWALEntryRequest from a list of WAL entries + * + * @param entries the WAL entries to be replicated + * @param encodedRegionName alternative region name to use if not null + * @return a pair of ReplicateWALEntryRequest and a CellScanner over all the WALEdit values + * found. + */ + public static Pair + buildReplicateWALEntryRequest(final Entry[] entries, byte[] encodedRegionName) { // Accumulate all the Cells seen in here. List> allCells = new ArrayList>(entries.length); int size = 0; @@ -91,7 +104,9 @@ public class ReplicationProtbufUtil { WALProtos.WALKey.Builder keyBuilder = entryBuilder.getKeyBuilder(); WALKey key = entry.getKey(); keyBuilder.setEncodedRegionName( - ByteStringer.wrap(key.getEncodedRegionName())); + ByteStringer.wrap(encodedRegionName == null + ? key.getEncodedRegionName() + : encodedRegionName)); keyBuilder.setTableName(ByteStringer.wrap(key.getTablename().getName())); keyBuilder.setLogSequenceNumber(key.getLogSeqNum()); keyBuilder.setWriteTime(key.getWriteTime()); @@ -121,7 +136,7 @@ public class ReplicationProtbufUtil { } } List cells = edit.getCells(); - // Add up the size. It is used later serializing out the cells. + // Add up the size. It is used later serializing out the kvs. 
for (Cell cell: cells) { size += CellUtil.estimatedSerializedSizeOf(cell); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/DefaultOperationQuota.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/DefaultOperationQuota.java new file mode 100644 index 0000000..34c749e --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/DefaultOperationQuota.java @@ -0,0 +1,144 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import java.util.Arrays; +import java.util.List; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.quotas.OperationQuota.AvgOperationSize; +import org.apache.hadoop.hbase.quotas.OperationQuota.OperationType; + +@InterfaceAudience.Private +@InterfaceStability.Evolving +public class DefaultOperationQuota implements OperationQuota { + private static final Log LOG = LogFactory.getLog(DefaultOperationQuota.class); + + private final List limiters; + private long writeAvailable = 0; + private long readAvailable = 0; + private long writeConsumed = 0; + private long readConsumed = 0; + + private AvgOperationSize avgOpSize = new AvgOperationSize(); + + public DefaultOperationQuota(final QuotaLimiter... limiters) { + this(Arrays.asList(limiters)); + } + + /** + * NOTE: The order matters. 
It should be something like [user, table, namespace, global] + */ + public DefaultOperationQuota(final List limiters) { + this.limiters = limiters; + } + + @Override + public void checkQuota(int numWrites, int numReads, int numScans) + throws ThrottlingException { + writeConsumed = estimateConsume(OperationType.MUTATE, numWrites, 100); + readConsumed = estimateConsume(OperationType.GET, numReads, 100); + readConsumed += estimateConsume(OperationType.SCAN, numScans, 1000); + + writeAvailable = Long.MAX_VALUE; + readAvailable = Long.MAX_VALUE; + for (final QuotaLimiter limiter: limiters) { + if (limiter.isBypass()) continue; + + limiter.checkQuota(writeConsumed, readConsumed); + readAvailable = Math.min(readAvailable, limiter.getReadAvailable()); + writeAvailable = Math.min(writeAvailable, limiter.getWriteAvailable()); + } + + for (final QuotaLimiter limiter: limiters) { + limiter.grabQuota(writeConsumed, readConsumed); + } + } + + @Override + public void close() { + // Calculate and set the average size of get, scan and mutate for the current operation + long getSize = avgOpSize.getAvgOperationSize(OperationType.GET); + long scanSize = avgOpSize.getAvgOperationSize(OperationType.SCAN); + long mutationSize = avgOpSize.getAvgOperationSize(OperationType.MUTATE); + for (final QuotaLimiter limiter: limiters) { + limiter.addOperationSize(OperationType.GET, getSize); + limiter.addOperationSize(OperationType.SCAN, scanSize); + limiter.addOperationSize(OperationType.MUTATE, mutationSize); + } + + // Adjust the quota consumed for the specified operation + long writeDiff = avgOpSize.getOperationSize(OperationType.MUTATE) - writeConsumed; + long readDiff = (avgOpSize.getOperationSize(OperationType.GET) + + avgOpSize.getOperationSize(OperationType.SCAN)) - readConsumed; + for (final QuotaLimiter limiter: limiters) { + if (writeDiff != 0) limiter.consumeWrite(writeDiff); + if (readDiff != 0) limiter.consumeRead(readDiff); + } + } + + @Override + public long getReadAvailable() { + return readAvailable; + } + + @Override + public long getWriteAvailable() { + return writeAvailable; + } + + @Override + public void addGetResult(final Result result) { + avgOpSize.addGetResult(result); + } + + @Override + public void addScanResult(final List results) { + avgOpSize.addScanResult(results); + } + + @Override + public void addMutation(final Mutation mutation) { + avgOpSize.addMutation(mutation); + } + + @Override + public long getAvgOperationSize(OperationType type) { + return avgOpSize.getAvgOperationSize(type); + } + + private long estimateConsume(final OperationType type, int numReqs, long avgSize) { + if (numReqs > 0) { + for (final QuotaLimiter limiter: limiters) { + long size = limiter.getAvgOperationSize(type); + if (size > 0) { + avgSize = size; + break; + } + } + return avgSize * numReqs; + } + return 0; + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java new file mode 100644 index 0000000..6a57156 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/MasterQuotaManager.java @@ -0,0 +1,426 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import java.io.IOException; +import java.util.HashSet; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.DoNotRetryIOException; +import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.MetaTableAccessor; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.master.MasterServices; +import org.apache.hadoop.hbase.master.handler.CreateTableHandler; +import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaRequest; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetQuotaResponse; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.ThrottleRequest; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota; + +/** + * Master Quota Manager. + * It is responsible for initialize the quota table on the first-run and + * provide the admin operations to interact with the quota table. + * + * TODO: FUTURE: The master will be responsible to notify each RS of quota changes + * and it will do the "quota aggregation" when the QuotaScope is CLUSTER. + */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +public class MasterQuotaManager { + private static final Log LOG = LogFactory.getLog(MasterQuotaManager.class); + + private final MasterServices masterServices; + private NamedLock namespaceLocks; + private NamedLock tableLocks; + private NamedLock userLocks; + private boolean enabled = false; + + public MasterQuotaManager(final MasterServices masterServices) { + this.masterServices = masterServices; + } + + public void start() throws IOException { + // If the user doesn't want the quota support skip all the initializations. + if (!QuotaUtil.isQuotaEnabled(masterServices.getConfiguration())) { + LOG.info("Quota support disabled"); + return; + } + + // Create the quota table if missing + if (!MetaTableAccessor.tableExists(masterServices.getConnection(), + QuotaUtil.QUOTA_TABLE_NAME)) { + LOG.info("Quota table not found. 
Creating..."); + createQuotaTable(); + } + + LOG.info("Initializing quota support"); + namespaceLocks = new NamedLock(); + tableLocks = new NamedLock(); + userLocks = new NamedLock(); + + enabled = true; + } + + public void stop() { + } + + public boolean isQuotaEnabled() { + return enabled; + } + + /* ========================================================================== + * Admin operations to manage the quota table + */ + public SetQuotaResponse setQuota(final SetQuotaRequest req) + throws IOException, InterruptedException { + checkQuotaSupport(); + + if (req.hasUserName()) { + userLocks.lock(req.getUserName()); + try { + if (req.hasTableName()) { + setUserQuota(req.getUserName(), ProtobufUtil.toTableName(req.getTableName()), req); + } else if (req.hasNamespace()) { + setUserQuota(req.getUserName(), req.getNamespace(), req); + } else { + setUserQuota(req.getUserName(), req); + } + } finally { + userLocks.unlock(req.getUserName()); + } + } else if (req.hasTableName()) { + TableName table = ProtobufUtil.toTableName(req.getTableName()); + tableLocks.lock(table); + try { + setTableQuota(table, req); + } finally { + tableLocks.unlock(table); + } + } else if (req.hasNamespace()) { + namespaceLocks.lock(req.getNamespace()); + try { + setNamespaceQuota(req.getNamespace(), req); + } finally { + namespaceLocks.unlock(req.getNamespace()); + } + } else { + throw new DoNotRetryIOException( + new UnsupportedOperationException("a user, a table or a namespace must be specified")); + } + return SetQuotaResponse.newBuilder().build(); + } + + public void setUserQuota(final String userName, final SetQuotaRequest req) + throws IOException, InterruptedException { + setQuota(req, new SetQuotaOperations() { + @Override + public Quotas fetch() throws IOException { + return QuotaUtil.getUserQuota(masterServices.getConnection(), userName); + } + @Override + public void update(final Quotas quotas) throws IOException { + QuotaUtil.addUserQuota(masterServices.getConnection(), userName, quotas); + } + @Override + public void delete() throws IOException { + QuotaUtil.deleteUserQuota(masterServices.getConnection(), userName); + } + @Override + public void preApply(final Quotas quotas) throws IOException { + masterServices.getMasterCoprocessorHost().preSetUserQuota(userName, quotas); + } + @Override + public void postApply(final Quotas quotas) throws IOException { + masterServices.getMasterCoprocessorHost().postSetUserQuota(userName, quotas); + } + }); + } + + public void setUserQuota(final String userName, final TableName table, + final SetQuotaRequest req) throws IOException, InterruptedException { + setQuota(req, new SetQuotaOperations() { + @Override + public Quotas fetch() throws IOException { + return QuotaUtil.getUserQuota(masterServices.getConnection(), userName, table); + } + @Override + public void update(final Quotas quotas) throws IOException { + QuotaUtil.addUserQuota(masterServices.getConnection(), userName, table, quotas); + } + @Override + public void delete() throws IOException { + QuotaUtil.deleteUserQuota(masterServices.getConnection(), userName, table); + } + @Override + public void preApply(final Quotas quotas) throws IOException { + masterServices.getMasterCoprocessorHost().preSetUserQuota(userName, table, quotas); + } + @Override + public void postApply(final Quotas quotas) throws IOException { + masterServices.getMasterCoprocessorHost().postSetUserQuota(userName, table, quotas); + } + }); + } + + public void setUserQuota(final String userName, final String namespace, + final SetQuotaRequest 
req) throws IOException, InterruptedException { + setQuota(req, new SetQuotaOperations() { + @Override + public Quotas fetch() throws IOException { + return QuotaUtil.getUserQuota(masterServices.getConnection(), userName, namespace); + } + @Override + public void update(final Quotas quotas) throws IOException { + QuotaUtil.addUserQuota(masterServices.getConnection(), userName, namespace, quotas); + } + @Override + public void delete() throws IOException { + QuotaUtil.deleteUserQuota(masterServices.getConnection(), userName, namespace); + } + @Override + public void preApply(final Quotas quotas) throws IOException { + masterServices.getMasterCoprocessorHost().preSetUserQuota(userName, namespace, quotas); + } + @Override + public void postApply(final Quotas quotas) throws IOException { + masterServices.getMasterCoprocessorHost().postSetUserQuota(userName, namespace, quotas); + } + }); + } + + public void setTableQuota(final TableName table, final SetQuotaRequest req) + throws IOException, InterruptedException { + setQuota(req, new SetQuotaOperations() { + @Override + public Quotas fetch() throws IOException { + return QuotaUtil.getTableQuota(masterServices.getConnection(), table); + } + @Override + public void update(final Quotas quotas) throws IOException { + QuotaUtil.addTableQuota(masterServices.getConnection(), table, quotas); + } + @Override + public void delete() throws IOException { + QuotaUtil.deleteTableQuota(masterServices.getConnection(), table); + } + @Override + public void preApply(final Quotas quotas) throws IOException { + masterServices.getMasterCoprocessorHost().preSetTableQuota(table, quotas); + } + @Override + public void postApply(final Quotas quotas) throws IOException { + masterServices.getMasterCoprocessorHost().postSetTableQuota(table, quotas); + } + }); + } + + public void setNamespaceQuota(final String namespace, final SetQuotaRequest req) + throws IOException, InterruptedException { + setQuota(req, new SetQuotaOperations() { + @Override + public Quotas fetch() throws IOException { + return QuotaUtil.getNamespaceQuota(masterServices.getConnection(), namespace); + } + @Override + public void update(final Quotas quotas) throws IOException { + QuotaUtil.addNamespaceQuota(masterServices.getConnection(), namespace, quotas); + } + @Override + public void delete() throws IOException { + QuotaUtil.deleteNamespaceQuota(masterServices.getConnection(), namespace); + } + @Override + public void preApply(final Quotas quotas) throws IOException { + masterServices.getMasterCoprocessorHost().preSetNamespaceQuota(namespace, quotas); + } + @Override + public void postApply(final Quotas quotas) throws IOException { + masterServices.getMasterCoprocessorHost().postSetNamespaceQuota(namespace, quotas); + } + }); + } + + private void setQuota(final SetQuotaRequest req, final SetQuotaOperations quotaOps) + throws IOException, InterruptedException { + if (req.hasRemoveAll() && req.getRemoveAll() == true) { + quotaOps.preApply(null); + quotaOps.delete(); + quotaOps.postApply(null); + return; + } + + // Apply quota changes + Quotas quotas = quotaOps.fetch(); + quotaOps.preApply(quotas); + + Quotas.Builder builder = (quotas != null) ? 
quotas.toBuilder() : Quotas.newBuilder(); + if (req.hasThrottle()) applyThrottle(builder, req.getThrottle()); + if (req.hasBypassGlobals()) applyBypassGlobals(builder, req.getBypassGlobals()); + + // Submit new changes + quotas = builder.build(); + if (QuotaUtil.isEmptyQuota(quotas)) { + quotaOps.delete(); + } else { + quotaOps.update(quotas); + } + quotaOps.postApply(quotas); + } + + private static interface SetQuotaOperations { + Quotas fetch() throws IOException; + void delete() throws IOException; + void update(final Quotas quotas) throws IOException; + void preApply(final Quotas quotas) throws IOException; + void postApply(final Quotas quotas) throws IOException; + } + + /* ========================================================================== + * Helpers to apply changes to the quotas + */ + private void applyThrottle(final Quotas.Builder quotas, final ThrottleRequest req) + throws IOException { + Throttle.Builder throttle; + + if (req.hasType() && (req.hasTimedQuota() || quotas.hasThrottle())) { + // Validate timed quota if present + if (req.hasTimedQuota()) validateTimedQuota(req.getTimedQuota()); + + // apply the new settings + throttle = quotas.hasThrottle() ? quotas.getThrottle().toBuilder() : Throttle.newBuilder(); + + switch (req.getType()) { + case REQUEST_NUMBER: + if (req.hasTimedQuota()) { + throttle.setReqNum(req.getTimedQuota()); + } else { + throttle.clearReqNum(); + } + break; + case REQUEST_SIZE: + if (req.hasTimedQuota()) { + throttle.setReqSize(req.getTimedQuota()); + } else { + throttle.clearReqSize(); + } + break; + case WRITE_NUMBER: + if (req.hasTimedQuota()) { + throttle.setWriteNum(req.getTimedQuota()); + } else { + throttle.clearWriteNum(); + } + break; + case WRITE_SIZE: + if (req.hasTimedQuota()) { + throttle.setWriteSize(req.getTimedQuota()); + } else { + throttle.clearWriteSize(); + } + break; + case READ_NUMBER: + if (req.hasTimedQuota()) { + throttle.setReadNum(req.getTimedQuota()); + } else { + throttle.clearReqNum(); + } + break; + case READ_SIZE: + if (req.hasTimedQuota()) { + throttle.setReadSize(req.getTimedQuota()); + } else { + throttle.clearReadSize(); + } + break; + } + quotas.setThrottle(throttle.build()); + } else { + quotas.clearThrottle(); + } + } + + private void applyBypassGlobals(final Quotas.Builder quotas, boolean bypassGlobals) { + if (bypassGlobals) { + quotas.setBypassGlobals(bypassGlobals); + } else { + quotas.clearBypassGlobals(); + } + } + + private void validateTimedQuota(final TimedQuota timedQuota) throws IOException { + if (timedQuota.getSoftLimit() < 1) { + throw new DoNotRetryIOException(new UnsupportedOperationException( + "The throttle limit must be greater then 0, got " + timedQuota.getSoftLimit())); + } + } + + /* ========================================================================== + * Helpers + */ + + private void checkQuotaSupport() throws IOException { + if (!enabled) { + throw new DoNotRetryIOException( + new UnsupportedOperationException("quota support disabled")); + } + } + + private void createQuotaTable() throws IOException { + HRegionInfo newRegions[] = new HRegionInfo[] { + new HRegionInfo(QuotaUtil.QUOTA_TABLE_NAME) + }; + + masterServices.getExecutorService() + .submit(new CreateTableHandler(masterServices, + masterServices.getMasterFileSystem(), + QuotaUtil.QUOTA_TABLE_DESC, + masterServices.getConfiguration(), + newRegions, + masterServices) + .prepare()); + } + + private static class NamedLock { + private HashSet locks = new HashSet(); + + public void lock(final T name) throws 
InterruptedException { + synchronized (locks) { + while (locks.contains(name)) { + locks.wait(); + } + locks.add(name); + } + } + + public void unlock(final T name) { + synchronized (locks) { + locks.remove(name); + locks.notifyAll(); + } + } + } +} + diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/NoopOperationQuota.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/NoopOperationQuota.java new file mode 100644 index 0000000..e67c7c0 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/NoopOperationQuota.java @@ -0,0 +1,84 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import java.util.List; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.Result; + +/** + * Noop operation quota returned when no quota is associated to the user/table + */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +class NoopOperationQuota implements OperationQuota { + private static OperationQuota instance = new NoopOperationQuota(); + + private NoopOperationQuota() { + // no-op + } + + public static OperationQuota get() { + return instance; + } + + @Override + public void checkQuota(int numWrites, int numReads, int numScans) + throws ThrottlingException { + // no-op + } + + @Override + public void close() { + // no-op + } + + @Override + public void addGetResult(final Result result) { + // no-op + } + + @Override + public void addScanResult(final List results) { + // no-op + } + + @Override + public void addMutation(final Mutation mutation) { + // no-op + } + + @Override + public long getReadAvailable() { + return Long.MAX_VALUE; + } + + @Override + public long getWriteAvailable() { + return Long.MAX_VALUE; + } + + @Override + public long getAvgOperationSize(OperationType type) { + return -1; + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/NoopQuotaLimiter.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/NoopQuotaLimiter.java new file mode 100644 index 0000000..2273dc0 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/NoopQuotaLimiter.java @@ -0,0 +1,90 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.quotas.OperationQuota.OperationType; + +/** + * Noop quota limiter returned when no limiter is associated to the user/table + */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +class NoopQuotaLimiter implements QuotaLimiter { + private static QuotaLimiter instance = new NoopQuotaLimiter(); + + private NoopQuotaLimiter() { + // no-op + } + + @Override + public void checkQuota(long estimateWriteSize, long estimateReadSize) + throws ThrottlingException { + // no-op + } + + @Override + public void grabQuota(long writeSize, long readSize) { + // no-op + } + + @Override + public void consumeWrite(final long size) { + // no-op + } + + @Override + public void consumeRead(final long size) { + // no-op + } + + @Override + public boolean isBypass() { + return true; + } + + @Override + public long getWriteAvailable() { + throw new UnsupportedOperationException(); + } + + @Override + public long getReadAvailable() { + throw new UnsupportedOperationException(); + } + + @Override + public void addOperationSize(OperationType type, long size) { + } + + @Override + public long getAvgOperationSize(OperationType type) { + return -1; + } + + @Override + public String toString() { + return "NoopQuotaLimiter"; + } + + public static QuotaLimiter get() { + return instance; + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/OperationQuota.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/OperationQuota.java new file mode 100644 index 0000000..b885ac9 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/OperationQuota.java @@ -0,0 +1,128 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import java.util.List; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.Result; + +/** + * Interface that allows to check the quota available for an operation. 
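+ *
+ * Sketch of the intended call sequence (hypothetical caller and local names;
+ * RegionServerQuotaManager below shows how an instance is actually obtained):
+ *
+ *   quota.checkQuota(0, 1, 0);    // about to perform one small read
+ *   // ... execute the Get and obtain its Result ...
+ *   quota.addGetResult(result);   // report the bytes actually read
+ *   quota.close();                // settle the estimated quota against real usage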
+ */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +public interface OperationQuota { + public enum OperationType { MUTATE, GET, SCAN } + + /** + * Keeps track of the average data size of operations like get, scan, mutate + */ + public class AvgOperationSize { + private final long[] sizeSum; + private final long[] count; + + public AvgOperationSize() { + int size = OperationType.values().length; + sizeSum = new long[size]; + count = new long[size]; + for (int i = 0; i < size; ++i) { + sizeSum[i] = 0; + count[i] = 0; + } + } + + public void addOperationSize(OperationType type, long size) { + if (size > 0) { + int index = type.ordinal(); + sizeSum[index] += size; + count[index]++; + } + } + + public long getAvgOperationSize(OperationType type) { + int index = type.ordinal(); + return count[index] > 0 ? sizeSum[index] / count[index] : 0; + } + + public long getOperationSize(OperationType type) { + return sizeSum[type.ordinal()]; + } + + public void addGetResult(final Result result) { + long size = QuotaUtil.calculateResultSize(result); + addOperationSize(OperationType.GET, size); + } + + public void addScanResult(final List results) { + long size = QuotaUtil.calculateResultSize(results); + addOperationSize(OperationType.SCAN, size); + } + + public void addMutation(final Mutation mutation) { + long size = QuotaUtil.calculateMutationSize(mutation); + addOperationSize(OperationType.MUTATE, size); + } + } + + /** + * Checks if it is possible to execute the specified operation. + * The quota will be estimated based on the number of operations to perform + * and the average size accumulated during time. + * + * @param numWrites number of write operation that will be performed + * @param numReads number of small-read operation that will be performed + * @param numScans number of long-read operation that will be performed + * @throws ThrottlingException if the operation cannot be performed + */ + void checkQuota(int numWrites, int numReads, int numScans) + throws ThrottlingException; + + /** Cleanup method on operation completion */ + void close(); + + /** + * Add a get result. This will be used to calculate the exact quota and + * have a better short-read average size for the next time. + */ + void addGetResult(Result result); + + /** + * Add a scan result. This will be used to calculate the exact quota and + * have a better long-read average size for the next time. + */ + void addScanResult(List results); + + /** + * Add a mutation result. This will be used to calculate the exact quota and + * have a better mutation average size for the next time. + */ + void addMutation(Mutation mutation); + + /** @return the number of bytes available to read to avoid exceeding the quota */ + long getReadAvailable(); + + /** @return the number of bytes available to write to avoid exceeding the quota */ + long getWriteAvailable(); + + /** @return the average data size of the specified operation */ + long getAvgOperationSize(OperationType type); +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaCache.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaCache.java new file mode 100644 index 0000000..8cd402d --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaCache.java @@ -0,0 +1,327 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.Chore; +import org.apache.hadoop.hbase.Stoppable; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.regionserver.RegionServerServices; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.hbase.util.Threads; +import org.apache.hadoop.security.UserGroupInformation; + +import com.google.common.annotations.VisibleForTesting; + +/** + * Cache that keeps track of the quota settings for the users and tables that + * are interacting with it. + * + * To avoid blocking the operations if the requested quota is not in cache + * an "empty quota" will be returned and the request to fetch the quota information + * will be enqueued for the next refresh. + * + * TODO: At the moment the Cache has a Chore that will be triggered every 5min + * or on cache-miss events. Later the Quotas will be pushed using the notification system. + */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +public class QuotaCache implements Stoppable { + private static final Log LOG = LogFactory.getLog(QuotaCache.class); + + public static final String REFRESH_CONF_KEY = "hbase.quota.refresh.period"; + private static final int REFRESH_DEFAULT_PERIOD = 5 * 60000; // 5min + private static final int EVICT_PERIOD_FACTOR = 5; // N * REFRESH_DEFAULT_PERIOD + + // for testing purpose only, enforce the cache to be always refreshed + static boolean TEST_FORCE_REFRESH = false; + + private final ConcurrentHashMap namespaceQuotaCache = + new ConcurrentHashMap(); + private final ConcurrentHashMap tableQuotaCache = + new ConcurrentHashMap(); + private final ConcurrentHashMap userQuotaCache = + new ConcurrentHashMap(); + private final RegionServerServices rsServices; + + private QuotaRefresherChore refreshChore; + private boolean stopped = true; + + public QuotaCache(final RegionServerServices rsServices) { + this.rsServices = rsServices; + } + + public void start() throws IOException { + stopped = false; + + // TODO: This will be replaced once we have the notification bus ready. 
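+    // Example (sketch): a deployment that wants a 1 minute refresh instead of the
+    // 5 minute default can set the key read just below, e.g.
+    //   Configuration conf = HBaseConfiguration.create();
+    //   conf.setInt(QuotaCache.REFRESH_CONF_KEY, 60 * 1000);   // period in ms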
+ Configuration conf = rsServices.getConfiguration(); + int period = conf.getInt(REFRESH_CONF_KEY, REFRESH_DEFAULT_PERIOD); + refreshChore = new QuotaRefresherChore(period, this); + Threads.setDaemonThreadRunning(refreshChore.getThread()); + } + + @Override + public void stop(final String why) { + stopped = true; + } + + @Override + public boolean isStopped() { + return stopped; + } + + /** + * Returns the limiter associated to the specified user/table. + * + * @param ugi the user to limit + * @param table the table to limit + * @return the limiter associated to the specified user/table + */ + public QuotaLimiter getUserLimiter(final UserGroupInformation ugi, final TableName table) { + if (table.isSystemTable()) { + return NoopQuotaLimiter.get(); + } + return getUserQuotaState(ugi).getTableLimiter(table); + } + + /** + * Returns the QuotaState associated to the specified user. + * + * @param ugi the user + * @return the quota info associated to specified user + */ + public UserQuotaState getUserQuotaState(final UserGroupInformation ugi) { + String key = ugi.getShortUserName(); + UserQuotaState quotaInfo = userQuotaCache.get(key); + if (quotaInfo == null) { + quotaInfo = new UserQuotaState(); + if (userQuotaCache.putIfAbsent(key, quotaInfo) == null) { + triggerCacheRefresh(); + } + } + return quotaInfo; + } + + /** + * Returns the limiter associated to the specified table. + * + * @param table the table to limit + * @return the limiter associated to the specified table + */ + public QuotaLimiter getTableLimiter(final TableName table) { + return getQuotaState(this.tableQuotaCache, table).getGlobalLimiter(); + } + + /** + * Returns the limiter associated to the specified namespace. + * + * @param namespace the namespace to limit + * @return the limiter associated to the specified namespace + */ + public QuotaLimiter getNamespaceLimiter(final String namespace) { + return getQuotaState(this.namespaceQuotaCache, namespace).getGlobalLimiter(); + } + + /** + * Returns the QuotaState requested. + * If the quota info is not in cache an empty one will be returned + * and the quota request will be enqueued for the next cache refresh. 
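+ *
+ * Example: the first getTableLimiter() call for a table after startup sees a
+ * freshly created, empty QuotaState (a no-op limiter) and triggers an
+ * asynchronous refresh; once the chore has read the corresponding row from the
+ * quota table, later calls observe the configured limits.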
+ */ + private QuotaState getQuotaState(final ConcurrentHashMap quotasMap, + final K key) { + QuotaState quotaInfo = quotasMap.get(key); + if (quotaInfo == null) { + quotaInfo = new QuotaState(); + if (quotasMap.putIfAbsent(key, quotaInfo) == null) { + triggerCacheRefresh(); + } + } + return quotaInfo; + } + + private Configuration getConfiguration() { + return rsServices.getConfiguration(); + } + + @VisibleForTesting + void triggerCacheRefresh() { + refreshChore.triggerNow(); + } + + @VisibleForTesting + long getLastUpdate() { + return refreshChore.lastUpdate; + } + + @VisibleForTesting + Map getNamespaceQuotaCache() { + return namespaceQuotaCache; + } + + @VisibleForTesting + Map getTableQuotaCache() { + return tableQuotaCache; + } + + @VisibleForTesting + Map getUserQuotaCache() { + return userQuotaCache; + } + + // TODO: Remove this once we have the notification bus + private class QuotaRefresherChore extends Chore { + private long lastUpdate = 0; + + public QuotaRefresherChore(final int period, final Stoppable stoppable) { + super("QuotaRefresherChore", period, stoppable); + } + + @Override + @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="GC_UNRELATED_TYPES", + justification="I do not understand why the complaints, it looks good to me -- FIX") + protected void chore() { + // Prefetch online tables/namespaces + for (TableName table: QuotaCache.this.rsServices.getOnlineTables()) { + if (table.isSystemTable()) continue; + if (!QuotaCache.this.tableQuotaCache.contains(table)) { + QuotaCache.this.tableQuotaCache.putIfAbsent(table, new QuotaState()); + } + String ns = table.getNamespaceAsString(); + if (!QuotaCache.this.namespaceQuotaCache.contains(ns)) { + QuotaCache.this.namespaceQuotaCache.putIfAbsent(ns, new QuotaState()); + } + } + + fetchNamespaceQuotaState(); + fetchTableQuotaState(); + fetchUserQuotaState(); + lastUpdate = EnvironmentEdgeManager.currentTime(); + } + + private void fetchNamespaceQuotaState() { + fetch("namespace", QuotaCache.this.namespaceQuotaCache, new Fetcher() { + @Override + public Get makeGet(final Map.Entry entry) { + return QuotaUtil.makeGetForNamespaceQuotas(entry.getKey()); + } + + @Override + public Map fetchEntries(final List gets) + throws IOException { + return QuotaUtil.fetchNamespaceQuotas(rsServices.getConnection(), gets); + } + }); + } + + private void fetchTableQuotaState() { + fetch("table", QuotaCache.this.tableQuotaCache, new Fetcher() { + @Override + public Get makeGet(final Map.Entry entry) { + return QuotaUtil.makeGetForTableQuotas(entry.getKey()); + } + + @Override + public Map fetchEntries(final List gets) + throws IOException { + return QuotaUtil.fetchTableQuotas(rsServices.getConnection(), gets); + } + }); + } + + private void fetchUserQuotaState() { + final Set namespaces = QuotaCache.this.namespaceQuotaCache.keySet(); + final Set tables = QuotaCache.this.tableQuotaCache.keySet(); + fetch("user", QuotaCache.this.userQuotaCache, new Fetcher() { + @Override + public Get makeGet(final Map.Entry entry) { + return QuotaUtil.makeGetForUserQuotas(entry.getKey(), tables, namespaces); + } + + @Override + public Map fetchEntries(final List gets) + throws IOException { + return QuotaUtil.fetchUserQuotas(rsServices.getConnection(), gets); + } + }); + } + + private void fetch(final String type, + final ConcurrentHashMap quotasMap, final Fetcher fetcher) { + long now = EnvironmentEdgeManager.currentTime(); + long refreshPeriod = getPeriod(); + long evictPeriod = refreshPeriod * EVICT_PERIOD_FACTOR; + + // Find the quota entries to update + 
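// Worked example (sketch), assuming the 5 minute default refresh period:
+      //   refreshPeriod = 300000 ms and evictPeriod = 5 * 300000 ms = 25 minutes;
+      //   an entry last queried 30 minutes ago is evicted, while an entry queried
+      //   recently but last refreshed 6 minutes ago is re-fetched from the quota table.
+      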
List gets = new ArrayList(); + List toRemove = new ArrayList(); + for (Map.Entry entry: quotasMap.entrySet()) { + long lastUpdate = entry.getValue().getLastUpdate(); + long lastQuery = entry.getValue().getLastQuery(); + if (lastQuery > 0 && (now - lastQuery) >= evictPeriod) { + toRemove.add(entry.getKey()); + } else if (TEST_FORCE_REFRESH || (now - lastUpdate) >= refreshPeriod) { + gets.add(fetcher.makeGet(entry)); + } + } + + for (final K key: toRemove) { + if (LOG.isTraceEnabled()) { + LOG.trace("evict " + type + " key=" + key); + } + quotasMap.remove(key); + } + + // fetch and update the quota entries + if (!gets.isEmpty()) { + try { + for (Map.Entry entry: fetcher.fetchEntries(gets).entrySet()) { + V quotaInfo = quotasMap.putIfAbsent(entry.getKey(), entry.getValue()); + if (quotaInfo != null) { + quotaInfo.update(entry.getValue()); + } + + if (LOG.isTraceEnabled()) { + LOG.trace("refresh " + type + " key=" + entry.getKey() + " quotas=" + quotaInfo); + } + } + } catch (IOException e) { + LOG.warn("Unable to read " + type + " from quota table", e); + } + } + } + } + + static interface Fetcher { + Get makeGet(Map.Entry entry); + Map fetchEntries(List gets) throws IOException; + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaLimiter.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaLimiter.java new file mode 100644 index 0000000..ffacbc0 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaLimiter.java @@ -0,0 +1,80 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.quotas.OperationQuota.OperationType; + +/** + * Internal interface used to interact with the user/table quota. + */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +public interface QuotaLimiter { + /** + * Checks if it is possible to execute the specified operation. + * + * @param estimateWriteSize the write size that will be checked against the available quota + * @param estimateReadSize the read size that will be checked against the available quota + * @throws ThrottlingException thrown if not enough avialable resources to perform operation. + */ + void checkQuota(long estimateWriteSize, long estimateReadSize) + throws ThrottlingException; + + /** + * Removes the specified write and read amount from the quota. + * At this point the write and read amount will be an estimate, + * that will be later adjusted with a consumeWrite()/consumeRead() call. 
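+ *
+ * Example (sketch): an operation estimated at 100 bytes of writes that actually
+ * wrote 140 bytes would call grabQuota(100, 0) up front and consumeWrite(40)
+ * afterwards; a negative size gives quota back when the estimate was too high.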
+ * + * @param writeSize the write size that will be removed from the current quota + * @param readSize the read size that will be removed from the current quota + */ + void grabQuota(long writeSize, long readSize); + + /** + * Removes or add back some write amount to the quota. + * (called at the end of an operation in case the estimate quota was off) + */ + void consumeWrite(long size); + + /** + * Removes or add back some read amount to the quota. + * (called at the end of an operation in case the estimate quota was off) + */ + void consumeRead(long size); + + /** @return true if the limiter is a noop */ + boolean isBypass(); + + /** @return the number of bytes available to read to avoid exceeding the quota */ + long getReadAvailable(); + + /** @return the number of bytes available to write to avoid exceeding the quota */ + long getWriteAvailable(); + + /** + * Add the average size of the specified operation type. + * The average will be used as estimate for the next operations. + */ + void addOperationSize(OperationType type, long size); + + /** @return the average data size of the specified operation */ + long getAvgOperationSize(OperationType type); +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaLimiterFactory.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaLimiterFactory.java new file mode 100644 index 0000000..3c759f0 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaLimiterFactory.java @@ -0,0 +1,39 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle; + +@InterfaceAudience.Private +@InterfaceStability.Evolving +public class QuotaLimiterFactory { + public static QuotaLimiter fromThrottle(final Throttle throttle) { + return TimeBasedLimiter.fromThrottle(throttle); + } + + public static QuotaLimiter update(final QuotaLimiter a, final QuotaLimiter b) { + if (a.getClass().equals(b.getClass()) && a instanceof TimeBasedLimiter) { + ((TimeBasedLimiter)a).update(((TimeBasedLimiter)b)); + return a; + } + throw new UnsupportedOperationException("TODO not implemented yet"); + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaState.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaState.java new file mode 100644 index 0000000..3804a6f --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaState.java @@ -0,0 +1,119 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; + +/** + * In-Memory state of table or namespace quotas + */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +public class QuotaState { + protected long lastUpdate = 0; + protected long lastQuery = 0; + + protected QuotaLimiter globalLimiter = NoopQuotaLimiter.get(); + + public QuotaState() { + this(0); + } + + public QuotaState(final long updateTs) { + lastUpdate = updateTs; + } + + public synchronized long getLastUpdate() { + return lastUpdate; + } + + public synchronized long getLastQuery() { + return lastQuery; + } + + @Override + public synchronized String toString() { + StringBuilder builder = new StringBuilder(); + builder.append("QuotaState(ts=" + getLastUpdate()); + if (isBypass()) { + builder.append(" bypass"); + } else { + if (globalLimiter != NoopQuotaLimiter.get()) { + //builder.append(" global-limiter"); + builder.append(" " + globalLimiter); + } + } + builder.append(')'); + return builder.toString(); + } + + /** + * @return true if there is no quota information associated to this object + */ + public synchronized boolean isBypass() { + return globalLimiter == NoopQuotaLimiter.get(); + } + + /** + * Setup the global quota information. + * (This operation is part of the QuotaState setup) + */ + public void setQuotas(final Quotas quotas) { + if (quotas.hasThrottle()) { + globalLimiter = QuotaLimiterFactory.fromThrottle(quotas.getThrottle()); + } else { + globalLimiter = NoopQuotaLimiter.get(); + } + } + + /** + * Perform an update of the quota info based on the other quota info object. + * (This operation is executed by the QuotaCache) + */ + public synchronized void update(final QuotaState other) { + if (globalLimiter == NoopQuotaLimiter.get()) { + globalLimiter = other.globalLimiter; + } else if (other.globalLimiter == NoopQuotaLimiter.get()) { + globalLimiter = NoopQuotaLimiter.get(); + } else { + globalLimiter = QuotaLimiterFactory.update(globalLimiter, other.globalLimiter); + } + lastUpdate = other.lastUpdate; + } + + /** + * Return the limiter associated with this quota. 
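+ *
+ * Example (sketch, "quotas" being a Quotas protobuf message with a Throttle set):
+ *   QuotaState state = new QuotaState();
+ *   state.getGlobalLimiter();   // NoopQuotaLimiter: nothing configured yet
+ *   state.setQuotas(quotas);    // install the throttle settings
+ *   state.getGlobalLimiter();   // now a TimeBasedLimiter from QuotaLimiterFactory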
+ * @return the quota limiter + */ + public synchronized QuotaLimiter getGlobalLimiter() { + lastQuery = EnvironmentEdgeManager.currentTime(); + return globalLimiter; + } + + /** + * Return the limiter associated with this quota without updating internal last query stats + * @return the quota limiter + */ + synchronized QuotaLimiter getGlobalLimiterWithoutUpdatingLastQuery() { + return globalLimiter; + } +} \ No newline at end of file diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaUtil.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaUtil.java new file mode 100644 index 0000000..bff648d --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/QuotaUtil.java @@ -0,0 +1,311 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import java.io.IOException; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.HColumnDescriptor; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.Mutation; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; +import org.apache.hadoop.hbase.regionserver.BloomType; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; + +/** + * Helper class to interact with the quota table + */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +public class QuotaUtil extends QuotaTableUtil { + private static final Log LOG = LogFactory.getLog(QuotaUtil.class); + + public static final String QUOTA_CONF_KEY = "hbase.quota.enabled"; + private static final boolean QUOTA_ENABLED_DEFAULT = false; + + /** Table descriptor for Quota internal table */ + public static final HTableDescriptor QUOTA_TABLE_DESC = + new HTableDescriptor(QUOTA_TABLE_NAME); + static { + QUOTA_TABLE_DESC.addFamily( + new HColumnDescriptor(QUOTA_FAMILY_INFO) + .setScope(HConstants.REPLICATION_SCOPE_LOCAL) + .setBloomFilterType(BloomType.ROW) + .setMaxVersions(1) + ); + 
QUOTA_TABLE_DESC.addFamily( + new HColumnDescriptor(QUOTA_FAMILY_USAGE) + .setScope(HConstants.REPLICATION_SCOPE_LOCAL) + .setBloomFilterType(BloomType.ROW) + .setMaxVersions(1) + ); + } + + /** Returns true if the support for quota is enabled */ + public static boolean isQuotaEnabled(final Configuration conf) { + return conf.getBoolean(QUOTA_CONF_KEY, QUOTA_ENABLED_DEFAULT); + } + + /* ========================================================================= + * Quota "settings" helpers + */ + public static void addTableQuota(final Connection connection, final TableName table, + final Quotas data) throws IOException { + addQuotas(connection, getTableRowKey(table), data); + } + + public static void deleteTableQuota(final Connection connection, final TableName table) + throws IOException { + deleteQuotas(connection, getTableRowKey(table)); + } + + public static void addNamespaceQuota(final Connection connection, final String namespace, + final Quotas data) throws IOException { + addQuotas(connection, getNamespaceRowKey(namespace), data); + } + + public static void deleteNamespaceQuota(final Connection connection, final String namespace) + throws IOException { + deleteQuotas(connection, getNamespaceRowKey(namespace)); + } + + public static void addUserQuota(final Connection connection, final String user, + final Quotas data) throws IOException { + addQuotas(connection, getUserRowKey(user), data); + } + + public static void addUserQuota(final Connection connection, final String user, + final TableName table, final Quotas data) throws IOException { + addQuotas(connection, getUserRowKey(user), getSettingsQualifierForUserTable(table), data); + } + + public static void addUserQuota(final Connection connection, final String user, + final String namespace, final Quotas data) throws IOException { + addQuotas(connection, getUserRowKey(user), + getSettingsQualifierForUserNamespace(namespace), data); + } + + public static void deleteUserQuota(final Connection connection, final String user) + throws IOException { + deleteQuotas(connection, getUserRowKey(user)); + } + + public static void deleteUserQuota(final Connection connection, final String user, + final TableName table) throws IOException { + deleteQuotas(connection, getUserRowKey(user), + getSettingsQualifierForUserTable(table)); + } + + public static void deleteUserQuota(final Connection connection, final String user, + final String namespace) throws IOException { + deleteQuotas(connection, getUserRowKey(user), + getSettingsQualifierForUserNamespace(namespace)); + } + + private static void addQuotas(final Connection connection, final byte[] rowKey, + final Quotas data) throws IOException { + addQuotas(connection, rowKey, QUOTA_QUALIFIER_SETTINGS, data); + } + + private static void addQuotas(final Connection connection, final byte[] rowKey, + final byte[] qualifier, final Quotas data) throws IOException { + Put put = new Put(rowKey); + put.add(QUOTA_FAMILY_INFO, qualifier, quotasToData(data)); + doPut(connection, put); + } + + private static void deleteQuotas(final Connection connection, final byte[] rowKey) + throws IOException { + deleteQuotas(connection, rowKey, null); + } + + private static void deleteQuotas(final Connection connection, final byte[] rowKey, + final byte[] qualifier) throws IOException { + Delete delete = new Delete(rowKey); + if (qualifier != null) { + delete.deleteColumns(QUOTA_FAMILY_INFO, qualifier); + } + doDelete(connection, delete); + } + + public static Map fetchUserQuotas(final Connection connection, + final List gets) 
throws IOException { + long nowTs = EnvironmentEdgeManager.currentTime(); + Result[] results = doGet(connection, gets); + + Map userQuotas = new HashMap(results.length); + for (int i = 0; i < results.length; ++i) { + byte[] key = gets.get(i).getRow(); + assert isUserRowKey(key); + String user = getUserFromRowKey(key); + + final UserQuotaState quotaInfo = new UserQuotaState(nowTs); + userQuotas.put(user, quotaInfo); + + if (results[i].isEmpty()) continue; + assert Bytes.equals(key, results[i].getRow()); + + try { + parseUserResult(user, results[i], new UserQuotasVisitor() { + @Override + public void visitUserQuotas(String userName, String namespace, Quotas quotas) { + quotaInfo.setQuotas(namespace, quotas); + } + + @Override + public void visitUserQuotas(String userName, TableName table, Quotas quotas) { + quotaInfo.setQuotas(table, quotas); + } + + @Override + public void visitUserQuotas(String userName, Quotas quotas) { + quotaInfo.setQuotas(quotas); + } + }); + } catch (IOException e) { + LOG.error("Unable to parse user '" + user + "' quotas", e); + userQuotas.remove(user); + } + } + return userQuotas; + } + + public static Map fetchTableQuotas(final Connection connection, + final List gets) throws IOException { + return fetchGlobalQuotas("table", connection, gets, new KeyFromRow() { + @Override + public TableName getKeyFromRow(final byte[] row) { + assert isTableRowKey(row); + return getTableFromRowKey(row); + } + }); + } + + public static Map fetchNamespaceQuotas(final Connection connection, + final List gets) throws IOException { + return fetchGlobalQuotas("namespace", connection, gets, new KeyFromRow() { + @Override + public String getKeyFromRow(final byte[] row) { + assert isNamespaceRowKey(row); + return getNamespaceFromRowKey(row); + } + }); + } + + public static Map fetchGlobalQuotas(final String type, + final Connection connection, final List gets, final KeyFromRow kfr) + throws IOException { + long nowTs = EnvironmentEdgeManager.currentTime(); + Result[] results = doGet(connection, gets); + + Map globalQuotas = new HashMap(results.length); + for (int i = 0; i < results.length; ++i) { + byte[] row = gets.get(i).getRow(); + K key = kfr.getKeyFromRow(row); + + QuotaState quotaInfo = new QuotaState(nowTs); + globalQuotas.put(key, quotaInfo); + + if (results[i].isEmpty()) continue; + assert Bytes.equals(row, results[i].getRow()); + + byte[] data = results[i].getValue(QUOTA_FAMILY_INFO, QUOTA_QUALIFIER_SETTINGS); + if (data == null) continue; + + try { + Quotas quotas = quotasFromData(data); + quotaInfo.setQuotas(quotas); + } catch (IOException e) { + LOG.error("Unable to parse " + type + " '" + key + "' quotas", e); + globalQuotas.remove(key); + } + } + return globalQuotas; + } + + private static interface KeyFromRow { + T getKeyFromRow(final byte[] row); + } + + /* ========================================================================= + * HTable helpers + */ + private static void doPut(final Connection connection, final Put put) + throws IOException { + try (Table table = connection.getTable(QuotaUtil.QUOTA_TABLE_NAME)) { + table.put(put); + } + } + + private static void doDelete(final Connection connection, final Delete delete) + throws IOException { + try (Table table = connection.getTable(QuotaUtil.QUOTA_TABLE_NAME)) { + table.delete(delete); + } + } + + /* ========================================================================= + * Data Size Helpers + */ + public static long calculateMutationSize(final Mutation mutation) { + long size = 0; + for (Map.Entry> entry : 
mutation.getFamilyCellMap().entrySet()) { + for (Cell cell : entry.getValue()) { + size += KeyValueUtil.length(cell); + } + } + return size; + } + + public static long calculateResultSize(final Result result) { + long size = 0; + for (Cell cell : result.rawCells()) { + size += KeyValueUtil.length(cell); + } + return size; + } + + public static long calculateResultSize(final List results) { + long size = 0; + for (Result result: results) { + for (Cell cell : result.rawCells()) { + size += KeyValueUtil.length(cell); + } + } + return size; + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/RateLimiter.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/RateLimiter.java new file mode 100644 index 0000000..1806cc3 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/RateLimiter.java @@ -0,0 +1,181 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import java.util.concurrent.TimeUnit; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; + +/** + * Simple rate limiter. + * + * Usage Example: + * RateLimiter limiter = new RateLimiter(); // At this point you have a unlimited resource limiter + * limiter.set(10, TimeUnit.SECONDS); // set 10 resources/sec + * + * long lastTs = 0; // You need to keep track of the last update timestamp + * while (true) { + * long now = System.currentTimeMillis(); + * + * // call canExecute before performing resource consuming operation + * bool canExecute = limiter.canExecute(now, lastTs); + * // If there are no available resources, wait until one is available + * if (!canExecute) Thread.sleep(limiter.waitInterval()); + * // ...execute the work and consume the resource... + * limiter.consume(); + * } + */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +public class RateLimiter { + private long tunit = 1000; // Timeunit factor for translating to ms. + private long limit = Long.MAX_VALUE; // The max value available resource units can be refilled to. + private long avail = Long.MAX_VALUE; // Currently available resource units + + public RateLimiter() { + } + + /** + * Set the RateLimiter max available resources and refill period. + * @param limit The max value available resource units can be refilled to. + * @param timeUnit Timeunit factor for translating to ms. 
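+ *
+ * Example: set(100, TimeUnit.SECONDS) allows roughly 100 resource units per
+ * second; internally tunit becomes 1000 (ms per refill period) and both limit
+ * and avail are reset to 100.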
+ */ + public void set(final long limit, final TimeUnit timeUnit) { + switch (timeUnit) { + case NANOSECONDS: + throw new RuntimeException("Unsupported NANOSECONDS TimeUnit"); + case MICROSECONDS: + throw new RuntimeException("Unsupported MICROSECONDS TimeUnit"); + case MILLISECONDS: + tunit = 1; + break; + case SECONDS: + tunit = 1000; + break; + case MINUTES: + tunit = 60 * 1000; + break; + case HOURS: + tunit = 60 * 60 * 1000; + break; + case DAYS: + tunit = 24 * 60 * 60 * 1000; + break; + } + this.limit = limit; + this.avail = limit; + } + + public String toString() { + if (limit == Long.MAX_VALUE) { + return "RateLimiter(Bypass)"; + } + return "RateLimiter(avail=" + avail + " limit=" + limit + " tunit=" + tunit + ")"; + } + + /** + * Sets the current instance of RateLimiter to a new values. + * + * if current limit is smaller than the new limit, bump up the available resources. + * Otherwise allow clients to use up the previously available resources. + */ + public synchronized void update(final RateLimiter other) { + this.tunit = other.tunit; + if (this.limit < other.limit) { + this.avail += (other.limit - this.limit); + } + this.limit = other.limit; + } + + public synchronized boolean isBypass() { + return limit == Long.MAX_VALUE; + } + + public synchronized long getLimit() { + return limit; + } + + public synchronized long getAvailable() { + return avail; + } + + /** + * given the time interval, is there at least one resource available to allow execution? + * @param now the current timestamp + * @param lastTs the timestamp of the last update + * @return true if there is at least one resource available, otherwise false + */ + public boolean canExecute(final long now, final long lastTs) { + return canExecute(now, lastTs, 1); + } + + /** + * given the time interval, are there enough available resources to allow execution? + * @param now the current timestamp + * @param lastTs the timestamp of the last update + * @param amount the number of required resources + * @return true if there are enough available resources, otherwise false + */ + public synchronized boolean canExecute(final long now, final long lastTs, final long amount) { + return avail >= amount ? true : refill(now, lastTs) >= amount; + } + + /** + * consume one available unit. + */ + public void consume() { + consume(1); + } + + /** + * consume amount available units. + * @param amount the number of units to consume + */ + public synchronized void consume(final long amount) { + this.avail -= amount; + } + + /** + * @return estimate of the ms required to wait before being able to provide 1 resource. + */ + public long waitInterval() { + return waitInterval(1); + } + + /** + * @return estimate of the ms required to wait before being able to provide "amount" resources. + */ + public synchronized long waitInterval(final long amount) { + // TODO Handle over quota? + return (amount <= avail) ? 0 : ((amount * tunit) / limit) - ((avail * tunit) / limit); + } + + /** + * given the specified time interval, refill the avilable units to the proportionate + * to elapsed time or to the prespecified limit. 
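+ *
+ * Worked example: with limit=10 and tunit=1000 (10 units/sec), avail=2 and a
+ * 500 ms gap since lastTs: delta = (10 * 500) / 1000 = 5, so avail becomes
+ * min(10, 2 + 5) = 7.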
+ */
+  private long refill(final long now, final long lastTs) {
+    long delta = (limit * (now - lastTs)) / tunit;
+    if (delta > 0) {
+      avail = Math.min(limit, avail + delta);
+    }
+    return avail;
+  }
+}
diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/RegionServerQuotaManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/RegionServerQuotaManager.java
new file mode 100644
index 0000000..836025f
--- /dev/null
+++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/RegionServerQuotaManager.java
@@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.quotas;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.classification.InterfaceAudience;
+import org.apache.hadoop.hbase.classification.InterfaceStability;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.ipc.RpcScheduler;
+import org.apache.hadoop.hbase.ipc.RequestContext;
+import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;
+import org.apache.hadoop.hbase.regionserver.HRegion;
+import org.apache.hadoop.hbase.regionserver.RegionServerServices;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import com.google.common.annotations.VisibleForTesting;
+
+/**
+ * Region Server Quota Manager.
+ * It is responsible for providing access to the quota information of each user/table.
+ *
+ * The direct user of this class is the RegionServer that will get and check the
+ * user/table quota for each operation (put, get, scan).
+ * For system tables, and for users/tables with no quota specified, the quota check is a noop.
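+ *
+ * Quota support is off by default; a sketch of enabling it through the
+ * configuration key defined in QuotaUtil ("hbase.quota.enabled"):
+ *
+ *   conf.setBoolean(QuotaUtil.QUOTA_CONF_KEY, true);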
+ */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +public class RegionServerQuotaManager { + private static final Log LOG = LogFactory.getLog(RegionServerQuotaManager.class); + + private final RegionServerServices rsServices; + + private QuotaCache quotaCache = null; + + public RegionServerQuotaManager(final RegionServerServices rsServices) { + this.rsServices = rsServices; + } + + public void start(final RpcScheduler rpcScheduler) throws IOException { + if (!QuotaUtil.isQuotaEnabled(rsServices.getConfiguration())) { + LOG.info("Quota support disabled"); + return; + } + + LOG.info("Initializing quota support"); + + // Initialize quota cache + quotaCache = new QuotaCache(rsServices); + quotaCache.start(); + } + + public void stop() { + if (isQuotaEnabled()) { + quotaCache.stop("shutdown"); + } + } + + public boolean isQuotaEnabled() { + return quotaCache != null; + } + + @VisibleForTesting + QuotaCache getQuotaCache() { + return quotaCache; + } + + /** + * Returns the quota for an operation. + * + * @param ugi the user that is executing the operation + * @param table the table where the operation will be executed + * @return the OperationQuota + */ + public OperationQuota getQuota(final UserGroupInformation ugi, final TableName table) { + if (isQuotaEnabled() && !table.isSystemTable()) { + UserQuotaState userQuotaState = quotaCache.getUserQuotaState(ugi); + QuotaLimiter userLimiter = userQuotaState.getTableLimiter(table); + boolean useNoop = userLimiter.isBypass(); + if (userQuotaState.hasBypassGlobals()) { + if (LOG.isTraceEnabled()) { + LOG.trace("get quota for ugi=" + ugi + " table=" + table + " userLimiter=" + userLimiter); + } + if (!useNoop) { + return new DefaultOperationQuota(userLimiter); + } + } else { + QuotaLimiter nsLimiter = quotaCache.getNamespaceLimiter(table.getNamespaceAsString()); + QuotaLimiter tableLimiter = quotaCache.getTableLimiter(table); + useNoop &= tableLimiter.isBypass() && nsLimiter.isBypass(); + if (LOG.isTraceEnabled()) { + LOG.trace("get quota for ugi=" + ugi + " table=" + table + " userLimiter=" + + userLimiter + " tableLimiter=" + tableLimiter + " nsLimiter=" + nsLimiter); + } + if (!useNoop) { + return new DefaultOperationQuota(userLimiter, tableLimiter, nsLimiter); + } + } + } + return NoopOperationQuota.get(); + } + + /** + * Check the quota for the current (rpc-context) user. + * Returns the OperationQuota used to get the available quota and + * to report the data/usage of the operation. + * @param region the region where the operation will be performed + * @param type the operation type + * @return the OperationQuota + * @throws ThrottlingException if the operation cannot be executed due to quota exceeded. + */ + public OperationQuota checkQuota(final HRegion region, + final OperationQuota.OperationType type) throws IOException, ThrottlingException { + switch (type) { + case SCAN: return checkQuota(region, 0, 0, 1); + case GET: return checkQuota(region, 0, 1, 0); + case MUTATE: return checkQuota(region, 1, 0, 0); + } + throw new RuntimeException("Invalid operation type: " + type); + } + + /** + * Check the quota for the current (rpc-context) user. + * Returns the OperationQuota used to get the available quota and + * to report the data/usage of the operation. + * @param region the region where the operation will be performed + * @param actions the "multi" actions to perform + * @return the OperationQuota + * @throws ThrottlingException if the operation cannot be executed due to quota exceeded. 
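+ *
+ * Sketch of the overall calling pattern (hypothetical RPC handler; a
+ * ThrottlingException propagates to the client when the quota is exceeded):
+ *
+ *   OperationQuota quota = quotaManager.checkQuota(region, OperationQuota.OperationType.GET);
+ *   // ... execute the Get and build the Result ...
+ *   quota.addGetResult(result);
+ *   quota.close();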
+ */ + public OperationQuota checkQuota(final HRegion region, + final List actions) throws IOException, ThrottlingException { + int numWrites = 0; + int numReads = 0; + for (final ClientProtos.Action action: actions) { + if (action.hasMutation()) { + numWrites++; + } else if (action.hasGet()) { + numReads++; + } + } + return checkQuota(region, numWrites, numReads, 0); + } + + /** + * Check the quota for the current (rpc-context) user. + * Returns the OperationQuota used to get the available quota and + * to report the data/usage of the operation. + * @param region the region where the operation will be performed + * @param numWrites number of writes to perform + * @param numReads number of short-reads to perform + * @param numScans number of scan to perform + * @return the OperationQuota + * @throws ThrottlingException if the operation cannot be executed due to quota exceeded. + */ + private OperationQuota checkQuota(final HRegion region, + final int numWrites, final int numReads, final int numScans) + throws IOException, ThrottlingException { + UserGroupInformation ugi; + if (RequestContext.isInRequestContext()) { + ugi = RequestContext.getRequestUser().getUGI(); + } else { + ugi = User.getCurrent().getUGI(); + } + TableName table = region.getTableDesc().getTableName(); + + OperationQuota quota = getQuota(ugi, table); + try { + quota.checkQuota(numWrites, numReads, numScans); + } catch (ThrottlingException e) { + LOG.debug("Throttling exception for user=" + ugi.getUserName() + + " table=" + table + " numWrites=" + numWrites + + " numReads=" + numReads + " numScans=" + numScans + + ": " + e.getMessage()); + throw e; + } + return quota; + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/TimeBasedLimiter.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/TimeBasedLimiter.java new file mode 100644 index 0000000..8ca7e6b --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/TimeBasedLimiter.java @@ -0,0 +1,206 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
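To see how the throttle protobuf, the factory and the limiter below fit together, here is a minimal, self-contained sketch. The class name is illustrative, and HBaseProtos.TimeUnit plus the TimedQuota builder setters are assumed from the protobuf definitions this patch references rather than shown in this hunk.

  package org.apache.hadoop.hbase.quotas;

  import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
  import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle;
  import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota;

  public class ThrottleLimiterSketch {
    public static void main(String[] args) throws Exception {
      // A Throttle allowing roughly 100 requests per second (soft limit).
      Throttle throttle = Throttle.newBuilder()
          .setReqNum(TimedQuota.newBuilder()
              .setSoftLimit(100)
              .setTimeUnit(HBaseProtos.TimeUnit.SECONDS)
              .build())
          .build();

      // With at least one field set the factory returns a TimeBasedLimiter;
      // an empty Throttle yields the NoopQuotaLimiter singleton instead.
      QuotaLimiter limiter = QuotaLimiterFactory.fromThrottle(throttle);

      // Estimate a 1 KB write; ThrottlingException is raised once the rate is exceeded.
      limiter.checkQuota(1024, 0);
      limiter.grabQuota(1024, 0);
      System.out.println(limiter);
    }
  }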
+ */ + +package org.apache.hadoop.hbase.quotas; + + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle; +import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.TimedQuota; +import org.apache.hadoop.hbase.quotas.OperationQuota.AvgOperationSize; +import org.apache.hadoop.hbase.quotas.OperationQuota.OperationType; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; + +/** + * Simple time based limiter that checks the quota Throttle + */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +public class TimeBasedLimiter implements QuotaLimiter { + private static final Log LOG = LogFactory.getLog(TimeBasedLimiter.class); + + private long writeLastTs = 0; + private long readLastTs = 0; + + private RateLimiter reqsLimiter = new RateLimiter(); + private RateLimiter reqSizeLimiter = new RateLimiter(); + private RateLimiter writeReqsLimiter = new RateLimiter(); + private RateLimiter writeSizeLimiter = new RateLimiter(); + private RateLimiter readReqsLimiter = new RateLimiter(); + private RateLimiter readSizeLimiter = new RateLimiter(); + private AvgOperationSize avgOpSize = new AvgOperationSize(); + + private TimeBasedLimiter() { + } + + static QuotaLimiter fromThrottle(final Throttle throttle) { + TimeBasedLimiter limiter = new TimeBasedLimiter(); + boolean isBypass = true; + if (throttle.hasReqNum()) { + setFromTimedQuota(limiter.reqsLimiter, throttle.getReqNum()); + isBypass = false; + } + + if (throttle.hasReqSize()) { + setFromTimedQuota(limiter.reqSizeLimiter, throttle.getReqSize()); + isBypass = false; + } + + if (throttle.hasWriteNum()) { + setFromTimedQuota(limiter.writeReqsLimiter, throttle.getWriteNum()); + isBypass = false; + } + + if (throttle.hasWriteSize()) { + setFromTimedQuota(limiter.writeSizeLimiter, throttle.getWriteSize()); + isBypass = false; + } + + if (throttle.hasReadNum()) { + setFromTimedQuota(limiter.readReqsLimiter, throttle.getReadNum()); + isBypass = false; + } + + if (throttle.hasReadSize()) { + setFromTimedQuota(limiter.readSizeLimiter, throttle.getReadSize()); + isBypass = false; + } + return isBypass ? 
NoopQuotaLimiter.get() : limiter; + } + + public void update(final TimeBasedLimiter other) { + reqsLimiter.update(other.reqsLimiter); + reqSizeLimiter.update(other.reqSizeLimiter); + writeReqsLimiter.update(other.writeReqsLimiter); + writeSizeLimiter.update(other.writeSizeLimiter); + readReqsLimiter.update(other.readReqsLimiter); + readSizeLimiter.update(other.readSizeLimiter); + } + + private static void setFromTimedQuota(final RateLimiter limiter, final TimedQuota timedQuota) { + limiter.set(timedQuota.getSoftLimit(), ProtobufUtil.toTimeUnit(timedQuota.getTimeUnit())); + } + + @Override + public void checkQuota(long writeSize, long readSize) + throws ThrottlingException { + long now = EnvironmentEdgeManager.currentTime(); + long lastTs = Math.max(readLastTs, writeLastTs); + + if (!reqsLimiter.canExecute(now, lastTs)) { + ThrottlingException.throwNumRequestsExceeded(reqsLimiter.waitInterval()); + } + if (!reqSizeLimiter.canExecute(now, lastTs, writeSize + readSize)) { + ThrottlingException.throwNumRequestsExceeded(reqSizeLimiter.waitInterval(writeSize+readSize)); + } + + if (writeSize > 0) { + if (!writeReqsLimiter.canExecute(now, writeLastTs)) { + ThrottlingException.throwNumWriteRequestsExceeded(writeReqsLimiter.waitInterval()); + } + if (!writeSizeLimiter.canExecute(now, writeLastTs, writeSize)) { + ThrottlingException.throwWriteSizeExceeded(writeSizeLimiter.waitInterval(writeSize)); + } + } + + if (readSize > 0) { + if (!readReqsLimiter.canExecute(now, readLastTs)) { + ThrottlingException.throwNumReadRequestsExceeded(readReqsLimiter.waitInterval()); + } + if (!readSizeLimiter.canExecute(now, readLastTs, readSize)) { + ThrottlingException.throwReadSizeExceeded(readSizeLimiter.waitInterval(readSize)); + } + } + } + + @Override + public void grabQuota(long writeSize, long readSize) { + assert writeSize != 0 || readSize != 0; + + long now = EnvironmentEdgeManager.currentTime(); + + reqsLimiter.consume(1); + reqSizeLimiter.consume(writeSize + readSize); + + if (writeSize > 0) { + writeReqsLimiter.consume(1); + writeSizeLimiter.consume(writeSize); + writeLastTs = now; + } + if (readSize > 0) { + readReqsLimiter.consume(1); + readSizeLimiter.consume(readSize); + readLastTs = now; + } + } + + @Override + public void consumeWrite(final long size) { + reqSizeLimiter.consume(size); + writeSizeLimiter.consume(size); + } + + @Override + public void consumeRead(final long size) { + reqSizeLimiter.consume(size); + readSizeLimiter.consume(size); + } + + @Override + public boolean isBypass() { + return false; + } + + @Override + public long getWriteAvailable() { + return writeSizeLimiter.getAvailable(); + } + + @Override + public long getReadAvailable() { + return readSizeLimiter.getAvailable(); + } + + @Override + public void addOperationSize(OperationType type, long size) { + avgOpSize.addOperationSize(type, size); + } + + @Override + public long getAvgOperationSize(OperationType type) { + return avgOpSize.getAvgOperationSize(type); + } + + @Override + public String toString() { + StringBuilder builder = new StringBuilder(); + builder.append("TimeBasedLimiter("); + if (!reqsLimiter.isBypass()) builder.append("reqs=" + reqsLimiter); + if (!reqSizeLimiter.isBypass()) builder.append(" resSize=" + reqSizeLimiter); + if (!writeReqsLimiter.isBypass()) builder.append(" writeReqs=" + writeReqsLimiter); + if (!writeSizeLimiter.isBypass()) builder.append(" writeSize=" + writeSizeLimiter); + if (!readReqsLimiter.isBypass()) builder.append(" readReqs=" + readReqsLimiter); + if (!readSizeLimiter.isBypass()) 
builder.append(" readSize=" + readSizeLimiter); + builder.append(')'); + return builder.toString(); + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/UserQuotaState.java hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/UserQuotaState.java new file mode 100644 index 0000000..19fce22 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/UserQuotaState.java @@ -0,0 +1,202 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import java.util.HashMap; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; + +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; + +/** + * In-Memory state of the user quotas + */ +@InterfaceAudience.Private +@InterfaceStability.Evolving +public class UserQuotaState extends QuotaState { + private Map namespaceLimiters = null; + private Map tableLimiters = null; + private boolean bypassGlobals = false; + + public UserQuotaState() { + super(); + } + + public UserQuotaState(final long updateTs) { + super(updateTs); + } + + @Override + public synchronized String toString() { + StringBuilder builder = new StringBuilder(); + builder.append("UserQuotaState(ts=" + getLastUpdate()); + if (bypassGlobals) builder.append(" bypass-globals"); + + if (isBypass()) { + builder.append(" bypass"); + } else { + if (getGlobalLimiterWithoutUpdatingLastQuery() != NoopQuotaLimiter.get()) { + builder.append(" global-limiter"); + } + + if (tableLimiters != null && !tableLimiters.isEmpty()) { + builder.append(" ["); + for (TableName table: tableLimiters.keySet()) { + builder.append(" " + table); + } + builder.append(" ]"); + } + + if (namespaceLimiters != null && !namespaceLimiters.isEmpty()) { + builder.append(" ["); + for (String ns: namespaceLimiters.keySet()) { + builder.append(" " + ns); + } + builder.append(" ]"); + } + } + builder.append(')'); + return builder.toString(); + } + + /** + * @return true if there is no quota information associated to this object + */ + @Override + public synchronized boolean isBypass() { + return !bypassGlobals && + getGlobalLimiterWithoutUpdatingLastQuery() == NoopQuotaLimiter.get() && + (tableLimiters == null || tableLimiters.isEmpty()) && + (namespaceLimiters == null || namespaceLimiters.isEmpty()); + } + + public synchronized boolean hasBypassGlobals() { + return bypassGlobals; + } + + @Override + public void setQuotas(final Quotas quotas) { + super.setQuotas(quotas); + bypassGlobals = quotas.getBypassGlobals(); + } + + /** + * Add 
the quota information of the specified table. + * (This operation is part of the QuotaState setup) + */ + public void setQuotas(final TableName table, Quotas quotas) { + tableLimiters = setLimiter(tableLimiters, table, quotas); + } + + /** + * Add the quota information of the specified namespace. + * (This operation is part of the QuotaState setup) + */ + public void setQuotas(final String namespace, Quotas quotas) { + namespaceLimiters = setLimiter(namespaceLimiters, namespace, quotas); + } + + private Map setLimiter(Map limiters, + final K key, final Quotas quotas) { + if (limiters == null) { + limiters = new HashMap(); + } + + QuotaLimiter limiter = quotas.hasThrottle() ? + QuotaLimiterFactory.fromThrottle(quotas.getThrottle()) : null; + if (limiter != null && !limiter.isBypass()) { + limiters.put(key, limiter); + } else { + limiters.remove(key); + } + return limiters; + } + + /** + * Perform an update of the quota state based on the other quota state object. + * (This operation is executed by the QuotaCache) + */ + @Override + public synchronized void update(final QuotaState other) { + super.update(other); + + if (other instanceof UserQuotaState) { + UserQuotaState uOther = (UserQuotaState)other; + tableLimiters = updateLimiters(tableLimiters, uOther.tableLimiters); + namespaceLimiters = updateLimiters(namespaceLimiters, uOther.namespaceLimiters); + bypassGlobals = uOther.bypassGlobals; + } else { + tableLimiters = null; + namespaceLimiters = null; + bypassGlobals = false; + } + } + + private static Map updateLimiters(final Map map, + final Map otherMap) { + if (map == null) { + return otherMap; + } + + if (otherMap != null) { + // To Remove + Set toRemove = new HashSet(map.keySet()); + toRemove.removeAll(otherMap.keySet()); + map.keySet().removeAll(toRemove); + + // To Update/Add + for (final Map.Entry entry: otherMap.entrySet()) { + QuotaLimiter limiter = map.get(entry.getKey()); + if (limiter == null) { + limiter = entry.getValue(); + } else { + limiter = QuotaLimiterFactory.update(limiter, entry.getValue()); + } + map.put(entry.getKey(), limiter); + } + return map; + } + return null; + } + + /** + * Return the limiter for the specified table associated with this quota. + * If the table does not have its own quota limiter the global one will be returned. + * In case there is no quota limiter associated with this object a noop limiter will be returned. 
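The updateLimiters() helper above reconciles the cached limiter map against a freshly loaded one: limiters that disappeared are dropped, the rest are updated or added. A small generic sketch of the same reconcile pattern (hypothetical types, not the HBase classes):

import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.BinaryOperator;

final class ReconcileSketch {
  // Drop entries that vanished from 'other', then merge or add the remaining ones.
  static <K, V> Map<K, V> reconcile(Map<K, V> current, Map<K, V> other,
      BinaryOperator<V> merge) {
    if (current == null) return other;
    if (other == null) return null;
    Set<K> stale = new HashSet<K>(current.keySet());
    stale.removeAll(other.keySet());
    current.keySet().removeAll(stale);
    for (Map.Entry<K, V> e : other.entrySet()) {
      V existing = current.get(e.getKey());
      current.put(e.getKey(), existing == null ? e.getValue() : merge.apply(existing, e.getValue()));
    }
    return current;
  }
}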
+ * + * @return the quota limiter for the specified table + */ + public synchronized QuotaLimiter getTableLimiter(final TableName table) { + lastQuery = EnvironmentEdgeManager.currentTime(); + if (tableLimiters != null) { + QuotaLimiter limiter = tableLimiters.get(table); + if (limiter != null) return limiter; + } + if (namespaceLimiters != null) { + QuotaLimiter limiter = namespaceLimiters.get(table.getNamespaceAsString()); + if (limiter != null) return limiter; + } + return getGlobalLimiterWithoutUpdatingLastQuery(); + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java index 1badd39..d9a9a84 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactSplitThread.java @@ -39,11 +39,11 @@ import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.conf.ConfigurationManager; import org.apache.hadoop.hbase.conf.PropagatingConfigurationObserver; import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.RemoteExceptionHandler; import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext; import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.Pair; +import org.apache.hadoop.ipc.RemoteException; import org.apache.hadoop.util.StringUtils; import com.google.common.base.Preconditions; @@ -511,7 +511,8 @@ public class CompactSplitThread implements CompactionRequestor, PropagatingConfi } } } catch (IOException ex) { - IOException remoteEx = RemoteExceptionHandler.checkIOException(ex); + IOException remoteEx = + ex instanceof RemoteException ? 
((RemoteException) ex).unwrapRemoteException() : ex; LOG.error("Compaction failed " + this, remoteEx); if (remoteEx != ex) { LOG.info("Compaction failed at original callstack: " + formatStackTrace(ex)); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionTool.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionTool.java index 511e4bf..96f4a31 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionTool.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactionTool.java @@ -34,6 +34,7 @@ import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.io.LongWritable; import org.apache.hadoop.io.NullWritable; import org.apache.hadoop.io.Text; @@ -110,13 +111,14 @@ public class CompactionTool extends Configured implements Tool { if (isFamilyDir(fs, path)) { Path regionDir = path.getParent(); Path tableDir = regionDir.getParent(); - HTableDescriptor htd = FSTableDescriptors.getTableDescriptorFromFs(fs, tableDir); + TableDescriptor htd = FSTableDescriptors.getTableDescriptorFromFs(fs, tableDir); HRegionInfo hri = HRegionFileSystem.loadRegionInfoFileContent(fs, regionDir); - compactStoreFiles(tableDir, htd, hri, path.getName(), compactOnce, major); + compactStoreFiles(tableDir, htd.getHTableDescriptor(), hri, + path.getName(), compactOnce, major); } else if (isRegionDir(fs, path)) { Path tableDir = path.getParent(); - HTableDescriptor htd = FSTableDescriptors.getTableDescriptorFromFs(fs, tableDir); - compactRegion(tableDir, htd, path, compactOnce, major); + TableDescriptor htd = FSTableDescriptors.getTableDescriptorFromFs(fs, tableDir); + compactRegion(tableDir, htd.getHTableDescriptor(), path, compactOnce, major); } else if (isTableDir(fs, path)) { compactTable(path, compactOnce, major); } else { @@ -127,9 +129,9 @@ public class CompactionTool extends Configured implements Tool { private void compactTable(final Path tableDir, final boolean compactOnce, final boolean major) throws IOException { - HTableDescriptor htd = FSTableDescriptors.getTableDescriptorFromFs(fs, tableDir); + TableDescriptor htd = FSTableDescriptors.getTableDescriptorFromFs(fs, tableDir); for (Path regionDir: FSUtils.getRegionDirs(fs, tableDir)) { - compactRegion(tableDir, htd, regionDir, compactOnce, major); + compactRegion(tableDir, htd.getHTableDescriptor(), regionDir, compactOnce, major); } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushAllStoresPolicy.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushAllStoresPolicy.java new file mode 100644 index 0000000..0058104 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushAllStoresPolicy.java @@ -0,0 +1,35 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
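The CompactSplitThread hunk above replaces the removed RemoteExceptionHandler helper with an inline unwrap of Hadoop's RemoteException; as a standalone sketch of that idiom:

import java.io.IOException;
import org.apache.hadoop.ipc.RemoteException;

final class UnwrapSketch {
  // Unwrap a server-side exception carried inside a RemoteException, otherwise pass it through.
  static IOException unwrap(IOException ex) {
    return ex instanceof RemoteException
        ? ((RemoteException) ex).unwrapRemoteException()
        : ex;
  }
}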
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.regionserver; + +import java.util.Collection; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; + +/** + * A {@link FlushPolicy} that always flushes all stores for a given region. + */ +@InterfaceAudience.Private +public class FlushAllStoresPolicy extends FlushPolicy { + + @Override + public Collection selectStoresToFlush() { + return region.stores.values(); + } + +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushLargeStoresPolicy.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushLargeStoresPolicy.java new file mode 100644 index 0000000..7e0e54c --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushLargeStoresPolicy.java @@ -0,0 +1,108 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.regionserver; + +import java.util.Collection; +import java.util.HashSet; +import java.util.Set; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.HBaseInterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceAudience; + +/** + * A {@link FlushPolicy} that only flushes store larger a given threshold. If no store is large + * enough, then all stores will be flushed. 
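A worked sketch of the select-or-all behaviour described above, assuming a 16 MB lower bound: a region with memstores of 20 MB, 4 MB and 1 MB flushes only the 20 MB store, while a region whose stores are all below the bound flushes everything (illustrative code, not the HBase classes):

import java.util.List;
import java.util.stream.Collectors;

final class SelectOrAllSketch {
  // Same shape as selectStoresToFlush(): pick entries over the bound,
  // fall back to everything when nothing qualifies.
  static List<Long> select(List<Long> memstoreSizes, long lowerBound) {
    List<Long> big = memstoreSizes.stream()
        .filter(s -> s > lowerBound)
        .collect(Collectors.toList());
    return big.isEmpty() ? memstoreSizes : big;
  }
  // select([20MB, 4MB, 1MB], 16MB) -> [20MB]; select([4MB, 1MB], 16MB) -> all three... i.e. both.
}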
+ */ +@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG) +public class FlushLargeStoresPolicy extends FlushPolicy { + + private static final Log LOG = LogFactory.getLog(FlushLargeStoresPolicy.class); + + public static final String HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND = + "hbase.hregion.percolumnfamilyflush.size.lower.bound"; + + private static final long DEFAULT_HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND = 1024 * 1024 * 16L; + + private long flushSizeLowerBound; + + @Override + protected void configureForRegion(HRegion region) { + super.configureForRegion(region); + long flushSizeLowerBound; + String flushedSizeLowerBoundString = + region.getTableDesc().getValue(HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND); + if (flushedSizeLowerBoundString == null) { + flushSizeLowerBound = + getConf().getLong(HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND, + DEFAULT_HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND); + if (LOG.isDebugEnabled()) { + LOG.debug(HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND + + " is not specified, use global config(" + flushSizeLowerBound + ") instead"); + } + } else { + try { + flushSizeLowerBound = Long.parseLong(flushedSizeLowerBoundString); + } catch (NumberFormatException nfe) { + flushSizeLowerBound = + getConf().getLong(HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND, + DEFAULT_HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND); + LOG.warn("Number format exception when parsing " + + HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND + " for table " + + region.getTableDesc().getTableName() + ":" + flushedSizeLowerBoundString + ". " + nfe + + ", use global config(" + flushSizeLowerBound + ") instead"); + + } + } + this.flushSizeLowerBound = flushSizeLowerBound; + } + + private boolean shouldFlush(Store store) { + if (store.getMemStoreSize() > this.flushSizeLowerBound) { + if (LOG.isDebugEnabled()) { + LOG.debug("Column Family: " + store.getColumnFamilyName() + " of region " + region + + " will be flushed because of memstoreSize(" + store.getMemStoreSize() + + ") is larger than lower bound(" + this.flushSizeLowerBound + ")"); + } + return true; + } + return region.shouldFlushStore(store); + } + + @Override + public Collection selectStoresToFlush() { + Collection stores = region.stores.values(); + Set specificStoresToFlush = new HashSet(); + for (Store store : stores) { + if (shouldFlush(store)) { + specificStoresToFlush.add(store); + } + } + // Didn't find any CFs which were above the threshold for selection. + if (specificStoresToFlush.isEmpty()) { + if (LOG.isDebugEnabled()) { + LOG.debug("Since none of the CFs were above the size, flushing all."); + } + return stores; + } else { + return specificStoresToFlush; + } + } + +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushPolicy.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushPolicy.java new file mode 100644 index 0000000..d581fee --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushPolicy.java @@ -0,0 +1,49 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
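The threshold read in configureForRegion() above comes either from a table-descriptor value or from the hbase.hregion.percolumnfamilyflush.size.lower.bound setting (16 MB by default); a hedged sketch of setting it both ways, assuming the standard Configuration and HTableDescriptor setters:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

final class FlushLowerBoundConfigSketch {
  static void example() {
    // Cluster-wide lower bound: only memstores above 32 MB are flushed individually.
    Configuration conf = new Configuration();
    conf.setLong("hbase.hregion.percolumnfamilyflush.size.lower.bound", 32L * 1024 * 1024);

    // Per-table override via a table descriptor value, as parsed in configureForRegion().
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t1"));
    htd.setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
        String.valueOf(64L * 1024 * 1024));
  }
}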
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.regionserver; + +import java.util.Collection; + +import org.apache.hadoop.conf.Configured; +import org.apache.hadoop.hbase.classification.InterfaceAudience; + +/** + * A flush policy determines the stores that need to be flushed when flushing a region. + */ +@InterfaceAudience.Private +public abstract class FlushPolicy extends Configured { + + /** + * The region configured for this flush policy. + */ + protected HRegion region; + + /** + * Upon construction, this method will be called with the region to be governed. It will be called + * once and only once. + */ + protected void configureForRegion(HRegion region) { + this.region = region; + } + + /** + * @return the stores need to be flushed. + */ + public abstract Collection selectStoresToFlush(); + +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushPolicyFactory.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushPolicyFactory.java new file mode 100644 index 0000000..e80b696 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushPolicyFactory.java @@ -0,0 +1,76 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.regionserver; + +import java.io.IOException; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.HBaseInterfaceAudience; +import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.util.ReflectionUtils; + +/** + * The class that creates a flush policy from a conf and HTableDescriptor. + *
<p>
          + * The default flush policy is {@link FlushLargeStoresPolicy}. And for 0.98, the default flush + * policy is {@link FlushAllStoresPolicy}. + */ +@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.CONFIG) +public class FlushPolicyFactory { + + private static final Log LOG = LogFactory.getLog(FlushPolicyFactory.class); + + public static final String HBASE_FLUSH_POLICY_KEY = "hbase.regionserver.flush.policy"; + + private static final Class DEFAULT_FLUSH_POLICY_CLASS = + FlushLargeStoresPolicy.class; + + /** + * Create the FlushPolicy configured for the given table. + */ + public static FlushPolicy create(HRegion region, Configuration conf) throws IOException { + Class clazz = getFlushPolicyClass(region.getTableDesc(), conf); + FlushPolicy policy = ReflectionUtils.newInstance(clazz, conf); + policy.configureForRegion(region); + return policy; + } + + /** + * Get FlushPolicy class for the given table. + */ + public static Class getFlushPolicyClass(HTableDescriptor htd, + Configuration conf) throws IOException { + String className = htd.getFlushPolicyClassName(); + if (className == null) { + className = conf.get(HBASE_FLUSH_POLICY_KEY, DEFAULT_FLUSH_POLICY_CLASS.getName()); + } + try { + Class clazz = Class.forName(className).asSubclass(FlushPolicy.class); + return clazz; + } catch (Exception e) { + LOG.warn( + "Unable to load configured flush policy '" + className + "' for table '" + + htd.getTableName() + "', load default flush policy " + + DEFAULT_FLUSH_POLICY_CLASS.getName() + " instead", e); + return DEFAULT_FLUSH_POLICY_CLASS; + } + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushRequester.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushRequester.java index e1c3144..7517454 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushRequester.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushRequester.java @@ -30,26 +30,31 @@ public interface FlushRequester { * Tell the listener the cache needs to be flushed. * * @param region the HRegion requesting the cache flush + * @param forceFlushAllStores whether we want to flush all stores. e.g., when request from log + * rolling. */ - void requestFlush(HRegion region); + void requestFlush(HRegion region, boolean forceFlushAllStores); + /** * Tell the listener the cache needs to be flushed after a delay * * @param region the HRegion requesting the cache flush * @param delay after how much time should the flush happen + * @param forceFlushAllStores whether we want to flush all stores. e.g., when request from log + * rolling. */ - void requestDelayedFlush(HRegion region, long delay); + void requestDelayedFlush(HRegion region, long delay, boolean forceFlushAllStores); /** * Register a FlushRequestListener - * + * * @param listener */ void registerFlushRequestListener(final FlushRequestListener listener); /** * Unregister the given FlushRequestListener - * + * * @param listener * @return true when passed listener is unregistered successfully. */ @@ -57,7 +62,7 @@ public interface FlushRequester { /** * Sets the global memstore limit to a new size. 
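getFlushPolicyClass() above prefers the table descriptor's flush policy class name and falls back to hbase.regionserver.flush.policy, defaulting to FlushLargeStoresPolicy; a sketch of pinning the old always-flush-everything behaviour cluster-wide through that key:

import org.apache.hadoop.conf.Configuration;

final class FlushPolicyConfigSketch {
  static Configuration chooseAllStoresPolicy() {
    // Revert to the pre-1.0 behaviour of flushing every store on each flush.
    Configuration conf = new Configuration();
    conf.set("hbase.regionserver.flush.policy",
        "org.apache.hadoop.hbase.regionserver.FlushAllStoresPolicy");
    return conf;
  }
}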
- * + * * @param globalMemStoreSize */ public void setGlobalMemstoreLimit(long globalMemStoreSize); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/GetClosestRowBeforeTracker.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/GetClosestRowBeforeTracker.java index ae41844..4d22c0e 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/GetClosestRowBeforeTracker.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/GetClosestRowBeforeTracker.java @@ -25,11 +25,11 @@ import java.util.TreeSet; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.CellComparator; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValue.KVComparator; -import org.apache.hadoop.hbase.KeyValueUtil; import org.apache.hadoop.hbase.util.Bytes; /** @@ -53,7 +53,7 @@ class GetClosestRowBeforeTracker { private final int tablenamePlusDelimiterLength; // Deletes keyed by row. Comparator compares on row portion of KeyValue only. - private final NavigableMap> deletes; + private final NavigableMap> deletes; /** * @param c @@ -79,8 +79,7 @@ class GetClosestRowBeforeTracker { this.now = System.currentTimeMillis(); this.oldestUnexpiredTs = now - ttl; this.kvcomparator = c; - KeyValue.RowOnlyComparator rc = new KeyValue.RowOnlyComparator(this.kvcomparator); - this.deletes = new TreeMap>(rc); + this.deletes = new TreeMap>(new CellComparator.RowComparator()); } /* @@ -88,12 +87,12 @@ class GetClosestRowBeforeTracker { * @param kv */ private void addDelete(final Cell kv) { - NavigableSet rowdeletes = this.deletes.get(kv); + NavigableSet rowdeletes = this.deletes.get(kv); if (rowdeletes == null) { - rowdeletes = new TreeSet(this.kvcomparator); - this.deletes.put(KeyValueUtil.ensureKeyValue(kv), rowdeletes); + rowdeletes = new TreeSet(this.kvcomparator); + this.deletes.put(kv, rowdeletes); } - rowdeletes.add(KeyValueUtil.ensureKeyValue(kv)); + rowdeletes.add(kv); } /* @@ -122,7 +121,7 @@ class GetClosestRowBeforeTracker { */ private boolean isDeleted(final Cell kv) { if (this.deletes.isEmpty()) return false; - NavigableSet rowdeletes = this.deletes.get(kv); + NavigableSet rowdeletes = this.deletes.get(kv); if (rowdeletes == null || rowdeletes.isEmpty()) return false; return isDeleted(kv, rowdeletes); } @@ -134,9 +133,9 @@ class GetClosestRowBeforeTracker { * @param ds * @return True is the specified KeyValue is deleted, false if not */ - public boolean isDeleted(final Cell kv, final NavigableSet ds) { + public boolean isDeleted(final Cell kv, final NavigableSet ds) { if (deletes == null || deletes.isEmpty()) return false; - for (KeyValue d: ds) { + for (Cell d: ds) { long kvts = kv.getTimestamp(); long dts = d.getTimestamp(); if (CellUtil.isDeleteFamily(d)) { @@ -158,7 +157,7 @@ class GetClosestRowBeforeTracker { if (kvts > dts) return false; // Check Type - switch (KeyValue.Type.codeToType(d.getType())) { + switch (KeyValue.Type.codeToType(d.getTypeByte())) { case Delete: return kvts == dts; case DeleteColumn: return true; default: continue; @@ -201,7 +200,7 @@ class GetClosestRowBeforeTracker { * @return True if we added a candidate */ boolean handle(final Cell kv) { - if (KeyValueUtil.ensureKeyValue(kv).isDelete()) { + if (CellUtil.isDelete(kv)) { handleDeletes(kv); return false; } diff --git 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java index 33aa8de..6cf2ce3 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java @@ -30,6 +30,7 @@ import java.util.Arrays; import java.util.Collection; import java.util.Collections; import java.util.HashMap; +import java.util.HashSet; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -41,6 +42,7 @@ import java.util.TreeMap; import java.util.concurrent.Callable; import java.util.concurrent.CompletionService; import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; import java.util.concurrent.ConcurrentSkipListMap; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutionException; @@ -62,7 +64,6 @@ import java.util.concurrent.locks.ReentrantReadWriteLock; import org.apache.commons.lang.RandomStringUtils; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; @@ -82,6 +83,7 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.NotServingRegionException; import org.apache.hadoop.hbase.RegionTooBusyException; import org.apache.hadoop.hbase.TableName; @@ -89,6 +91,7 @@ import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.TagType; import org.apache.hadoop.hbase.UnknownScannerException; import org.apache.hadoop.hbase.backup.HFileArchiver; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.Append; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Durability; @@ -122,6 +125,7 @@ import org.apache.hadoop.hbase.monitoring.MonitoredTask; import org.apache.hadoop.hbase.monitoring.TaskMonitor; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetRegionInfoResponse.CompactionState; +import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.CoprocessorServiceCall; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; import org.apache.hadoop.hbase.protobuf.generated.WALProtos.CompactionDescriptor; @@ -132,14 +136,9 @@ import org.apache.hadoop.hbase.regionserver.MultiVersionConsistencyControl.Write import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext; import org.apache.hadoop.hbase.regionserver.wal.HLogKey; import org.apache.hadoop.hbase.regionserver.wal.MetricsWAL; -import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener; -import org.apache.hadoop.hbase.wal.WALFactory; -import org.apache.hadoop.hbase.wal.WALKey; -import org.apache.hadoop.hbase.wal.WALSplitter; -import org.apache.hadoop.hbase.wal.WALSplitter.MutationReplay; -import org.apache.hadoop.hbase.regionserver.wal.WALUtil; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; +import org.apache.hadoop.hbase.regionserver.wal.WALUtil; import 
org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils; import org.apache.hadoop.hbase.snapshot.SnapshotManifest; import org.apache.hadoop.hbase.util.Bytes; @@ -155,6 +154,11 @@ import org.apache.hadoop.hbase.util.HashedBytes; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.util.ServerRegionReplicaUtil; import org.apache.hadoop.hbase.util.Threads; +import org.apache.hadoop.hbase.wal.WAL; +import org.apache.hadoop.hbase.wal.WALFactory; +import org.apache.hadoop.hbase.wal.WALKey; +import org.apache.hadoop.hbase.wal.WALSplitter; +import org.apache.hadoop.hbase.wal.WALSplitter.MutationReplay; import org.apache.hadoop.io.MultipleIOException; import org.apache.hadoop.util.StringUtils; @@ -228,10 +232,10 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // final AtomicBoolean closing = new AtomicBoolean(false); /** - * The sequence id of the last flush on this region. Used doing some rough calculations on + * The max sequence id of flushed data on this region. Used doing some rough calculations on * whether time to flush or not. */ - protected volatile long lastFlushSeqId = -1L; + protected volatile long maxFlushedSeqId = -1L; /** * Region scoped edit sequence Id. Edits to this region are GUARANTEED to appear in the WAL @@ -516,7 +520,11 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // long memstoreFlushSize; final long timestampSlop; final long rowProcessorTimeout; - private volatile long lastFlushTime; + + // Last flush time for each Store. Useful when we are flushing for each column + private final ConcurrentMap lastStoreFlushTimeMap = + new ConcurrentHashMap(); + final RegionServerServices rsServices; private RegionServerAccounting rsAccounting; private long flushCheckInterval; @@ -525,12 +533,10 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // private long blockingMemStoreSize; final long threadWakeFrequency; // Used to guard closes - final ReentrantReadWriteLock lock = - new ReentrantReadWriteLock(); + final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(); // Stop updates lock - private final ReentrantReadWriteLock updatesLock = - new ReentrantReadWriteLock(); + private final ReentrantReadWriteLock updatesLock = new ReentrantReadWriteLock(); private boolean splitRequest; private byte[] explicitSplitPoint = null; @@ -542,10 +548,12 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // private HTableDescriptor htableDescriptor = null; private RegionSplitPolicy splitPolicy; + private FlushPolicy flushPolicy; private final MetricsRegion metricsRegion; private final MetricsRegionWrapperImpl metricsRegionWrapper; private final Durability durability; + private final boolean regionStatsEnabled; /** * HRegion constructor. 
This constructor should only be used for testing and @@ -610,7 +618,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // this.conf = new CompoundConfiguration() .add(confParam) .addStringMap(htd.getConfiguration()) - .addWritableMap(htd.getValues()); + .addBytesMap(htd.getValues()); this.flushCheckInterval = conf.getInt(MEMSTORE_PERIODIC_FLUSH_INTERVAL, DEFAULT_CACHE_FLUSH_INTERVAL); this.flushPerChanges = conf.getLong(MEMSTORE_FLUSH_PER_CHANGES, DEFAULT_FLUSH_PER_CHANGES); @@ -618,7 +626,6 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // throw new IllegalArgumentException(MEMSTORE_FLUSH_PER_CHANGES + " can not exceed " + MAX_FLUSH_PER_CHANGES); } - this.rowLockWaitDuration = conf.getInt("hbase.rowlock.wait.duration", DEFAULT_ROWLOCK_WAIT_DURATION); @@ -687,6 +694,13 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // conf.getBoolean(HConstants.DISALLOW_WRITES_IN_RECOVERING, HConstants.DEFAULT_DISALLOW_WRITES_IN_RECOVERING_CONFIG); configurationManager = Optional.absent(); + + // disable stats tracking system tables, but check the config for everything else + this.regionStatsEnabled = htd.getTableName().getNamespaceAsString().equals( + NamespaceDescriptor.SYSTEM_NAMESPACE_NAME_STR) ? + false : + conf.getBoolean(HConstants.ENABLE_CLIENT_BACKPRESSURE, + HConstants.DEFAULT_ENABLE_CLIENT_BACKPRESSURE); } void setHTableSpecificConf() { @@ -777,8 +791,15 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // // Initialize split policy this.splitPolicy = RegionSplitPolicy.create(this, conf); - this.lastFlushTime = EnvironmentEdgeManager.currentTime(); - // Use maximum of wal sequenceid or that which was found in stores + // Initialize flush policy + this.flushPolicy = FlushPolicyFactory.create(this, conf); + + long lastFlushTime = EnvironmentEdgeManager.currentTime(); + for (Store store: stores.values()) { + this.lastStoreFlushTimeMap.put(store, lastFlushTime); + } + + // Use maximum of log sequenceid or that which was found in stores // (particularly if no recovered edits, seqid will be -1). long nextSeqid = maxSeqId; @@ -1316,10 +1337,10 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // status.setStatus("Running coprocessor post-close hooks"); this.coprocessorHost.postClose(abort); } - if ( this.metricsRegion != null) { + if (this.metricsRegion != null) { this.metricsRegion.close(); } - if ( this.metricsRegionWrapper != null) { + if (this.metricsRegionWrapper != null) { Closeables.closeQuietly(this.metricsRegionWrapper); } status.markComplete("Closed"); @@ -1438,6 +1459,13 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // } /** + * @return split policy for this region. + */ + public RegionSplitPolicy getSplitPolicy() { + return this.splitPolicy; + } + + /** * A split takes the config from the parent region & passes it to the daughter * region's constructor. If 'conf' was passed, you would end up using the HTD * of the parent region in addition to the new daughter HTD. Pass 'baseConf' @@ -1458,9 +1486,14 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // return this.fs; } - /** @return the last time the region was flushed */ - public long getLastFlushTime() { - return this.lastFlushTime; + /** + * @return Returns the earliest time a store in the region was flushed. All + * other stores in the region would have been flushed either at, or + * after this time. 
+ */ + @VisibleForTesting + public long getEarliestFlushTimeForAllStores() { + return Collections.min(lastStoreFlushTimeMap.values()); } ////////////////////////////////////////////////////////////////////////////// @@ -1626,6 +1659,18 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // } /** + * Flush all stores. + *
<p>
          + * See {@link #flushcache(boolean)}. + * + * @return whether the flush is success and whether the region needs compacting + * @throws IOException + */ + public FlushResult flushcache() throws IOException { + return flushcache(true); + } + + /** * Flush the cache. * * When this method is called the cache will be flushed unless: @@ -1638,14 +1683,14 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // * *
<p>
          This method may block for some time, so it should not be called from a * time-sensitive thread. - * - * @return true if the region needs compacting + * @param forceFlushAllStores whether we want to flush all stores + * @return whether the flush is success and whether the region needs compacting * * @throws IOException general io exceptions * @throws DroppedSnapshotException Thrown when replay of wal is required * because a Snapshot was not properly persisted. */ - public FlushResult flushcache() throws IOException { + public FlushResult flushcache(boolean forceFlushAllStores) throws IOException { // fail-fast instead of waiting on the lock if (this.closing.get()) { String msg = "Skipping flush on " + this + " because closing"; @@ -1687,8 +1732,11 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // return new FlushResult(FlushResult.Result.CANNOT_FLUSH, msg); } } + try { - FlushResult fs = internalFlushcache(status); + Collection specificStoresToFlush = + forceFlushAllStores ? stores.values() : flushPolicy.selectStoresToFlush(); + FlushResult fs = internalFlushcache(specificStoresToFlush, status); if (coprocessorHost != null) { status.setStatus("Running post-flush coprocessor hooks"); @@ -1711,12 +1759,47 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // } /** + * Should the store be flushed because it is old enough. + *
<p>
          + * Every FlushPolicy should call this to determine whether a store is old enough to flush(except + * that you always flush all stores). Otherwise the {@link #shouldFlush()} method will always + * returns true which will make a lot of flush requests. + */ + boolean shouldFlushStore(Store store) { + long maxFlushedSeqId = + this.wal.getEarliestMemstoreSeqNum(getRegionInfo().getEncodedNameAsBytes(), store + .getFamily().getName()) - 1; + if (maxFlushedSeqId > 0 && maxFlushedSeqId + flushPerChanges < sequenceId.get()) { + if (LOG.isDebugEnabled()) { + LOG.debug("Column Family: " + store.getColumnFamilyName() + " of region " + this + + " will be flushed because its max flushed seqId(" + maxFlushedSeqId + + ") is far away from current(" + sequenceId.get() + "), max allowed is " + + flushPerChanges); + } + return true; + } + if (flushCheckInterval <= 0) { + return false; + } + long now = EnvironmentEdgeManager.currentTime(); + if (store.timeOfOldestEdit() < now - flushCheckInterval) { + if (LOG.isDebugEnabled()) { + LOG.debug("Column Family: " + store.getColumnFamilyName() + " of region " + this + + " will be flushed because time of its oldest edit (" + store.timeOfOldestEdit() + + ") is far away from now(" + now + "), max allowed is " + flushCheckInterval); + } + return true; + } + return false; + } + + /** * Should the memstore be flushed now */ boolean shouldFlush() { // This is a rough measure. - if (this.lastFlushSeqId > 0 - && (this.lastFlushSeqId + this.flushPerChanges < this.sequenceId.get())) { + if (this.maxFlushedSeqId > 0 + && (this.maxFlushedSeqId + this.flushPerChanges < this.sequenceId.get())) { return true; } if (flushCheckInterval <= 0) { //disabled @@ -1724,7 +1807,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // } long now = EnvironmentEdgeManager.currentTime(); //if we flushed in the recent past, we don't need to do again now - if ((now - getLastFlushTime() < flushCheckInterval)) { + if ((now - getEarliestFlushTimeForAllStores() < flushCheckInterval)) { return false; } //since we didn't flush in the recent past, flush now if certain conditions @@ -1739,35 +1822,56 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // } /** - * Flush the memstore. Flushing the memstore is a little tricky. We have a lot of updates in the - * memstore, all of which have also been written to the wal. We need to write those updates in the - * memstore out to disk, while being able to process reads/writes as much as possible during the - * flush operation. - *
<p>
          This method may block for some time. Every time you call it, we up the regions - * sequence id even if we don't flush; i.e. the returned region id will be at least one larger - * than the last edit applied to this region. The returned id does not refer to an actual edit. - * The returned id can be used for say installing a bulk loaded file just ahead of the last hfile - * that was the result of this flush, etc. - * @return object describing the flush's state + * Flushing all stores. * - * @throws IOException general io exceptions - * @throws DroppedSnapshotException Thrown when replay of wal is required - * because a Snapshot was not properly persisted. + * @see #internalFlushcache(Collection, MonitoredTask) */ - protected FlushResult internalFlushcache(MonitoredTask status) + private FlushResult internalFlushcache(MonitoredTask status) throws IOException { - return internalFlushcache(this.wal, -1, status); + return internalFlushcache(stores.values(), status); + } + + /** + * Flushing given stores. + * + * @see #internalFlushcache(WAL, long, Collection, MonitoredTask) + */ + private FlushResult internalFlushcache(final Collection storesToFlush, + MonitoredTask status) throws IOException { + return internalFlushcache(this.wal, HConstants.NO_SEQNUM, storesToFlush, + status); } /** - * @param wal Null if we're NOT to go via wal. - * @param myseqid The seqid to use if wal is null writing out flush file. + * Flush the memstore. Flushing the memstore is a little tricky. We have a lot + * of updates in the memstore, all of which have also been written to the wal. + * We need to write those updates in the memstore out to disk, while being + * able to process reads/writes as much as possible during the flush + * operation. + *
<p>
          + * This method may block for some time. Every time you call it, we up the + * regions sequence id even if we don't flush; i.e. the returned region id + * will be at least one larger than the last edit applied to this region. The + * returned id does not refer to an actual edit. The returned id can be used + * for say installing a bulk loaded file just ahead of the last hfile that was + * the result of this flush, etc. + * + * @param wal + * Null if we're NOT to go via wal. + * @param myseqid + * The seqid to use if wal is null writing out flush + * file. + * @param storesToFlush + * The list of stores to flush. * @return object describing the flush's state * @throws IOException - * @see #internalFlushcache(MonitoredTask) + * general io exceptions + * @throws DroppedSnapshotException + * Thrown when replay of wal is required because a Snapshot was not + * properly persisted. */ - protected FlushResult internalFlushcache( - final WAL wal, final long myseqid, MonitoredTask status) throws IOException { + protected FlushResult internalFlushcache(final WAL wal, final long myseqid, + final Collection storesToFlush, MonitoredTask status) throws IOException { if (this.rsServices != null && this.rsServices.isAborted()) { // Don't flush when server aborting, it's unsafe throw new IOException("Aborting flush because server is aborted..."); @@ -1791,14 +1895,14 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // w = mvcc.beginMemstoreInsert(); long flushSeqId = getNextSequenceId(wal); FlushResult flushResult = new FlushResult( - FlushResult.Result.CANNOT_FLUSH_MEMSTORE_EMPTY, flushSeqId, "Nothing to flush"); + FlushResult.Result.CANNOT_FLUSH_MEMSTORE_EMPTY, flushSeqId, "Nothing to flush"); w.setWriteNumber(flushSeqId); mvcc.waitForPreviousTransactionsComplete(w); w = null; return flushResult; } else { return new FlushResult(FlushResult.Result.CANNOT_FLUSH_MEMSTORE_EMPTY, - "Nothing to flush"); + "Nothing to flush"); } } } finally { @@ -1809,63 +1913,86 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // } } - LOG.info("Started memstore flush for " + this + - ", current region memstore size " + - StringUtils.byteDesc(this.memstoreSize.get()) + - ((wal != null)? "": "; wal is null, using passed sequenceid=" + myseqid)); - + if (LOG.isInfoEnabled()) { + LOG.info("Started memstore flush for " + this + ", current region memstore size " + + StringUtils.byteDesc(this.memstoreSize.get()) + ", and " + storesToFlush.size() + "/" + + stores.size() + " column families' memstores are being flushed." + + ((wal != null) ? "" : "; wal is null, using passed sequenceid=" + myseqid)); + // only log when we are not flushing all stores. + if (this.stores.size() > storesToFlush.size()) { + for (Store store: storesToFlush) { + LOG.info("Flushing Column Family: " + store.getColumnFamilyName() + + " which was occupying " + + StringUtils.byteDesc(store.getMemStoreSize()) + " of memstore."); + } + } + } // Stop updates while we snapshot the memstore of all of these regions' stores. We only have // to do this for a moment. It is quick. 
We also set the memstore size to zero here before we // allow updates again so its value will represent the size of the updates received // during flush MultiVersionConsistencyControl.WriteEntry w = null; - // We have to take an update lock during snapshot, or else a write could end up in both snapshot // and memstore (makes it difficult to do atomic rows then) status.setStatus("Obtaining lock to block concurrent updates"); // block waiting for the lock for internal flush this.updatesLock.writeLock().lock(); - long totalFlushableSize = 0; status.setStatus("Preparing to flush by snapshotting stores in " + getRegionInfo().getEncodedName()); + long totalFlushableSizeOfFlushableStores = 0; + + Set flushedFamilyNames = new HashSet(); + for (Store store: storesToFlush) { + flushedFamilyNames.add(store.getFamily().getName()); + } + List storeFlushCtxs = new ArrayList(stores.size()); TreeMap> committedFiles = new TreeMap>( Bytes.BYTES_COMPARATOR); - long flushSeqId = -1L; + // The sequence id of this flush operation which is used to log FlushMarker and pass to + // createFlushContext to use as the store file's sequence id. + long flushOpSeqId = HConstants.NO_SEQNUM; + // The max flushed sequence id after this flush operation. Used as completeSequenceId which is + // passed to HMaster. + long flushedSeqId = HConstants.NO_SEQNUM; + byte[] encodedRegionName = getRegionInfo().getEncodedNameAsBytes(); long trxId = 0; try { try { w = mvcc.beginMemstoreInsert(); if (wal != null) { - if (!wal.startCacheFlush(this.getRegionInfo().getEncodedNameAsBytes())) { + if (!wal.startCacheFlush(encodedRegionName, flushedFamilyNames)) { // This should never happen. String msg = "Flush will not be started for [" + this.getRegionInfo().getEncodedName() + "] - because the WAL is closing."; status.setStatus(msg); return new FlushResult(FlushResult.Result.CANNOT_FLUSH, msg); } - // Get a sequence id that we can use to denote the flush. It will be one beyond the last - // edit that made it into the hfile (the below does not add an edit, it just asks the - // WAL system to return next sequence edit). - flushSeqId = getNextSequenceId(wal); + flushOpSeqId = getNextSequenceId(wal); + long oldestUnflushedSeqId = wal.getEarliestMemstoreSeqNum(encodedRegionName); + // no oldestUnflushedSeqId means we flushed all stores. + // or the unflushed stores are all empty. + flushedSeqId = + oldestUnflushedSeqId == HConstants.NO_SEQNUM ? flushOpSeqId : oldestUnflushedSeqId - 1; } else { // use the provided sequence Id as WAL is not being used for this flush. - flushSeqId = myseqid; + flushedSeqId = flushOpSeqId = myseqid; } - for (Store s : stores.values()) { - totalFlushableSize += s.getFlushableSize(); - storeFlushCtxs.add(s.createFlushContext(flushSeqId)); + for (Store s : storesToFlush) { + totalFlushableSizeOfFlushableStores += s.getFlushableSize(); + storeFlushCtxs.add(s.createFlushContext(flushOpSeqId)); committedFiles.put(s.getFamily().getName(), null); // for writing stores to WAL } // write the snapshot start to WAL if (wal != null) { FlushDescriptor desc = ProtobufUtil.toFlushDescriptor(FlushAction.START_FLUSH, - getRegionInfo(), flushSeqId, committedFiles); + getRegionInfo(), flushOpSeqId, committedFiles); + // no sync. Sync is below where we do not hold the updates lock trxId = WALUtil.writeFlushMarker(wal, this.htableDescriptor, getRegionInfo(), - desc, sequenceId, false); // no sync. 
Sync is below where we do not hold the updates lock + desc, sequenceId, false); } // Prepare flush (take a snapshot) @@ -1877,7 +2004,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // if (trxId > 0) { // check whether we have already written START_FLUSH to WAL try { FlushDescriptor desc = ProtobufUtil.toFlushDescriptor(FlushAction.ABORT_FLUSH, - getRegionInfo(), flushSeqId, committedFiles); + getRegionInfo(), flushOpSeqId, committedFiles); WALUtil.writeFlushMarker(wal, this.htableDescriptor, getRegionInfo(), desc, sequenceId, false); } catch (Throwable t) { @@ -1894,7 +2021,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // this.updatesLock.writeLock().unlock(); } String s = "Finished memstore snapshotting " + this + - ", syncing WAL and waiting on mvcc, flushsize=" + totalFlushableSize; + ", syncing WAL and waiting on mvcc, flushsize=" + totalFlushableSizeOfFlushableStores; status.setStatus(s); if (LOG.isTraceEnabled()) LOG.trace(s); // sync unflushed WAL changes @@ -1913,7 +2040,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // // uncommitted transactions from being written into HFiles. // We have to block before we start the flush, otherwise keys that // were removed via a rollbackMemstore could be written to Hfiles. - w.setWriteNumber(flushSeqId); + w.setWriteNumber(flushOpSeqId); mvcc.waitForPreviousTransactionsComplete(w); // set w to null to prevent mvcc.advanceMemstore from being called again inside finally block w = null; @@ -1944,8 +2071,8 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // // Switch snapshot (in memstore) -> new hfile (thus causing // all the store scanners to reset/reseek). - Iterator it = stores.values().iterator(); // stores.values() and storeFlushCtxs have - // same order + Iterator it = storesToFlush.iterator(); + // stores.values() and storeFlushCtxs have same order for (StoreFlushContext flush : storeFlushCtxs) { boolean needsCompaction = flush.commit(status); if (needsCompaction) { @@ -1956,12 +2083,12 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // storeFlushCtxs.clear(); // Set down the memstore size by amount of flush. - this.addAndGetGlobalMemstoreSize(-totalFlushableSize); + this.addAndGetGlobalMemstoreSize(-totalFlushableSizeOfFlushableStores); if (wal != null) { // write flush marker to WAL. If fail, we should throw DroppedSnapshotException FlushDescriptor desc = ProtobufUtil.toFlushDescriptor(FlushAction.COMMIT_FLUSH, - getRegionInfo(), flushSeqId, committedFiles); + getRegionInfo(), flushOpSeqId, committedFiles); WALUtil.writeFlushMarker(wal, this.htableDescriptor, getRegionInfo(), desc, sequenceId, true); } @@ -1975,7 +2102,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // if (wal != null) { try { FlushDescriptor desc = ProtobufUtil.toFlushDescriptor(FlushAction.ABORT_FLUSH, - getRegionInfo(), flushSeqId, committedFiles); + getRegionInfo(), flushOpSeqId, committedFiles); WALUtil.writeFlushMarker(wal, this.htableDescriptor, getRegionInfo(), desc, sequenceId, false); } catch (Throwable ex) { @@ -1998,10 +2125,12 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // } // Record latest flush time - this.lastFlushTime = EnvironmentEdgeManager.currentTime(); + for (Store store: storesToFlush) { + this.lastStoreFlushTimeMap.put(store, startTime); + } - // Update the last flushed sequence id for region. 
TODO: This is dup'd inside the WAL/FSHlog. - this.lastFlushSeqId = flushSeqId; + // Update the oldest unflushed sequence id for region. + this.maxFlushedSeqId = flushedSeqId; // C. Finally notify anyone waiting on memstore to clear: // e.g. checkResources(). @@ -2011,18 +2140,18 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // long time = EnvironmentEdgeManager.currentTime() - startTime; long memstoresize = this.memstoreSize.get(); - String msg = "Finished memstore flush of ~" + - StringUtils.byteDesc(totalFlushableSize) + "/" + totalFlushableSize + - ", currentsize=" + - StringUtils.byteDesc(memstoresize) + "/" + memstoresize + - " for region " + this + " in " + time + "ms, sequenceid=" + flushSeqId + - ", compaction requested=" + compactionRequested + - ((wal == null)? "; wal=null": ""); + String msg = "Finished memstore flush of ~" + + StringUtils.byteDesc(totalFlushableSizeOfFlushableStores) + "/" + + totalFlushableSizeOfFlushableStores + ", currentsize=" + + StringUtils.byteDesc(memstoresize) + "/" + memstoresize + + " for region " + this + " in " + time + "ms, sequenceid=" + + flushOpSeqId + ", compaction requested=" + compactionRequested + + ((wal == null) ? "; wal=null" : ""); LOG.info(msg); status.setStatus(msg); return new FlushResult(compactionRequested ? FlushResult.Result.FLUSHED_COMPACTION_NEEDED : - FlushResult.Result.FLUSHED_NO_COMPACTION_NEEDED, flushSeqId); + FlushResult.Result.FLUSHED_NO_COMPACTION_NEEDED, flushOpSeqId); } /** @@ -2153,7 +2282,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // if(delete.getFamilyCellMap().isEmpty()){ for(byte [] family : this.htableDescriptor.getFamiliesKeys()){ // Don't eat the timestamp - delete.deleteFamily(family, delete.getTimeStamp()); + delete.addFamily(family, delete.getTimeStamp()); } } else { for(byte [] family : delete.getFamilyCellMap().keySet()) { @@ -2804,6 +2933,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // coprocessorHost.postBatchMutate(miniBatchOp); } + // ------------------------------------------------------------------ // STEP 8. Advance mvcc. This will make this put visible to scanners and getters. // ------------------------------------------------------------------ @@ -2835,7 +2965,6 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // success = true; return addedSize; } finally { - // if the wal sync was unsuccessful, remove keys from memstore if (doRollBackMemstore) { rollbackMemstore(memstoreCells); @@ -3194,8 +3323,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // * We throw RegionTooBusyException if above memstore limit * and expect client to retry using some kind of backoff */ - private void checkResources() - throws RegionTooBusyException { + private void checkResources() throws RegionTooBusyException { // If catalog region, do not impose resource constraints or block updates. if (this.getRegionInfo().isMetaRegion()) return; @@ -3391,7 +3519,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // writestate.flushRequested = true; } // Make request outside of synchronize block; HBASE-818. 
- this.rsServices.getFlushRequester().requestFlush(this); + this.rsServices.getFlushRequester().requestFlush(this, false); if (LOG.isDebugEnabled()) { LOG.debug("Flush requested on " + this); } @@ -3512,7 +3640,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // } if (seqid > minSeqIdForTheRegion) { // Then we added some edits to memory. Flush and cleanup split edit files. - internalFlushcache(null, seqid, status); + internalFlushcache(null, seqid, stores.values(), status); } // Now delete the content of recovered edits. We're done w/ them. for (Path file: files) { @@ -3666,7 +3794,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // editsCount++; } if (flush) { - internalFlushcache(null, currentEditSeqId, status); + internalFlushcache(null, currentEditSeqId, stores.values(), status); } if (coprocessorHost != null) { @@ -4014,7 +4142,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // // guaranteed to be one beyond the file made when we flushed (or if nothing to flush, it is // a sequence id that we can be sure is beyond the last hfile written). if (assignSeqId) { - FlushResult fs = this.flushcache(); + FlushResult fs = this.flushcache(true); if (fs.isFlushSucceeded()) { seqId = fs.flushSequenceId; } else if (fs.result == FlushResult.Result.CANNOT_FLUSH_MEMSTORE_EMPTY) { @@ -5057,8 +5185,8 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // FileSystem fs = a.getRegionFileSystem().getFileSystem(); // Make sure each region's cache is empty - a.flushcache(); - b.flushcache(); + a.flushcache(true); + b.flushcache(true); // Compact each region so we only have one store file per family a.compactStores(true); @@ -5172,7 +5300,7 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // // do after lock if (this.metricsRegion != null) { - long totalSize = 0l; + long totalSize = 0L; for (Cell cell : results) { totalSize += CellUtil.estimatedSerializedSizeOf(cell); } @@ -5182,18 +5310,18 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // return results; } - public void mutateRow(RowMutations rm) throws IOException { + public ClientProtos.RegionLoadStats mutateRow(RowMutations rm) throws IOException { // Don't need nonces here - RowMutations only supports puts and deletes - mutateRowsWithLocks(rm.getMutations(), Collections.singleton(rm.getRow())); + return mutateRowsWithLocks(rm.getMutations(), Collections.singleton(rm.getRow())); } /** * Perform atomic mutations within the region w/o nonces. * See {@link #mutateRowsWithLocks(Collection, Collection, long, long)} */ - public void mutateRowsWithLocks(Collection mutations, + public ClientProtos.RegionLoadStats mutateRowsWithLocks(Collection mutations, Collection rowsToLock) throws IOException { - mutateRowsWithLocks(mutations, rowsToLock, HConstants.NO_NONCE, HConstants.NO_NONCE); + return mutateRowsWithLocks(mutations, rowsToLock, HConstants.NO_NONCE, HConstants.NO_NONCE); } /** @@ -5208,10 +5336,24 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // * rowsToLock is sorted in order to avoid deadlocks. 
* @throws IOException */ - public void mutateRowsWithLocks(Collection mutations, + public ClientProtos.RegionLoadStats mutateRowsWithLocks(Collection mutations, Collection rowsToLock, long nonceGroup, long nonce) throws IOException { MultiRowMutationProcessor proc = new MultiRowMutationProcessor(mutations, rowsToLock); processRowsWithLocks(proc, -1, nonceGroup, nonce); + return getRegionStats(); + } + + /** + * @return the current load statistics for the the region + */ + public ClientProtos.RegionLoadStats getRegionStats() { + if (!regionStatsEnabled) { + return null; + } + ClientProtos.RegionLoadStats.Builder stats = ClientProtos.RegionLoadStats.newBuilder(); + stats.setMemstoreLoad((int) (Math.min(100, (this.memstoreSize.get() * 100) / this + .memstoreFlushSize))); + return stats.build(); } /** @@ -5340,7 +5482,6 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // // to get a sequence id assigned which is done by FSWALEntry#stampRegionSequenceId walKey = this.appendEmptyEdit(this.wal, memstoreCells); } - // 9. Release region lock if (locked) { this.updatesLock.readLock().unlock(); @@ -5468,7 +5609,6 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // WALEdit walEdits = null; List allKVs = new ArrayList(append.size()); Map> tempMemstore = new HashMap>(); - long size = 0; long txid = 0; @@ -5518,7 +5658,6 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // get.addColumn(family.getKey(), CellUtil.cloneQualifier(cell)); } List results = get(get, false); - // Iterate the input columns and update existing values if they were // found, otherwise add new column initialized to the append value @@ -5671,7 +5810,6 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // // Append a faked WALEdit in order for SKIP_WAL updates to get mvcc assigned walKey = this.appendEmptyEdit(this.wal, memstoreCells); } - size = this.addAndGetGlobalMemstoreSize(size); flush = isFlushSize(size); } finally { @@ -5968,8 +6106,8 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // public static final long FIXED_OVERHEAD = ClassSize.align( ClassSize.OBJECT + ClassSize.ARRAY + - 42 * ClassSize.REFERENCE + 2 * Bytes.SIZEOF_INT + - (12 * Bytes.SIZEOF_LONG) + + 44 * ClassSize.REFERENCE + 2 * Bytes.SIZEOF_INT + + (11 * Bytes.SIZEOF_LONG) + 4 * Bytes.SIZEOF_BOOLEAN); // woefully out of date - currently missing: @@ -6129,7 +6267,8 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // final WAL wal = walFactory.getMetaWAL( HRegionInfo.FIRST_META_REGIONINFO.getEncodedNameAsBytes()); region = HRegion.newHRegion(p, wal, fs, c, - HRegionInfo.FIRST_META_REGIONINFO, fst.get(TableName.META_TABLE_NAME), null); + HRegionInfo.FIRST_META_REGIONINFO, + fst.get(TableName.META_TABLE_NAME), null); } else { throw new IOException("Not a known catalog table: " + p.toString()); } @@ -6539,6 +6678,12 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // return this.maxSeqIdInStores; } + @VisibleForTesting + public long getOldestSeqIdOfStore(byte[] familyName) { + return wal.getEarliestMemstoreSeqNum(getRegionInfo() + .getEncodedNameAsBytes(), familyName); + } + /** * @return if a given region is in compaction now. */ @@ -6633,6 +6778,13 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // return new RowLock(this); } + @Override + public String toString() { + Thread t = this.thread; + return "Thread=" + (t == null? 
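Editorial note: getRegionStats() above reports memstore pressure as a whole-number percentage of the configured flush size, capped at 100. The arithmetic is easy to miss inside the protobuf builder call, so here is a self-contained sketch of just that formula; everything except the formula itself is illustrative.

// Illustrative only: mirrors the capped-percentage arithmetic used by
// getRegionStats() above. Integer (long) division is intentional; the result
// is a whole percentage in the range [0, 100].
final class MemstoreLoad {
  static int percent(long memstoreSizeBytes, long memstoreFlushSizeBytes) {
    return (int) Math.min(100, (memstoreSizeBytes * 100) / memstoreFlushSizeBytes);
  }

  public static void main(String[] args) {
    // e.g. 96 MB in the memstore with a 128 MB flush size -> prints 75
    System.out.println(percent(96L << 20, 128L << 20));
  }
}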
"null": t.getName()) + ", row=" + this.row + + ", lockCount=" + this.lockCount; + } + void releaseLock() { if (!ownedByCurrentThread()) { throw new IllegalArgumentException("Lock held by thread: " + thread @@ -6735,11 +6887,4 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver { // configurationManager.get().deregisterObserver(s); } } - - /** - * @return split policy for this region. - */ - public RegionSplitPolicy getSplitPolicy() { - return this.splitPolicy; - } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java index f682e82..0751634 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java @@ -567,12 +567,13 @@ public class HRegionFileSystem { * @param f File to split. * @param splitRow Split Row * @param top True if we are referring to the top half of the hfile. - * @return Path to created reference. * @param splitPolicy + * @return Path to created reference. * @throws IOException */ Path splitStoreFile(final HRegionInfo hri, final String familyName, final StoreFile f, - final byte[] splitRow, final boolean top, RegionSplitPolicy splitPolicy) throws IOException { + final byte[] splitRow, final boolean top, RegionSplitPolicy splitPolicy) + throws IOException { if (splitPolicy == null || !splitPolicy.skipStoreFileRangeCheck()) { // Check whether the split row lies in the range of the store file diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java index ddeacd3..4669f8f 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java @@ -25,7 +25,6 @@ import java.lang.management.ManagementFactory; import java.lang.management.MemoryUsage; import java.lang.reflect.Constructor; import java.net.BindException; -import java.net.InetAddress; import java.net.InetSocketAddress; import java.util.ArrayList; import java.util.Collection; @@ -47,6 +46,7 @@ import java.util.concurrent.ConcurrentSkipListMap; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicReference; import java.util.concurrent.locks.ReentrantReadWriteLock; +import java.net.InetAddress; import javax.management.MalformedObjectNameException; import javax.management.ObjectName; @@ -69,7 +69,6 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HealthCheckChore; import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.NotServingRegionException; -import org.apache.hadoop.hbase.RemoteExceptionHandler; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.Stoppable; import org.apache.hadoop.hbase.TableDescriptors; @@ -77,12 +76,11 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.YouAreDeadException; import org.apache.hadoop.hbase.ZNodeClearer; import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.client.ClusterConnection; import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.ConnectionUtils; import org.apache.hadoop.hbase.conf.ConfigurationManager; +import 
org.apache.hadoop.hbase.client.ClusterConnection; import org.apache.hadoop.hbase.coordination.BaseCoordinatedStateManager; -import org.apache.hadoop.hbase.coordination.CloseRegionCoordination; import org.apache.hadoop.hbase.coordination.SplitLogWorkerCoordination; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.exceptions.RegionMovedException; @@ -125,6 +123,7 @@ import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.Regio import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.ReportRSFatalErrorRequest; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.ReportRegionStateTransitionRequest; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.ReportRegionStateTransitionResponse; +import org.apache.hadoop.hbase.quotas.RegionServerQuotaManager; import org.apache.hadoop.hbase.regionserver.compactions.CompactionProgress; import org.apache.hadoop.hbase.regionserver.handler.CloseMetaHandler; import org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler; @@ -139,7 +138,6 @@ import org.apache.hadoop.hbase.util.Addressing; import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.CompressionTest; -import org.apache.hadoop.hbase.util.ConfigUtil; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.FSTableDescriptors; import org.apache.hadoop.hbase.util.FSUtils; @@ -167,6 +165,7 @@ import org.apache.zookeeper.KeeperException.NoNodeException; import org.apache.zookeeper.data.Stat; import com.google.common.annotations.VisibleForTesting; +import com.google.common.base.Preconditions; import com.google.common.collect.Maps; import com.google.protobuf.BlockingRpcChannel; import com.google.protobuf.Descriptors; @@ -187,6 +186,12 @@ public class HRegionServer extends HasThread implements public static final Log LOG = LogFactory.getLog(HRegionServer.class); + /** + * For testing only! Set to true to skip notifying region assignment to master . + */ + @edu.umd.cs.findbugs.annotations.SuppressWarnings(value="MS_SHOULD_BE_FINAL") + public static boolean TEST_SKIP_REPORTING_TRANSITION = false; + /* * Strings to be used in forming the exception message for * RegionsAlreadyInTransitionException. @@ -405,6 +410,8 @@ public class HRegionServer extends HasThread implements private RegionServerProcedureManagerHost rspmHost; + private RegionServerQuotaManager rsQuotaManager; + // Table level lock manager for locking for region operations protected TableLockManager tableLockManager; @@ -434,8 +441,6 @@ public class HRegionServer extends HasThread implements protected BaseCoordinatedStateManager csm; - private final boolean useZKForAssignment; - /** * Configuration manager is used to register/deregister and notify the configuration observers * when the regionserver is notified that there was a change in the on disk configs. 
@@ -457,10 +462,9 @@ public class HRegionServer extends HasThread implements * @param conf * @param csm implementation of CoordinatedStateManager to be used * @throws IOException - * @throws InterruptedException */ public HRegionServer(Configuration conf, CoordinatedStateManager csm) - throws IOException, InterruptedException { + throws IOException { this.fsOk = true; this.conf = conf; checkCodecs(this.conf); @@ -507,8 +511,6 @@ public class HRegionServer extends HasThread implements } }; - useZKForAssignment = ConfigUtil.useZKForAssignment(conf); - // Set 'fs.defaultFS' to match the filesystem on hbase.rootdir else // underlying hadoop hdfs accessors will be going against wrong filesystem // (unless all is set to defaults). @@ -518,8 +520,8 @@ public class HRegionServer extends HasThread implements boolean useHBaseChecksum = conf.getBoolean(HConstants.HBASE_CHECKSUM_VERIFICATION, true); this.fs = new HFileSystem(this.conf, useHBaseChecksum); this.rootDir = FSUtils.getRootDir(this.conf); - this.tableDescriptors = new FSTableDescriptors( - this.conf, this.fs, this.rootDir, !canUpdateTableDescriptor(), false); + this.tableDescriptors = new FSTableDescriptors(this.conf, + this.fs, this.rootDir, !canUpdateTableDescriptor(), false); service = new ExecutorService(getServerName().toShortString()); spanReceiverHost = SpanReceiverHost.getInstance(getConfiguration()); @@ -779,6 +781,9 @@ public class HRegionServer extends HasThread implements nonceManagerChore = this.nonceManager.createCleanupChore(this); } + // Setup the Quota Manager + rsQuotaManager = new RegionServerQuotaManager(this); + // Setup RPC client for master communication rpcClient = RpcClientFactory.createClient(conf, clusterId, new InetSocketAddress( rpcServices.isa.getAddress(), 0)); @@ -836,6 +841,9 @@ public class HRegionServer extends HasThread implements // start the snapshot handler and other procedure handlers, // since the server is ready to run rspmHost.start(); + + // Start the Quota Manager + rsQuotaManager.start(getRpcServer().getScheduler()); } // We registered with the Master. Go into run mode. @@ -929,6 +937,11 @@ public class HRegionServer extends HasThread implements this.storefileRefresher.interrupt(); } + // Stop the quota manager + if (rsQuotaManager != null) { + rsQuotaManager.stop(); + } + // Stop the snapshot and other procedure handlers, forcefully killing all running tasks if (rspmHost != null) { rspmHost.stop(this.abortRequested || this.killed); @@ -1209,7 +1222,7 @@ public class HRegionServer extends HasThread implements walFactory.shutdown(); } } catch (Throwable e) { - e = RemoteExceptionHandler.checkThrowable(e); + e = e instanceof RemoteException ? ((RemoteException) e).unwrapRemoteException() : e; LOG.error("Shutdown / close of WAL failed: " + e); LOG.debug("Shutdown / close exception details:", e); } @@ -1379,7 +1392,7 @@ public class HRegionServer extends HasThread implements .setWriteRequestsCount(r.writeRequestsCount.get()) .setTotalCompactingKVs(totalCompactingKVs) .setCurrentCompactedKVs(currentCompactedKVs) - .setCompleteSequenceId(r.lastFlushSeqId) + .setCompleteSequenceId(r.maxFlushedSeqId) .setDataLocality(dataLocality); return regionLoadBldr.build(); @@ -1475,7 +1488,7 @@ public class HRegionServer extends HasThread implements //Throttle the flushes by putting a delay. If we don't throttle, and there //is a balanced write-load on the regions in a table, we might end up //overwhelming the filesystem with too many flushes at once. 
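Editorial note: several hunks in this patch drop RemoteExceptionHandler and instead unwrap org.apache.hadoop.ipc.RemoteException inline before logging or aborting. A small helper capturing the same idiom is sketched below, assuming hadoop-common is on the classpath; the helper class and method names are mine, only the unwrap call comes from the patch.

import java.io.IOException;
import org.apache.hadoop.ipc.RemoteException;

// Sketch of the unwrap idiom used throughout this patch: if the throwable is
// a Hadoop RemoteException, replace it with the underlying local exception
// before logging or aborting; otherwise pass it through unchanged.
final class Unwrap {
  static IOException ioe(IOException e) {
    return e instanceof RemoteException ? ((RemoteException) e).unwrapRemoteException() : e;
  }

  static Throwable any(Throwable t) {
    return t instanceof RemoteException ? ((RemoteException) t).unwrapRemoteException() : t;
  }
}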
- requester.requestDelayedFlush(r, randomDelay); + requester.requestDelayedFlush(r, randomDelay, false); } } } @@ -1799,14 +1812,8 @@ public class HRegionServer extends HasThread implements // Update flushed sequence id of a recovering region in ZK updateRecoveringRegionLastFlushedSequenceId(r); - // Update ZK, or META - if (r.getRegionInfo().isMetaRegion()) { - MetaTableLocator.setMetaLocation(getZooKeeper(), serverName, State.OPEN); - } else if (useZKForAssignment) { - MetaTableAccessor.updateRegionLocation(getConnection(), r.getRegionInfo(), - this.serverName, openSeqNum); - } - if (!useZKForAssignment && !reportRegionStateTransition( + // Notify master + if (!reportRegionStateTransition( TransitionCode.OPENED, openSeqNum, r.getRegionInfo())) { throw new IOException("Failed to report opened region to master: " + r.getRegionNameAsString()); @@ -1823,6 +1830,31 @@ public class HRegionServer extends HasThread implements @Override public boolean reportRegionStateTransition( TransitionCode code, long openSeqNum, HRegionInfo... hris) { + if (TEST_SKIP_REPORTING_TRANSITION) { + // This is for testing only in case there is no master + // to handle the region transition report at all. + if (code == TransitionCode.OPENED) { + Preconditions.checkArgument(hris != null && hris.length == 1); + if (hris[0].isMetaRegion()) { + try { + MetaTableLocator.setMetaLocation(getZooKeeper(), serverName, State.OPEN); + } catch (KeeperException e) { + LOG.info("Failed to update meta location", e); + return false; + } + } else { + try { + MetaTableAccessor.updateRegionLocation(clusterConnection, + hris[0], serverName, openSeqNum); + } catch (IOException e) { + LOG.info("Failed to update meta", e); + return false; + } + } + } + return true; + } + ReportRegionStateTransitionRequest.Builder builder = ReportRegionStateTransitionRequest.newBuilder(); builder.setServer(ProtobufUtil.toServerName(serverName)); @@ -2129,11 +2161,11 @@ public class HRegionServer extends HasThread implements } @Override - public long getLastSequenceId(byte[] region) { - Long lastFlushedSequenceId = -1l; + public long getLastSequenceId(byte[] encodedRegionName) { + long lastFlushedSequenceId = -1L; try { GetLastFlushedSequenceIdRequest req = RequestConverter - .buildGetLastFlushedSequenceIdRequest(region); + .buildGetLastFlushedSequenceIdRequest(encodedRegionName); RegionServerStatusService.BlockingInterface rss = rssStub; if (rss == null) { // Try to connect one more time createRegionServerStatusStub(); @@ -2142,7 +2174,7 @@ public class HRegionServer extends HasThread implements // Still no luck, we tried LOG.warn("Unable to connect to the master to check " + "the last flushed sequence id"); - return -1l; + return -1L; } } lastFlushedSequenceId = rss.getLastFlushedSequenceId(null, req) @@ -2391,6 +2423,11 @@ public class HRegionServer extends HasThread implements return service; } + @Override + public RegionServerQuotaManager getRegionServerQuotaManager() { + return rsQuotaManager; + } + // // Main program and support routines // @@ -2509,6 +2546,22 @@ public class HRegionServer extends HasThread implements return tableRegions; } + /** + * Gets the online tables in this RS. + * This method looks at the in-memory onlineRegions. 
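Editorial note: reportRegionStateTransition above gains a public static TEST_SKIP_REPORTING_TRANSITION flag so tests can run a region server with no master to report to. A generic, pure-JDK model of that test-only bypass follows; none of the names below are HBase's except the flag it mirrors.

// Illustrative model of the test-only bypass added above: a mutable static
// flag lets tests skip the master notification entirely, while production
// code reports the OPENED transition and treats a failed report as an error.
class TransitionReporter {
  /** For testing only; mirrors TEST_SKIP_REPORTING_TRANSITION in the hunk above. */
  static volatile boolean skipReporting = false;

  boolean reportOpened(String regionName, long openSeqNum) {
    if (skipReporting) {
      // Tests update location bookkeeping directly instead of asking the master.
      return true;
    }
    return sendReportToMaster(regionName, openSeqNum);
  }

  private boolean sendReportToMaster(String regionName, long openSeqNum) {
    // Placeholder for the RPC to the master; out of scope for this sketch.
    return true;
  }
}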
+ * @return all the online tables in this RS + */ + @Override + public Set getOnlineTables() { + Set tables = new HashSet(); + synchronized (this.onlineRegions) { + for (HRegion region: this.onlineRegions.values()) { + tables.add(region.getTableDesc().getTableName()); + } + } + return tables; + } + // used by org/apache/hbase/tmpl/regionserver/RSStatusTmpl.jamon (HBASE-4070). public String[] getRegionServerCoprocessors() { TreeSet coprocessors = new TreeSet(); @@ -2539,9 +2592,7 @@ public class HRegionServer extends HasThread implements */ private void closeRegionIgnoreErrors(HRegionInfo region, final boolean abort) { try { - CloseRegionCoordination.CloseRegionDetails details = - csm.getCloseRegionCoordination().getDetaultDetails(); - if (!closeRegion(region.getEncodedName(), abort, details, null)) { + if (!closeRegion(region.getEncodedName(), abort, null)) { LOG.warn("Failed to close " + region.getRegionNameAsString() + " - ignoring and continuing"); } @@ -2566,14 +2617,11 @@ public class HRegionServer extends HasThread implements * * @param encodedName Region to close * @param abort True if we are aborting - * @param crd details about closing region coordination-coordinated task * @return True if closed a region. * @throws NotServingRegionException if the region is not online - * @throws RegionAlreadyInTransitionException if the region is already closing */ - protected boolean closeRegion(String encodedName, final boolean abort, - CloseRegionCoordination.CloseRegionDetails crd, final ServerName sn) - throws NotServingRegionException, RegionAlreadyInTransitionException { + protected boolean closeRegion(String encodedName, final boolean abort, final ServerName sn) + throws NotServingRegionException { //Check for permissions to close. HRegion actualRegion = this.getFromOnlineRegions(encodedName); if ((actualRegion != null) && (actualRegion.getCoprocessorHost() != null)) { @@ -2596,31 +2644,24 @@ public class HRegionServer extends HasThread implements // We're going to try to do a standard close then. LOG.warn("The opening for region " + encodedName + " was done before we could cancel it." + " Doing a standard close now"); - return closeRegion(encodedName, abort, crd, sn); + return closeRegion(encodedName, abort, sn); } // Let's get the region from the online region list again actualRegion = this.getFromOnlineRegions(encodedName); if (actualRegion == null) { // If already online, we still need to close it. LOG.info("The opening previously in progress has been cancelled by a CLOSE request."); // The master deletes the znode when it receives this exception. - throw new RegionAlreadyInTransitionException("The region " + encodedName + + throw new NotServingRegionException("The region " + encodedName + " was opening but not yet served. Opening is cancelled."); } } else if (Boolean.FALSE.equals(previous)) { LOG.info("Received CLOSE for the region: " + encodedName + ", which we are already trying to CLOSE, but not completed yet"); - // The master will retry till the region is closed. We need to do this since - // the region could fail to close somehow. If we mark the region closed in master - // while it is not, there could be data loss. - // If the region stuck in closing for a while, and master runs out of retries, - // master will move the region to failed_to_close. Later on, if the region - // is indeed closed, master can properly re-assign it. - throw new RegionAlreadyInTransitionException("The region " + encodedName + - " was already closing. 
New CLOSE request is ignored."); + return true; } if (actualRegion == null) { - LOG.error("Received CLOSE for a region which is not online, and we're not opening."); + LOG.debug("Received CLOSE for a region which is not online, and we're not opening."); this.regionsInTransitionInRS.remove(encodedName.getBytes()); // The master deletes the znode when it receives this exception. throw new NotServingRegionException("The region " + encodedName + @@ -2630,11 +2671,9 @@ public class HRegionServer extends HasThread implements CloseRegionHandler crh; final HRegionInfo hri = actualRegion.getRegionInfo(); if (hri.isMetaRegion()) { - crh = new CloseMetaHandler(this, this, hri, abort, - csm.getCloseRegionCoordination(), crd); + crh = new CloseMetaHandler(this, this, hri, abort); } else { - crh = new CloseRegionHandler(this, this, hri, abort, - csm.getCloseRegionCoordination(), crd, sn); + crh = new CloseRegionHandler(this, this, hri, abort, sn); } this.service.submit(crh); return true; @@ -2742,10 +2781,11 @@ public class HRegionServer extends HasThread implements LOG.debug("NotServingRegionException; " + t.getMessage()); return t; } + Throwable e = t instanceof RemoteException ? ((RemoteException) t).unwrapRemoteException() : t; if (msg == null) { - LOG.error("", RemoteExceptionHandler.checkThrowable(t)); + LOG.error("", e); } else { - LOG.error(msg, RemoteExceptionHandler.checkThrowable(t)); + LOG.error(msg, e); } if (!rpcServices.checkOOME(t)) { checkFileSystem(); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java index ad701b7..b674fea 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java @@ -55,7 +55,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.RemoteExceptionHandler; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.TagType; @@ -90,6 +89,7 @@ import org.apache.hadoop.hbase.util.ClassSize; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.util.ReflectionUtils; +import org.apache.hadoop.ipc.RemoteException; import org.apache.hadoop.util.StringUtils; import com.google.common.annotations.VisibleForTesting; @@ -224,7 +224,7 @@ public class HStore implements Store { .add(confParam) .addStringMap(region.getTableDesc().getConfiguration()) .addStringMap(family.getConfiguration()) - .addWritableMap(family.getValues()); + .addBytesMap(family.getValues()); this.blocksize = family.getBlocksize(); this.dataBlockEncoder = @@ -1647,7 +1647,8 @@ public class HStore implements Store { this.fs.removeStoreFiles(this.getColumnFamilyName(), compactedFiles); } } catch (IOException e) { - e = RemoteExceptionHandler.checkIOException(e); + e = e instanceof RemoteException ? + ((RemoteException)e).unwrapRemoteException() : e; LOG.error("Failed removing compacted files in " + this + ". 
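Editorial note: the close/open paths above rely on regionsInTransitionInRS, a concurrent map where TRUE means the region is opening on this server and FALSE means it is closing, with putIfAbsent used to detect a competing transition. A stripped-down, pure-JDK model of that convention (names are illustrative):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative model of the regions-in-transition convention: TRUE = opening,
// FALSE = closing. putIfAbsent tells the caller whether some other transition
// is already in flight for the same region.
class TransitionGuard {
  private final ConcurrentMap<String, Boolean> inTransition = new ConcurrentHashMap<>();

  /** @return null if we won the right to close, otherwise the conflicting state. */
  Boolean tryStartClose(String encodedRegionName) {
    return inTransition.putIfAbsent(encodedRegionName, Boolean.FALSE);
  }

  void finish(String encodedRegionName) {
    inTransition.remove(encodedRegionName);
  }
}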
Files we were trying to remove are " + compactedFiles.toString() + "; some of them may have been already removed", e); @@ -2264,7 +2265,7 @@ public class HStore implements Store { public void onConfigurationChange(Configuration conf) { this.conf = new CompoundConfiguration() .add(conf) - .addWritableMap(family.getValues()); + .addBytesMap(family.getValues()); this.storeEngine.compactionPolicy.setConf(conf); this.offPeakHours = OffPeakHours.getInstance(conf); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/InternalScan.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/InternalScan.java index 9baac9b..143f800 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/InternalScan.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/InternalScan.java @@ -18,12 +18,15 @@ */ package org.apache.hadoop.hbase.regionserver; +import java.io.IOException; + +import org.apache.hadoop.hbase.HBaseInterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Scan; /** - * Special internal-only scanner, currently used for increment operations to + * Special scanner, currently used for increment operations to * allow additional server-side arguments for Scan operations. *

          * Rather than adding new options/parameters to the public Scan API, this new @@ -33,8 +36,8 @@ import org.apache.hadoop.hbase.client.Scan; * {@link #checkOnlyMemStore()} or to only read from StoreFiles with * {@link #checkOnlyStoreFiles()}. */ -@InterfaceAudience.Private -class InternalScan extends Scan { +@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) +public class InternalScan extends Scan { private boolean memOnly = false; private boolean filesOnly = false; @@ -46,6 +49,16 @@ class InternalScan extends Scan { } /** + * @param scan - original scan object + * @throws IOException + */ + public InternalScan(Scan scan) + throws IOException + { + super(scan); + } + + /** * StoreFiles will not be scanned. Only MemStore will be scanned. */ public void checkOnlyMemStore() { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LastSequenceId.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LastSequenceId.java index 51856e3..98f0985 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LastSequenceId.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LastSequenceId.java @@ -26,8 +26,8 @@ import org.apache.hadoop.hbase.classification.InterfaceAudience; @InterfaceAudience.Private public interface LastSequenceId { /** - * @param regionName Encoded region name - * @return Last flushed sequence Id for regionName or -1 if it can't be determined + * @param encodedRegionName Encoded region name + * @return Last flushed sequence Id for region or -1 if it can't be determined */ - long getLastSequenceId(byte[] regionName); + long getLastSequenceId(byte[] encodedRegionName); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java index aa5998b..aa60bfb 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/LogRoller.java @@ -28,17 +28,13 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.RemoteExceptionHandler; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException; import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.HasThread; - -import java.io.IOException; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.locks.ReentrantLock; +import org.apache.hadoop.ipc.RemoteException; /** * Runs periodically to determine if the WAL should be rolled. @@ -145,7 +141,7 @@ class LogRoller extends HasThread { } catch (IOException ex) { // Abort if we get here. We probably won't recover an IOE. HBASE-1132 server.abort("IOE in log roller", - RemoteExceptionHandler.checkIOException(ex)); + ex instanceof RemoteException ? 
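Editorial note: InternalScan above becomes public (LimitedPrivate for coprocessors) and gains a copy constructor taking a client Scan. Below is a usage sketch based only on the constructor and methods visible in this hunk; compiling it requires hbase-server on the classpath, and the wrapper class here is hypothetical.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.regionserver.InternalScan;

// Usage sketch for the now-public InternalScan: wrap an ordinary Scan and
// restrict it to the MemStore only.
class MemStoreOnlyScanExample {
  static InternalScan memStoreOnly(Scan userScan) throws IOException {
    InternalScan scan = new InternalScan(userScan); // copy the user's scan settings
    scan.checkOnlyMemStore();                       // skip StoreFiles entirely
    return scan;
  }
}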
((RemoteException) ex).unwrapRemoteException() : ex); } catch (Exception ex) { LOG.error("Log rolling failed", ex); server.abort("Log rolling failed", ex); @@ -170,7 +166,8 @@ class LogRoller extends HasThread { if (r != null) { requester = this.services.getFlushRequester(); if (requester != null) { - requester.requestFlush(r); + // force flushing all stores to clean old logs + requester.requestFlush(r, true); scheduled = true; } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java index b2820dd..eece27a 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java @@ -39,20 +39,20 @@ import java.util.concurrent.locks.ReentrantReadWriteLock; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.DroppedSnapshotException; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.RemoteExceptionHandler; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.io.util.HeapMemorySizeUtil; import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.Counter; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.HasThread; import org.apache.hadoop.hbase.util.Threads; -import org.apache.hadoop.util.StringUtils; +import org.apache.hadoop.ipc.RemoteException; +import org.apache.hadoop.util.StringUtils.TraditionalBinaryPrefix; import org.htrace.Trace; import org.htrace.TraceScope; -import org.apache.hadoop.hbase.util.Counter; import com.google.common.base.Preconditions; @@ -114,11 +114,11 @@ class MemStoreFlusher implements FlushRequester { 90000); int handlerCount = conf.getInt("hbase.hstore.flusher.count", 2); this.flushHandlers = new FlushHandler[handlerCount]; - LOG.info("globalMemStoreLimit=" + - StringUtils.humanReadableInt(this.globalMemStoreLimit) + - ", globalMemStoreLimitLowMark=" + - StringUtils.humanReadableInt(this.globalMemStoreLimitLowMark) + - ", maxHeap=" + StringUtils.humanReadableInt(max)); + LOG.info("globalMemStoreLimit=" + + TraditionalBinaryPrefix.long2String(this.globalMemStoreLimit, "", 1) + + ", globalMemStoreLimitLowMark=" + + TraditionalBinaryPrefix.long2String(this.globalMemStoreLimitLowMark, "", 1) + + ", maxHeap=" + TraditionalBinaryPrefix.long2String(max, "", 1)); } public Counter getUpdatesBlockedMsHighWater() { @@ -160,13 +160,12 @@ class MemStoreFlusher implements FlushRequester { // lots of little flushes and cause lots of compactions, etc, which just makes // life worse! if (LOG.isDebugEnabled()) { - LOG.debug("Under global heap pressure: " + - "Region " + bestAnyRegion.getRegionNameAsString() + " has too many " + - "store files, but is " + - StringUtils.humanReadableInt(bestAnyRegion.memstoreSize.get()) + - " vs best flushable region's " + - StringUtils.humanReadableInt(bestFlushableRegion.memstoreSize.get()) + - ". 
Choosing the bigger."); + LOG.debug("Under global heap pressure: " + "Region " + + bestAnyRegion.getRegionNameAsString() + " has too many " + "store files, but is " + + TraditionalBinaryPrefix.long2String(bestAnyRegion.memstoreSize.get(), "", 1) + + " vs best flushable region's " + + TraditionalBinaryPrefix.long2String(bestFlushableRegion.memstoreSize.get(), "", 1) + + ". Choosing the bigger."); } regionToFlush = bestAnyRegion; } else { @@ -180,7 +179,7 @@ class MemStoreFlusher implements FlushRequester { Preconditions.checkState(regionToFlush.memstoreSize.get() > 0); LOG.info("Flush of region " + regionToFlush + " due to global heap pressure"); - flushedOne = flushRegion(regionToFlush, true); + flushedOne = flushRegion(regionToFlush, true, true); if (!flushedOne) { LOG.info("Excluding unflushable region " + regionToFlush + " - trying to find a different region to flush."); @@ -206,7 +205,7 @@ class MemStoreFlusher implements FlushRequester { if (fqe == null || fqe instanceof WakeupFlushThread) { if (isAboveLowWaterMark()) { LOG.debug("Flush thread woke up because memory above low water=" - + StringUtils.humanReadableInt(globalMemStoreLimitLowMark)); + + TraditionalBinaryPrefix.long2String(globalMemStoreLimitLowMark, "", 1)); if (!flushOneForGlobalPressure()) { // Wasn't able to flush any region, but we're above low water mark // This is unlikely to happen, but might happen when closing the @@ -293,23 +292,23 @@ class MemStoreFlusher implements FlushRequester { getGlobalMemstoreSize() >= globalMemStoreLimitLowMark; } - public void requestFlush(HRegion r) { + public void requestFlush(HRegion r, boolean forceFlushAllStores) { synchronized (regionsInQueue) { if (!regionsInQueue.containsKey(r)) { // This entry has no delay so it will be added at the top of the flush // queue. It'll come out near immediately. - FlushRegionEntry fqe = new FlushRegionEntry(r); + FlushRegionEntry fqe = new FlushRegionEntry(r, forceFlushAllStores); this.regionsInQueue.put(r, fqe); this.flushQueue.add(fqe); } } } - public void requestDelayedFlush(HRegion r, long delay) { + public void requestDelayedFlush(HRegion r, long delay, boolean forceFlushAllStores) { synchronized (regionsInQueue) { if (!regionsInQueue.containsKey(r)) { // This entry has some delay - FlushRegionEntry fqe = new FlushRegionEntry(r); + FlushRegionEntry fqe = new FlushRegionEntry(r, forceFlushAllStores); fqe.requeue(delay); this.regionsInQueue.put(r, fqe); this.flushQueue.add(fqe); @@ -362,7 +361,7 @@ class MemStoreFlusher implements FlushRequester { } } - /* + /** * A flushRegion that checks store file count. If too many, puts the flush * on delay queue to retry later. * @param fqe @@ -390,9 +389,11 @@ class MemStoreFlusher implements FlushRequester { this.server.compactSplitThread.requestSystemCompaction( region, Thread.currentThread().getName()); } catch (IOException e) { + e = e instanceof RemoteException ? + ((RemoteException)e).unwrapRemoteException() : e; LOG.error( "Cache flush failed for region " + Bytes.toStringBinary(region.getRegionName()), - RemoteExceptionHandler.checkIOException(e)); + e); } } } @@ -404,22 +405,23 @@ class MemStoreFlusher implements FlushRequester { return true; } } - return flushRegion(region, false); + return flushRegion(region, false, fqe.isForceFlushAllStores()); } - /* + /** * Flush a region. * @param region Region to flush. * @param emergencyFlush Set if we are being force flushed. If true the region * needs to be removed from the flush queue. 
If false, when we were called * from the main flusher run loop and we got the entry to flush by calling * poll on the flush queue (which removed it). - * + * @param forceFlushAllStores whether we want to flush all store. * @return true if the region was successfully flushed, false otherwise. If * false, there will be accompanying log messages explaining why the log was * not flushed. */ - private boolean flushRegion(final HRegion region, final boolean emergencyFlush) { + private boolean flushRegion(final HRegion region, final boolean emergencyFlush, + boolean forceFlushAllStores) { long startTime = 0; synchronized (this.regionsInQueue) { FlushRegionEntry fqe = this.regionsInQueue.remove(region); @@ -442,7 +444,7 @@ class MemStoreFlusher implements FlushRequester { lock.readLock().lock(); try { notifyFlushRequest(region, emergencyFlush); - HRegion.FlushResult flushResult = region.flushcache(); + HRegion.FlushResult flushResult = region.flushcache(forceFlushAllStores); boolean shouldCompact = flushResult.isCompactionNeeded(); // We just want to check the size boolean shouldSplit = region.checkSplit() != null; @@ -465,9 +467,11 @@ class MemStoreFlusher implements FlushRequester { server.abort("Replay of WAL required. Forcing server shutdown", ex); return false; } catch (IOException ex) { - LOG.error("Cache flush failed" + - (region != null ? (" for region " + Bytes.toStringBinary(region.getRegionName())) : ""), - RemoteExceptionHandler.checkIOException(ex)); + ex = ex instanceof RemoteException ? ((RemoteException) ex).unwrapRemoteException() : ex; + LOG.error( + "Cache flush failed" + + (region != null ? (" for region " + Bytes.toStringBinary(region.getRegionName())) + : ""), ex); if (!server.checkFileSystem()) { return false; } @@ -524,11 +528,12 @@ class MemStoreFlusher implements FlushRequester { while (isAboveHighWaterMark() && !server.isStopped()) { if (!blocked) { startTime = EnvironmentEdgeManager.currentTime(); - LOG.info("Blocking updates on " + server.toString() + - ": the global memstore size " + - StringUtils.humanReadableInt(server.getRegionServerAccounting().getGlobalMemstoreSize()) + - " is >= than blocking " + - StringUtils.humanReadableInt(globalMemStoreLimit) + " size"); + LOG.info("Blocking updates on " + + server.toString() + + ": the global memstore size " + + TraditionalBinaryPrefix.long2String(server.getRegionServerAccounting() + .getGlobalMemstoreSize(), "", 1) + " is >= than blocking " + + TraditionalBinaryPrefix.long2String(globalMemStoreLimit, "", 1) + " size"); } blocked = true; wakeupFlushThread(); @@ -652,10 +657,13 @@ class MemStoreFlusher implements FlushRequester { private long whenToExpire; private int requeueCount = 0; - FlushRegionEntry(final HRegion r) { + private boolean forceFlushAllStores; + + FlushRegionEntry(final HRegion r, boolean forceFlushAllStores) { this.region = r; this.createTime = EnvironmentEdgeManager.currentTime(); this.whenToExpire = this.createTime; + this.forceFlushAllStores = forceFlushAllStores; } /** @@ -675,6 +683,13 @@ class MemStoreFlusher implements FlushRequester { } /** + * @return whether we need to flush all stores. + */ + public boolean isForceFlushAllStores() { + return forceFlushAllStores; + } + + /** * @param when When to expire, when to come up out of the queue. * Specify in milliseconds. This method adds EnvironmentEdgeManager.currentTime() * to whatever you pass. 
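Editorial note: the MemStoreFlusher changes above thread a forceFlushAllStores flag from requestFlush/requestDelayedFlush into FlushRegionEntry and finally into flushcache(boolean), so a log-roll-triggered flush can insist on flushing every store. A minimal, pure-JDK model of that plumbing, with illustrative names throughout:

import java.util.ArrayDeque;
import java.util.Queue;

// Minimal model of the FlushRequester change: every queued entry now carries
// forceFlushAllStores, so a flush requested by the log roller can flush all
// stores while size-triggered flushes may pick only the large ones.
class FlushQueueModel {
  static final class Entry {
    final String region;
    final boolean forceFlushAllStores;
    Entry(String region, boolean force) { this.region = region; this.forceFlushAllStores = force; }
  }

  private final Queue<Entry> queue = new ArrayDeque<>();

  void requestFlush(String region, boolean forceFlushAllStores) {
    queue.add(new Entry(region, forceFlushAllStores));
  }

  // Consumer side: the flag decides which flushcache(boolean) variant runs.
  void drain() {
    for (Entry e; (e = queue.poll()) != null; ) {
      System.out.println("flushcache(" + e.forceFlushAllStores + ") for " + e.region);
    }
  }
}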
diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java index 415e271..d8ad6fe 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java @@ -17,6 +17,7 @@ */ package org.apache.hadoop.hbase.regionserver; +import java.io.IOException; import java.util.Collection; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.TimeUnit; @@ -34,7 +35,9 @@ import org.apache.hadoop.hbase.io.hfile.CacheConfig; import org.apache.hadoop.hbase.io.hfile.CacheStats; import org.apache.hadoop.hbase.wal.DefaultWALProvider; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; +import org.apache.hadoop.hdfs.DFSHedgedReadMetrics; import org.apache.hadoop.metrics2.MetricsExecutor; /** @@ -80,6 +83,11 @@ class MetricsRegionServerWrapperImpl private Runnable runnable; private long period; + /** + * Can be null if not on hdfs. + */ + private DFSHedgedReadMetrics dfsHedgedReadMetrics; + public MetricsRegionServerWrapperImpl(final HRegionServer regionServer) { this.regionServer = regionServer; initBlockCache(); @@ -93,6 +101,11 @@ class MetricsRegionServerWrapperImpl this.executor.scheduleWithFixedDelay(this.runnable, this.period, this.period, TimeUnit.MILLISECONDS); + try { + this.dfsHedgedReadMetrics = FSUtils.getDFSHedgedReadMetrics(regionServer.getConfiguration()); + } catch (IOException e) { + LOG.warn("Failed to get hedged metrics", e); + } if (LOG.isInfoEnabled()) { LOG.info("Computing regionserver metrics every " + this.period + " milliseconds"); } @@ -510,6 +523,16 @@ class MetricsRegionServerWrapperImpl } @Override + public long getHedgedReadOps() { + return this.dfsHedgedReadMetrics == null? 0: this.dfsHedgedReadMetrics.getHedgedReadOps(); + } + + @Override + public long getHedgedReadWins() { + return this.dfsHedgedReadMetrics == null? 0: this.dfsHedgedReadMetrics.getHedgedReadWins(); + } + + @Override public long getBlockedRequestsCount() { return blockedRequestsCount; } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MiniBatchOperationInProgress.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MiniBatchOperationInProgress.java index 0285a59..a2284dd 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MiniBatchOperationInProgress.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MiniBatchOperationInProgress.java @@ -23,10 +23,10 @@ import org.apache.hadoop.hbase.regionserver.wal.WALEdit; /** * Wraps together the mutations which are applied as a batch to the region and their operation * status and WALEdits. - * @see org.apache.hadoop.hbase.coprocessor. - * RegionObserver#preBatchMutate(ObserverContext, MiniBatchOperationInProgress) - * @see org.apache.hadoop.hbase.coprocessor. - * RegionObserver#postBatchMutate(ObserverContext, MiniBatchOperationInProgress) + * @see org.apache.hadoop.hbase.coprocessor.RegionObserver#preBatchMutate( + * ObserverContext, MiniBatchOperationInProgress) + * @see org.apache.hadoop.hbase.coprocessor.RegionObserver#postBatchMutate( + * ObserverContext, MiniBatchOperationInProgress) * @param Pair pair of Mutations and associated rowlock ids . 
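Editorial note: the metrics wrapper above holds a possibly-null DFSHedgedReadMetrics reference (hedged reads only exist on HDFS) and reports 0 when it is absent. A tiny, generic sketch of that null-guarded accessor pattern, using a plain LongSupplier instead of the HDFS class:

import java.util.function.LongSupplier;

// Sketch of the null-guarded metric accessors above: keep a possibly-null
// metrics source and report 0 when it is unavailable.
final class NullSafeMetric {
  private final LongSupplier source; // null when the filesystem exposes no such metrics

  NullSafeMetric(LongSupplier source) { this.source = source; }

  long get() {
    return source == null ? 0 : source.getAsLong();
  }
}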
*/ @InterfaceAudience.Private diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java index 06e51c6..492b26d 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java @@ -48,7 +48,6 @@ import org.apache.hadoop.hbase.HBaseIOException; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.NotServingRegionException; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; @@ -57,6 +56,7 @@ import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.Append; import org.apache.hadoop.hbase.client.ConnectionUtils; import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Increment; import org.apache.hadoop.hbase.client.Mutation; @@ -65,14 +65,12 @@ import org.apache.hadoop.hbase.client.RegionReplicaUtil; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.RowMutations; import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.coordination.CloseRegionCoordination; -import org.apache.hadoop.hbase.coordination.OpenRegionCoordination; import org.apache.hadoop.hbase.exceptions.FailedSanityCheckException; +import org.apache.hadoop.hbase.exceptions.MergeRegionException; import org.apache.hadoop.hbase.exceptions.OperationConflictException; import org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException; import org.apache.hadoop.hbase.filter.ByteArrayComparable; import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; -import org.apache.hadoop.hbase.io.hfile.HFile; import org.apache.hadoop.hbase.ipc.HBaseRPCErrorHandler; import org.apache.hadoop.hbase.ipc.PayloadCarryingRpcController; import org.apache.hadoop.hbase.ipc.PriorityFunction; @@ -145,20 +143,21 @@ import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionSpecifier; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionSpecifier.RegionSpecifierType; import org.apache.hadoop.hbase.protobuf.generated.RPCProtos.RequestHeader; import org.apache.hadoop.hbase.protobuf.generated.WALProtos.CompactionDescriptor; +import org.apache.hadoop.hbase.quotas.OperationQuota; +import org.apache.hadoop.hbase.quotas.RegionServerQuotaManager; import org.apache.hadoop.hbase.regionserver.HRegion.Operation; import org.apache.hadoop.hbase.regionserver.Leases.LeaseStillHeldException; import org.apache.hadoop.hbase.regionserver.handler.OpenMetaHandler; import org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler; -import org.apache.hadoop.hbase.wal.WAL; -import org.apache.hadoop.hbase.wal.WALKey; -import org.apache.hadoop.hbase.wal.WALSplitter; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; -import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Counter; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.Pair; +import org.apache.hadoop.hbase.util.ServerRegionReplicaUtil; import org.apache.hadoop.hbase.util.Strings; +import org.apache.hadoop.hbase.wal.WALKey; 
+import org.apache.hadoop.hbase.wal.WALSplitter; import org.apache.hadoop.hbase.zookeeper.ZKSplitLog; import org.apache.hadoop.net.DNS; import org.apache.zookeeper.KeeperException; @@ -257,8 +256,8 @@ public class RSRpcServices implements HBaseRPCErrorHandler, } private static ResultOrException getResultOrException( - final ClientProtos.Result r, final int index) { - return getResultOrException(ResponseConverter.buildActionResult(r), index); + final ClientProtos.Result r, final int index, final ClientProtos.RegionLoadStats stats) { + return getResultOrException(ResponseConverter.buildActionResult(r, stats), index); } private static ResultOrException getResultOrException(final Exception e, final int index) { @@ -355,7 +354,8 @@ public class RSRpcServices implements HBaseRPCErrorHandler, * @param cellScanner if non-null, the mutation data -- the Cell content. * @throws IOException */ - private void mutateRows(final HRegion region, final List actions, + private ClientProtos.RegionLoadStats mutateRows(final HRegion region, + final List actions, final CellScanner cellScanner) throws IOException { if (!region.getRegionInfo().isMetaTable()) { regionServer.cacheFlusher.reclaimMemStoreMemory(); @@ -381,7 +381,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, throw new DoNotRetryIOException("Atomic put and/or delete only, not " + type.name()); } } - region.mutateRow(rm); + return region.mutateRow(rm); } /** @@ -436,10 +436,11 @@ public class RSRpcServices implements HBaseRPCErrorHandler, * bypassed as indicated by RegionObserver, null otherwise * @throws IOException */ - private Result append(final HRegion region, final MutationProto m, + private Result append(final HRegion region, final OperationQuota quota, final MutationProto m, final CellScanner cellScanner, long nonceGroup) throws IOException { long before = EnvironmentEdgeManager.currentTime(); Append append = ProtobufUtil.toAppend(m, cellScanner); + quota.addMutation(append); Result r = null; if (region.getCoprocessorHost() != null) { r = region.getCoprocessorHost().preAppend(append); @@ -472,10 +473,12 @@ public class RSRpcServices implements HBaseRPCErrorHandler, * @return the Result * @throws IOException */ - private Result increment(final HRegion region, final MutationProto mutation, - final CellScanner cells, long nonceGroup) throws IOException { + private Result increment(final HRegion region, final OperationQuota quota, + final MutationProto mutation, final CellScanner cells, long nonceGroup) + throws IOException { long before = EnvironmentEdgeManager.currentTime(); Increment increment = ProtobufUtil.toIncrement(mutation, cells); + quota.addMutation(increment); Result r = null; if (region.getCoprocessorHost() != null) { r = region.getCoprocessorHost().preIncrement(increment); @@ -512,7 +515,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, * @return Return the cellScanner passed */ private List doNonAtomicRegionMutation(final HRegion region, - final RegionAction actions, final CellScanner cellScanner, + final OperationQuota quota, final RegionAction actions, final CellScanner cellScanner, final RegionActionResult.Builder builder, List cellsToReturn, long nonceGroup) { // Gather up CONTIGUOUS Puts and Deletes in this mutations List. Idea is that rather than do // one at a time, we instead pass them in batch. 
Be aware that the corresponding @@ -545,15 +548,15 @@ public class RSRpcServices implements HBaseRPCErrorHandler, if (type != MutationType.PUT && type != MutationType.DELETE && mutations != null && !mutations.isEmpty()) { // Flush out any Puts or Deletes already collected. - doBatchOp(builder, region, mutations, cellScanner); + doBatchOp(builder, region, quota, mutations, cellScanner); mutations.clear(); } switch (type) { case APPEND: - r = append(region, action.getMutation(), cellScanner, nonceGroup); + r = append(region, quota, action.getMutation(), cellScanner, nonceGroup); break; case INCREMENT: - r = increment(region, action.getMutation(), cellScanner, nonceGroup); + r = increment(region, quota, action.getMutation(), cellScanner, nonceGroup); break; case PUT: case DELETE: @@ -598,7 +601,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, } // Finish up any outstanding mutations if (mutations != null && !mutations.isEmpty()) { - doBatchOp(builder, region, mutations, cellScanner); + doBatchOp(builder, region, quota, mutations, cellScanner); } return cellsToReturn; } @@ -611,7 +614,8 @@ public class RSRpcServices implements HBaseRPCErrorHandler, * @param mutations */ private void doBatchOp(final RegionActionResult.Builder builder, final HRegion region, - final List mutations, final CellScanner cells) { + final OperationQuota quota, final List mutations, + final CellScanner cells) { Mutation[] mArray = new Mutation[mutations.size()]; long before = EnvironmentEdgeManager.currentTime(); boolean batchContainsPuts = false, batchContainsDelete = false; @@ -628,6 +632,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, batchContainsDelete = true; } mArray[i++] = mutation; + quota.addMutation(mutation); } if (!region.getRegionInfo().isMetaTable()) { @@ -656,7 +661,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, case SUCCESS: builder.addResultOrException(getResultOrException( - ClientProtos.Result.getDefaultInstance(), index)); + ClientProtos.Result.getDefaultInstance(), index, region.getRegionStats())); break; } } @@ -688,7 +693,6 @@ public class RSRpcServices implements HBaseRPCErrorHandler, */ private OperationStatus [] doReplayBatchOp(final HRegion region, final List mutations, long replaySeqId) throws IOException { - long before = EnvironmentEdgeManager.currentTime(); boolean batchContainsPuts = false, batchContainsDelete = false; try { @@ -860,6 +864,10 @@ public class RSRpcServices implements HBaseRPCErrorHandler, return regionServer.getConfiguration(); } + private RegionServerQuotaManager getQuotaManager() { + return regionServer.getRegionServerQuotaManager(); + } + void start() { rpcServer.start(); } @@ -982,10 +990,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, requestCount.increment(); LOG.info("Close " + encodedRegionName + ", moving to " + sn); - CloseRegionCoordination.CloseRegionDetails crd = regionServer.getCoordinatedStateManager() - .getCloseRegionCoordination().parseFromProtoRequest(request); - - boolean closed = regionServer.closeRegion(encodedRegionName, false, crd, sn); + boolean closed = regionServer.closeRegion(encodedRegionName, false, sn); CloseRegionResponse.Builder builder = CloseRegionResponse.newBuilder().setClosed(closed); return builder.build(); } catch (IOException ie) { @@ -1069,7 +1074,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, LOG.info("Flushing " + region.getRegionNameAsString()); boolean shouldFlush = true; if (request.hasIfOlderThanTs()) { - shouldFlush = 
region.getLastFlushTime() < request.getIfOlderThanTs(); + shouldFlush = region.getEarliestFlushTimeForAllStores() < request.getIfOlderThanTs(); } FlushRegionResponse.Builder builder = FlushRegionResponse.newBuilder(); if (shouldFlush) { @@ -1086,7 +1091,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, } builder.setFlushed(result); } - builder.setLastFlushTime(region.getLastFlushTime()); + builder.setLastFlushTime( region.getEarliestFlushTimeForAllStores()); return builder.build(); } catch (DroppedSnapshotException ex) { // Cache flush can fail in a few places. If it fails in a critical @@ -1207,6 +1212,10 @@ public class RSRpcServices implements HBaseRPCErrorHandler, boolean forcible = request.getForcible(); regionA.startRegionOperation(Operation.MERGE_REGION); regionB.startRegionOperation(Operation.MERGE_REGION); + if (regionA.getRegionInfo().getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID || + regionB.getRegionInfo().getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID) { + throw new ServiceException(new MergeRegionException("Can't merge non-default replicas")); + } LOG.info("Receiving merging request for " + regionA + ", " + regionB + ",forcible=" + forcible); long startTime = EnvironmentEdgeManager.currentTime(); @@ -1305,42 +1314,17 @@ public class RSRpcServices implements HBaseRPCErrorHandler, } for (RegionOpenInfo regionOpenInfo : request.getOpenInfoList()) { final HRegionInfo region = HRegionInfo.convert(regionOpenInfo.getRegion()); - OpenRegionCoordination coordination = regionServer.getCoordinatedStateManager(). - getOpenRegionCoordination(); - OpenRegionCoordination.OpenRegionDetails ord = - coordination.parseFromProtoRequest(regionOpenInfo); - HTableDescriptor htd; try { - final HRegion onlineRegion = regionServer.getFromOnlineRegions(region.getEncodedName()); + String encodedName = region.getEncodedName(); + byte[] encodedNameBytes = region.getEncodedNameAsBytes(); + final HRegion onlineRegion = regionServer.getFromOnlineRegions(encodedName); if (onlineRegion != null) { - //Check if the region can actually be opened. - if (onlineRegion.getCoprocessorHost() != null) { - onlineRegion.getCoprocessorHost().preOpen(); - } - // See HBASE-5094. Cross check with hbase:meta if still this RS is owning - // the region. - Pair p = MetaTableAccessor.getRegion( - regionServer.getConnection(), region.getRegionName()); - if (regionServer.serverName.equals(p.getSecond())) { - Boolean closing = regionServer.regionsInTransitionInRS.get(region.getEncodedNameAsBytes()); - // Map regionsInTransitionInRSOnly has an entry for a region only if the region - // is in transition on this RS, so here closing can be null. If not null, it can - // be true or false. True means the region is opening on this RS; while false - // means the region is closing. Only return ALREADY_OPENED if not closing (i.e. - // not in transition any more, or still transition to open. - if (!Boolean.FALSE.equals(closing) - && regionServer.getFromOnlineRegions(region.getEncodedName()) != null) { - LOG.warn("Attempted open of " + region.getEncodedName() - + " but already online on this server"); - builder.addOpeningState(RegionOpeningState.ALREADY_OPENED); - continue; - } - } else { - LOG.warn("The region " + region.getEncodedName() + " is online on this server" - + " but hbase:meta does not have this server - continue opening."); - regionServer.removeFromOnlineRegions(onlineRegion, null); - } + // The region is already online. This should not happen any more. 
+ String error = "Received OPEN for the region:" + + region.getRegionNameAsString() + ", which is already online"; + regionServer.abort(error); + throw new IOException(error); } LOG.info("Open " + region.getRegionNameAsString()); htd = htds.get(region.getTable()); @@ -1350,21 +1334,23 @@ public class RSRpcServices implements HBaseRPCErrorHandler, } final Boolean previous = regionServer.regionsInTransitionInRS.putIfAbsent( - region.getEncodedNameAsBytes(), Boolean.TRUE); + encodedNameBytes, Boolean.TRUE); if (Boolean.FALSE.equals(previous)) { - // There is a close in progress. We need to mark this open as failed in ZK. - - coordination.tryTransitionFromOfflineToFailedOpen(regionServer, region, ord); - - throw new RegionAlreadyInTransitionException("Received OPEN for the region:" - + region.getRegionNameAsString() + " , which we are already trying to CLOSE "); + if (regionServer.getFromOnlineRegions(encodedName) != null) { + // There is a close in progress. This should not happen any more. + String error = "Received OPEN for the region:" + + region.getRegionNameAsString() + ", which we are already trying to CLOSE"; + regionServer.abort(error); + throw new IOException(error); + } + regionServer.regionsInTransitionInRS.put(encodedNameBytes, Boolean.TRUE); } if (Boolean.TRUE.equals(previous)) { // An open is in progress. This is supported, but let's log this. LOG.info("Receiving OPEN for the region:" + - region.getRegionNameAsString() + " , which we are already trying to OPEN" + region.getRegionNameAsString() + ", which we are already trying to OPEN" + " - ignoring this new request for this region."); } @@ -1372,7 +1358,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, // want to keep returning the stale moved record while we are opening/if we close again. regionServer.removeFromMovedRegions(region.getEncodedName()); - if (previous == null) { + if (previous == null || !previous.booleanValue()) { // check if the region to be opened is marked in recovering state in ZK if (ZKSplitLog.isRegionMarkedRecoveringInZK(regionServer.getZooKeeper(), region.getEncodedName())) { @@ -1394,12 +1380,12 @@ public class RSRpcServices implements HBaseRPCErrorHandler, // Need to pass the expected version in the constructor. if (region.isMetaRegion()) { regionServer.service.submit(new OpenMetaHandler( - regionServer, regionServer, region, htd, coordination, ord)); + regionServer, regionServer, region, htd)); } else { regionServer.updateRegionFavoredNodesMapping(region.getEncodedName(), regionOpenInfo.getFavoredNodesList()); regionServer.service.submit(new OpenRegionHandler( - regionServer, regionServer, region, htd, coordination, ord)); + regionServer, regionServer, region, htd)); } } @@ -1441,11 +1427,24 @@ public class RSRpcServices implements HBaseRPCErrorHandler, // empty input return ReplicateWALEntryResponse.newBuilder().build(); } - HRegion region = regionServer.getRegionByEncodedName( - entries.get(0).getKey().getEncodedRegionName().toStringUtf8()); - RegionCoprocessorHost coprocessorHost = region.getCoprocessorHost(); + ByteString regionName = entries.get(0).getKey().getEncodedRegionName(); + HRegion region = regionServer.getRegionByEncodedName(regionName.toStringUtf8()); + RegionCoprocessorHost coprocessorHost = + ServerRegionReplicaUtil.isDefaultReplica(region.getRegionInfo()) + ? 
region.getCoprocessorHost() + : null; // do not invoke coprocessors if this is a secondary region replica List> walEntries = new ArrayList>(); + + // Skip adding the edits to WAL if this is a secondary region replica + boolean isPrimary = RegionReplicaUtil.isDefaultReplica(region.getRegionInfo()); + Durability durability = isPrimary ? Durability.USE_DEFAULT : Durability.SKIP_WAL; + for (WALEntry entry : entries) { + if (!regionName.equals(entry.getKey().getEncodedRegionName())) { + throw new NotServingRegionException("Replay request contains entries from multiple " + + "regions. First region:" + regionName.toStringUtf8() + " , other region:" + + entry.getKey().getEncodedRegionName()); + } if (regionServer.nonceManager != null) { long nonceGroup = entry.getKey().hasNonceGroup() ? entry.getKey().getNonceGroup() : HConstants.NO_NONCE; @@ -1455,7 +1454,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, Pair walEntry = (coprocessorHost == null) ? null : new Pair(); List edits = WALSplitter.getMutationsFromWALEntry(entry, - cells, walEntry); + cells, walEntry, durability); if (coprocessorHost != null) { // Start coprocessor replay here. The coprocessor is for each WALEdit instead of a // KeyValue. @@ -1561,6 +1560,10 @@ public class RSRpcServices implements HBaseRPCErrorHandler, requestCount.increment(); HRegion region = getRegion(request.getRegion()); region.startRegionOperation(Operation.SPLIT_REGION); + if (region.getRegionInfo().getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID) { + throw new IOException("Can't split replicas directly. " + + "Replicas are auto-split when their primary is split."); + } LOG.info("Splitting " + region.getRegionNameAsString()); long startTime = EnvironmentEdgeManager.currentTime(); HRegion.FlushResult flushResult = region.flushcache(); @@ -1689,6 +1692,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, public GetResponse get(final RpcController controller, final GetRequest request) throws ServiceException { long before = EnvironmentEdgeManager.currentTime(); + OperationQuota quota = null; try { checkOpen(); requestCount.increment(); @@ -1699,6 +1703,8 @@ public class RSRpcServices implements HBaseRPCErrorHandler, Boolean existence = null; Result r = null; + quota = getQuotaManager().checkQuota(region, OperationQuota.OperationType.GET); + if (get.hasClosestRowBefore() && get.getClosestRowBefore()) { if (get.getColumnCount() != 1) { throw new DoNotRetryIOException( @@ -1728,10 +1734,13 @@ public class RSRpcServices implements HBaseRPCErrorHandler, ClientProtos.Result pbr = ProtobufUtil.toResult(existence, region.getRegionInfo().getReplicaId() != 0); builder.setResult(pbr); - } else if (r != null) { + } else if (r != null) { ClientProtos.Result pbr = ProtobufUtil.toResult(r); builder.setResult(pbr); } + if (r != null) { + quota.addGetResult(r); + } return builder.build(); } catch (IOException ie) { throw new ServiceException(ie); @@ -1740,6 +1749,9 @@ public class RSRpcServices implements HBaseRPCErrorHandler, regionServer.metricsRegionServer.updateGet( EnvironmentEdgeManager.currentTime() - before); } + if (quota != null) { + quota.close(); + } } } @@ -1775,10 +1787,12 @@ public class RSRpcServices implements HBaseRPCErrorHandler, for (RegionAction regionAction : request.getRegionActionList()) { this.requestCount.add(regionAction.getActionCount()); + OperationQuota quota; HRegion region; regionActionResultBuilder.clear(); try { region = getRegion(regionAction.getRegion()); + quota = getQuotaManager().checkQuota(region, 
regionAction.getActionList()); } catch (IOException e) { regionActionResultBuilder.setException(ResponseConverter.buildException(e)); responseBuilder.addRegionActionResult(regionActionResultBuilder.build()); @@ -1800,6 +1814,13 @@ public class RSRpcServices implements HBaseRPCErrorHandler, processed = checkAndRowMutate(region, regionAction.getActionList(), cellScanner, row, family, qualifier, compareOp, comparator); } else { + ClientProtos.RegionLoadStats stats = mutateRows(region, regionAction.getActionList(), + cellScanner); + // add the stats to the request + if(stats != null) { + responseBuilder.addRegionActionResult(RegionActionResult.newBuilder() + .addResultOrException(ResultOrException.newBuilder().setLoadStats(stats))); + } mutateRows(region, regionAction.getActionList(), cellScanner); processed = Boolean.TRUE; } @@ -1809,10 +1830,11 @@ public class RSRpcServices implements HBaseRPCErrorHandler, } } else { // doNonAtomicRegionMutation manages the exception internally - cellsToReturn = doNonAtomicRegionMutation(region, regionAction, cellScanner, + cellsToReturn = doNonAtomicRegionMutation(region, quota, regionAction, cellScanner, regionActionResultBuilder, cellsToReturn, nonceGroup); } responseBuilder.addRegionActionResult(regionActionResultBuilder.build()); + quota.close(); } // Load the controller with the Cells to return. if (cellsToReturn != null && !cellsToReturn.isEmpty() && controller != null) { @@ -1836,6 +1858,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, // It is also the conduit via which we pass back data. PayloadCarryingRpcController controller = (PayloadCarryingRpcController)rpcc; CellScanner cellScanner = controller != null? controller.cellScanner(): null; + OperationQuota quota = null; // Clear scanner so we are not holding on to reference across call. if (controller != null) controller.setCellScanner(null); try { @@ -1852,17 +1875,21 @@ public class RSRpcServices implements HBaseRPCErrorHandler, Result r = null; Boolean processed = null; MutationType type = mutation.getMutateType(); + + quota = getQuotaManager().checkQuota(region, OperationQuota.OperationType.MUTATE); + switch (type) { case APPEND: // TODO: this doesn't actually check anything. - r = append(region, mutation, cellScanner, nonceGroup); + r = append(region, quota, mutation, cellScanner, nonceGroup); break; case INCREMENT: // TODO: this doesn't actually check anything. 
- r = increment(region, mutation, cellScanner, nonceGroup); + r = increment(region, quota, mutation, cellScanner, nonceGroup); break; case PUT: Put put = ProtobufUtil.toPut(mutation, cellScanner); + quota.addMutation(put); if (request.hasCondition()) { Condition condition = request.getCondition(); byte[] row = condition.getRow().toByteArray(); @@ -1891,6 +1918,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, break; case DELETE: Delete delete = ProtobufUtil.toDelete(mutation, cellScanner); + quota.addMutation(delete); if (request.hasCondition()) { Condition condition = request.getCondition(); byte[] row = condition.getRow().toByteArray(); @@ -1921,12 +1949,18 @@ public class RSRpcServices implements HBaseRPCErrorHandler, throw new DoNotRetryIOException( "Unsupported mutate type: " + type.name()); } - if (processed != null) builder.setProcessed(processed.booleanValue()); + if (processed != null) { + builder.setProcessed(processed.booleanValue()); + } addResult(builder, r, controller); return builder.build(); } catch (IOException ie) { regionServer.checkFileSystem(); throw new ServiceException(ie); + } finally { + if (quota != null) { + quota.close(); + } } } @@ -1940,6 +1974,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, @Override public ScanResponse scan(final RpcController controller, final ScanRequest request) throws ServiceException { + OperationQuota quota = null; Leases.Lease lease = null; String scannerName = null; try { @@ -2025,6 +2060,9 @@ public class RSRpcServices implements HBaseRPCErrorHandler, ttl = this.scannerLeaseTimeoutPeriod; } + quota = getQuotaManager().checkQuota(region, OperationQuota.OperationType.SCAN); + long maxQuotaResultSize = Math.min(maxScannerResultSize, quota.getReadAvailable()); + if (rows > 0) { // if nextCallSeq does not match throw Exception straight away. This needs to be // performed even before checking of Lease. @@ -2070,9 +2108,9 @@ public class RSRpcServices implements HBaseRPCErrorHandler, } if (!done) { - long maxResultSize = scanner.getMaxResultSize(); + long maxResultSize = Math.min(scanner.getMaxResultSize(), maxQuotaResultSize); if (maxResultSize <= 0) { - maxResultSize = maxScannerResultSize; + maxResultSize = maxQuotaResultSize; } List values = new ArrayList(); region.startRegionOperation(Operation.SCAN); @@ -2114,6 +2152,8 @@ public class RSRpcServices implements HBaseRPCErrorHandler, } } + quota.addScanResult(results); + // If the scanner's filter - if any - is done with the scan // and wants to tell the client to stop the scan. This is done by passing // a null result, and setting moreResults to false. @@ -2123,7 +2163,7 @@ public class RSRpcServices implements HBaseRPCErrorHandler, } else { addResults(builder, results, controller, RegionReplicaUtil.isDefaultReplica(region.getRegionInfo())); } - } finally { + } finally { // We're done. On way out re-add the above removed lease. // Adding resets expiration time on lease. 
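The OperationQuota calls threaded through get(), multi(), mutate() and scan() in the RSRpcServices hunks above all follow one pattern: reserve capacity before touching the region, record what was actually read or written, and release the reservation in a finally block. A minimal sketch of that pattern for a read, assuming the OperationQuota/OperationType/getQuotaManager() names used in the hunks; the enclosing method and the region.get() call are illustrative stand-ins, not patch code:

    // Sketch of the quota pattern used by the RSRpcServices hunks above; not part of the patch.
    private Result quotaCheckedGet(final HRegion region, final Get get) throws IOException {
      OperationQuota quota = null;
      try {
        // Reserve read capacity first; an over-quota caller is rejected here.
        quota = getQuotaManager().checkQuota(region, OperationQuota.OperationType.GET);
        Result r = region.get(get);
        if (r != null) {
          // Report the size actually read so the limiter can reconcile its estimate.
          quota.addGetResult(r);
        }
        return r;
      } finally {
        if (quota != null) {
          quota.close();  // always release the reservation, success or failure
        }
      }
    }

Writes use quota.addMutation(put/delete) the same way, and scans fold quota.getReadAvailable() into the per-call result-size cap before calling quota.addScanResult(results), so a single batch cannot overshoot what the quota has left.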
if (scanners.containsKey(scannerName)) { @@ -2173,6 +2213,10 @@ public class RSRpcServices implements HBaseRPCErrorHandler, } } throw new ServiceException(ie); + } finally { + if (quota != null) { + quota.close(); + } } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java index fcb6fe2..87c8b9e 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java @@ -33,6 +33,11 @@ import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; import java.util.regex.Matcher; +import com.google.common.collect.ImmutableList; +import com.google.common.collect.Lists; +import com.google.protobuf.Message; +import com.google.protobuf.Service; + import org.apache.commons.collections.map.AbstractReferenceMap; import org.apache.commons.collections.map.ReferenceMap; import org.apache.commons.logging.Log; @@ -70,7 +75,6 @@ import org.apache.hadoop.hbase.coprocessor.RegionObserver.MutationType; import org.apache.hadoop.hbase.filter.ByteArrayComparable; import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; import org.apache.hadoop.hbase.io.FSDataInputStreamWrapper; -import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.io.Reference; import org.apache.hadoop.hbase.io.hfile.CacheConfig; import org.apache.hadoop.hbase.regionserver.HRegion.Operation; @@ -82,11 +86,6 @@ import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.CoprocessorClassLoader; import org.apache.hadoop.hbase.util.Pair; -import com.google.common.collect.ImmutableList; -import com.google.common.collect.Lists; -import com.google.protobuf.Message; -import com.google.protobuf.Service; - /** * Implements the coprocessor environment and runtime support for coprocessors * loaded within a {@link HRegion}. @@ -236,7 +235,7 @@ public class RegionCoprocessorHost static List getTableCoprocessorAttrsFromSchema(Configuration conf, HTableDescriptor htd) { List result = Lists.newArrayList(); - for (Map.Entry e: htd.getValues().entrySet()) { + for (Map.Entry e: htd.getValues().entrySet()) { String key = Bytes.toString(e.getKey().get()).trim(); if (HConstants.CP_HTD_ATTR_KEY_PATTERN.matcher(key).matches()) { String spec = Bytes.toString(e.getValue().get()).trim(); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionMergeRequest.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionMergeRequest.java index cbb8dd8..5226a98 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionMergeRequest.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionMergeRequest.java @@ -23,9 +23,9 @@ import java.io.IOException; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.RemoteExceptionHandler; import org.apache.hadoop.hbase.master.TableLockManager.TableLock; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.ipc.RemoteException; import org.apache.hadoop.util.StringUtils; import com.google.common.base.Preconditions; @@ -118,8 +118,8 @@ class RegionMergeRequest implements Runnable { + ". 
Region merge took " + StringUtils.formatTimeDiff(EnvironmentEdgeManager.currentTime(), startTime)); } catch (IOException ex) { - LOG.error("Merge failed " + this, - RemoteExceptionHandler.checkIOException(ex)); + ex = ex instanceof RemoteException ? ((RemoteException) ex).unwrapRemoteException() : ex; + LOG.error("Merge failed " + this, ex); server.checkFileSystem(); } finally { releaseTableLock(); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionMergeTransaction.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionMergeTransaction.java index e49193d..d478bfe 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionMergeTransaction.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionMergeTransaction.java @@ -31,22 +31,17 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.MetaMutationAnnotation; +import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.client.Delete; -import org.apache.hadoop.hbase.client.HConnection; import org.apache.hadoop.hbase.client.Mutation; import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.coordination.BaseCoordinatedStateManager; -import org.apache.hadoop.hbase.coordination.RegionMergeCoordination.RegionMergeDetails; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCode; import org.apache.hadoop.hbase.regionserver.SplitTransaction.LoggingProgressable; import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.ConfigUtil; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.Pair; -import org.apache.zookeeper.KeeperException; /** * Executes region merge as a "transaction". It is similar with @@ -89,7 +84,6 @@ public class RegionMergeTransaction { private final Path mergesdir; // We only merge adjacent regions if forcible is false private final boolean forcible; - private boolean useCoordinationForAssignment; /** * Types to add to the transaction journal. Each enum is a step in the merge @@ -141,8 +135,6 @@ public class RegionMergeTransaction { private RegionServerCoprocessorHost rsCoprocessorHost = null; - private RegionMergeDetails rmd; - /** * Constructor * @param a region a to merge @@ -231,14 +223,6 @@ public class RegionMergeTransaction { */ public HRegion execute(final Server server, final RegionServerServices services) throws IOException { - useCoordinationForAssignment = - server == null ? true : ConfigUtil.useZKForAssignment(server.getConfiguration()); - if (rmd == null) { - rmd = - server != null && server.getCoordinatedStateManager() != null ? ((BaseCoordinatedStateManager) server - .getCoordinatedStateManager()).getRegionMergeCoordination().getDefaultDetails() - : null; - } if (rsCoprocessorHost == null) { rsCoprocessorHost = server != null ? 
((HRegionServer) server).getRegionServerCoprocessorHost() : null; @@ -253,11 +237,6 @@ public class RegionMergeTransaction { public HRegion stepsAfterPONR(final Server server, final RegionServerServices services, HRegion mergedRegion) throws IOException { openMergedRegion(server, services, mergedRegion); - if (useCoordination(server)) { - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getRegionMergeCoordination().completeRegionMergeTransaction(services, mergedRegionInfo, - region_a, region_b, rmd, mergedRegion); - } if (rsCoprocessorHost != null) { rsCoprocessorHost.postMerge(this.region_a, this.region_b, mergedRegion); } @@ -322,35 +301,16 @@ public class RegionMergeTransaction { // will determine whether the region is merged or not in case of failures. // If it is successful, master will roll-forward, if not, master will // rollback - if (!testing && useCoordinationForAssignment) { - if (metaEntries.isEmpty()) { - MetaTableAccessor.mergeRegions(server.getConnection(), - mergedRegion.getRegionInfo(), region_a.getRegionInfo(), region_b.getRegionInfo(), - server.getServerName()); - } else { - mergeRegionsAndPutMetaEntries(server.getConnection(), - mergedRegion.getRegionInfo(), region_a.getRegionInfo(), region_b.getRegionInfo(), - server.getServerName(), metaEntries); - } - } else if (services != null && !useCoordinationForAssignment) { - if (!services.reportRegionStateTransition(TransitionCode.MERGE_PONR, - mergedRegionInfo, region_a.getRegionInfo(), region_b.getRegionInfo())) { - // Passed PONR, let SSH clean it up - throw new IOException("Failed to notify master that merge passed PONR: " - + region_a.getRegionInfo().getRegionNameAsString() + " and " - + region_b.getRegionInfo().getRegionNameAsString()); - } + if (services != null && !services.reportRegionStateTransition(TransitionCode.MERGE_PONR, + mergedRegionInfo, region_a.getRegionInfo(), region_b.getRegionInfo())) { + // Passed PONR, let SSH clean it up + throw new IOException("Failed to notify master that merge passed PONR: " + + region_a.getRegionInfo().getRegionNameAsString() + " and " + + region_b.getRegionInfo().getRegionNameAsString()); } return mergedRegion; } - private void mergeRegionsAndPutMetaEntries(HConnection hConnection, - HRegionInfo mergedRegion, HRegionInfo regionA, HRegionInfo regionB, - ServerName serverName, List metaEntries) throws IOException { - prepareMutationsForMerge(mergedRegion, regionA, regionB, serverName, metaEntries); - MetaTableAccessor.mutateMetaTable(hConnection, metaEntries); - } - public void prepareMutationsForMerge(HRegionInfo mergedRegion, HRegionInfo regionA, HRegionInfo regionB, ServerName serverName, List mutations) throws IOException { HRegionInfo copyOfMerged = new HRegionInfo(mergedRegion); @@ -380,40 +340,13 @@ public class RegionMergeTransaction { public HRegion stepsBeforePONR(final Server server, final RegionServerServices services, boolean testing) throws IOException { - if (rmd == null) { - rmd = - server != null && server.getCoordinatedStateManager() != null ? ((BaseCoordinatedStateManager) server - .getCoordinatedStateManager()).getRegionMergeCoordination().getDefaultDetails() - : null; - } - - // If server doesn't have a coordination state manager, don't do coordination actions. 
- if (useCoordination(server)) { - try { - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getRegionMergeCoordination().startRegionMergeTransaction(mergedRegionInfo, - server.getServerName(), region_a.getRegionInfo(), region_b.getRegionInfo()); - } catch (IOException e) { - throw new IOException("Failed to start region merge transaction for " - + this.mergedRegionInfo.getRegionNameAsString(), e); - } - } else if (services != null && !useCoordinationForAssignment) { - if (!services.reportRegionStateTransition(TransitionCode.READY_TO_MERGE, - mergedRegionInfo, region_a.getRegionInfo(), region_b.getRegionInfo())) { - throw new IOException("Failed to get ok from master to merge " - + region_a.getRegionInfo().getRegionNameAsString() + " and " - + region_b.getRegionInfo().getRegionNameAsString()); - } + if (services != null && !services.reportRegionStateTransition(TransitionCode.READY_TO_MERGE, + mergedRegionInfo, region_a.getRegionInfo(), region_b.getRegionInfo())) { + throw new IOException("Failed to get ok from master to merge " + + region_a.getRegionInfo().getRegionNameAsString() + " and " + + region_b.getRegionInfo().getRegionNameAsString()); } this.journal.add(JournalEntry.SET_MERGING); - if (useCoordination(server)) { - // After creating the merge node, wait for master to transition it - // from PENDING_MERGE to MERGING so that we can move on. We want master - // knows about it and won't transition any region which is merging. - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getRegionMergeCoordination().waitForRegionMergeTransaction(services, mergedRegionInfo, - region_a, region_b, rmd); - } this.region_a.getRegionFileSystem().createMergesDir(); this.journal.add(JournalEntry.CREATED_MERGE_DIR); @@ -432,19 +365,6 @@ public class RegionMergeTransaction { // clean this up. mergeStoreFiles(hstoreFilesOfRegionA, hstoreFilesOfRegionB); - if (useCoordination(server)) { - try { - // Do the final check in case any merging region is moved somehow. If so, the transition - // will fail. - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getRegionMergeCoordination().confirmRegionMergeTransaction(this.mergedRegionInfo, - region_a.getRegionInfo(), region_b.getRegionInfo(), server.getServerName(), rmd); - } catch (IOException e) { - throw new IOException("Failed setting MERGING on " - + this.mergedRegionInfo.getRegionNameAsString(), e); - } - } - // Log to the journal that we are creating merged region. We could fail // halfway through. If we do, we could have left // stuff in fs that needs cleanup -- a storefile or two. 
Thats why we @@ -578,20 +498,13 @@ public class RegionMergeTransaction { merged.openHRegion(reporter); if (services != null) { - try { - if (useCoordinationForAssignment) { - services.postOpenDeployTasks(merged); - } else if (!services.reportRegionStateTransition(TransitionCode.MERGED, - mergedRegionInfo, region_a.getRegionInfo(), region_b.getRegionInfo())) { - throw new IOException("Failed to report merged region to master: " - + mergedRegionInfo.getShortNameToLog()); - } - services.addToOnlineRegions(merged); - } catch (KeeperException ke) { - throw new IOException(ke); + if (!services.reportRegionStateTransition(TransitionCode.MERGED, + mergedRegionInfo, region_a.getRegionInfo(), region_b.getRegionInfo())) { + throw new IOException("Failed to report merged region to master: " + + mergedRegionInfo.getShortNameToLog()); } + services.addToOnlineRegions(merged); } - } /** @@ -652,10 +565,7 @@ public class RegionMergeTransaction { switch (je) { case SET_MERGING: - if (useCoordination(server)) { - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getRegionMergeCoordination().clean(this.mergedRegionInfo); - } else if (services != null && !useCoordinationForAssignment + if (services != null && !services.reportRegionStateTransition(TransitionCode.MERGE_REVERTED, mergedRegionInfo, region_a.getRegionInfo(), region_b.getRegionInfo())) { return false; @@ -734,13 +644,6 @@ public class RegionMergeTransaction { return this.mergesdir; } - private boolean useCoordination(final Server server) { - return server != null && useCoordinationForAssignment - && server.getCoordinatedStateManager() != null; - } - - - /** * Checks if the given region has merge qualifier in hbase:meta * @param services diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionScanner.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionScanner.java index 6bbe4eb..ec68dc7 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionScanner.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionScanner.java @@ -22,13 +22,16 @@ import java.io.IOException; import java.util.List; import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.HBaseInterfaceAudience; import org.apache.hadoop.hbase.HRegionInfo; /** * RegionScanner describes iterators over rows in an HRegion. */ -@InterfaceAudience.Private +@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC) +@InterfaceStability.Stable public interface RegionScanner extends InternalScanner { /** * @return The RegionInfo for this scanner. @@ -68,8 +71,8 @@ public interface RegionScanner extends InternalScanner { * Grab the next row's worth of values with the default limit on the number of values * to return. * This is a special internal method to be called from coprocessor hooks to avoid expensive setup. - * Caller must set the thread's readpoint, start and close a region operation, an - * synchronize on the scanner object. Caller should maintain and update metrics. + * Caller must set the thread's readpoint, start and close a region operation, an synchronize on the scanner object. + * Caller should maintain and update metrics. 
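The relaxed audience annotation on RegionScanner above exposes nextRaw() to coprocessors, and the javadoc spells out the caller's obligations. A sketch of that calling convention from a coprocessor hook, assuming the HRegion/RegionScanner methods named in the javadoc and in the scan() hunk earlier; the thread's read point is assumed to be set already, and metrics remain the caller's job:

    // Sketch of the nextRaw() contract described above; illustrative, not patch code.
    List<Cell> cells = new ArrayList<Cell>();
    region.startRegionOperation(Operation.SCAN);   // open a region operation around the read
    try {
      synchronized (scanner) {                     // nextRaw() is not internally synchronized
        boolean moreRows = scanner.nextRaw(cells); // default limit on the number of values
        // ... consume cells, honor moreRows, update caller-side metrics ...
      }
    } finally {
      region.closeRegionOperation();               // always pair with startRegionOperation()
    }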
* See {@link #nextRaw(List, int)} * @param result return output array * @return true if more rows exist after this one, false if scanner is done diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java index 5ea630e..08d038c 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionServerServices.java @@ -22,15 +22,18 @@ import com.google.protobuf.Service; import java.io.IOException; import java.util.Map; +import java.util.Set; import java.util.concurrent.ConcurrentMap; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.executor.ExecutorService; import org.apache.hadoop.hbase.ipc.RpcServerInterface; import org.apache.hadoop.hbase.master.TableLockManager; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCode; +import org.apache.hadoop.hbase.quotas.RegionServerQuotaManager; import org.apache.hadoop.hbase.wal.WAL; import org.apache.zookeeper.KeeperException; @@ -70,6 +73,11 @@ public interface RegionServerServices TableLockManager getTableLockManager(); /** + * @return RegionServer's instance of {@link RegionServerQuotaManager} + */ + RegionServerQuotaManager getRegionServerQuotaManager(); + + /** * Tasks to perform after region open to complete deploy of region on * regionserver * @@ -128,10 +136,17 @@ public interface RegionServerServices public ServerNonceManager getNonceManager(); /** + * @return all the online tables in this RS + */ + Set getOnlineTables(); + + + /** * Registers a new protocol buffer {@link Service} subclass as a coprocessor endpoint to be * available for handling * @param service the {@code Service} subclass instance to expose as a coprocessor endpoint * @return {@code true} if the registration was successful, {@code false} */ boolean registerService(Service service); + } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.java index 77611da..ec7f9fe 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.java @@ -137,5 +137,4 @@ public abstract class RegionSplitPolicy extends Configured { protected boolean skipStoreFileRangeCheck() { return false; } - } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RowTooBigException.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RowTooBigException.java index 7722baf..4a408e7 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RowTooBigException.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RowTooBigException.java @@ -19,16 +19,20 @@ package org.apache.hadoop.hbase.regionserver; import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.RegionException; /** * Gets or Scans throw this exception if running without in-row scan flag * set and row size appears to exceed max configured size (configurable via * hbase.table.max.rowsize). 
+ * + * @deprecated use {@link org.apache.hadoop.hbase.client.RowTooBigException} instead. */ @InterfaceAudience.Public -public class RowTooBigException extends RegionException { +@Deprecated +@edu.umd.cs.findbugs.annotations.SuppressWarnings(value="NM_SAME_SIMPLE_NAME_AS_SUPERCLASS", + justification="Temporary glue. To be removed") +public class RowTooBigException extends org.apache.hadoop.hbase.client.RowTooBigException { public RowTooBigException(String message) { super(message); } -} +} \ No newline at end of file diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitRequest.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitRequest.java index a96f563..887b6ab 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitRequest.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitRequest.java @@ -23,10 +23,10 @@ import java.io.IOException; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.RemoteExceptionHandler; import org.apache.hadoop.hbase.master.TableLockManager.TableLock; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.ipc.RemoteException; import org.apache.hadoop.util.StringUtils; import com.google.common.base.Preconditions; @@ -109,7 +109,8 @@ class SplitRequest implements Runnable { return; } } catch (IOException ex) { - LOG.error("Split failed " + this, RemoteExceptionHandler.checkIOException(ex)); + ex = ex instanceof RemoteException ? ((RemoteException) ex).unwrapRemoteException() : ex; + LOG.error("Split failed " + this, ex); server.checkFileSystem(); } finally { if (this.parent.getCoprocessorHost() != null) { @@ -117,7 +118,7 @@ class SplitRequest implements Runnable { this.parent.getCoprocessorHost().postCompleteSplit(); } catch (IOException io) { LOG.error("Split failed " + this, - RemoteExceptionHandler.checkIOException(io)); + io instanceof RemoteException ? 
((RemoteException) io).unwrapRemoteException() : io); } } if (parent.shouldForceSplit()) { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java index 06e726f..dbcf033 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SplitTransaction.java @@ -40,16 +40,11 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.MetaTableAccessor; -import org.apache.hadoop.hbase.client.HConnection; import org.apache.hadoop.hbase.client.Mutation; import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.coordination.BaseCoordinatedStateManager; -import org.apache.hadoop.hbase.coordination.SplitTransactionCoordination; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCode; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.CancelableProgressable; -import org.apache.hadoop.hbase.util.ConfigUtil; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.HasThread; @@ -93,8 +88,6 @@ public class SplitTransaction { private HRegionInfo hri_a; private HRegionInfo hri_b; private long fileSplitTimeout = 30000; - public SplitTransactionCoordination.SplitTransactionDetails std; - boolean useZKForAssignment; /* * Row to split around @@ -336,52 +329,23 @@ public class SplitTransaction { // will determine whether the region is split or not in case of failures. // If it is successful, master will roll-forward, if not, master will rollback // and assign the parent region. 
- if (!testing && useZKForAssignment) { - if (metaEntries == null || metaEntries.isEmpty()) { - MetaTableAccessor.splitRegion(server.getConnection(), - parent.getRegionInfo(), daughterRegions.getFirst().getRegionInfo(), - daughterRegions.getSecond().getRegionInfo(), server.getServerName()); - } else { - offlineParentInMetaAndputMetaEntries(server.getConnection(), - parent.getRegionInfo(), daughterRegions.getFirst().getRegionInfo(), daughterRegions - .getSecond().getRegionInfo(), server.getServerName(), metaEntries); - } - } else if (services != null && !useZKForAssignment) { - if (!services.reportRegionStateTransition(TransitionCode.SPLIT_PONR, - parent.getRegionInfo(), hri_a, hri_b)) { - // Passed PONR, let SSH clean it up - throw new IOException("Failed to notify master that split passed PONR: " - + parent.getRegionInfo().getRegionNameAsString()); - } + if (services != null && !services.reportRegionStateTransition(TransitionCode.SPLIT_PONR, + parent.getRegionInfo(), hri_a, hri_b)) { + // Passed PONR, let SSH clean it up + throw new IOException("Failed to notify master that split passed PONR: " + + parent.getRegionInfo().getRegionNameAsString()); } return daughterRegions; } public PairOfSameType stepsBeforePONR(final Server server, final RegionServerServices services, boolean testing) throws IOException { - - if (useCoordinatedStateManager(server)) { - if (std == null) { - std = - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getSplitTransactionCoordination().getDefaultDetails(); - } - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getSplitTransactionCoordination().startSplitTransaction(parent, server.getServerName(), - hri_a, hri_b); - } else if (services != null && !useZKForAssignment) { - if (!services.reportRegionStateTransition(TransitionCode.READY_TO_SPLIT, - parent.getRegionInfo(), hri_a, hri_b)) { - throw new IOException("Failed to get ok from master to split " - + parent.getRegionNameAsString()); - } + if (services != null && !services.reportRegionStateTransition(TransitionCode.READY_TO_SPLIT, + parent.getRegionInfo(), hri_a, hri_b)) { + throw new IOException("Failed to get ok from master to split " + + parent.getRegionNameAsString()); } this.journal.add(new JournalEntry(JournalEntryType.SET_SPLITTING)); - if (useCoordinatedStateManager(server)) { - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getSplitTransactionCoordination().waitForSplitTransaction(services, parent, hri_a, - hri_b, std); - } this.parent.getRegionFileSystem().createSplitsDir(); this.journal.add(new JournalEntry(JournalEntryType.CREATE_SPLIT_DIR)); @@ -499,24 +463,14 @@ public class SplitTransaction { bOpener.getName(), bOpener.getException()); } if (services != null) { - try { - if (useZKForAssignment) { - // add 2nd daughter first (see HBASE-4335) - services.postOpenDeployTasks(b); - } else if (!services.reportRegionStateTransition(TransitionCode.SPLIT, - parent.getRegionInfo(), hri_a, hri_b)) { - throw new IOException("Failed to report split region to master: " - + parent.getRegionInfo().getShortNameToLog()); - } - // Should add it to OnlineRegions - services.addToOnlineRegions(b); - if (useZKForAssignment) { - services.postOpenDeployTasks(a); - } - services.addToOnlineRegions(a); - } catch (KeeperException ke) { - throw new IOException(ke); + if (!services.reportRegionStateTransition(TransitionCode.SPLIT, + parent.getRegionInfo(), hri_a, hri_b)) { + throw new IOException("Failed to report split region to master: " + + 
parent.getRegionInfo().getShortNameToLog()); } + // Should add it to OnlineRegions + services.addToOnlineRegions(b); + services.addToOnlineRegions(a); } } } @@ -534,13 +488,6 @@ public class SplitTransaction { public PairOfSameType execute(final Server server, final RegionServerServices services) throws IOException { - useZKForAssignment = server == null ? true : - ConfigUtil.useZKForAssignment(server.getConfiguration()); - if (useCoordinatedStateManager(server)) { - std = - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getSplitTransactionCoordination().getDefaultDetails(); - } PairOfSameType regions = createDaughters(server, services); if (this.parent.getCoprocessorHost() != null) { this.parent.getCoprocessorHost().preSplitAfterPONR(); @@ -552,11 +499,6 @@ public class SplitTransaction { final RegionServerServices services, PairOfSameType regions) throws IOException { openDaughters(server, services, regions.getFirst(), regions.getSecond()); - if (useCoordinatedStateManager(server)) { - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getSplitTransactionCoordination().completeSplitTransaction(services, regions.getFirst(), - regions.getSecond(), std, parent); - } journal.add(new JournalEntry(JournalEntryType.BEFORE_POST_SPLIT_HOOK)); // Coprocessor callback if (parent.getCoprocessorHost() != null) { @@ -566,30 +508,6 @@ public class SplitTransaction { return regions; } - private void offlineParentInMetaAndputMetaEntries(HConnection hConnection, - HRegionInfo parent, HRegionInfo splitA, HRegionInfo splitB, - ServerName serverName, List metaEntries) throws IOException { - List mutations = metaEntries; - HRegionInfo copyOfParent = new HRegionInfo(parent); - copyOfParent.setOffline(true); - copyOfParent.setSplit(true); - - //Put for parent - Put putParent = MetaTableAccessor.makePutFromRegionInfo(copyOfParent); - MetaTableAccessor.addDaughtersToPut(putParent, splitA, splitB); - mutations.add(putParent); - - //Puts for daughters - Put putA = MetaTableAccessor.makePutFromRegionInfo(splitA); - Put putB = MetaTableAccessor.makePutFromRegionInfo(splitB); - - addLocation(putA, serverName, 1); //these are new regions, openSeqNum = 1 is fine. - addLocation(putB, serverName, 1); - mutations.add(putA); - mutations.add(putB); - MetaTableAccessor.mutateMetaTable(hConnection, mutations); - } - public Put addLocation(final Put p, final ServerName sn, long openSeqNum) { p.addImmutable(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER, Bytes.toBytes(sn.getHostAndPort())); @@ -672,10 +590,6 @@ public class SplitTransaction { } } - private boolean useCoordinatedStateManager(final Server server) { - return server != null && useZKForAssignment && server.getCoordinatedStateManager() != null; - } - /** * Creates reference files for top and bottom half of the * @param hstoreFilesToSplit map of store files to create half file references for. @@ -697,6 +611,7 @@ public class SplitTransaction { // no file needs to be splitted. 
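With the ZK coordination paths removed, both RegionMergeTransaction and SplitTransaction above gate each step on a single master call, services.reportRegionStateTransition(code, regions...), and treat a false return as a refusal. A minimal sketch of that convention, assuming the RegionServerServices and TransitionCode types used in the patch; the helper method itself is illustrative:

    // Sketch of the report-or-fail convention adopted by the split/merge hunks above.
    private void reportOrFail(final RegionServerServices services, final TransitionCode code,
        final HRegionInfo parent, final HRegionInfo hriA, final HRegionInfo hriB)
        throws IOException {
      if (services != null && !services.reportRegionStateTransition(code, parent, hriA, hriB)) {
        // Before the PONR this aborts the transaction; at SPLIT_PONR/MERGE_PONR the caller
        // throws and, as the patch comments put it, lets SSH clean it up.
        throw new IOException("Failed to report " + code + " for "
            + parent.getRegionNameAsString());
      }
    }

Rollback takes the same shape: the SET_MERGING and SET_SPLITTING journal entries are undone by reporting MERGE_REVERTED or SPLIT_REVERTED and bailing out if the master says no.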
return new Pair(0,0); } + LOG.info("Preparing to split " + nbFiles + " storefiles for region " + this.parent); ThreadFactoryBuilder builder = new ThreadFactoryBuilder(); builder.setNameFormat("StoreFileSplitter-%1$d"); ThreadFactory factory = builder.build(); @@ -746,13 +661,16 @@ public class SplitTransaction { } } + if (LOG.isDebugEnabled()) { + LOG.debug("Split storefiles for region " + this.parent + " Daugther A: " + created_a + + " storefiles, Daugther B: " + created_b + " storefiles."); + } return new Pair(created_a, created_b); } private Pair splitStoreFile(final byte[] family, final StoreFile sf) throws IOException { HRegionFileSystem fs = this.parent.getRegionFileSystem(); String familyName = Bytes.toString(family); - Path path_a = fs.splitStoreFile(this.hri_a, familyName, sf, this.splitrow, false, this.parent.getSplitPolicy()); @@ -809,10 +727,7 @@ public class SplitTransaction { switch(je.type) { case SET_SPLITTING: - if (useCoordinatedStateManager(server) && server instanceof HRegionServer) { - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getSplitTransactionCoordination().clean(this.parent.getRegionInfo()); - } else if (services != null && !useZKForAssignment + if (services != null && !services.reportRegionStateTransition(TransitionCode.SPLIT_REVERTED, parent.getRegionInfo(), hri_a, hri_b)) { return false; diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java index 1626cc3..b06dc98 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFile.java @@ -162,9 +162,6 @@ public class StoreFile { */ private final BloomType cfBloomType; - // the last modification time stamp - private long modificationTimeStamp = 0L; - /** * Constructor, loads a reader and it's indices, etc. May allocate a * substantial amount of ram depending on the underlying files (10-20MB?). @@ -214,9 +211,6 @@ public class StoreFile { "cfBloomType=" + cfBloomType + " (disabled in config)"); this.cfBloomType = BloomType.NONE; } - - // cache the modification time stamp of this store file - this.modificationTimeStamp = fileInfo.getModificationTime(); } /** @@ -228,7 +222,6 @@ public class StoreFile { this.fileInfo = other.fileInfo; this.cacheConf = other.cacheConf; this.cfBloomType = other.cfBloomType; - this.modificationTimeStamp = other.modificationTimeStamp; } /** @@ -285,10 +278,15 @@ public class StoreFile { return this.sequenceid; } - public long getModificationTimeStamp() { - return modificationTimeStamp; + public long getModificationTimeStamp() throws IOException { + return (fileInfo == null) ? 
0 : fileInfo.getModificationTime(); } + /** + * Only used by the Striped Compaction Policy + * @param key + * @return value associated with the metadata key + */ public byte[] getMetadataValue(byte[] key) { return metadataMap.get(key); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java index 59da86a..0a360e2 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java @@ -43,7 +43,7 @@ import org.apache.hadoop.hbase.util.FSUtils; * Describe a StoreFile (hfile, reference, link) */ @InterfaceAudience.Private -public class StoreFileInfo implements Comparable { +public class StoreFileInfo { public static final Log LOG = LogFactory.getLog(StoreFileInfo.class); /** @@ -70,6 +70,9 @@ public class StoreFileInfo implements Comparable { // Configuration private Configuration conf; + // FileSystem handle + private final FileSystem fs; + // HDFS blocks distribution information private HDFSBlocksDistribution hdfsBlocksDistribution = null; @@ -79,8 +82,7 @@ public class StoreFileInfo implements Comparable { // If this storefile is a link to another, this is the link instance. private final HFileLink link; - // FileSystem information for the file. - private final FileStatus fileStatus; + private final Path initialPath; private RegionCoprocessorHost coprocessorHost; @@ -88,41 +90,35 @@ public class StoreFileInfo implements Comparable { * Create a Store File Info * @param conf the {@link Configuration} to use * @param fs The current file system to use. - * @param path The {@link Path} of the file + * @param initialPath The {@link Path} of the file */ - public StoreFileInfo(final Configuration conf, final FileSystem fs, final Path path) + public StoreFileInfo(final Configuration conf, final FileSystem fs, final Path initialPath) throws IOException { - this(conf, fs, fs.getFileStatus(path)); - } + assert fs != null; + assert initialPath != null; + assert conf != null; - /** - * Create a Store File Info - * @param conf the {@link Configuration} to use - * @param fs The current file system to use. 
- * @param fileStatus The {@link FileStatus} of the file - */ - public StoreFileInfo(final Configuration conf, final FileSystem fs, final FileStatus fileStatus) - throws IOException { + this.fs = fs; this.conf = conf; - this.fileStatus = fileStatus; - Path p = fileStatus.getPath(); + this.initialPath = initialPath; + Path p = initialPath; if (HFileLink.isHFileLink(p)) { // HFileLink this.reference = null; - this.link = new HFileLink(conf, p); + this.link = HFileLink.buildFromHFileLinkPattern(conf, p); if (LOG.isTraceEnabled()) LOG.trace(p + " is a link"); } else if (isReference(p)) { this.reference = Reference.read(fs, p); Path referencePath = getReferredToFile(p); if (HFileLink.isHFileLink(referencePath)) { // HFileLink Reference - this.link = new HFileLink(conf, referencePath); + this.link = HFileLink.buildFromHFileLinkPattern(conf, referencePath); } else { // Reference this.link = null; } if (LOG.isTraceEnabled()) LOG.trace(p + " is a " + reference.getFileRegion() + - " reference to " + referencePath); + " reference to " + referencePath); } else if (isHFile(p)) { // HFile this.reference = null; @@ -133,6 +129,17 @@ public class StoreFileInfo implements Comparable { } /** + * Create a Store File Info + * @param conf the {@link Configuration} to use + * @param fs The current file system to use. + * @param fileStatus The {@link FileStatus} of the file + */ + public StoreFileInfo(final Configuration conf, final FileSystem fs, final FileStatus fileStatus) + throws IOException { + this(conf, fs, fileStatus.getPath()); + } + + /** * Create a Store File Info from an HFileLink * @param conf the {@link Configuration} to use * @param fs The current file system to use. @@ -141,14 +148,34 @@ public class StoreFileInfo implements Comparable { public StoreFileInfo(final Configuration conf, final FileSystem fs, final FileStatus fileStatus, final HFileLink link) throws IOException { + this.fs = fs; this.conf = conf; - this.fileStatus = fileStatus; + // initialPath can be null only if we get a link. + this.initialPath = (fileStatus == null) ? null : fileStatus.getPath(); // HFileLink this.reference = null; this.link = link; } /** + * Create a Store File Info from an HFileLink + * @param conf + * @param fs + * @param fileStatus + * @param reference + * @throws IOException + */ + public StoreFileInfo(final Configuration conf, final FileSystem fs, final FileStatus fileStatus, + final Reference reference) + throws IOException { + this.fs = fs; + this.conf = conf; + this.initialPath = fileStatus.getPath(); + this.reference = reference; + this.link = null; + } + + /** * Sets the region coprocessor env. 
* @param coprocessorHost */ @@ -206,7 +233,7 @@ public class StoreFileInfo implements Comparable { status = fs.getFileStatus(referencePath); } else { in = new FSDataInputStreamWrapper(fs, this.getPath()); - status = fileStatus; + status = fs.getFileStatus(initialPath); } long length = status.getLen(); hdfsBlocksDistribution = computeHDFSBlocksDistribution(fs); @@ -221,7 +248,7 @@ public class StoreFileInfo implements Comparable { reader = new HalfStoreFileReader(fs, this.getPath(), in, length, cacheConf, reference, conf); } else { - reader = new StoreFile.Reader(fs, this.getPath(), in, length, cacheConf, conf); + reader = new StoreFile.Reader(fs, status.getPath(), in, length, cacheConf, conf); } } if (this.coprocessorHost != null) { @@ -237,7 +264,7 @@ public class StoreFileInfo implements Comparable { public HDFSBlocksDistribution computeHDFSBlocksDistribution(final FileSystem fs) throws IOException { - // guard agains the case where we get the FileStatus from link, but by the time we + // guard against the case where we get the FileStatus from link, but by the time we // call compute the file is moved again if (this.link != null) { FileNotFoundException exToThrow = null; @@ -304,7 +331,7 @@ public class StoreFileInfo implements Comparable { } throw exToThrow; } else { - status = this.fileStatus; + status = fs.getFileStatus(initialPath); } } return status; @@ -312,17 +339,17 @@ public class StoreFileInfo implements Comparable { /** @return The {@link Path} of the file */ public Path getPath() { - return this.fileStatus.getPath(); + return initialPath; } /** @return The {@link FileStatus} of the file */ - public FileStatus getFileStatus() { - return this.fileStatus; + public FileStatus getFileStatus() throws IOException { + return getReferencedFileStatus(fs); } /** @return Get the modification time of the file. */ - public long getModificationTime() { - return this.fileStatus.getModificationTime(); + public long getModificationTime() throws IOException { + return getFileStatus().getModificationTime(); } @Override @@ -458,24 +485,36 @@ public class StoreFileInfo implements Comparable { @Override public boolean equals(Object that) { - if (that == null) { - return false; - } + if (this == that) return true; + if (that == null) return false; - if (that instanceof StoreFileInfo) { - return this.compareTo((StoreFileInfo)that) == 0; - } + if (!(that instanceof StoreFileInfo)) return false; + + StoreFileInfo o = (StoreFileInfo)that; + if (initialPath != null && o.initialPath == null) return false; + if (initialPath == null && o.initialPath != null) return false; + if (initialPath != o.initialPath && initialPath != null + && !initialPath.equals(o.initialPath)) return false; - return false; + if (reference != null && o.reference == null) return false; + if (reference == null && o.reference != null) return false; + if (reference != o.reference && reference != null + && !reference.equals(o.reference)) return false; + + if (link != null && o.link == null) return false; + if (link == null && o.link != null) return false; + if (link != o.link && link != null && !link.equals(o.link)) return false; + + return true; }; - @Override - public int compareTo(StoreFileInfo o) { - return this.fileStatus.compareTo(o.fileStatus); - } @Override public int hashCode() { - return this.fileStatus.hashCode(); + int hash = 17; + hash = hash * 31 + ((reference == null) ? 0 : reference.hashCode()); + hash = hash * 31 + ((initialPath == null) ? 0 : initialPath.hashCode()); + hash = hash * 31 + ((link == null) ? 
0 : link.hashCode()); + return hash; } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseMetaHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseMetaHandler.java index dba9240..70e5283 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseMetaHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseMetaHandler.java @@ -23,10 +23,9 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.regionserver.RegionServerServices; -import org.apache.hadoop.hbase.coordination.CloseRegionCoordination; /** - * Handles closing of the root region on a region server. + * Handles closing of the meta region on a region server. */ @InterfaceAudience.Private public class CloseMetaHandler extends CloseRegionHandler { @@ -35,9 +34,7 @@ public class CloseMetaHandler extends CloseRegionHandler { public CloseMetaHandler(final Server server, final RegionServerServices rsServices, final HRegionInfo regionInfo, - final boolean abort, CloseRegionCoordination closeRegionCoordination, - CloseRegionCoordination.CloseRegionDetails crd) { - super(server, rsServices, regionInfo, abort, closeRegionCoordination, - crd, EventType.M_RS_CLOSE_META); + final boolean abort) { + super(server, rsServices, regionInfo, abort, EventType.M_RS_CLOSE_META, null); } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java index 9e7786f..dbc45e7 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java @@ -26,13 +26,11 @@ import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.coordination.CloseRegionCoordination; import org.apache.hadoop.hbase.executor.EventHandler; import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCode; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.RegionServerServices; -import org.apache.hadoop.hbase.util.ConfigUtil; /** * Handles closing of a region on a region server. @@ -41,7 +39,7 @@ import org.apache.hadoop.hbase.util.ConfigUtil; public class CloseRegionHandler extends EventHandler { // NOTE on priorities shutting down. There are none for close. There are some // for open. I think that is right. On shutdown, we want the meta to close - // before root and both to close after the user regions have closed. What + // after the user regions have closed. What // about the case where master tells us to shutdown a catalog region and we // have a running queue of user regions to close? private static final Log LOG = LogFactory.getLog(CloseRegionHandler.class); @@ -53,9 +51,6 @@ public class CloseRegionHandler extends EventHandler { // when we are aborting. 
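A note on the StoreFileInfo change a few hunks back: identity now keys on the triple (initialPath, reference, link) rather than on the FileStatus, which is also why the compareTo() that delegated to the FileStatus was dropped. The hand-rolled null checks are equivalent to the java.util.Objects form below, shown only as a compact restatement of the same contract assuming a Java 7+ runtime (hash values differ, the equals/hashCode contract does not):

    // Compact restatement of the equals()/hashCode() added to StoreFileInfo above; sketch only.
    @Override
    public boolean equals(Object that) {
      if (this == that) return true;
      if (!(that instanceof StoreFileInfo)) return false;
      StoreFileInfo o = (StoreFileInfo) that;
      return java.util.Objects.equals(initialPath, o.initialPath)
          && java.util.Objects.equals(reference, o.reference)
          && java.util.Objects.equals(link, o.link);
    }

    @Override
    public int hashCode() {
      return java.util.Objects.hash(reference, initialPath, link);
    }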
private final boolean abort; private ServerName destination; - private CloseRegionCoordination closeRegionCoordination; - private CloseRegionCoordination.CloseRegionDetails closeRegionDetails; - private final boolean useZKForAssignment; /** * This method used internally by the RegionServer to close out regions. @@ -63,49 +58,25 @@ public class CloseRegionHandler extends EventHandler { * @param rsServices * @param regionInfo * @param abort If the regionserver is aborting. - * @param closeRegionCoordination consensus for closing regions - * @param crd object carrying details about region close task. + * @param destination */ public CloseRegionHandler(final Server server, final RegionServerServices rsServices, final HRegionInfo regionInfo, final boolean abort, - CloseRegionCoordination closeRegionCoordination, - CloseRegionCoordination.CloseRegionDetails crd) { - this(server, rsServices, regionInfo, abort, closeRegionCoordination, crd, - EventType.M_RS_CLOSE_REGION, null); - } - - public CloseRegionHandler(final Server server, - final RegionServerServices rsServices, - final HRegionInfo regionInfo, final boolean abort, - CloseRegionCoordination closeRegionCoordination, - CloseRegionCoordination.CloseRegionDetails crd, ServerName destination) { - this(server, rsServices, regionInfo, abort, closeRegionCoordination, crd, + this(server, rsServices, regionInfo, abort, EventType.M_RS_CLOSE_REGION, destination); } - public CloseRegionHandler(final Server server, + protected CloseRegionHandler(final Server server, final RegionServerServices rsServices, HRegionInfo regionInfo, - boolean abort, CloseRegionCoordination closeRegionCoordination, - CloseRegionCoordination.CloseRegionDetails crd, EventType eventType) { - this(server, rsServices, regionInfo, abort, closeRegionCoordination, crd, eventType, null); - } - - protected CloseRegionHandler(final Server server, - final RegionServerServices rsServices, HRegionInfo regionInfo, - boolean abort, CloseRegionCoordination closeRegionCoordination, - CloseRegionCoordination.CloseRegionDetails crd, - EventType eventType, ServerName destination) { + boolean abort, EventType eventType, ServerName destination) { super(server, eventType); this.server = server; this.rsServices = rsServices; this.regionInfo = regionInfo; this.abort = abort; this.destination = destination; - this.closeRegionCoordination = closeRegionCoordination; - this.closeRegionDetails = crd; - useZKForAssignment = ConfigUtil.useZKForAssignment(server.getConfiguration()); } public HRegionInfo getRegionInfo() { @@ -128,16 +99,8 @@ public class CloseRegionHandler extends EventHandler { // Close the region try { - if (useZKForAssignment && closeRegionCoordination.checkClosingState( - regionInfo, closeRegionDetails)) { - return; - } - - // TODO: If we need to keep updating CLOSING stamp to prevent against - // a timeout if this is long-running, need to spin up a thread? if (region.close(abort) == null) { - // This region got closed. Most likely due to a split. So instead - // of doing the setClosedState() below, let's just ignore cont + // This region got closed. Most likely due to a split. // The split message will clean up the master state. 
LOG.warn("Can't close region: was already closed during close(): " + regionInfo.getRegionNameAsString()); @@ -153,18 +116,13 @@ public class CloseRegionHandler extends EventHandler { } this.rsServices.removeFromOnlineRegions(region, destination); - if (!useZKForAssignment) { - rsServices.reportRegionStateTransition(TransitionCode.CLOSED, regionInfo); - } else { - closeRegionCoordination.setClosedState(region, this.server.getServerName(), - closeRegionDetails); - } + rsServices.reportRegionStateTransition(TransitionCode.CLOSED, regionInfo); // Done! Region is closed on this RS LOG.debug("Closed " + region.getRegionNameAsString()); } finally { this.rsServices.getRegionsInTransitionInRS(). - remove(this.regionInfo.getEncodedNameAsBytes()); + remove(this.regionInfo.getEncodedNameAsBytes(), Boolean.FALSE); } } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenMetaHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenMetaHandler.java index 57b740d..3b96a9e 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenMetaHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenMetaHandler.java @@ -24,7 +24,6 @@ import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.regionserver.RegionServerServices; -import org.apache.hadoop.hbase.coordination.OpenRegionCoordination; /** * Handles opening of a meta region on a region server. @@ -35,9 +34,7 @@ import org.apache.hadoop.hbase.coordination.OpenRegionCoordination; public class OpenMetaHandler extends OpenRegionHandler { public OpenMetaHandler(final Server server, final RegionServerServices rsServices, HRegionInfo regionInfo, - final HTableDescriptor htd, OpenRegionCoordination coordination, - OpenRegionCoordination.OpenRegionDetails ord) { - super(server, rsServices, regionInfo, htd, EventType.M_RS_OPEN_META, - coordination, ord); + final HTableDescriptor htd) { + super(server, rsServices, regionInfo, htd, EventType.M_RS_OPEN_META); } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java index ea4c205..ecf0665 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java @@ -27,7 +27,6 @@ import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.Server; -import org.apache.hadoop.hbase.coordination.OpenRegionCoordination; import org.apache.hadoop.hbase.executor.EventHandler; import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCode; @@ -35,7 +34,6 @@ import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.RegionServerAccounting; import org.apache.hadoop.hbase.regionserver.RegionServerServices; import org.apache.hadoop.hbase.util.CancelableProgressable; -import org.apache.hadoop.hbase.util.ConfigUtil; /** * Handles opening of a region on a region server. *

          @@ -50,30 +48,19 @@ public class OpenRegionHandler extends EventHandler { private final HRegionInfo regionInfo; private final HTableDescriptor htd; - private OpenRegionCoordination coordination; - private OpenRegionCoordination.OpenRegionDetails ord; - - private final boolean useZKForAssignment; - public OpenRegionHandler(final Server server, final RegionServerServices rsServices, HRegionInfo regionInfo, - HTableDescriptor htd, OpenRegionCoordination coordination, - OpenRegionCoordination.OpenRegionDetails ord) { - this(server, rsServices, regionInfo, htd, EventType.M_RS_OPEN_REGION, - coordination, ord); + HTableDescriptor htd) { + this(server, rsServices, regionInfo, htd, EventType.M_RS_OPEN_REGION); } protected OpenRegionHandler(final Server server, final RegionServerServices rsServices, final HRegionInfo regionInfo, - final HTableDescriptor htd, EventType eventType, - OpenRegionCoordination coordination, OpenRegionCoordination.OpenRegionDetails ord) { + final HTableDescriptor htd, EventType eventType) { super(server, eventType); this.rsServices = rsServices; this.regionInfo = regionInfo; this.htd = htd; - this.coordination = coordination; - this.ord = ord; - useZKForAssignment = ConfigUtil.useZKForAssignment(server.getConfiguration()); } public HRegionInfo getRegionInfo() { @@ -83,7 +70,6 @@ public class OpenRegionHandler extends EventHandler { @Override public void process() throws IOException { boolean openSuccessful = false; - boolean transitionedToOpening = false; final String regionName = regionInfo.getRegionNameAsString(); HRegion region = null; @@ -93,10 +79,9 @@ public class OpenRegionHandler extends EventHandler { } final String encodedName = regionInfo.getEncodedName(); - // 3 different difficult situations can occur + // 2 different difficult situations can occur // 1) The opening was cancelled. This is an expected situation - // 2) The region was hijacked, we no longer have the znode - // 3) The region is now marked as online while we're suppose to open. This would be a bug. + // 2) The region is now marked as online while we're suppose to open. This would be a bug. // Check that this region is not already online if (this.rsServices.getFromOnlineRegions(encodedName) != null) { @@ -106,21 +91,13 @@ public class OpenRegionHandler extends EventHandler { return; } - // Check that we're still supposed to open the region and transition. + // Check that we're still supposed to open the region. // If fails, just return. Someone stole the region from under us. - // Calling transitionFromOfflineToOpening initializes this.version. if (!isRegionStillOpening()){ LOG.error("Region " + encodedName + " opening cancelled"); return; } - if (useZKForAssignment - && !coordination.transitionFromOfflineToOpening(regionInfo, ord)) { - LOG.warn("Region was hijacked? Opening cancelled for encodedName=" + encodedName); - // This is a desperate attempt: the znode is unlikely to be ours. But we can't do more. - return; - } - transitionedToOpening = true; // Open region. After a successful open, failures in subsequent // processing needs to do a close as part of cleanup. 
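// Illustrative sketch only (not part of this patch): both the close and open handlers in
// this change track in-flight regions in a ConcurrentMap whose Boolean value marks the
// direction of the transition (assumed here: TRUE = opening, FALSE = closing). The
// two-argument ConcurrentMap.remove(key, value) -- used in the close handler's finally
// block above and suggested by the commented-out remove in process()'s finally block
// further down -- only clears the entry when it still holds the expected marker, so a
// concurrent transition in the other direction is not clobbered by mistake. String keys
// below are a simplification of the encoded region name bytes.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class RegionsInTransitionSketch {
  public static void main(String[] args) {
    ConcurrentMap<String, Boolean> regionsInTransition = new ConcurrentHashMap<>();
    regionsInTransition.put("region-1", Boolean.TRUE);            // an open is in flight

    // Conditional removal: succeeds only if the entry still maps to TRUE (our open marker).
    System.out.println(regionsInTransition.remove("region-1", Boolean.TRUE));  // true

    regionsInTransition.put("region-1", Boolean.FALSE);            // now a close is in flight
    // The "opening" marker no longer matches, so nothing is removed.
    System.out.println(regionsInTransition.remove("region-1", Boolean.TRUE));  // false
  }
}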
region = openRegion(); @@ -128,37 +105,15 @@ public class OpenRegionHandler extends EventHandler { return; } - boolean failed = true; - if (isRegionStillOpening() && (!useZKForAssignment || - coordination.tickleOpening(ord, regionInfo, rsServices, "post_region_open"))) { - if (updateMeta(region)) { - failed = false; - } - } - if (failed || this.server.isStopped() || + if (!updateMeta(region) || this.server.isStopped() || this.rsServices.isStopping()) { return; } - if (!isRegionStillOpening() || - (useZKForAssignment && !coordination.transitionToOpened(region, ord))) { - // If we fail to transition to opened, it's because of one of two cases: - // (a) we lost our ZK lease - // OR (b) someone else opened the region before us - // OR (c) someone cancelled the open - // In all cases, we try to transition to failed_open to be safe. + if (!isRegionStillOpening()) { return; } - // We have a znode in the opened state now. We can't really delete it as the master job. - // Transitioning to failed open would create a race condition if the master has already - // acted the transition to opened. - // Cancelling the open is dangerous, because we would have a state where the master thinks - // the region is opened while the region is actually closed. It is a dangerous state - // to be in. For this reason, from now on, we're not going back. There is a message in the - // finally close to let the admin knows where we stand. - - // Successful region open, and add it to OnlineRegions this.rsServices.addToOnlineRegions(region); openSuccessful = true; @@ -166,12 +121,10 @@ public class OpenRegionHandler extends EventHandler { // Done! Successful region open LOG.debug("Opened " + regionName + " on " + this.server.getServerName()); - - } finally { // Do all clean up here if (!openSuccessful) { - doCleanUpOnFailedOpen(region, transitionedToOpening, ord); + doCleanUpOnFailedOpen(region); } final Boolean current = this.rsServices.getRegionsInTransitionInRS(). remove(this.regionInfo.getEncodedNameAsBytes()); @@ -180,9 +133,7 @@ public class OpenRegionHandler extends EventHandler { // A better solution would be to not have any race condition. // this.rsServices.getRegionsInTransitionInRS().remove( // this.regionInfo.getEncodedNameAsBytes(), Boolean.TRUE); - // would help, but we would still have a consistency issue to manage with - // 1) this.rsServices.addToOnlineRegions(region); - // 2) the ZK state. + // would help. if (openSuccessful) { if (current == null) { // Should NEVER happen, but let's be paranoid. LOG.error("Bad state: we've just opened a region that was NOT in transition. 
Region=" @@ -198,29 +149,14 @@ public class OpenRegionHandler extends EventHandler { } } - private void doCleanUpOnFailedOpen(HRegion region, boolean transitionedToOpening, - OpenRegionCoordination.OpenRegionDetails ord) + private void doCleanUpOnFailedOpen(HRegion region) throws IOException { - if (transitionedToOpening) { - try { - if (region != null) { - cleanupFailedOpen(region); - } - } finally { - if (!useZKForAssignment) { - rsServices.reportRegionStateTransition(TransitionCode.FAILED_OPEN, regionInfo); - } else { - // Even if cleanupFailed open fails we need to do this transition - // See HBASE-7698 - coordination.tryTransitionFromOpeningToFailedOpen(regionInfo, ord); - } + try { + if (region != null) { + cleanupFailedOpen(region); } - } else if (!useZKForAssignment) { + } finally { rsServices.reportRegionStateTransition(TransitionCode.FAILED_OPEN, regionInfo); - } else { - // If still transition to OPENING is not done, we need to transition znode - // to FAILED_OPEN - coordination.tryTransitionFromOfflineToFailedOpen(this.rsServices, regionInfo, ord); } } @@ -244,22 +180,8 @@ public class OpenRegionHandler extends EventHandler { // Post open deploy task: // meta => update meta location in ZK // other region => update meta - // It could fail if ZK/meta is not available and - // the update runs out of retries. - long now = System.currentTimeMillis(); - long lastUpdate = now; - boolean tickleOpening = true; while (!signaller.get() && t.isAlive() && !this.server.isStopped() && !this.rsServices.isStopping() && isRegionStillOpening()) { - long elapsed = now - lastUpdate; - if (elapsed > 120000) { // 2 minutes, no need to tickleOpening too often - // Only tickle OPENING if postOpenDeployTasks is taking some time. - lastUpdate = now; - if (useZKForAssignment) { - tickleOpening = coordination.tickleOpening( - ord, regionInfo, rsServices, "post_open_deploy"); - } - } synchronized (signaller) { try { // Wait for 10 seconds, so that server shutdown @@ -269,7 +191,6 @@ public class OpenRegionHandler extends EventHandler { // Go to the loop check. } } - now = System.currentTimeMillis(); } // Is thread still alive? We may have left above loop because server is // stopping or we timed out the edit. Is so, interrupt it. @@ -289,9 +210,8 @@ public class OpenRegionHandler extends EventHandler { } // Was there an exception opening the region? This should trigger on - // InterruptedException too. If so, we failed. Even if tickle opening fails - // then it is a failure. - return ((!Thread.interrupted() && t.getException() == null) && tickleOpening); + // InterruptedException too. If so, we failed. + return (!Thread.interrupted() && t.getException() == null); } /** @@ -359,11 +279,6 @@ public class OpenRegionHandler extends EventHandler { this.rsServices, new CancelableProgressable() { public boolean progress() { - if (useZKForAssignment) { - // if tickle failed, we need to cancel opening region. 
- return coordination.tickleOpening(ord, regionInfo, - rsServices, "open_region_progress"); - } if (!isRegionStillOpening()) { LOG.warn("Open region aborted since it isn't opening any more"); return false; @@ -392,14 +307,8 @@ public class OpenRegionHandler extends EventHandler { void cleanupFailedOpen(final HRegion region) throws IOException { if (region != null) { - byte[] encodedName = regionInfo.getEncodedNameAsBytes(); - try { - rsServices.getRegionsInTransitionInRS().put(encodedName,Boolean.FALSE); - this.rsServices.removeFromOnlineRegions(region, null); - region.close(); - } finally { - rsServices.getRegionsInTransitionInRS().remove(encodedName); - } + this.rsServices.removeFromOnlineRegions(region, null); + region.close(); } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java index 43072ce..1fad93d 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java @@ -17,6 +17,8 @@ */ package org.apache.hadoop.hbase.regionserver.wal; +import static org.apache.hadoop.hbase.wal.DefaultWALProvider.WAL_FILE_NAME_DELIMITER; + import java.io.FileNotFoundException; import java.io.IOException; import java.io.InterruptedIOException; @@ -31,10 +33,12 @@ import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.NavigableMap; +import java.util.Set; import java.util.TreeMap; import java.util.UUID; import java.util.concurrent.BlockingQueue; import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; import java.util.concurrent.ConcurrentSkipListMap; import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.CountDownLatch; @@ -50,7 +54,6 @@ import java.util.concurrent.locks.ReentrantLock; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.FileStatus; @@ -63,17 +66,8 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.TableName; -import static org.apache.hadoop.hbase.wal.DefaultWALProvider.WAL_FILE_NAME_DELIMITER; -import org.apache.hadoop.hbase.wal.DefaultWALProvider; -import org.apache.hadoop.hbase.wal.WAL; -import org.apache.hadoop.hbase.wal.WAL.Entry; -import org.apache.hadoop.hbase.wal.WALFactory; -import org.apache.hadoop.hbase.wal.WALKey; -import org.apache.hadoop.hbase.wal.WALPrettyPrinter; -import org.apache.hadoop.hbase.wal.WALProvider.Writer; -import org.apache.hadoop.hbase.wal.WALSplitter; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.ClassSize; import org.apache.hadoop.hbase.util.DrainBarrier; @@ -81,6 +75,13 @@ import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.HasThread; import org.apache.hadoop.hbase.util.Threads; +import org.apache.hadoop.hbase.wal.DefaultWALProvider; +import org.apache.hadoop.hbase.wal.WAL; +import org.apache.hadoop.hbase.wal.WALFactory; +import 
org.apache.hadoop.hbase.wal.WALKey; +import org.apache.hadoop.hbase.wal.WALPrettyPrinter; +import org.apache.hadoop.hbase.wal.WALProvider.Writer; +import org.apache.hadoop.hbase.wal.WALSplitter; import org.apache.hadoop.hdfs.protocol.DatanodeInfo; import org.apache.hadoop.util.StringUtils; import org.htrace.NullScope; @@ -89,6 +90,7 @@ import org.htrace.Trace; import org.htrace.TraceScope; import com.google.common.annotations.VisibleForTesting; +import com.google.common.collect.Maps; import com.lmax.disruptor.BlockingWaitStrategy; import com.lmax.disruptor.EventHandler; import com.lmax.disruptor.ExceptionHandler; @@ -201,7 +203,8 @@ public class FSHLog implements WAL { /** * file system instance */ - private final FileSystem fs; + protected final FileSystem fs; + /** * WAL directory, where all WAL files would be placed. */ @@ -236,7 +239,7 @@ public class FSHLog implements WAL { /** * conf object */ - private final Configuration conf; + protected final Configuration conf; /** Listeners that are called on WAL events. */ private final List listeners = new CopyOnWriteArrayList(); @@ -334,33 +337,35 @@ public class FSHLog implements WAL { // sequence id numbers are by region and unrelated to the ring buffer sequence number accounting // done above in failedSequence, highest sequence, etc. /** - * This lock ties all operations on oldestFlushingRegionSequenceIds and - * oldestFlushedRegionSequenceIds Maps with the exception of append's putIfAbsent call into - * oldestUnflushedSeqNums. We use these Maps to find out the low bound regions sequence id, or - * to find regions with old sequence ids to force flush; we are interested in old stuff not the - * new additions (TODO: IS THIS SAFE? CHECK!). + * This lock ties all operations on lowestFlushingStoreSequenceIds and + * oldestUnflushedStoreSequenceIds Maps with the exception of append's putIfAbsent call into + * oldestUnflushedStoreSequenceIds. We use these Maps to find out the low bound regions + * sequence id, or to find regions with old sequence ids to force flush; we are interested in + * old stuff not the new additions (TODO: IS THIS SAFE? CHECK!). */ private final Object regionSequenceIdLock = new Object(); /** - * Map of encoded region names to their OLDEST -- i.e. their first, the longest-lived -- - * sequence id in memstore. Note that this sequence id is the region sequence id. This is not - * related to the id we use above for {@link #highestSyncedSequence} and - * {@link #highestUnsyncedSequence} which is the sequence from the disruptor ring buffer. + * Map of encoded region names and family names to their OLDEST -- i.e. their first, + * the longest-lived -- sequence id in memstore. Note that this sequence id is the region + * sequence id. This is not related to the id we use above for {@link #highestSyncedSequence} + * and {@link #highestUnsyncedSequence} which is the sequence from the disruptor + * ring buffer. */ - private final ConcurrentSkipListMap oldestUnflushedRegionSequenceIds = - new ConcurrentSkipListMap(Bytes.BYTES_COMPARATOR); + private final ConcurrentMap> oldestUnflushedStoreSequenceIds + = new ConcurrentSkipListMap>( + Bytes.BYTES_COMPARATOR); /** - * Map of encoded region names to their lowest or OLDEST sequence/edit id in memstore currently - * being flushed out to hfiles. 
Entries are moved here from - * {@link #oldestUnflushedRegionSequenceIds} while the lock {@link #regionSequenceIdLock} is held + * Map of encoded region names and family names to their lowest or OLDEST sequence/edit id in + * memstore currently being flushed out to hfiles. Entries are moved here from + * {@link #oldestUnflushedStoreSequenceIds} while the lock {@link #regionSequenceIdLock} is held * (so movement between the Maps is atomic). This is not related to the id we use above for * {@link #highestSyncedSequence} and {@link #highestUnsyncedSequence} which is the sequence from * the disruptor ring buffer, an internal detail. */ - private final Map lowestFlushingRegionSequenceIds = - new TreeMap(Bytes.BYTES_COMPARATOR); + private final Map> lowestFlushingStoreSequenceIds = + new TreeMap>(Bytes.BYTES_COMPARATOR); /** * Map of region encoded names to the latest region sequence id. Updated on each append of @@ -735,6 +740,28 @@ public class FSHLog implements WAL { return DefaultWALProvider.createWriter(conf, fs, path, false); } + private long getLowestSeqId(Map seqIdMap) { + long result = HConstants.NO_SEQNUM; + for (Long seqNum: seqIdMap.values()) { + if (result == HConstants.NO_SEQNUM || seqNum.longValue() < result) { + result = seqNum.longValue(); + } + } + return result; + } + + private > Map copyMapWithLowestSeqId( + Map mapToCopy) { + Map copied = Maps.newHashMap(); + for (Map.Entry entry: mapToCopy.entrySet()) { + long lowestSeqId = getLowestSeqId(entry.getValue()); + if (lowestSeqId != HConstants.NO_SEQNUM) { + copied.put(entry.getKey(), lowestSeqId); + } + } + return copied; + } + /** * Archive old logs that could be archived: a log is eligible for archiving if all its WALEdits * have been flushed to hfiles. @@ -747,22 +774,23 @@ public class FSHLog implements WAL { * @throws IOException */ private void cleanOldLogs() throws IOException { - Map oldestFlushingSeqNumsLocal = null; - Map oldestUnflushedSeqNumsLocal = null; + Map lowestFlushingRegionSequenceIdsLocal = null; + Map oldestUnflushedRegionSequenceIdsLocal = null; List logsToArchive = new ArrayList(); // make a local copy so as to avoid locking when we iterate over these maps. synchronized (regionSequenceIdLock) { - oldestFlushingSeqNumsLocal = new HashMap(this.lowestFlushingRegionSequenceIds); - oldestUnflushedSeqNumsLocal = - new HashMap(this.oldestUnflushedRegionSequenceIds); + lowestFlushingRegionSequenceIdsLocal = + copyMapWithLowestSeqId(this.lowestFlushingStoreSequenceIds); + oldestUnflushedRegionSequenceIdsLocal = + copyMapWithLowestSeqId(this.oldestUnflushedStoreSequenceIds); } for (Map.Entry> e : byWalRegionSequenceIds.entrySet()) { // iterate over the log file. Path log = e.getKey(); Map sequenceNums = e.getValue(); // iterate over the map for this log file, and tell whether it should be archive or not. - if (areAllRegionsFlushed(sequenceNums, oldestFlushingSeqNumsLocal, - oldestUnflushedSeqNumsLocal)) { + if (areAllRegionsFlushed(sequenceNums, lowestFlushingRegionSequenceIdsLocal, + oldestUnflushedRegionSequenceIdsLocal)) { logsToArchive.add(log); LOG.debug("WAL file ready for archiving " + log); } @@ -816,10 +844,11 @@ public class FSHLog implements WAL { List regionsToFlush = null; // Keeping the old behavior of iterating unflushedSeqNums under oldestSeqNumsLock. 
synchronized (regionSequenceIdLock) { - for (Map.Entry e : regionsSequenceNums.entrySet()) { - Long unFlushedVal = this.oldestUnflushedRegionSequenceIds.get(e.getKey()); - if (unFlushedVal != null && unFlushedVal <= e.getValue()) { - if (regionsToFlush == null) regionsToFlush = new ArrayList(); + for (Map.Entry e: regionsSequenceNums.entrySet()) { + long unFlushedVal = getEarliestMemstoreSeqNum(e.getKey()); + if (unFlushedVal != HConstants.NO_SEQNUM && unFlushedVal <= e.getValue()) { + if (regionsToFlush == null) + regionsToFlush = new ArrayList(); regionsToFlush.add(e.getKey()); } } @@ -1585,36 +1614,53 @@ public class FSHLog implements WAL { // +1 for current use log return getNumRolledLogFiles() + 1; } - + // public only until class moves to o.a.h.h.wal /** @return the size of log files in use */ public long getLogFileSize() { return this.totalLogSize.get(); } - + @Override - public boolean startCacheFlush(final byte[] encodedRegionName) { - Long oldRegionSeqNum = null; + public boolean startCacheFlush(final byte[] encodedRegionName, + Set flushedFamilyNames) { + Map oldStoreSeqNum = Maps.newTreeMap(Bytes.BYTES_COMPARATOR); if (!closeBarrier.beginOp()) { LOG.info("Flush will not be started for " + Bytes.toString(encodedRegionName) + " - because the server is closing."); return false; } synchronized (regionSequenceIdLock) { - oldRegionSeqNum = this.oldestUnflushedRegionSequenceIds.remove(encodedRegionName); - if (oldRegionSeqNum != null) { - Long oldValue = - this.lowestFlushingRegionSequenceIds.put(encodedRegionName, oldRegionSeqNum); - assert oldValue == - null : "Flushing map not cleaned up for " + Bytes.toString(encodedRegionName); + ConcurrentMap oldestUnflushedStoreSequenceIdsOfRegion = + oldestUnflushedStoreSequenceIds.get(encodedRegionName); + if (oldestUnflushedStoreSequenceIdsOfRegion != null) { + for (byte[] familyName: flushedFamilyNames) { + Long seqId = oldestUnflushedStoreSequenceIdsOfRegion.remove(familyName); + if (seqId != null) { + oldStoreSeqNum.put(familyName, seqId); + } + } + if (!oldStoreSeqNum.isEmpty()) { + Map oldValue = this.lowestFlushingStoreSequenceIds.put( + encodedRegionName, oldStoreSeqNum); + assert oldValue == null: "Flushing map not cleaned up for " + + Bytes.toString(encodedRegionName); + } + if (oldestUnflushedStoreSequenceIdsOfRegion.isEmpty()) { + // Remove it otherwise it will be in oldestUnflushedStoreSequenceIds for ever + // even if the region is already moved to other server. + // Do not worry about data racing, we held write lock of region when calling + // startCacheFlush, so no one can add value to the map we removed. + oldestUnflushedStoreSequenceIds.remove(encodedRegionName); + } } } - if (oldRegionSeqNum == null) { - // TODO: if we have no oldRegionSeqNum, and WAL is not disabled, presumably either - // the region is already flushing (which would make this call invalid), or there - // were no appends after last flush, so why are we starting flush? Maybe we should - // assert not null, and switch to "long" everywhere. Less rigorous, but safer, - // alternative is telling the caller to stop. For now preserve old logic. + if (oldStoreSeqNum.isEmpty()) { + // TODO: if we have no oldStoreSeqNum, and WAL is not disabled, presumably either + // the region is already flushing (which would make this call invalid), or there + // were no appends after last flush, so why are we starting flush? Maybe we should + // assert not empty. Less rigorous, but safer, alternative is telling the caller to stop. + // For now preserve old logic. 
LOG.warn("Couldn't find oldest seqNum for the region we are about to flush: [" + Bytes.toString(encodedRegionName) + "]"); } @@ -1624,30 +1670,59 @@ public class FSHLog implements WAL { @Override public void completeCacheFlush(final byte [] encodedRegionName) { synchronized (regionSequenceIdLock) { - this.lowestFlushingRegionSequenceIds.remove(encodedRegionName); + this.lowestFlushingStoreSequenceIds.remove(encodedRegionName); } closeBarrier.endOp(); } + private ConcurrentMap getOrCreateOldestUnflushedStoreSequenceIdsOfRegion( + byte[] encodedRegionName) { + ConcurrentMap oldestUnflushedStoreSequenceIdsOfRegion = + oldestUnflushedStoreSequenceIds.get(encodedRegionName); + if (oldestUnflushedStoreSequenceIdsOfRegion != null) { + return oldestUnflushedStoreSequenceIdsOfRegion; + } + oldestUnflushedStoreSequenceIdsOfRegion = + new ConcurrentSkipListMap(Bytes.BYTES_COMPARATOR); + ConcurrentMap alreadyPut = + oldestUnflushedStoreSequenceIds.putIfAbsent(encodedRegionName, + oldestUnflushedStoreSequenceIdsOfRegion); + return alreadyPut == null ? oldestUnflushedStoreSequenceIdsOfRegion : alreadyPut; + } + @Override public void abortCacheFlush(byte[] encodedRegionName) { - Long currentSeqNum = null, seqNumBeforeFlushStarts = null; + Map storeSeqNumsBeforeFlushStarts; + Map currentStoreSeqNums = new TreeMap(Bytes.BYTES_COMPARATOR); synchronized (regionSequenceIdLock) { - seqNumBeforeFlushStarts = this.lowestFlushingRegionSequenceIds.remove(encodedRegionName); - if (seqNumBeforeFlushStarts != null) { - currentSeqNum = - this.oldestUnflushedRegionSequenceIds.put(encodedRegionName, seqNumBeforeFlushStarts); + storeSeqNumsBeforeFlushStarts = this.lowestFlushingStoreSequenceIds.remove( + encodedRegionName); + if (storeSeqNumsBeforeFlushStarts != null) { + ConcurrentMap oldestUnflushedStoreSequenceIdsOfRegion = + getOrCreateOldestUnflushedStoreSequenceIdsOfRegion(encodedRegionName); + for (Map.Entry familyNameAndSeqId: storeSeqNumsBeforeFlushStarts + .entrySet()) { + currentStoreSeqNums.put(familyNameAndSeqId.getKey(), + oldestUnflushedStoreSequenceIdsOfRegion.put(familyNameAndSeqId.getKey(), + familyNameAndSeqId.getValue())); + } } } closeBarrier.endOp(); - if ((currentSeqNum != null) - && (currentSeqNum.longValue() <= seqNumBeforeFlushStarts.longValue())) { - String errorStr = "Region " + Bytes.toString(encodedRegionName) + - "acquired edits out of order current memstore seq=" + currentSeqNum - + ", previous oldest unflushed id=" + seqNumBeforeFlushStarts; - LOG.error(errorStr); - assert false : errorStr; - Runtime.getRuntime().halt(1); + if (storeSeqNumsBeforeFlushStarts != null) { + for (Map.Entry familyNameAndSeqId : storeSeqNumsBeforeFlushStarts.entrySet()) { + Long currentSeqNum = currentStoreSeqNums.get(familyNameAndSeqId.getKey()); + if (currentSeqNum != null + && currentSeqNum.longValue() <= familyNameAndSeqId.getValue().longValue()) { + String errorStr = + "Region " + Bytes.toString(encodedRegionName) + " family " + + Bytes.toString(familyNameAndSeqId.getKey()) + + " acquired edits out of order current memstore seq=" + currentSeqNum + + ", previous oldest unflushed id=" + familyNameAndSeqId.getValue(); + LOG.error(errorStr); + Runtime.getRuntime().halt(1); + } + } } } @@ -1678,8 +1753,23 @@ public class FSHLog implements WAL { @Override public long getEarliestMemstoreSeqNum(byte[] encodedRegionName) { - Long result = oldestUnflushedRegionSequenceIds.get(encodedRegionName); - return result == null ? 
HConstants.NO_SEQNUM : result.longValue(); + ConcurrentMap oldestUnflushedStoreSequenceIdsOfRegion = + this.oldestUnflushedStoreSequenceIds.get(encodedRegionName); + return oldestUnflushedStoreSequenceIdsOfRegion != null ? + getLowestSeqId(oldestUnflushedStoreSequenceIdsOfRegion) : HConstants.NO_SEQNUM; + } + + @Override + public long getEarliestMemstoreSeqNum(byte[] encodedRegionName, + byte[] familyName) { + ConcurrentMap oldestUnflushedStoreSequenceIdsOfRegion = + this.oldestUnflushedStoreSequenceIds.get(encodedRegionName); + if (oldestUnflushedStoreSequenceIdsOfRegion != null) { + Long result = oldestUnflushedStoreSequenceIdsOfRegion.get(familyName); + return result != null ? result.longValue() : HConstants.NO_SEQNUM; + } else { + return HConstants.NO_SEQNUM; + } } /** @@ -1915,6 +2005,15 @@ public class FSHLog implements WAL { } } + private void updateOldestUnflushedSequenceIds(byte[] encodedRegionName, + Set familyNameSet, Long lRegionSequenceId) { + ConcurrentMap oldestUnflushedStoreSequenceIdsOfRegion = + getOrCreateOldestUnflushedStoreSequenceIdsOfRegion(encodedRegionName); + for (byte[] familyName : familyNameSet) { + oldestUnflushedStoreSequenceIdsOfRegion.putIfAbsent(familyName, lRegionSequenceId); + } + } + /** * Append to the WAL. Does all CP and WAL listener calls. * @param entry @@ -1962,9 +2061,10 @@ public class FSHLog implements WAL { Long lRegionSequenceId = Long.valueOf(regionSequenceId); highestRegionSequenceIds.put(encodedRegionName, lRegionSequenceId); if (entry.isInMemstore()) { - oldestUnflushedRegionSequenceIds.putIfAbsent(encodedRegionName, lRegionSequenceId); + updateOldestUnflushedSequenceIds(encodedRegionName, + entry.getFamilyNames(), lRegionSequenceId); } - + coprocessorHost.postWALWrite(entry.getHRegionInfo(), entry.getKey(), entry.getEdit()); // Update metrics. 
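// Illustrative sketch only (not part of this patch): how the per-family map populated by
// updateOldestUnflushedSequenceIds() above collapses to the per-region lower bound that
// cleanOldLogs() and getEarliestMemstoreSeqNum() work with. It mirrors the getLowestSeqId()
// helper introduced earlier in this file. Family names and sequence ids are invented sample
// values; -1 stands in for HConstants.NO_SEQNUM.
import java.util.Map;
import java.util.TreeMap;

class OldestUnflushedSketch {
  static long lowestSeqId(Map<String, Long> oldestUnflushedByFamily) {
    long result = -1L;                                  // NO_SEQNUM when the map is empty
    for (long seq : oldestUnflushedByFamily.values()) {
      if (result == -1L || seq < result) {
        result = seq;
      }
    }
    return result;
  }

  public static void main(String[] args) {
    Map<String, Long> region = new TreeMap<>();
    region.put("cf1", 150L);   // cf1's oldest unflushed edit is at sequence id 150
    region.put("cf2", 40L);    // cf2 still holds much older unflushed edits
    System.out.println(lowestSeqId(region));  // 40: WALs with edits >= 40 cannot be archived
    region.remove("cf2");                     // flushing only cf2 clears its entry
    System.out.println(lowestSeqId(region));  // 150: the bound advances without flushing cf1
  }
}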
postAppend(entry, EnvironmentEdgeManager.currentTime() - start); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSWALEntry.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSWALEntry.java index d9942b3..147a13d 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSWALEntry.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSWALEntry.java @@ -19,13 +19,21 @@ package org.apache.hadoop.hbase.regionserver.wal; import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; import java.util.List; +import java.util.Set; import java.util.concurrent.atomic.AtomicLong; -import org.apache.hadoop.hbase.classification.InterfaceAudience; + import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.CollectionUtils; + +import com.google.common.collect.Sets; import org.apache.hadoop.hbase.wal.WAL.Entry; import org.apache.hadoop.hbase.wal.WALKey; @@ -96,7 +104,7 @@ class FSWALEntry extends Entry { */ long stampRegionSequenceId() throws IOException { long regionSequenceId = this.regionSequenceIdReference.incrementAndGet(); - if (!this.getEdit().isReplay() && memstoreCells != null && !memstoreCells.isEmpty()) { + if (!this.getEdit().isReplay() && !CollectionUtils.isEmpty(memstoreCells)) { for (Cell cell : this.memstoreCells) { CellUtil.setSequenceId(cell, regionSequenceId); } @@ -105,4 +113,21 @@ class FSWALEntry extends Entry { key.setLogSeqNum(regionSequenceId); return regionSequenceId; } + + /** + * @return the family names which are effected by this edit. 
+ */ + Set getFamilyNames() { + ArrayList cells = this.getEdit().getCells(); + if (CollectionUtils.isEmpty(cells)) { + return Collections.emptySet(); + } + Set familySet = Sets.newTreeSet(Bytes.BYTES_COMPARATOR); + for (Cell cell : cells) { + if (!CellUtil.matchingFamily(cell, WALEdit.METAFAMILY)) { + familySet.add(CellUtil.cloneFamily(cell)); + } + } + return familySet; + } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogPrettyPrinter.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogPrettyPrinter.java index 63eaa43..cf3b5c4 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogPrettyPrinter.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogPrettyPrinter.java @@ -17,33 +17,12 @@ */ package org.apache.hadoop.hbase.regionserver.wal; -import java.io.FileNotFoundException; import java.io.IOException; import java.io.PrintStream; -import java.util.ArrayList; -import java.util.Date; -import java.util.HashMap; -import java.util.List; -import java.util.Map; - -import org.apache.commons.cli.CommandLine; -import org.apache.commons.cli.CommandLineParser; -import org.apache.commons.cli.HelpFormatter; -import org.apache.commons.cli.Options; -import org.apache.commons.cli.ParseException; -import org.apache.commons.cli.PosixParser; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.HBaseConfiguration; -import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.HBaseInterfaceAudience; import org.apache.hadoop.hbase.wal.WALPrettyPrinter; -import org.codehaus.jackson.map.ObjectMapper; /** * HLogPrettyPrinter prints the contents of a given HLog with a variety of diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java index 433e5c0..56137e8 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java @@ -90,6 +90,7 @@ public class WALCellCodec implements Codec { * Fully prepares the codec for use. * @param conf {@link Configuration} to read for the user-specified codec. If none is specified, * uses a {@link WALCellCodec}. + * @param cellCodecClsName name of codec * @param compression compression the codec should use * @return a {@link WALCellCodec} ready for use. 
* @throws UnsupportedOperationException if the codec cannot be instantiated diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEditsReplaySink.java hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEditsReplaySink.java index ff5f2f5..59a1b43 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEditsReplaySink.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEditsReplaySink.java @@ -162,7 +162,7 @@ public class WALEditsReplaySink { private void replayEdits(final HRegionLocation regionLoc, final HRegionInfo regionInfo, final List entries) throws IOException { try { - RpcRetryingCallerFactory factory = RpcRetryingCallerFactory.instantiate(conf); + RpcRetryingCallerFactory factory = RpcRetryingCallerFactory.instantiate(conf, null); ReplayServerCallable callable = new ReplayServerCallable(this.conn, this.tableName, regionLoc, regionInfo, entries); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RegionReplicaReplicationEndpoint.java hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RegionReplicaReplicationEndpoint.java new file mode 100644 index 0000000..c3d4e5a --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/RegionReplicaReplicationEndpoint.java @@ -0,0 +1,557 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.hbase.replication.regionserver; + +import java.io.IOException; +import java.io.InterruptedIOException; +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Future; +import java.util.concurrent.LinkedBlockingQueue; +import java.util.concurrent.ThreadPoolExecutor; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicLong; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.CellScanner; +import org.apache.hadoop.hbase.DoNotRetryIOException; +import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.HBaseIOException; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.HRegionLocation; +import org.apache.hadoop.hbase.RegionLocations; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.TableNotFoundException; +import org.apache.hadoop.hbase.client.RegionAdminServiceCallable; +import org.apache.hadoop.hbase.client.ClusterConnection; +import org.apache.hadoop.hbase.client.HConnectionManager; +import org.apache.hadoop.hbase.client.RegionReplicaUtil; +import org.apache.hadoop.hbase.client.RetriesExhaustedException; +import org.apache.hadoop.hbase.client.RetryingCallable; +import org.apache.hadoop.hbase.client.RpcRetryingCallerFactory; +import org.apache.hadoop.hbase.ipc.PayloadCarryingRpcController; +import org.apache.hadoop.hbase.ipc.RpcControllerFactory; +import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.protobuf.ReplicationProtbufUtil; +import org.apache.hadoop.hbase.protobuf.generated.AdminProtos; +import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.ReplicateWALEntryResponse; +import org.apache.hadoop.hbase.regionserver.wal.WALCellCodec; +import org.apache.hadoop.hbase.wal.WAL.Entry; +import org.apache.hadoop.hbase.wal.WALSplitter.EntryBuffers; +import org.apache.hadoop.hbase.wal.WALSplitter.OutputSink; +import org.apache.hadoop.hbase.wal.WALSplitter.PipelineController; +import org.apache.hadoop.hbase.wal.WALSplitter.RegionEntryBuffer; +import org.apache.hadoop.hbase.wal.WALSplitter.SinkWriter; +import org.apache.hadoop.hbase.replication.HBaseReplicationEndpoint; +import org.apache.hadoop.hbase.replication.WALEntryFilter; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.Pair; +import org.apache.hadoop.hbase.util.Threads; +import org.apache.hadoop.util.StringUtils; + +import com.google.common.cache.Cache; +import com.google.common.cache.CacheBuilder; +import com.google.protobuf.ServiceException; + +/** + * A {@link org.apache.hadoop.hbase.replication.ReplicationEndpoint} endpoint + * which receives the WAL edits from the WAL, and sends the edits to replicas + * of regions. 
+ */ +@InterfaceAudience.Private +public class RegionReplicaReplicationEndpoint extends HBaseReplicationEndpoint { + + private static final Log LOG = LogFactory.getLog(RegionReplicaReplicationEndpoint.class); + + private Configuration conf; + private ClusterConnection connection; + + // Reuse WALSplitter constructs as a WAL pipe + private PipelineController controller; + private RegionReplicaOutputSink outputSink; + private EntryBuffers entryBuffers; + + // Number of writer threads + private int numWriterThreads; + + private int operationTimeout; + + private ExecutorService pool; + + @Override + public void init(Context context) throws IOException { + super.init(context); + + this.conf = HBaseConfiguration.create(context.getConfiguration()); + + String codecClassName = conf + .get(WALCellCodec.WAL_CELL_CODEC_CLASS_KEY, WALCellCodec.class.getName()); + conf.set(HConstants.RPC_CODEC_CONF_KEY, codecClassName); + + this.numWriterThreads = this.conf.getInt( + "hbase.region.replica.replication.writer.threads", 3); + controller = new PipelineController(); + entryBuffers = new EntryBuffers(controller, + this.conf.getInt("hbase.region.replica.replication.buffersize", + 128*1024*1024)); + + // use the regular RPC timeout for replica replication RPC's + this.operationTimeout = conf.getInt(HConstants.HBASE_CLIENT_OPERATION_TIMEOUT, + HConstants.DEFAULT_HBASE_CLIENT_OPERATION_TIMEOUT); + } + + @Override + protected void doStart() { + try { + connection = (ClusterConnection) HConnectionManager.createConnection(ctx.getConfiguration()); + this.pool = getDefaultThreadPool(conf); + outputSink = new RegionReplicaOutputSink(controller, entryBuffers, connection, pool, + numWriterThreads, operationTimeout); + outputSink.startWriterThreads(); + super.doStart(); + } catch (IOException ex) { + LOG.warn("Received exception while creating connection :" + ex); + notifyFailed(ex); + } + } + + @Override + protected void doStop() { + if (outputSink != null) { + try { + outputSink.finishWritingAndClose(); + } catch (IOException ex) { + LOG.warn("Got exception while trying to close OutputSink"); + LOG.warn(ex); + } + } + if (this.pool != null) { + this.pool.shutdownNow(); + try { + // wait for 10 sec + boolean shutdown = this.pool.awaitTermination(10000, TimeUnit.MILLISECONDS); + if (!shutdown) { + LOG.warn("Failed to shutdown the thread pool after 10 seconds"); + } + } catch (InterruptedException e) { + LOG.warn("Got interrupted while waiting for the thread pool to shut down" + e); + } + } + if (connection != null) { + try { + connection.close(); + } catch (IOException ex) { + LOG.warn("Got exception closing connection :" + ex); + } + } + super.doStop(); + } + + /** + * Returns a Thread pool for the RPC's to region replicas. Similar to + * Connection's thread pool. 
+ */ + private ExecutorService getDefaultThreadPool(Configuration conf) { + int maxThreads = conf.getInt("hbase.region.replica.replication.threads.max", 256); + int coreThreads = conf.getInt("hbase.region.replica.replication.threads.core", 16); + if (maxThreads == 0) { + maxThreads = Runtime.getRuntime().availableProcessors() * 8; + } + if (coreThreads == 0) { + coreThreads = Runtime.getRuntime().availableProcessors() * 8; + } + long keepAliveTime = conf.getLong("hbase.region.replica.replication.threads.keepalivetime", 60); + LinkedBlockingQueue workQueue = + new LinkedBlockingQueue(maxThreads * + conf.getInt(HConstants.HBASE_CLIENT_MAX_TOTAL_TASKS, + HConstants.DEFAULT_HBASE_CLIENT_MAX_TOTAL_TASKS)); + ThreadPoolExecutor tpe = new ThreadPoolExecutor( + coreThreads, + maxThreads, + keepAliveTime, + TimeUnit.SECONDS, + workQueue, + Threads.newDaemonThreadFactory(this.getClass().toString() + "-rpc-shared-")); + tpe.allowCoreThreadTimeOut(true); + return tpe; + } + + @Override + public boolean replicate(ReplicateContext replicateContext) { + /* A note on batching in RegionReplicaReplicationEndpoint (RRRE): + * + * RRRE relies on batching from two different mechanisms. The first is the batching from + * ReplicationSource since RRRE is a ReplicationEndpoint driven by RS. RS reads from a single + * WAL file filling up a buffer of heap size "replication.source.size.capacity"(64MB) or at most + * "replication.source.nb.capacity" entries or until it sees the end of file (in live tailing). + * Then RS passes all the buffered edits in this replicate() call context. RRRE puts the edits + * to the WALSplitter.EntryBuffers which is a blocking buffer space of up to + * "hbase.region.replica.replication.buffersize" (128MB) in size. This buffer splits the edits + * based on regions. + * + * There are "hbase.region.replica.replication.writer.threads"(default 3) writer threads which + * pick largest per-region buffer and send it to the SinkWriter (see RegionReplicaOutputSink). + * The SinkWriter in this case will send the wal edits to all secondary region replicas in + * parallel via a retrying rpc call. EntryBuffers guarantees that while a buffer is + * being written to the sink, another buffer for the same region will not be made available to + * writers ensuring regions edits are not replayed out of order. + * + * The replicate() call won't return until all the buffers are sent and ack'd by the sinks so + * that the replication can assume all edits are persisted. We may be able to do a better + * pipelining between the replication thread and output sinks later if it becomes a bottleneck. + */ + + while (this.isRunning()) { + try { + for (Entry entry: replicateContext.getEntries()) { + entryBuffers.appendEntry(entry); + } + outputSink.flush(); // make sure everything is flushed + return true; + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + return false; + } catch (IOException e) { + LOG.warn("Received IOException while trying to replicate" + + StringUtils.stringifyException(e)); + } + } + + return false; + } + + @Override + public boolean canReplicateToSameCluster() { + return true; + } + + @Override + protected WALEntryFilter getScopeWALEntryFilter() { + // we do not care about scope. We replicate everything. 
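// Illustrative sketch only (not part of this patch): the tuning knobs that the batching
// note in replicate() and getDefaultThreadPool() read, set here to the defaults shown in
// this class. This is a usage sketch, not a tuning recommendation.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

class RegionReplicaReplicationConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Writer threads that drain EntryBuffers and fan edits out to secondary replicas.
    conf.setInt("hbase.region.replica.replication.writer.threads", 3);
    // Blocking buffer space that replicate() appends WAL entries into, split per region.
    conf.setInt("hbase.region.replica.replication.buffersize", 128 * 1024 * 1024);
    // Bounds of the shared RPC pool used for the replay calls to replicas.
    conf.setInt("hbase.region.replica.replication.threads.max", 256);
    conf.setInt("hbase.region.replica.replication.threads.core", 16);
    System.out.println(conf.getInt("hbase.region.replica.replication.writer.threads", 3));
  }
}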
+ return null; + } + + static class RegionReplicaOutputSink extends OutputSink { + private RegionReplicaSinkWriter sinkWriter; + + public RegionReplicaOutputSink(PipelineController controller, EntryBuffers entryBuffers, + ClusterConnection connection, ExecutorService pool, int numWriters, int operationTimeout) { + super(controller, entryBuffers, numWriters); + this.sinkWriter = new RegionReplicaSinkWriter(this, connection, pool, operationTimeout); + } + + @Override + public void append(RegionEntryBuffer buffer) throws IOException { + List entries = buffer.getEntryBuffer(); + + if (entries.isEmpty() || entries.get(0).getEdit().getCells().isEmpty()) { + return; + } + + sinkWriter.append(buffer.getTableName(), buffer.getEncodedRegionName(), + entries.get(0).getEdit().getCells().get(0).getRow(), entries); + } + + @Override + public boolean flush() throws IOException { + // nothing much to do for now. Wait for the Writer threads to finish up + // append()'ing the data. + entryBuffers.waitUntilDrained(); + return super.flush(); + } + + @Override + public List finishWritingAndClose() throws IOException { + finishWriting(); + return null; + } + + @Override + public Map getOutputCounts() { + return null; // only used in tests + } + + @Override + public int getNumberOfRecoveredRegions() { + return 0; + } + + AtomicLong getSkippedEditsCounter() { + return skippedEdits; + } + } + + static class RegionReplicaSinkWriter extends SinkWriter { + RegionReplicaOutputSink sink; + ClusterConnection connection; + RpcControllerFactory rpcControllerFactory; + RpcRetryingCallerFactory rpcRetryingCallerFactory; + int operationTimeout; + ExecutorService pool; + Cache disabledAndDroppedTables; + + public RegionReplicaSinkWriter(RegionReplicaOutputSink sink, ClusterConnection connection, + ExecutorService pool, int operationTimeout) { + this.sink = sink; + this.connection = connection; + this.operationTimeout = operationTimeout; + this.rpcRetryingCallerFactory + = RpcRetryingCallerFactory.instantiate(connection.getConfiguration()); + this.rpcControllerFactory = RpcControllerFactory.instantiate(connection.getConfiguration()); + this.pool = pool; + + int nonExistentTableCacheExpiryMs = connection.getConfiguration() + .getInt("hbase.region.replica.replication.cache.disabledAndDroppedTables.expiryMs", 5000); + // A cache for non existing tables that have a default expiry of 5 sec. This means that if the + // table is created again with the same name, we might miss to replicate for that amount of + // time. But this cache prevents overloading meta requests for every edit from a deleted file. + disabledAndDroppedTables = CacheBuilder.newBuilder() + .expireAfterWrite(nonExistentTableCacheExpiryMs, TimeUnit.MILLISECONDS) + .initialCapacity(10) + .maximumSize(1000) + .build(); + } + + public void append(TableName tableName, byte[] encodedRegionName, byte[] row, + List entries) throws IOException { + + if (disabledAndDroppedTables.getIfPresent(tableName) != null) { + sink.getSkippedEditsCounter().incrementAndGet(); + return; + } + + // get the replicas of the primary region + RegionLocations locations = null; + try { + locations = getRegionLocations(connection, tableName, row, true, 0); + + if (locations == null) { + throw new HBaseIOException("Cannot locate locations for " + + tableName + ", row:" + Bytes.toStringBinary(row)); + } + } catch (TableNotFoundException e) { + disabledAndDroppedTables.put(tableName, Boolean.TRUE); // put to cache. 
Value ignored + // skip this entry + sink.getSkippedEditsCounter().addAndGet(entries.size()); + return; + } + + if (locations.size() == 1) { + return; + } + + ArrayList> tasks + = new ArrayList>(2); + + // check whether we should still replay this entry. If the regions are changed, or the + // entry is not coming form the primary region, filter it out. + HRegionLocation primaryLocation = locations.getDefaultRegionLocation(); + if (!Bytes.equals(primaryLocation.getRegionInfo().getEncodedNameAsBytes(), + encodedRegionName)) { + sink.getSkippedEditsCounter().addAndGet(entries.size()); + return; + } + + + // All passed entries should belong to one region because it is coming from the EntryBuffers + // split per region. But the regions might split and merge (unlike log recovery case). + for (int replicaId = 0; replicaId < locations.size(); replicaId++) { + HRegionLocation location = locations.getRegionLocation(replicaId); + if (!RegionReplicaUtil.isDefaultReplica(replicaId)) { + HRegionInfo regionInfo = location == null + ? RegionReplicaUtil.getRegionInfoForReplica( + locations.getDefaultRegionLocation().getRegionInfo(), replicaId) + : location.getRegionInfo(); + RegionReplicaReplayCallable callable = new RegionReplicaReplayCallable(connection, + rpcControllerFactory, tableName, location, regionInfo, row, entries, + sink.getSkippedEditsCounter()); + Future task = pool.submit( + new RetryingRpcCallable(rpcRetryingCallerFactory, + callable, operationTimeout)); + tasks.add(task); + } + } + + boolean tasksCancelled = false; + for (Future task : tasks) { + try { + task.get(); + } catch (InterruptedException e) { + throw new InterruptedIOException(e.getMessage()); + } catch (ExecutionException e) { + Throwable cause = e.getCause(); + if (cause instanceof IOException) { + // The table can be disabled or dropped at this time. For disabled tables, we have no + // cheap mechanism to detect this case because meta does not contain this information. + // HConnection.isTableDisabled() is a zk call which we cannot do for every replay RPC. + // So instead we start the replay RPC with retries and + // check whether the table is dropped or disabled which might cause + // SocketTimeoutException, or RetriesExhaustedException or similar if we get IOE. + if (cause instanceof TableNotFoundException || connection.isTableDisabled(tableName)) { + disabledAndDroppedTables.put(tableName, Boolean.TRUE); // put to cache for later. + if (!tasksCancelled) { + sink.getSkippedEditsCounter().addAndGet(entries.size()); + tasksCancelled = true; // so that we do not add to skipped counter again + } + continue; + } + // otherwise rethrow + throw (IOException)cause; + } + // unexpected exception + throw new IOException(cause); + } + } + } + } + + static class RetryingRpcCallable implements Callable { + RpcRetryingCallerFactory factory; + RetryingCallable callable; + int timeout; + public RetryingRpcCallable(RpcRetryingCallerFactory factory, RetryingCallable callable, + int timeout) { + this.factory = factory; + this.callable = callable; + this.timeout = timeout; + } + @Override + public V call() throws Exception { + return factory.newCaller().callWithRetries(callable, timeout); + } + } + + /** + * Calls replay on the passed edits for the given set of entries belonging to the region. It skips + * the entry if the region boundaries have changed or the region is gone. 
+ */ + static class RegionReplicaReplayCallable + extends RegionAdminServiceCallable { + // replicaId of the region replica that we want to replicate to + private final int replicaId; + + private final List entries; + private final byte[] initialEncodedRegionName; + private final AtomicLong skippedEntries; + private final RpcControllerFactory rpcControllerFactory; + private boolean skip; + + public RegionReplicaReplayCallable(ClusterConnection connection, + RpcControllerFactory rpcControllerFactory, TableName tableName, + HRegionLocation location, HRegionInfo regionInfo, byte[] row,List entries, + AtomicLong skippedEntries) { + super(connection, location, tableName, row); + this.replicaId = regionInfo.getReplicaId(); + this.entries = entries; + this.rpcControllerFactory = rpcControllerFactory; + this.skippedEntries = skippedEntries; + this.initialEncodedRegionName = regionInfo.getEncodedNameAsBytes(); + } + + @Override + public HRegionLocation getLocation(boolean useCache) throws IOException { + RegionLocations rl = getRegionLocations(connection, tableName, row, useCache, replicaId); + if (rl == null) { + throw new HBaseIOException(getExceptionMessage()); + } + location = rl.getRegionLocation(replicaId); + if (location == null) { + throw new HBaseIOException(getExceptionMessage()); + } + + // check whether we should still replay this entry. If the regions are changed, or the + // entry is not coming form the primary region, filter it out because we do not need it. + // Regions can change because of (1) region split (2) region merge (3) table recreated + if (!Bytes.equals(location.getRegionInfo().getEncodedNameAsBytes(), + initialEncodedRegionName)) { + skip = true; + return null; + } + + return location; + } + + @Override + public ReplicateWALEntryResponse call(int timeout) throws IOException { + return replayToServer(this.entries, timeout); + } + + private ReplicateWALEntryResponse replayToServer(List entries, int timeout) + throws IOException { + if (entries.isEmpty() || skip) { + skippedEntries.incrementAndGet(); + return ReplicateWALEntryResponse.newBuilder().build(); + } + + Entry[] entriesArray = new Entry[entries.size()]; + entriesArray = entries.toArray(entriesArray); + + // set the region name for the target region replica + Pair p = + ReplicationProtbufUtil.buildReplicateWALEntryRequest( + entriesArray, location.getRegionInfo().getEncodedNameAsBytes()); + try { + PayloadCarryingRpcController controller = rpcControllerFactory.newController(p.getSecond()); + controller.setCallTimeout(timeout); + controller.setPriority(tableName); + return stub.replay(controller, p.getFirst()); + } catch (ServiceException se) { + throw ProtobufUtil.getRemoteException(se); + } + } + + @Override + protected String getExceptionMessage() { + return super.getExceptionMessage() + " table=" + tableName + + " ,replica=" + replicaId + ", row=" + Bytes.toStringBinary(row); + } + } + + private static RegionLocations getRegionLocations( + ClusterConnection connection, TableName tableName, byte[] row, + boolean useCache, int replicaId) + throws RetriesExhaustedException, DoNotRetryIOException, InterruptedIOException { + RegionLocations rl; + try { + rl = connection.locateRegion(tableName, row, useCache, true, replicaId); + } catch (DoNotRetryIOException e) { + throw e; + } catch (RetriesExhaustedException e) { + throw e; + } catch (InterruptedIOException e) { + throw e; + } catch (IOException e) { + throw new RetriesExhaustedException("Can't get the location", e); + } + if (rl == null) { + throw new 
RetriesExhaustedException("Can't get the locations"); + } + + return rl; + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java index 4729644..b30698c 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java @@ -41,7 +41,6 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.WALEntry; diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java index 6e2ef2d..ee43956 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java @@ -717,7 +717,8 @@ public class ReplicationSource extends Thread } break; } catch (Exception ex) { - LOG.warn(replicationEndpoint.getClass().getName() + " threw unknown exception:" + ex); + LOG.warn(replicationEndpoint.getClass().getName() + " threw unknown exception:" + + org.apache.hadoop.util.StringUtils.stringifyException(ex)); if (sleepForRetries("ReplicationEndpoint threw exception", sleepMultiplier)) { sleepMultiplier++; } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java index abbf784..b120d67 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java @@ -38,4 +38,12 @@ public class SecurityUtil { } return (i > -1) ? principal.substring(0, i) : principal; } + + /** + * Get the user name from a principal + */ + public static String getPrincipalWithoutRealm(final String principal) { + int i = principal.indexOf("@"); + return (i > -1) ? principal.substring(0, i) : principal; + } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlFilter.java hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlFilter.java index 40c12b7..48a982e 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlFilter.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlFilter.java @@ -138,13 +138,6 @@ class AccessControlFilter extends FilterBase { return ReturnCode.SKIP; } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. 
- // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - @Override public void reset() throws IOException { this.prevFam.unset(); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java index c88bf9d..48464f6 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java @@ -83,6 +83,7 @@ import org.apache.hadoop.hbase.protobuf.ResponseConverter; import org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos; import org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; import org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos.CleanupBulkLoadRequest; import org.apache.hadoop.hbase.protobuf.generated.SecureBulkLoadProtos.PrepareBulkLoadRequest; import org.apache.hadoop.hbase.regionserver.HRegion; @@ -97,6 +98,7 @@ import org.apache.hadoop.hbase.security.AccessDeniedException; import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.security.UserProvider; import org.apache.hadoop.hbase.security.access.Permission.Action; +import org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils; import org.apache.hadoop.hbase.util.ByteRange; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; @@ -504,8 +506,7 @@ public class AccessController extends BaseMasterAndRegionObserver private void requireGlobalPermission(String request, Action perm, TableName tableName, Map> familyMap) throws IOException { User user = getActiveUser(); - if (authManager.authorize(user, perm) || (tableName != null && - authManager.authorize(user, tableName.getNamespaceAsString(), perm))) { + if (authManager.authorize(user, perm)) { logResult(AuthResult.allow(request, "Global check allowed", user, perm, tableName, familyMap)); } else { logResult(AuthResult.deny(request, "Global check failed", user, perm, tableName, familyMap)); @@ -525,8 +526,7 @@ public class AccessController extends BaseMasterAndRegionObserver private void requireGlobalPermission(String request, Action perm, String namespace) throws IOException { User user = getActiveUser(); - if (authManager.authorize(user, perm) - || (namespace != null && authManager.authorize(user, namespace, perm))) { + if (authManager.authorize(user, perm)) { logResult(AuthResult.allow(request, "Global check allowed", user, perm, namespace)); } else { logResult(AuthResult.deny(request, "Global check failed", user, perm, namespace)); @@ -537,6 +537,34 @@ public class AccessController extends BaseMasterAndRegionObserver } /** + * Checks that the user has the given global or namespace permission. + * @param namespace + * @param permissions Actions being requested + */ + public void requireNamespacePermission(String request, String namespace, + Action... 
permissions) throws IOException { + User user = getActiveUser(); + AuthResult result = null; + + for (Action permission : permissions) { + if (authManager.authorize(user, namespace, permission)) { + result = AuthResult.allow(request, "Namespace permission granted", + user, permission, namespace); + break; + } else { + // rest of the world + result = AuthResult.deny(request, "Insufficient permissions", user, + permission, namespace); + } + } + logResult(result); + if (!result.isAllowed()) { + throw new AccessDeniedException("Insufficient permissions " + + result.toContextString()); + } + } + + /** * Returns true if the current user is allowed the given action * over at least one of the column qualifiers in the given column families. */ @@ -822,7 +850,7 @@ public class AccessController extends BaseMasterAndRegionObserver } /* ---- MasterObserver implementation ---- */ - + @Override public void start(CoprocessorEnvironment env) throws IOException { CompoundConfiguration conf = new CompoundConfiguration(); conf.add(env.getConfiguration()); @@ -877,6 +905,7 @@ public class AccessController extends BaseMasterAndRegionObserver tableAcls = new MapMaker().weakValues().makeMap(); } + @Override public void stop(CoprocessorEnvironment env) { } @@ -889,7 +918,7 @@ public class AccessController extends BaseMasterAndRegionObserver for (byte[] family: families) { familyMap.put(family, null); } - requireGlobalPermission("createTable", Action.CREATE, desc.getTableName(), familyMap); + requireNamespacePermission("createTable", desc.getTableName().getNamespaceAsString(), Action.CREATE); } @Override @@ -1130,7 +1159,18 @@ public class AccessController extends BaseMasterAndRegionObserver public void preSnapshot(final ObserverContext ctx, final SnapshotDescription snapshot, final HTableDescriptor hTableDescriptor) throws IOException { - requirePermission("snapshot", Action.ADMIN); + requirePermission("snapshot", hTableDescriptor.getTableName(), null, null, + Permission.Action.ADMIN); + } + + @Override + public void preListSnapshot(ObserverContext ctx, + final SnapshotDescription snapshot) throws IOException { + if (SnapshotDescriptionUtils.isSnapshotOwner(snapshot, getActiveUser())) { + // list it, if user is the owner of snapshot + } else { + requirePermission("listSnapshot", Action.ADMIN); + } } @Override @@ -1144,19 +1184,28 @@ public class AccessController extends BaseMasterAndRegionObserver public void preRestoreSnapshot(final ObserverContext ctx, final SnapshotDescription snapshot, final HTableDescriptor hTableDescriptor) throws IOException { - requirePermission("restore", Action.ADMIN); + if (SnapshotDescriptionUtils.isSnapshotOwner(snapshot, getActiveUser())) { + requirePermission("restoreSnapshot", hTableDescriptor.getTableName(), null, null, + Permission.Action.ADMIN); + } else { + requirePermission("restoreSnapshot", Action.ADMIN); + } } @Override public void preDeleteSnapshot(final ObserverContext ctx, final SnapshotDescription snapshot) throws IOException { - requirePermission("deleteSnapshot", Action.ADMIN); + if (SnapshotDescriptionUtils.isSnapshotOwner(snapshot, getActiveUser())) { + // Snapshot owner is allowed to delete the snapshot + } else { + requirePermission("deleteSnapshot", Action.ADMIN); + } } @Override public void preCreateNamespace(ObserverContext ctx, NamespaceDescriptor ns) throws IOException { - requirePermission("createNamespace", Action.ADMIN); + requireGlobalPermission("createNamespace", Action.ADMIN, ns.getName()); } @Override @@ -1176,19 +1225,21 @@ public class 
AccessController extends BaseMasterAndRegionObserver return null; } }); - LOG.info(namespace + "entry deleted in "+AccessControlLists.ACL_TABLE_NAME+" table."); + LOG.info(namespace + "entry deleted in " + AccessControlLists.ACL_TABLE_NAME + " table."); } @Override public void preModifyNamespace(ObserverContext ctx, NamespaceDescriptor ns) throws IOException { + // We require only global permission so that + // a user with NS admin cannot altering namespace configurations. i.e. namespace quota requireGlobalPermission("modifyNamespace", Action.ADMIN, ns.getName()); } @Override public void preGetNamespaceDescriptor(ObserverContext ctx, String namespace) throws IOException { - requireGlobalPermission("getNamespaceDescriptor", Action.ADMIN, namespace); + requireNamespacePermission("getNamespaceDescriptor", namespace, Action.ADMIN); } @Override @@ -1200,7 +1251,7 @@ public class AccessController extends BaseMasterAndRegionObserver while (itr.hasNext()) { NamespaceDescriptor desc = itr.next(); try { - requireGlobalPermission("listNamespaces", Action.ADMIN, desc.getName()); + requireNamespacePermission("listNamespaces", desc.getName(), Action.ADMIN); } catch (AccessDeniedException e) { itr.remove(); } @@ -2141,7 +2192,7 @@ public class AccessController extends BaseMasterAndRegionObserver }); } else if (request.getType() == AccessControlProtos.Permission.Type.Namespace) { final String namespace = request.getNamespaceName().toStringUtf8(); - requireGlobalPermission("userPermissions", Action.ADMIN, namespace); + requireNamespacePermission("userPermissions", namespace, Action.ADMIN); perms = User.runAsLoginUser(new PrivilegedExceptionAction>() { @Override public List run() throws Exception { @@ -2293,9 +2344,8 @@ public class AccessController extends BaseMasterAndRegionObserver MasterServices masterServices = ctx.getEnvironment().getMasterServices(); for (TableName tableName: tableNamesList) { // Skip checks for a table that does not exist - if (masterServices.getTableDescriptors().get(tableName) == null) { + if (!masterServices.getTableStateManager().isTablePresent(tableName)) continue; - } requirePermission("getTableDescriptors", tableName, null, null, Action.ADMIN, Action.CREATE); } @@ -2378,6 +2428,36 @@ public class AccessController extends BaseMasterAndRegionObserver throws IOException { } @Override + public void preSetUserQuota(final ObserverContext ctx, + final String userName, final Quotas quotas) throws IOException { + requirePermission("setUserQuota", Action.ADMIN); + } + + @Override + public void preSetUserQuota(final ObserverContext ctx, + final String userName, final TableName tableName, final Quotas quotas) throws IOException { + requirePermission("setUserTableQuota", tableName, null, null, Action.ADMIN); + } + + @Override + public void preSetUserQuota(final ObserverContext ctx, + final String userName, final String namespace, final Quotas quotas) throws IOException { + requirePermission("setUserNamespaceQuota", Action.ADMIN); + } + + @Override + public void preSetTableQuota(final ObserverContext ctx, + final TableName tableName, final Quotas quotas) throws IOException { + requirePermission("setTableQuota", tableName, null, null, Action.ADMIN); + } + + @Override + public void preSetNamespaceQuota(final ObserverContext ctx, + final String namespace, final Quotas quotas) throws IOException { + requirePermission("setNamespaceQuota", Action.ADMIN); + } + + @Override public ReplicationEndpoint postCreateReplicationEndPoint( ObserverContext ctx, ReplicationEndpoint endpoint) { return 
endpoint; diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/HbaseObjectWritableFor96Migration.java hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/HbaseObjectWritableFor96Migration.java index 2d7d9c9..bae665e 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/HbaseObjectWritableFor96Migration.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/HbaseObjectWritableFor96Migration.java @@ -82,7 +82,6 @@ import org.apache.hadoop.hbase.filter.SingleColumnValueFilter; import org.apache.hadoop.hbase.filter.SkipFilter; import org.apache.hadoop.hbase.filter.ValueFilter; import org.apache.hadoop.hbase.filter.WhileMatchFilter; -import org.apache.hadoop.hbase.io.DataOutputOutputStream; import org.apache.hadoop.hbase.io.WritableWithSize; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; @@ -93,6 +92,7 @@ import org.apache.hadoop.hbase.wal.WAL.Entry; import org.apache.hadoop.hbase.wal.WALKey; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.ProtoUtil; +import org.apache.hadoop.io.DataOutputOutputStream; import org.apache.hadoop.io.MapWritable; import org.apache.hadoop.io.Text; import org.apache.hadoop.io.Writable; diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java index e417417..058992f 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java @@ -77,22 +77,23 @@ import java.util.Map; * security in HBase. * * This service addresses two issues: - * - * 1. Moving files in a secure filesystem wherein the HBase Client - * and HBase Server are different filesystem users. - * 2. Does moving in a secure manner. Assuming that the filesystem - * is POSIX compliant. + *

<ol> + * <li>Moving files in a secure filesystem wherein the HBase Client + * and HBase Server are different filesystem users.</li> + * <li>Does moving in a secure manner. Assuming that the filesystem + * is POSIX compliant.</li> + * </ol>
* * The algorithm is as follows: - * - * 1. Create an hbase owned staging directory which is - * world traversable (711): /hbase/staging - * 2. A user writes out data to his secure output directory: /user/foo/data - * 3. A call is made to hbase to create a secret staging directory - * which globally rwx (777): /user/staging/averylongandrandomdirectoryname - * 4. The user moves the data into the random staging directory, - * then calls bulkLoadHFiles() - *
+ * <ol> + * <li>Create an hbase owned staging directory which is + * world traversable (711): {@code /hbase/staging}</li> + * <li>A user writes out data to his secure output directory: {@code /user/foo/data}</li> + * <li>A call is made to hbase to create a secret staging directory + * which globally rwx (777): {@code /user/staging/averylongandrandomdirectoryname}</li> + * <li>The user moves the data into the random staging directory, + * then calls bulkLoadHFiles()</li> + * </ol>
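A rough sketch of the staging-directory scheme the javadoc above describes, using only the generic Hadoop FileSystem API (the class, method, and paths here are illustrative, not the endpoint's actual code):

import java.io.IOException;
import java.security.SecureRandom;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Hypothetical sketch of the layout described above; paths and names are examples only.
class StagingDirSketch {
  static Path createSecretStagingDir(Configuration conf) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    Path stagingRoot = new Path("/hbase/staging");                   // example root from the javadoc
    fs.mkdirs(stagingRoot);
    fs.setPermission(stagingRoot, new FsPermission((short) 0711));   // traversable, but not listable
    SecureRandom rng = new SecureRandom();
    String secret = Long.toHexString(rng.nextLong()) + Long.toHexString(rng.nextLong());
    Path secretDir = new Path(stagingRoot, secret);                  // long random name is the secret
    fs.mkdirs(secretDir);
    fs.setPermission(secretDir, new FsPermission((short) 0777));     // user drops HFiles here, then bulk loads
    return secretDir;
  }
}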
          * Like delegation tokens the strength of the security lies in the length * and randomness of the secret directory. * diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityController.java hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityController.java index 51b1ebc..9deeca3 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityController.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityController.java @@ -466,7 +466,6 @@ public class VisibilityController extends BaseMasterAndRegionObserver implements * could be used. * * @param cell - * @return true or false * @throws IOException */ private boolean checkForReservedVisibilityTagPresence(Cell cell) throws IOException { @@ -947,13 +946,6 @@ public class VisibilityController extends BaseMasterAndRegionObserver implements deleteCellVisTagsFormat); return matchFound ? ReturnCode.INCLUDE : ReturnCode.SKIP; } - - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. - // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } } /** diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelFilter.java hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelFilter.java index 18fc466..eb8abbe 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelFilter.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelFilter.java @@ -78,13 +78,6 @@ class VisibilityLabelFilter extends FilterBase { return this.expEvaluator.evaluate(cell) ? ReturnCode.INCLUDE : ReturnCode.SKIP; } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. - // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - @Override public void reset() throws IOException { this.curFamily.unset(); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityReplicationEndpoint.java hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityReplicationEndpoint.java index 6519fc2..aca4994 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityReplicationEndpoint.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityReplicationEndpoint.java @@ -57,8 +57,10 @@ public class VisibilityReplicationEndpoint implements ReplicationEndpoint { @Override public boolean replicate(ReplicateContext replicateContext) { if (!delegator.canReplicateToSameCluster()) { - // Only when the replication is inter cluster replication we need to covert the visibility tags to - // string based tags. But for intra cluster replication like region replicas it is not needed. + // Only when the replication is inter cluster replication we need to + // convert the visibility tags to + // string based tags. But for intra cluster replication like region + // replicas it is not needed. List entries = replicateContext.getEntries(); List visTags = new ArrayList(); List nonVisTags = new ArrayList(); @@ -82,7 +84,8 @@ public class VisibilityReplicationEndpoint implements ReplicationEndpoint { } catch (Exception ioe) { LOG.error( "Exception while reading the visibility labels from the cell. 
The replication " - + "would happen as per the existing format and not as string type for the cell " + + "would happen as per the existing format and not as " + + "string type for the cell " + cell + ".", ioe); // just return the old entries as it is without applying the string type change newEdit.add(cell); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java index a2fd75f..02ae346 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java @@ -391,13 +391,14 @@ public class ExportSnapshot extends Configured implements Tool { * if the file is not found. */ private FSDataInputStream openSourceFile(Context context, final SnapshotFileInfo fileInfo) - throws IOException { + throws IOException { try { + Configuration conf = context.getConfiguration(); FileLink link = null; switch (fileInfo.getType()) { case HFILE: Path inputPath = new Path(fileInfo.getHfile()); - link = new HFileLink(inputRoot, inputArchive, inputPath); + link = HFileLink.buildFromHFileLinkPattern(conf, inputPath); break; case WAL: String serverName = fileInfo.getWalServer(); @@ -418,11 +419,12 @@ public class ExportSnapshot extends Configured implements Tool { private FileStatus getSourceFileStatus(Context context, final SnapshotFileInfo fileInfo) throws IOException { try { + Configuration conf = context.getConfiguration(); FileLink link = null; switch (fileInfo.getType()) { case HFILE: Path inputPath = new Path(fileInfo.getHfile()); - link = new HFileLink(inputRoot, inputArchive, inputPath); + link = HFileLink.buildFromHFileLinkPattern(conf, inputPath); break; case WAL: link = new WALLink(inputRoot, fileInfo.getWalServer(), fileInfo.getWalName()); @@ -510,7 +512,7 @@ public class ExportSnapshot extends Configured implements Tool { if (storeFile.hasFileSize()) { size = storeFile.getFileSize(); } else { - size = new HFileLink(conf, path).getFileStatus(fs).getLen(); + size = HFileLink.buildFromHFileLinkPattern(conf, path).getFileStatus(fs).getLen(); } files.add(new Pair(fileInfo, size)); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotHelper.java hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotHelper.java index f28125e..a1c2777 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotHelper.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotHelper.java @@ -49,7 +49,6 @@ import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher; import org.apache.hadoop.hbase.io.HFileLink; import org.apache.hadoop.hbase.io.Reference; -import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.monitoring.MonitoredTask; import org.apache.hadoop.hbase.monitoring.TaskMonitor; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; @@ -618,7 +617,7 @@ public class RestoreSnapshotHelper { } else { InputStream in; if (linkPath != null) { - in = new HFileLink(conf, linkPath).open(fs); + in = HFileLink.buildFromHFileLinkPattern(conf, linkPath).open(fs); } else { linkPath = new Path(new Path(HRegion.getRegionDir(snapshotManifest.getSnapshotDir(), regionInfo.getEncodedName()), familyDir.getName()), hfileName); @@ -690,7 +689,7 @@ public class RestoreSnapshotHelper { for 
(HColumnDescriptor hcd: snapshotTableDescriptor.getColumnFamilies()) { htd.addFamily(hcd); } - for (Map.Entry e: + for (Map.Entry e: snapshotTableDescriptor.getValues().entrySet()) { htd.setValue(e.getKey(), e.getValue()); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java index e1b1929..cd04b82 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java @@ -30,6 +30,7 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.permission.FsPermission; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; +import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.snapshot.SnapshotManifestV2; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.FSUtils; @@ -317,4 +318,16 @@ public class SnapshotDescriptionUtils { } } + /** + * Check if the user is this table snapshot's owner + * @param snapshot the table snapshot description + * @param user the user + * @return true if the user is the owner of the snapshot, + * false otherwise or the snapshot owner field is not present. + */ + public static boolean isSnapshotOwner(final SnapshotDescription snapshot, final User user) { + if (user == null) return false; + if (!snapshot.hasOwner()) return false; + return snapshot.getOwner().equals(user.getShortName()); + } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java index 77b17d7..606b9c9 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java @@ -208,13 +208,13 @@ public final class SnapshotInfo extends Configured implements Tool { * Add the specified store file to the stats * @param region region encoded Name * @param family family name - * @param hfile store file name + * @param storeFile store file name * @return the store file information */ FileInfo addStoreFile(final HRegionInfo region, final String family, final SnapshotRegionManifest.StoreFile storeFile) throws IOException { - HFileLink link = HFileLink.create(conf, snapshotTable, region.getEncodedName(), - family, storeFile.getName()); + HFileLink link = HFileLink.build(conf, snapshotTable, region.getEncodedName(), + family, storeFile.getName()); boolean isCorrupted = false; boolean inArchive = false; long size = -1; diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifest.java hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifest.java index 38ccf08..a3cfa04 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifest.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifest.java @@ -38,6 +38,8 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.TableDescriptor; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.errorhandling.ForeignExceptionSnare; import 
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; import org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos.SnapshotDataManifest; @@ -259,7 +261,8 @@ public class SnapshotManifest { private void load() throws IOException { switch (getSnapshotFormat(desc)) { case SnapshotManifestV1.DESCRIPTOR_VERSION: { - this.htd = FSTableDescriptors.getTableDescriptorFromFs(fs, workingDir); + this.htd = FSTableDescriptors.getTableDescriptorFromFs(fs, workingDir) + .getHTableDescriptor(); ThreadPoolExecutor tpool = createExecutor("SnapshotManifestLoader"); try { this.regionManifests = @@ -353,7 +356,8 @@ public class SnapshotManifest { LOG.info("Using old Snapshot Format"); // write a copy of descriptor to the snapshot directory new FSTableDescriptors(conf, fs, rootDir) - .createTableDescriptorForTableDirectory(workingDir, htd, false); + .createTableDescriptorForTableDirectory(workingDir, new TableDescriptor( + htd, TableState.State.ENABLED), false); } else { LOG.debug("Convert to Single Snapshot Manifest"); convertToV2SingleManifest(); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotReferenceUtil.java hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotReferenceUtil.java index 9297ea0..d1f787a 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotReferenceUtil.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotReferenceUtil.java @@ -273,7 +273,7 @@ public final class SnapshotReferenceUtil { refPath = StoreFileInfo.getReferredToFile(refPath); String refRegion = refPath.getParent().getParent().getName(); refPath = HFileLink.createPath(table, refRegion, family, refPath.getName()); - if (!new HFileLink(conf, refPath).exists(fs)) { + if (!HFileLink.buildFromHFileLinkPattern(conf, refPath).exists(fs)) { throw new CorruptedSnapshotException("Missing parent hfile for: " + fileName + " path=" + refPath, snapshot); } @@ -292,11 +292,11 @@ public final class SnapshotReferenceUtil { linkPath = new Path(family, fileName); } else { linkPath = new Path(family, HFileLink.createHFileLinkName( - table, regionInfo.getEncodedName(), fileName)); + table, regionInfo.getEncodedName(), fileName)); } // check if the linked file exists (in the archive, or in the table dir) - HFileLink link = new HFileLink(conf, linkPath); + HFileLink link = HFileLink.buildFromHFileLinkPattern(conf, linkPath); try { FileStatus fstat = link.getFileStatus(fs); if (storeFile.hasFileSize() && storeFile.getFileSize() != fstat.getLen()) { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/BoundedPriorityBlockingQueue.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/BoundedPriorityBlockingQueue.java index 8d1664b..4a93151 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/BoundedPriorityBlockingQueue.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/BoundedPriorityBlockingQueue.java @@ -90,6 +90,7 @@ public class BoundedPriorityBlockingQueue extends AbstractQueue implements public E poll() { E elem = objects[head]; + objects[head] = null; head = (head + 1) % objects.length; if (head == 0) tail = 0; return elem; diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java index fd65754..034a1e2 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompressionTest.java @@ 
-155,6 +155,11 @@ public class CompressionTest { Configuration conf = new Configuration(); Path path = new Path(args[0]); FileSystem fs = path.getFileSystem(conf); + if (fs.exists(path)) { + System.err.println("The specified path exists, aborting!"); + System.exit(1); + } + try { doSmokeTest(fs, path, args[1]); } finally { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/ConfigUtil.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/ConfigUtil.java deleted file mode 100644 index 882d199..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/ConfigUtil.java +++ /dev/null @@ -1,33 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.util; - -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.classification.InterfaceAudience; - -/** - * Some configuration related utilities - */ -@InterfaceAudience.Private -public class ConfigUtil { - - public static boolean useZKForAssignment(Configuration conf) { - // To change the default, please also update ZooKeeperWatcher.java - return conf.getBoolean("hbase.assignment.usezk", true); - } -} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java index 7cd2673..7a6811c 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSTableDescriptors.java @@ -17,6 +17,7 @@ */ package org.apache.hadoop.hbase.util; +import javax.annotation.Nullable; import java.io.FileNotFoundException; import java.io.IOException; import java.util.Comparator; @@ -27,6 +28,8 @@ import java.util.concurrent.ConcurrentHashMap; import java.util.regex.Matcher; import java.util.regex.Pattern; +import com.google.common.annotations.VisibleForTesting; +import com.google.common.primitives.Ints; import org.apache.commons.lang.NotImplementedException; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; @@ -38,18 +41,16 @@ import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.PathFilter; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.TableDescriptors; import org.apache.hadoop.hbase.TableInfoMissingException; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.TableState; +import org.apache.hadoop.hbase.exceptions.DeserializationException; import 
org.apache.hadoop.hbase.protobuf.ProtobufUtil; -import com.google.common.annotations.VisibleForTesting; -import com.google.common.primitives.Ints; - - /** * Implementation of {@link TableDescriptors} that reads descriptors from the * passed filesystem. It expects descriptors to be in a file in the @@ -88,15 +89,15 @@ public class FSTableDescriptors implements TableDescriptors { // This cache does not age out the old stuff. Thinking is that the amount // of data we keep up in here is so small, no need to do occasional purge. // TODO. - private final Map cache = - new ConcurrentHashMap(); + private final Map cache = + new ConcurrentHashMap(); /** * Table descriptor for hbase:meta catalog table */ - private final HTableDescriptor metaTableDescriptor; + private final HTableDescriptor metaTableDescritor; - /** + /** * Construct a FSTableDescriptors instance using the hbase root dir of the given * conf and the filesystem where that root dir lives. * This instance can do write operations (is not read only). @@ -106,7 +107,7 @@ public class FSTableDescriptors implements TableDescriptors { } public FSTableDescriptors(final Configuration conf, final FileSystem fs, final Path rootdir) - throws IOException { + throws IOException { this(conf, fs, rootdir, false, true); } @@ -121,7 +122,8 @@ public class FSTableDescriptors implements TableDescriptors { this.rootdir = rootdir; this.fsreadonly = fsreadonly; this.usecache = usecache; - this.metaTableDescriptor = HTableDescriptor.metaTableDescriptor(conf); + + this.metaTableDescritor = TableDescriptor.metaTableDescriptor(conf); } public void setCacheOn() throws IOException { @@ -146,12 +148,13 @@ public class FSTableDescriptors implements TableDescriptors { * to see if a newer file has been created since the cached one was read. */ @Override - public HTableDescriptor get(final TableName tablename) + @Nullable + public TableDescriptor getDescriptor(final TableName tablename) throws IOException { invocations++; if (TableName.META_TABLE_NAME.equals(tablename)) { cachehits++; - return metaTableDescriptor; + return new TableDescriptor(metaTableDescritor, TableState.State.ENABLED); } // hbase:meta is already handled. If some one tries to get the descriptor for // .logs, .oldlogs or .corrupt throw an exception. @@ -161,13 +164,13 @@ public class FSTableDescriptors implements TableDescriptors { if (usecache) { // Look in cache of descriptors. - HTableDescriptor cachedtdm = this.cache.get(tablename); + TableDescriptor cachedtdm = this.cache.get(tablename); if (cachedtdm != null) { cachehits++; return cachedtdm; } } - HTableDescriptor tdmt = null; + TableDescriptor tdmt = null; try { tdmt = getTableDescriptorFromFs(fs, rootdir, tablename, !fsreadonly); } catch (NullPointerException e) { @@ -186,27 +189,43 @@ public class FSTableDescriptors implements TableDescriptors { } /** + * Get the current table descriptor for the given table, or null if none exists. + * + * Uses a local cache of the descriptor but still checks the filesystem on each call + * to see if a newer file has been created since the cached one was read. + */ + @Override + public HTableDescriptor get(TableName tableName) throws IOException { + if (TableName.META_TABLE_NAME.equals(tableName)) { + cachehits++; + return metaTableDescritor; + } + TableDescriptor descriptor = getDescriptor(tableName); + return descriptor == null ? null : descriptor.getHTableDescriptor(); + } + + /** * Returns a map from table name to table descriptor for all tables. 
*/ @Override - public Map getAll() + public Map getAllDescriptors() throws IOException { - Map htds = new TreeMap(); + Map tds = new TreeMap(); if (fsvisited && usecache) { - for (Map.Entry entry: this.cache.entrySet()) { - htds.put(entry.getKey().toString(), entry.getValue()); + for (Map.Entry entry: this.cache.entrySet()) { + tds.put(entry.getKey().toString(), entry.getValue()); } // add hbase:meta to the response - htds.put(HTableDescriptor.META_TABLEDESC.getTableName().getNameAsString(), - HTableDescriptor.META_TABLEDESC); + tds.put(this.metaTableDescritor.getNameAsString(), + new TableDescriptor(metaTableDescritor, TableState.State.ENABLED)); } else { LOG.debug("Fetching table descriptors from the filesystem."); boolean allvisited = true; for (Path d : FSUtils.getTableDirs(fs, rootdir)) { - HTableDescriptor htd = null; + TableDescriptor htd = null; try { - htd = get(FSUtils.getTableName(d)); + htd = getDescriptor(FSUtils.getTableName(d)); } catch (FileNotFoundException fnfe) { // inability of retrieving one HTD shouldn't stop getting the remaining LOG.warn("Trouble retrieving htd", fnfe); @@ -215,18 +234,33 @@ public class FSTableDescriptors implements TableDescriptors { allvisited = false; continue; } else { - htds.put(htd.getTableName().getNameAsString(), htd); + tds.put(htd.getHTableDescriptor().getTableName().getNameAsString(), htd); } fsvisited = allvisited; } } - return htds; + return tds; } - /* (non-Javadoc) - * @see org.apache.hadoop.hbase.TableDescriptors#getTableDescriptors(org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.Path) + /** + * Returns a map from table name to table descriptor for all tables. */ @Override + public Map getAll() throws IOException { + Map htds = new TreeMap(); + Map allDescriptors = getAllDescriptors(); + for (Map.Entry entry : allDescriptors + .entrySet()) { + htds.put(entry.getKey(), entry.getValue().getHTableDescriptor()); + } + return htds; + } + + /** + * Find descriptors by namespace. + * @see #get(org.apache.hadoop.hbase.TableName) + */ + @Override public Map getByNamespace(String name) throws IOException { Map htds = new TreeMap(); @@ -251,21 +285,49 @@ public class FSTableDescriptors implements TableDescriptors { * and updates the local cache with it. */ @Override - public void add(HTableDescriptor htd) throws IOException { + public void add(TableDescriptor htd) throws IOException { if (fsreadonly) { throw new NotImplementedException("Cannot add a table descriptor - in read only mode"); } - if (TableName.META_TABLE_NAME.equals(htd.getTableName())) { + TableName tableName = htd.getHTableDescriptor().getTableName(); + if (TableName.META_TABLE_NAME.equals(tableName)) { throw new NotImplementedException(); } - if (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(htd.getTableName().getNameAsString())) { + if (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(tableName.getNameAsString())) { throw new NotImplementedException( - "Cannot add a table descriptor for a reserved subdirectory name: " + htd.getNameAsString()); + "Cannot add a table descriptor for a reserved subdirectory name: " + + htd.getHTableDescriptor().getNameAsString()); } updateTableDescriptor(htd); } /** + * Adds (or updates) the table descriptor to the FileSystem + * and updates the local cache with it. 
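By way of illustration only, a hedged sketch of looking a descriptor up through FSTableDescriptors as reworked in this hunk (the table name and wrapper class are made up for the example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.FSTableDescriptors;
import org.apache.hadoop.hbase.util.FSUtils;

// Illustrative only: read a table's descriptor through the cached FSTableDescriptors view.
class DescriptorLookupSketch {
  static HTableDescriptor lookup(String table) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FSUtils.getCurrentFileSystem(conf);   // filesystem hosting the hbase root dir
    Path rootDir = FSUtils.getRootDir(conf);
    FSTableDescriptors fstd = new FSTableDescriptors(conf, fs, rootDir);
    // Per the javadoc above: consults the in-memory cache, but still checks the
    // filesystem for a newer tableinfo file. Returns null if no descriptor exists.
    return fstd.get(TableName.valueOf(table));
  }
}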
+ */ + @Override + public void add(HTableDescriptor htd) throws IOException { + if (fsreadonly) { + throw new NotImplementedException("Cannot add a table descriptor - in read only mode"); + } + TableName tableName = htd.getTableName(); + if (TableName.META_TABLE_NAME.equals(tableName)) { + throw new NotImplementedException(); + } + if (HConstants.HBASE_NON_USER_TABLE_DIRS.contains(tableName.getNameAsString())) { + throw new NotImplementedException( + "Cannot add a table descriptor for a reserved subdirectory name: " + + htd.getNameAsString()); + } + TableDescriptor descriptor = getDescriptor(htd.getTableName()); + if (descriptor == null) + descriptor = new TableDescriptor(htd); + else + descriptor.setHTableDescriptor(htd); + updateTableDescriptor(descriptor); + } + + /** * Removes the table descriptor from the local cache and returns it. * If not in read only mode, it also deletes the entire table directory(!) * from the FileSystem. @@ -282,11 +344,11 @@ public class FSTableDescriptors implements TableDescriptors { throw new IOException("Failed delete of " + tabledir.toString()); } } - HTableDescriptor descriptor = this.cache.remove(tablename); + TableDescriptor descriptor = this.cache.remove(tablename); if (descriptor == null) { return null; } else { - return descriptor; + return descriptor.getHTableDescriptor(); } } @@ -470,19 +532,19 @@ public class FSTableDescriptors implements TableDescriptors { * if it exists, bypassing the local cache. * Returns null if it's not found. */ - public static HTableDescriptor getTableDescriptorFromFs(FileSystem fs, - Path hbaseRootDir, TableName tableName) throws IOException { + public static TableDescriptor getTableDescriptorFromFs(FileSystem fs, + Path hbaseRootDir, TableName tableName) throws IOException { Path tableDir = FSUtils.getTableDir(hbaseRootDir, tableName); return getTableDescriptorFromFs(fs, tableDir); } /** - * Returns the latest table descriptor for the table located at the given directory - * directly from the file system if it exists. - * @throws TableInfoMissingException if there is no descriptor + * Returns the latest table descriptor for the given table directly from the file system + * if it exists, bypassing the local cache. + * Returns null if it's not found. */ - public static HTableDescriptor getTableDescriptorFromFs(FileSystem fs, - Path hbaseRootDir, TableName tableName, boolean rewritePb) throws IOException { + public static TableDescriptor getTableDescriptorFromFs(FileSystem fs, + Path hbaseRootDir, TableName tableName, boolean rewritePb) throws IOException { Path tableDir = FSUtils.getTableDir(hbaseRootDir, tableName); return getTableDescriptorFromFs(fs, tableDir, rewritePb); } @@ -491,7 +553,7 @@ public class FSTableDescriptors implements TableDescriptors { * directly from the file system if it exists. * @throws TableInfoMissingException if there is no descriptor */ - public static HTableDescriptor getTableDescriptorFromFs(FileSystem fs, Path tableDir) + public static TableDescriptor getTableDescriptorFromFs(FileSystem fs, Path tableDir) throws IOException { return getTableDescriptorFromFs(fs, tableDir, false); } @@ -501,7 +563,7 @@ public class FSTableDescriptors implements TableDescriptors { * directly from the file system if it exists. 
* @throws TableInfoMissingException if there is no descriptor */ - public static HTableDescriptor getTableDescriptorFromFs(FileSystem fs, Path tableDir, + public static TableDescriptor getTableDescriptorFromFs(FileSystem fs, Path tableDir, boolean rewritePb) throws IOException { FileStatus status = getTableInfoPath(fs, tableDir, false); @@ -511,7 +573,7 @@ public class FSTableDescriptors implements TableDescriptors { return readTableDescriptor(fs, status, rewritePb); } - private static HTableDescriptor readTableDescriptor(FileSystem fs, FileStatus status, + private static TableDescriptor readTableDescriptor(FileSystem fs, FileStatus status, boolean rewritePb) throws IOException { int len = Ints.checkedCast(status.getLen()); byte [] content = new byte[len]; @@ -521,30 +583,30 @@ public class FSTableDescriptors implements TableDescriptors { } finally { fsDataInputStream.close(); } - HTableDescriptor htd = null; + TableDescriptor td = null; try { - htd = HTableDescriptor.parseFrom(content); + td = TableDescriptor.parseFrom(content); } catch (DeserializationException e) { // we have old HTableDescriptor here try { - HTableDescriptor ohtd = HTableDescriptor.parseFrom(content); + HTableDescriptor htd = HTableDescriptor.parseFrom(content); LOG.warn("Found old table descriptor, converting to new format for table " + - ohtd.getTableName()); - htd = new HTableDescriptor(ohtd); - if (rewritePb) rewriteTableDescriptor(fs, status, htd); + htd.getTableName() + "; NOTE table will be in ENABLED state!"); + td = new TableDescriptor(htd, TableState.State.ENABLED); + if (rewritePb) rewriteTableDescriptor(fs, status, td); } catch (DeserializationException e1) { - throw new IOException("content=" + Bytes.toShort(content), e1); + throw new IOException("content=" + Bytes.toShort(content), e); } } if (rewritePb && !ProtobufUtil.isPBMagicPrefix(content)) { // Convert the file over to be pb before leaving here. - rewriteTableDescriptor(fs, status, htd); + rewriteTableDescriptor(fs, status, td); } - return htd; + return td; } private static void rewriteTableDescriptor(final FileSystem fs, final FileStatus status, - final HTableDescriptor td) + final TableDescriptor td) throws IOException { Path tableInfoDir = status.getPath().getParent(); Path tableDir = tableInfoDir.getParent(); @@ -556,17 +618,18 @@ public class FSTableDescriptors implements TableDescriptors { * @throws IOException Thrown if failed update. * @throws NotImplementedException if in read only mode */ - @VisibleForTesting Path updateTableDescriptor(HTableDescriptor htd) + @VisibleForTesting Path updateTableDescriptor(TableDescriptor td) throws IOException { if (fsreadonly) { throw new NotImplementedException("Cannot update a table descriptor - in read only mode"); } - Path tableDir = getTableDir(htd.getTableName()); - Path p = writeTableDescriptor(fs, htd, tableDir, getTableInfoPath(tableDir)); + TableName tableName = td.getHTableDescriptor().getTableName(); + Path tableDir = getTableDir(tableName); + Path p = writeTableDescriptor(fs, td, tableDir, getTableInfoPath(tableDir)); if (p == null) throw new IOException("Failed update"); LOG.info("Updated tableinfo=" + p); if (usecache) { - this.cache.put(htd.getTableName(), htd); + this.cache.put(td.getHTableDescriptor().getTableName(), td); } return p; } @@ -617,7 +680,7 @@ public class FSTableDescriptors implements TableDescriptors { * @return Descriptor file or null if we failed write. 
*/ private static Path writeTableDescriptor(final FileSystem fs, - final HTableDescriptor htd, final Path tableDir, + final TableDescriptor htd, final Path tableDir, final FileStatus currentDescriptorFile) throws IOException { // Get temporary dir into which we'll first write a file to avoid half-written file phenomenon. @@ -648,7 +711,7 @@ public class FSTableDescriptors implements TableDescriptors { } tableInfoDirPath = new Path(tableInfoDir, filename); try { - writeHTD(fs, tempPath, htd); + writeTD(fs, tempPath, htd); fs.mkdirs(tableInfoDirPath.getParent()); if (!fs.rename(tempPath, tableInfoDirPath)) { throw new IOException("Failed rename of " + tempPath + " to " + tableInfoDirPath); @@ -672,7 +735,7 @@ public class FSTableDescriptors implements TableDescriptors { return tableInfoDirPath; } - private static void writeHTD(final FileSystem fs, final Path p, final HTableDescriptor htd) + private static void writeTD(final FileSystem fs, final Path p, final TableDescriptor htd) throws IOException { FSDataOutputStream out = fs.create(p, false); try { @@ -689,24 +752,42 @@ public class FSTableDescriptors implements TableDescriptors { * Used by tests. * @return True if we successfully created file. */ - public boolean createTableDescriptor(HTableDescriptor htd) throws IOException { + public boolean createTableDescriptor(TableDescriptor htd) throws IOException { return createTableDescriptor(htd, false); } /** + * Create new HTableDescriptor in HDFS. Happens when we are creating table. + * Used by tests. + * @return True if we successfully created file. + */ + public boolean createTableDescriptor(HTableDescriptor htd) throws IOException { + return createTableDescriptor(new TableDescriptor(htd), false); + } + + /** * Create new HTableDescriptor in HDFS. Happens when we are creating table. If * forceCreation is true then even if previous table descriptor is present it * will be overwritten * * @return True if we successfully created file. */ - public boolean createTableDescriptor(HTableDescriptor htd, boolean forceCreation) + public boolean createTableDescriptor(TableDescriptor htd, boolean forceCreation) throws IOException { - Path tableDir = getTableDir(htd.getTableName()); + Path tableDir = getTableDir(htd.getHTableDescriptor().getTableName()); return createTableDescriptorForTableDirectory(tableDir, htd, forceCreation); } /** + * Create tables descriptor for given HTableDescriptor. Default TableDescriptor state + * will be used (typically ENABLED). + */ + public boolean createTableDescriptor(HTableDescriptor htd, boolean forceCreation) + throws IOException { + return createTableDescriptor(new TableDescriptor(htd), forceCreation); + } + + /** * Create a new HTableDescriptor in HDFS in the specified table directory. Happens when we create * a new table or snapshot a table. 
* @param tableDir table directory under which we should write the file @@ -718,7 +799,7 @@ public class FSTableDescriptors implements TableDescriptors { * @throws IOException if a filesystem error occurs */ public boolean createTableDescriptorForTableDirectory(Path tableDir, - HTableDescriptor htd, boolean forceCreation) throws IOException { + TableDescriptor htd, boolean forceCreation) throws IOException { if (fsreadonly) { throw new NotImplementedException("Cannot create a table descriptor - in read only mode"); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java index 1def840..50532a1 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java @@ -60,7 +60,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HDFSBlocksDistribution; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.RemoteExceptionHandler; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.fs.HFileSystem; @@ -70,9 +69,12 @@ import org.apache.hadoop.hbase.security.AccessDeniedException; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.FSProtos; import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hdfs.DFSClient; +import org.apache.hadoop.hdfs.DFSHedgedReadMetrics; import org.apache.hadoop.hdfs.DistributedFileSystem; import org.apache.hadoop.io.IOUtils; import org.apache.hadoop.io.SequenceFile; +import org.apache.hadoop.ipc.RemoteException; import org.apache.hadoop.security.UserGroupInformation; import org.apache.hadoop.util.Progressable; import org.apache.hadoop.util.ReflectionUtils; @@ -398,7 +400,8 @@ public abstract class FSUtils { return; } } catch (IOException e) { - exception = RemoteExceptionHandler.checkIOException(e); + exception = e instanceof RemoteException ? + ((RemoteException)e).unwrapRemoteException() : e; } try { fs.close(); @@ -1909,4 +1912,47 @@ public abstract class FSUtils { int hbaseSize = conf.getInt("hbase." + dfsKey, defaultSize); conf.setIfUnset(dfsKey, Integer.toString(hbaseSize)); } -} + + /** + * @param c + * @return The DFSClient DFSHedgedReadMetrics instance or null if can't be found or not on hdfs. + * @throws IOException + */ + public static DFSHedgedReadMetrics getDFSHedgedReadMetrics(final Configuration c) + throws IOException { + if (!isHDFS(c)) return null; + // getHedgedReadMetrics is package private. Get the DFSClient instance that is internal + // to the DFS FS instance and make the method getHedgedReadMetrics accessible, then invoke it + // to get the singleton instance of DFSHedgedReadMetrics shared by DFSClients. 
+ final String name = "getHedgedReadMetrics"; + DFSClient dfsclient = ((DistributedFileSystem)FileSystem.get(c)).getClient(); + Method m; + try { + m = dfsclient.getClass().getDeclaredMethod(name); + } catch (NoSuchMethodException e) { + LOG.warn("Failed find method " + name + " in dfsclient; no hedged read metrics: " + + e.getMessage()); + return null; + } catch (SecurityException e) { + LOG.warn("Failed find method " + name + " in dfsclient; no hedged read metrics: " + + e.getMessage()); + return null; + } + m.setAccessible(true); + try { + return (DFSHedgedReadMetrics)m.invoke(dfsclient); + } catch (IllegalAccessException e) { + LOG.warn("Failed invoking method " + name + " on dfsclient; no hedged read metrics: " + + e.getMessage()); + return null; + } catch (IllegalArgumentException e) { + LOG.warn("Failed invoking method " + name + " on dfsclient; no hedged read metrics: " + + e.getMessage()); + return null; + } catch (InvocationTargetException e) { + LOG.warn("Failed invoking method " + name + " on dfsclient; no hedged read metrics: " + + e.getMessage()); + return null; + } + } +} \ No newline at end of file diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java index 264b91b..e507df4 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java @@ -17,9 +17,9 @@ */ package org.apache.hadoop.hbase.util; +import java.io.Closeable; import java.io.FileNotFoundException; import java.io.IOException; -import java.io.InterruptedIOException; import java.io.PrintWriter; import java.io.StringWriter; import java.net.InetAddress; @@ -75,10 +75,12 @@ import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.MasterNotRunningException; +import org.apache.hadoop.hbase.MetaTableAccessor; +import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.ZooKeeperConnectionException; -import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.hbase.client.Admin; @@ -98,6 +100,7 @@ import org.apache.hadoop.hbase.client.RegionReplicaUtil; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.RowMutations; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.io.hfile.CacheConfig; import org.apache.hadoop.hbase.io.hfile.HFile; import org.apache.hadoop.hbase.master.MasterFileSystem; @@ -117,7 +120,6 @@ import org.apache.hadoop.hbase.util.hbck.TableIntegrityErrorHandlerImpl; import org.apache.hadoop.hbase.util.hbck.TableLockChecker; import org.apache.hadoop.hbase.wal.WALSplitter; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; -import org.apache.hadoop.hbase.zookeeper.ZKTableStateClientSideReader; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException; import org.apache.hadoop.io.IOUtils; @@ -128,6 +130,7 @@ import org.apache.hadoop.util.Tool; import org.apache.hadoop.util.ToolRunner; import org.apache.zookeeper.KeeperException; +import 
com.google.common.annotations.VisibleForTesting; import com.google.common.base.Joiner; import com.google.common.base.Preconditions; import com.google.common.collect.Lists; @@ -182,7 +185,7 @@ import com.google.protobuf.ServiceException; */ @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.TOOLS) @InterfaceStability.Evolving -public class HBaseFsck extends Configured { +public class HBaseFsck extends Configured implements Closeable { public static final long DEFAULT_TIME_LAG = 60000; // default value of 1 minute public static final long DEFAULT_SLEEP_BEFORE_RERUN = 10000; private static final int MAX_NUM_THREADS = 50; // #threads to contact regions @@ -320,7 +323,7 @@ public class HBaseFsck extends Configured { errors = getErrorReporter(getConf()); this.executor = exec; } - + /** * This method maintains a lock using a file. If the creation fails we return null * @@ -328,6 +331,7 @@ public class HBaseFsck extends Configured { * @throws IOException */ private FSDataOutputStream checkAndMarkRunningHbck() throws IOException { + long start = EnvironmentEdgeManager.currentTime(); try { FileSystem fs = FSUtils.getCurrentFileSystem(getConf()); FsPermission defaultPerms = FSUtils.getFilePermissions(fs, getConf(), @@ -345,6 +349,13 @@ public class HBaseFsck extends Configured { } else { throw e; } + } finally { + long duration = EnvironmentEdgeManager.currentTime() - start; + if (duration > 30000) { + LOG.warn("Took " + duration + " milliseconds to obtain lock"); + // took too long to obtain lock + return null; + } } } @@ -385,7 +396,8 @@ public class HBaseFsck extends Configured { Runtime.getRuntime().addShutdownHook(new Thread() { @Override public void run() { - unlockHbck(); + IOUtils.closeStream(HBaseFsck.this); + unlockHbck(); } }); LOG.debug("Launching hbck"); @@ -601,6 +613,11 @@ public class HBaseFsck extends Configured { return result; } + @Override + public void close() throws IOException { + IOUtils.cleanup(null, admin, meta, connection); + } + private static class RegionBoundariesInformation { public byte [] regionName; public byte [] metaFirstKey; @@ -620,7 +637,7 @@ public class HBaseFsck extends Configured { public void checkRegionBoundaries() { try { ByteArrayComparator comparator = new ByteArrayComparator(); - List regions = MetaScanner.listAllRegions(getConf(), false); + List regions = MetaScanner.listAllRegions(getConf(), connection, false); final RegionBoundariesInformation currentRegionBoundariesInformation = new RegionBoundariesInformation(); Path hbaseRoot = FSUtils.getRootDir(getConf()); @@ -1040,9 +1057,9 @@ public class HBaseFsck extends Configured { modTInfo = new TableInfo(tableName); tablesInfo.put(tableName, modTInfo); try { - HTableDescriptor htd = + TableDescriptor htd = FSTableDescriptors.getTableDescriptorFromFs(fs, hbaseRoot, tableName); - modTInfo.htds.add(htd); + modTInfo.htds.add(htd.getHTableDescriptor()); } catch (IOException ioe) { if (!orphanTableDirs.containsKey(tableName)) { LOG.warn("Unable to read .tableinfo from " + hbaseRoot, ioe); @@ -1096,7 +1113,7 @@ public class HBaseFsck extends Configured { for (String columnfamimly : columns) { htd.addFamily(new HColumnDescriptor(columnfamimly)); } - fstd.createTableDescriptor(htd, true); + fstd.createTableDescriptor(new TableDescriptor(htd, TableState.State.ENABLED), true); return true; } @@ -1144,7 +1161,7 @@ public class HBaseFsck extends Configured { if (tableName.equals(htds[j].getTableName())) { HTableDescriptor htd = htds[j]; LOG.info("fixing orphan table: " + tableName + " from cache"); - 
fstd.createTableDescriptor(htd, true); + fstd.createTableDescriptor(new TableDescriptor(htd, TableState.State.ENABLED), true); j++; iter.remove(); } @@ -1469,22 +1486,16 @@ public class HBaseFsck extends Configured { * @throws IOException */ private void loadDisabledTables() - throws ZooKeeperConnectionException, IOException { + throws IOException { HConnectionManager.execute(new HConnectable(getConf()) { @Override public Void connect(HConnection connection) throws IOException { - ZooKeeperWatcher zkw = createZooKeeperWatcher(); - try { - for (TableName tableName : - ZKTableStateClientSideReader.getDisabledOrDisablingTables(zkw)) { - disabledTables.add(tableName); + TableName[] tables = connection.listTableNames(); + for (TableName table : tables) { + if (connection.getTableState(table) + .inStates(TableState.State.DISABLED, TableState.State.DISABLING)) { + disabledTables.add(table); } - } catch (KeeperException ke) { - throw new IOException(ke); - } catch (InterruptedException e) { - throw new InterruptedIOException(); - } finally { - zkw.close(); } return null; } @@ -1650,9 +1661,23 @@ public class HBaseFsck extends Configured { */ private void checkAndFixConsistency() throws IOException, KeeperException, InterruptedException { + // Divide the checks in two phases. One for default/primary replicas and another + // for the non-primary ones. Keeps code cleaner this way. + for (java.util.Map.Entry e: regionInfoMap.entrySet()) { + if (e.getValue().getReplicaId() == HRegionInfo.DEFAULT_REPLICA_ID) { + checkRegionConsistency(e.getKey(), e.getValue()); + } + } + boolean prevHdfsCheck = shouldCheckHdfs(); + setCheckHdfs(false); //replicas don't have any hdfs data + // Run a pass over the replicas and fix any assignment issues that exist on the currently + // deployed/undeployed replicas. 
for (java.util.Map.Entry e: regionInfoMap.entrySet()) { - checkRegionConsistency(e.getKey(), e.getValue()); + if (e.getValue().getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID) { + checkRegionConsistency(e.getKey(), e.getValue()); + } } + setCheckHdfs(prevHdfsCheck); } private void preCheckPermission() throws IOException, AccessDeniedException { @@ -1754,10 +1779,31 @@ public class HBaseFsck extends Configured { } private void undeployRegions(HbckInfo hi) throws IOException, InterruptedException { + undeployRegionsForHbi(hi); + // undeploy replicas of the region (but only if the method is invoked for the primary) + if (hi.getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID) { + return; + } + int numReplicas = admin.getTableDescriptor(hi.getTableName()).getRegionReplication(); + for (int i = 1; i < numReplicas; i++) { + if (hi.getPrimaryHRIForDeployedReplica() == null) continue; + HRegionInfo hri = RegionReplicaUtil.getRegionInfoForReplica( + hi.getPrimaryHRIForDeployedReplica(), i); + HbckInfo h = regionInfoMap.get(hri.getEncodedName()); + if (h != null) { + undeployRegionsForHbi(h); + //set skip checks; we undeployed it, and we don't want to evaluate this anymore + //in consistency checks + h.setSkipChecks(true); + } + } + } + + private void undeployRegionsForHbi(HbckInfo hi) throws IOException, InterruptedException { for (OnlineEntry rse : hi.deployedEntries) { LOG.debug("Undeploy region " + rse.hri + " from " + rse.hsa); try { - HBaseFsckRepair.closeRegionSilentlyAndWait(admin, rse.hsa, rse.hri); + HBaseFsckRepair.closeRegionSilentlyAndWait(connection, rse.hsa, rse.hri); offline(rse.hri.getRegionName()); } catch (IOException ioe) { LOG.warn("Got exception when attempting to offline region " @@ -1789,27 +1835,41 @@ public class HBaseFsck extends Configured { get.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER); get.addColumn(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER); get.addColumn(HConstants.CATALOG_FAMILY, HConstants.STARTCODE_QUALIFIER); + // also get the locations of the replicas to close if the primary region is being closed + if (hi.getReplicaId() == HRegionInfo.DEFAULT_REPLICA_ID) { + int numReplicas = admin.getTableDescriptor(hi.getTableName()).getRegionReplication(); + for (int i = 0; i < numReplicas; i++) { + get.addColumn(HConstants.CATALOG_FAMILY, MetaTableAccessor.getServerColumn(i)); + get.addColumn(HConstants.CATALOG_FAMILY, MetaTableAccessor.getStartCodeColumn(i)); + } + } Result r = meta.get(get); - ServerName serverName = HRegionInfo.getServerName(r); - if (serverName == null) { - errors.reportError("Unable to close region " - + hi.getRegionNameAsString() + " because meta does not " - + "have handle to reach it."); + RegionLocations rl = MetaTableAccessor.getRegionLocations(r); + if (rl == null) { + LOG.warn("Unable to close region " + hi.getRegionNameAsString() + + " since meta does not have handle to reach it"); return; } - - HRegionInfo hri = HRegionInfo.getHRegionInfo(r); - if (hri == null) { - LOG.warn("Unable to close region " + hi.getRegionNameAsString() - + " because hbase:meta had invalid or missing " - + HConstants.CATALOG_FAMILY_STR + ":" - + Bytes.toString(HConstants.REGIONINFO_QUALIFIER) - + " qualifier value."); - return; + for (HRegionLocation h : rl.getRegionLocations()) { + ServerName serverName = h.getServerName(); + if (serverName == null) { + errors.reportError("Unable to close region " + + hi.getRegionNameAsString() + " because meta does not " + + "have handle to reach it."); + continue; + } + HRegionInfo hri = 
h.getRegionInfo(); + if (hri == null) { + LOG.warn("Unable to close region " + hi.getRegionNameAsString() + + " because hbase:meta had invalid or missing " + + HConstants.CATALOG_FAMILY_STR + ":" + + Bytes.toString(HConstants.REGIONINFO_QUALIFIER) + + " qualifier value."); + continue; + } + // close the region -- close files and remove assignment + HBaseFsckRepair.closeRegionSilentlyAndWait(connection, serverName, hri); } - - // close the region -- close files and remove assignment - HBaseFsckRepair.closeRegionSilentlyAndWait(admin, serverName, hri); } private void tryAssignmentRepair(HbckInfo hbi, String msg) throws IOException, @@ -1825,6 +1885,23 @@ public class HBaseFsck extends Configured { } HBaseFsckRepair.fixUnassigned(admin, hri); HBaseFsckRepair.waitUntilAssigned(admin, hri); + + // also assign replicas if needed (do it only when this call operates on a primary replica) + if (hbi.getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID) return; + int replicationCount = admin.getTableDescriptor(hri.getTable()).getRegionReplication(); + for (int i = 1; i < replicationCount; i++) { + hri = RegionReplicaUtil.getRegionInfoForReplica(hri, i); + HbckInfo h = regionInfoMap.get(hri.getEncodedName()); + if (h != null) { + undeployRegions(h); + //set skip checks; we undeploy & deploy it; we don't want to evaluate this hbi anymore + //in consistency checks + h.setSkipChecks(true); + } + HBaseFsckRepair.fixUnassigned(admin, hri); + HBaseFsckRepair.waitUntilAssigned(admin, hri); + } + } } @@ -1833,8 +1910,9 @@ public class HBaseFsck extends Configured { */ private void checkRegionConsistency(final String key, final HbckInfo hbi) throws IOException, KeeperException, InterruptedException { - String descriptiveName = hbi.toString(); + if (hbi.isSkipChecks()) return; + String descriptiveName = hbi.toString(); boolean inMeta = hbi.metaEntry != null; // In case not checking HDFS, assume the region is on HDFS boolean inHdfs = !shouldCheckHdfs() || hbi.getHdfsRegionDir() != null; @@ -1854,7 +1932,6 @@ public class HBaseFsck extends Configured { if (hbi.containsOnlyHdfsEdits()) { return; } - if (hbi.isSkipChecks()) return; if (inMeta && inHdfs && isDeployed && deploymentMatchesMeta && shouldBeDeployed) { return; } else if (inMeta && inHdfs && !shouldBeDeployed && !isDeployed) { @@ -1899,7 +1976,9 @@ public class HBaseFsck extends Configured { } LOG.info("Patching hbase:meta with .regioninfo: " + hbi.getHdfsHRI()); - HBaseFsckRepair.fixMetaHoleOnline(getConf(), hbi.getHdfsHRI()); + int numReplicas = admin.getTableDescriptor(hbi.getTableName()).getRegionReplication(); + HBaseFsckRepair.fixMetaHoleOnlineAndAddReplicas(getConf(), hbi.getHdfsHRI(), + admin.getClusterStatus().getServers(), numReplicas); tryAssignmentRepair(hbi, "Trying to reassign region..."); } @@ -1908,15 +1987,25 @@ public class HBaseFsck extends Configured { errors.reportError(ERROR_CODE.NOT_IN_META, "Region " + descriptiveName + " not in META, but deployed on " + Joiner.on(", ").join(hbi.deployedOn)); debugLsr(hbi.getHdfsRegionDir()); - if (shouldFixMeta()) { + if (hbi.getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID) { + // for replicas, this means that we should undeploy the region (we would have + // gone over the primaries and fixed meta holes in first phase under + // checkAndFixConsistency; we shouldn't get the condition !inMeta at + // this stage unless unwanted replica) + if (shouldFixAssignments()) { + undeployRegionsForHbi(hbi); + } + } + if (shouldFixMeta() && hbi.getReplicaId() == HRegionInfo.DEFAULT_REPLICA_ID) { if 
(!hbi.isHdfsRegioninfoPresent()) { LOG.error("This should have been repaired in table integrity repair phase"); return; } LOG.info("Patching hbase:meta with with .regioninfo: " + hbi.getHdfsHRI()); - HBaseFsckRepair.fixMetaHoleOnline(getConf(), hbi.getHdfsHRI()); - + int numReplicas = admin.getTableDescriptor(hbi.getTableName()).getRegionReplication(); + HBaseFsckRepair.fixMetaHoleOnlineAndAddReplicas(getConf(), hbi.getHdfsHRI(), + admin.getClusterStatus().getServers(), numReplicas); tryAssignmentRepair(hbi, "Trying to fix unassigned region..."); } @@ -1974,7 +2063,7 @@ public class HBaseFsck extends Configured { if (shouldFixAssignments()) { errors.print("Trying to close the region " + descriptiveName); setShouldRerun(); - HBaseFsckRepair.fixMultiAssignment(admin, hbi.metaEntry, hbi.deployedOn); + HBaseFsckRepair.fixMultiAssignment(connection, hbi.metaEntry, hbi.deployedOn); } } else if (inMeta && inHdfs && isMultiplyDeployed) { errors.reportError(ERROR_CODE.MULTI_DEPLOYED, "Region " + descriptiveName @@ -1985,7 +2074,7 @@ public class HBaseFsck extends Configured { if (shouldFixAssignments()) { errors.print("Trying to fix assignment error..."); setShouldRerun(); - HBaseFsckRepair.fixMultiAssignment(admin, hbi.metaEntry, hbi.deployedOn); + HBaseFsckRepair.fixMultiAssignment(connection, hbi.metaEntry, hbi.deployedOn); } } else if (inMeta && inHdfs && isDeployed && !deploymentMatchesMeta) { errors.reportError(ERROR_CODE.SERVER_DOES_NOT_MATCH_META, "Region " @@ -1996,7 +2085,7 @@ public class HBaseFsck extends Configured { if (shouldFixAssignments()) { errors.print("Trying to fix assignment error..."); setShouldRerun(); - HBaseFsckRepair.fixMultiAssignment(admin, hbi.metaEntry, hbi.deployedOn); + HBaseFsckRepair.fixMultiAssignment(connection, hbi.metaEntry, hbi.deployedOn); HBaseFsckRepair.waitUntilAssigned(admin, hbi.getHdfsHRI()); } } else { @@ -2236,7 +2325,8 @@ public class HBaseFsck extends Configured { public void addRegionInfo(HbckInfo hir) { if (Bytes.equals(hir.getEndKey(), HConstants.EMPTY_END_ROW)) { // end key is absolute end key, just add it. 
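
The replica-aware repair paths above (undeployRegions, tryAssignmentRepair, and the split-calculator changes) all rely on deriving secondary HRegionInfo objects from the primary and filtering on the default replica id. A minimal sketch of those RegionReplicaUtil calls, with a hypothetical table name and replica count:

    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.RegionReplicaUtil;

    public class ReplicaInfoSketch {
      public static void main(String[] args) {
        // Hypothetical primary region (replica id 0) and replication count.
        HRegionInfo primary = new HRegionInfo(TableName.valueOf("example_table"));
        int regionReplication = 3;
        for (int i = 1; i < regionReplication; i++) {
          // Same boundaries as the primary, but a non-default replica id.
          HRegionInfo replica = RegionReplicaUtil.getRegionInfoForReplica(primary, i);
          assert replica.getReplicaId() == i;
          assert !RegionReplicaUtil.isDefaultReplica(replica);
          // Mapping any replica back to its primary, as HbckInfo.addServer does above.
          HRegionInfo back = RegionReplicaUtil.getRegionInfoForDefaultReplica(replica);
          assert back.getReplicaId() == HRegionInfo.DEFAULT_REPLICA_ID;
        }
      }
    }
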
- sc.add(hir); + // ignore replicas other than primary for these checks + if (hir.getReplicaId() == HRegionInfo.DEFAULT_REPLICA_ID) sc.add(hir); return; } @@ -2253,7 +2343,8 @@ public class HBaseFsck extends Configured { } // main case, add to split calculator - sc.add(hir); + // ignore replicas other than primary for these checks + if (hir.getReplicaId() == HRegionInfo.DEFAULT_REPLICA_ID) sc.add(hir); } public void addServer(ServerName server) { @@ -2628,8 +2719,10 @@ public class HBaseFsck extends Configured { ArrayList subRange = new ArrayList(ranges); // this dumb and n^2 but this shouldn't happen often for (HbckInfo r1 : ranges) { + if (r1.getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID) continue; subRange.remove(r1); for (HbckInfo r2 : subRange) { + if (r2.getReplicaId() != HRegionInfo.DEFAULT_REPLICA_ID) continue; if (Bytes.compareTo(r1.getStartKey(), r2.getStartKey())==0) { handler.handleDuplicateStartKeys(r1,r2); } else { @@ -2901,7 +2994,7 @@ public class HBaseFsck extends Configured { errors.print("Trying to fix a problem with hbase:meta.."); setShouldRerun(); // try fix it (treat is a dupe assignment) - HBaseFsckRepair.fixMultiAssignment(admin, metaHbckInfo.metaEntry, servers); + HBaseFsckRepair.fixMultiAssignment(connection, metaHbckInfo.metaEntry, servers); } } // rerun hbck with hopefully fixed META @@ -2933,33 +3026,49 @@ public class HBaseFsck extends Configured { // record the latest modification of this META record long ts = Collections.max(result.listCells(), comp).getTimestamp(); - Pair pair = HRegionInfo.getHRegionInfoAndServerName(result); - if (pair == null || pair.getFirst() == null) { + RegionLocations rl = MetaTableAccessor.getRegionLocations(result); + if (rl == null) { emptyRegionInfoQualifiers.add(result); errors.reportError(ERROR_CODE.EMPTY_META_CELL, "Empty REGIONINFO_QUALIFIER found in hbase:meta"); return true; } ServerName sn = null; - if (pair.getSecond() != null) { - sn = pair.getSecond(); + if (rl.getRegionLocation(HRegionInfo.DEFAULT_REPLICA_ID) == null || + rl.getRegionLocation(HRegionInfo.DEFAULT_REPLICA_ID).getRegionInfo() == null) { + emptyRegionInfoQualifiers.add(result); + errors.reportError(ERROR_CODE.EMPTY_META_CELL, + "Empty REGIONINFO_QUALIFIER found in hbase:meta"); + return true; } - HRegionInfo hri = pair.getFirst(); + HRegionInfo hri = rl.getRegionLocation(HRegionInfo.DEFAULT_REPLICA_ID).getRegionInfo(); if (!(isTableIncluded(hri.getTable()) || hri.isMetaRegion())) { return true; } PairOfSameType daughters = HRegionInfo.getDaughterRegions(result); - MetaEntry m = new MetaEntry(hri, sn, ts, daughters.getFirst(), daughters.getSecond()); - HbckInfo previous = regionInfoMap.get(hri.getEncodedName()); - if (previous == null) { - regionInfoMap.put(hri.getEncodedName(), new HbckInfo(m)); - } else if (previous.metaEntry == null) { - previous.metaEntry = m; - } else { - throw new IOException("Two entries in hbase:meta are same " + previous); + for (HRegionLocation h : rl.getRegionLocations()) { + if (h == null || h.getRegionInfo() == null) { + continue; + } + sn = h.getServerName(); + hri = h.getRegionInfo(); + + MetaEntry m = null; + if (hri.getReplicaId() == HRegionInfo.DEFAULT_REPLICA_ID) { + m = new MetaEntry(hri, sn, ts, daughters.getFirst(), daughters.getSecond()); + } else { + m = new MetaEntry(hri, sn, ts, null, null); + } + HbckInfo previous = regionInfoMap.get(hri.getEncodedName()); + if (previous == null) { + regionInfoMap.put(hri.getEncodedName(), new HbckInfo(m)); + } else if (previous.metaEntry == null) { + previous.metaEntry = m; 
+ } else { + throw new IOException("Two entries in hbase:meta are same " + previous); + } } - PairOfSameType mergeRegions = HRegionInfo.getMergeRegions(result); for (HRegionInfo mergeRegion : new HRegionInfo[] { mergeRegions.getFirst(), mergeRegions.getSecond() }) { @@ -2984,7 +3093,7 @@ public class HBaseFsck extends Configured { }; if (!checkMetaOnly) { // Scan hbase:meta to pick up user regions - MetaScanner.metaScan(getConf(), visitor); + MetaScanner.metaScan(connection, visitor); } errors.print(""); @@ -3077,17 +3186,28 @@ public class HBaseFsck extends Configured { private List deployedOn = Lists.newArrayList(); // info on RS's private boolean skipChecks = false; // whether to skip further checks to this region info. private boolean isMerged = false;// whether this region has already been merged into another one + private int deployedReplicaId = HRegionInfo.DEFAULT_REPLICA_ID; + private HRegionInfo primaryHRIForDeployedReplica = null; HbckInfo(MetaEntry metaEntry) { this.metaEntry = metaEntry; } + public int getReplicaId() { + if (metaEntry != null) return metaEntry.getReplicaId(); + return deployedReplicaId; + } + public synchronized void addServer(HRegionInfo hri, ServerName server) { OnlineEntry rse = new OnlineEntry() ; rse.hri = hri; rse.hsa = server; this.deployedEntries.add(rse); this.deployedOn.add(server); + // save the replicaId that we see deployed in the cluster + this.deployedReplicaId = hri.getReplicaId(); + this.primaryHRIForDeployedReplica = + RegionReplicaUtil.getRegionInfoForDefaultReplica(hri); } @Override @@ -3097,6 +3217,7 @@ public class HBaseFsck extends Configured { sb.append((metaEntry != null)? metaEntry.getRegionNameAsString() : "null"); sb.append( ", hdfs => " + getHdfsRegionDir()); sb.append( ", deployed => " + Joiner.on(", ").join(deployedEntries)); + sb.append( ", replicaId => " + getReplicaId()); sb.append(" }"); return sb.toString(); } @@ -3134,8 +3255,10 @@ public class HBaseFsck extends Configured { Path tableDir = this.hdfsEntry.hdfsRegionDir.getParent(); return FSUtils.getTableName(tableDir); } else { - // Currently no code exercises this path, but we could add one for - // getting table name from OnlineEntry + // return the info from the first online/deployed hri + for (OnlineEntry e : deployedEntries) { + return e.hri.getTable(); + } return null; } } @@ -3147,6 +3270,11 @@ public class HBaseFsck extends Configured { if (hdfsEntry.hri != null) { return hdfsEntry.hri.getRegionNameAsString(); } + } else { + // return the info from the first online/deployed hri + for (OnlineEntry e : deployedEntries) { + return e.hri.getRegionNameAsString(); + } } return null; } @@ -3157,10 +3285,18 @@ public class HBaseFsck extends Configured { } else if (hdfsEntry != null) { return hdfsEntry.hri.getRegionName(); } else { + // return the info from the first online/deployed hri + for (OnlineEntry e : deployedEntries) { + return e.hri.getRegionName(); + } return null; } } + public HRegionInfo getPrimaryHRIForDeployedReplica() { + return primaryHRIForDeployedReplica; + } + Path getHdfsRegionDir() { if (hdfsEntry == null) { return null; @@ -3489,7 +3625,6 @@ public class HBaseFsck extends Configured { // check to see if the existence of this region matches the region in META for (HRegionInfo r:regions) { HbckInfo hbi = hbck.getOrCreateInfo(r.getEncodedName()); - if (!RegionReplicaUtil.isDefaultReplica(r)) hbi.setSkipChecks(true); hbi.addServer(r, rsinfo); } } catch (IOException e) { // unable to connect to the region server. 
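
The meta visitor above now records every replica location found in a row instead of the single (HRegionInfo, ServerName) pair used before. A condensed sketch of that parsing, assuming the Result comes from an hbase:meta scan; the class and helper names are invented:

    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.MetaTableAccessor;
    import org.apache.hadoop.hbase.RegionLocations;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Result;

    public class MetaReplicaSketch {
      /** Prints every deployed replica recorded in one hbase:meta row. Illustrative only. */
      static void dumpReplicaLocations(Result metaRow) {
        RegionLocations rl = MetaTableAccessor.getRegionLocations(metaRow);
        if (rl == null) {
          return; // empty REGIONINFO_QUALIFIER, reported as EMPTY_META_CELL above
        }
        for (HRegionLocation location : rl.getRegionLocations()) {
          if (location == null || location.getRegionInfo() == null) continue;
          HRegionInfo hri = location.getRegionInfo();
          ServerName sn = location.getServerName(); // may be null for an unassigned replica
          System.out.println("replicaId=" + hri.getReplicaId()
              + " region=" + hri.getRegionNameAsString() + " server=" + sn);
        }
      }
    }
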
@@ -3629,7 +3764,7 @@ public class HBaseFsck extends Configured { * Display the full report from fsck. This displays all live and dead region * servers, and all known regions. */ - public static void setDisplayFullReport() { + public void setDisplayFullReport() { details = true; } @@ -3954,6 +4089,7 @@ public class HBaseFsck extends Configured { public int run(String[] args) throws Exception { HBaseFsck hbck = new HBaseFsck(getConf()); hbck.exec(hbck.executor, args); + hbck.close(); return hbck.getRetCode(); } }; @@ -4172,7 +4308,7 @@ public class HBaseFsck extends Configured { setRetCode(code); } } finally { - IOUtils.cleanup(null, connection, meta, admin); + IOUtils.cleanup(null, this); } return this; } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java index bef990c..6660408 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java @@ -18,25 +18,23 @@ */ package org.apache.hadoop.hbase.util; -import java.io.IOException; -import java.util.List; -import java.util.Map; - import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.ZooKeeperConnectionException; -import org.apache.hadoop.hbase.MetaTableAccessor; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.Admin; -import org.apache.hadoop.hbase.client.HBaseAdmin; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.HConnection; import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.master.RegionState; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; @@ -44,6 +42,12 @@ import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.AdminService; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.zookeeper.KeeperException; +import java.io.IOException; +import java.util.Collection; +import java.util.List; +import java.util.Map; +import java.util.Random; + /** * This class contains helper methods that repair parts of hbase's filesystem * contents. @@ -57,22 +61,22 @@ public class HBaseFsckRepair { * and then force ZK unassigned node to OFFLINE to trigger assignment by * master. 
* - * @param admin HBase admin used to undeploy + * @param connection HBase connection to the cluster * @param region Region to undeploy * @param servers list of Servers to undeploy from */ - public static void fixMultiAssignment(HBaseAdmin admin, HRegionInfo region, + public static void fixMultiAssignment(HConnection connection, HRegionInfo region, List servers) throws IOException, KeeperException, InterruptedException { HRegionInfo actualRegion = new HRegionInfo(region); // Close region on the servers silently for(ServerName server : servers) { - closeRegionSilentlyAndWait(admin, server, actualRegion); + closeRegionSilentlyAndWait(connection, server, actualRegion); } // Force ZK node to OFFLINE so master assigns - forceOfflineInZK(admin, actualRegion); + forceOfflineInZK(connection.getAdmin(), actualRegion); } /** @@ -146,16 +150,15 @@ public class HBaseFsckRepair { * (default 120s) to close the region. This bypasses the active hmaster. */ @SuppressWarnings("deprecation") - public static void closeRegionSilentlyAndWait(HBaseAdmin admin, + public static void closeRegionSilentlyAndWait(HConnection connection, ServerName server, HRegionInfo region) throws IOException, InterruptedException { - HConnection connection = admin.getConnection(); AdminService.BlockingInterface rs = connection.getAdmin(server); try { - ProtobufUtil.closeRegion(rs, server, region.getRegionName(), false); + ProtobufUtil.closeRegion(rs, server, region.getRegionName()); } catch (IOException e) { LOG.warn("Exception when closing region: " + region.getRegionNameAsString(), e); } - long timeout = admin.getConfiguration() + long timeout = connection.getConfiguration() .getLong("hbase.hbck.close.timeout", 120000); long expiration = timeout + System.currentTimeMillis(); while (System.currentTimeMillis() < expiration) { @@ -173,13 +176,28 @@ public class HBaseFsckRepair { } /** - * Puts the specified HRegionInfo into META. + * Puts the specified HRegionInfo into META with replica related columns */ - public static void fixMetaHoleOnline(Configuration conf, - HRegionInfo hri) throws IOException { - Table meta = new HTable(conf, TableName.META_TABLE_NAME); - MetaTableAccessor.addRegionToMeta(meta, hri); + public static void fixMetaHoleOnlineAndAddReplicas(Configuration conf, + HRegionInfo hri, Collection servers, int numReplicas) throws IOException { + Connection conn = ConnectionFactory.createConnection(conf); + Table meta = conn.getTable(TableName.META_TABLE_NAME); + Put put = MetaTableAccessor.makePutFromRegionInfo(hri); + if (numReplicas > 1) { + Random r = new Random(); + ServerName[] serversArr = servers.toArray(new ServerName[servers.size()]); + for (int i = 1; i < numReplicas; i++) { + ServerName sn = serversArr[r.nextInt(serversArr.length)]; + // the column added here is just to make sure the master is able to + // see the additional replicas when it is asked to assign. The + // final value of these columns will be different and will be updated + // by the actual regionservers that start hosting the respective replicas + MetaTableAccessor.addLocation(put, sn, sn.getStartcode(), i); + } + } + meta.put(put); meta.close(); + conn.close(); } /** @@ -192,7 +210,7 @@ public class HBaseFsckRepair { HRegion region = HRegion.createHRegion(hri, root, conf, htd, null); // Close the new region to flush to disk. Close log file too. 
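
The HBaseFsck call sites above show how the new repair entry point is driven when a meta hole is patched for a table with region replicas. A condensed sketch of that invocation; the class and method names here are invented:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.util.HBaseFsckRepair;

    public class MetaHoleRepairSketch {
      /** Mirrors the HBaseFsck usage above; not a standalone repair tool. */
      static void patchMetaHole(Configuration conf, Admin admin, HRegionInfo hri) throws IOException {
        int numReplicas = admin.getTableDescriptor(hri.getTable()).getRegionReplication();
        HBaseFsckRepair.fixMetaHoleOnlineAndAddReplicas(conf, hri,
            admin.getClusterStatus().getServers(), numReplicas);
        // The replica location columns written here are only placeholders so the master can
        // see and assign the replicas; the hosting region servers overwrite them later.
      }
    }
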
- region.close(); + HRegion.closeHRegion(region); return region; } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/HFileV1Detector.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/HFileV1Detector.java index 51bd117..faced06 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/HFileV1Detector.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/HFileV1Detector.java @@ -365,7 +365,7 @@ public class HFileV1Detector extends Configured implements Tool { * @throws IOException */ public FileLink getFileLinkWithPreNSPath(Path storeFilePath) throws IOException { - HFileLink link = new HFileLink(getConf(), storeFilePath); + HFileLink link = HFileLink.buildFromHFileLinkPattern(getConf(), storeFilePath); List pathsToProcess = getPreNSPathsForHFileLink(link); pathsToProcess.addAll(Arrays.asList(link.getLocations())); return new FileLink(pathsToProcess); @@ -383,7 +383,7 @@ public class HFileV1Detector extends Configured implements Tool { /** * Removes the prefix of defaultNamespace from the path. - * @param originPath + * @param originalPath */ private String removeDefaultNSPath(Path originalPath) { String pathStr = originalPath.toString(); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/HMerge.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/HMerge.java index 2e0f53d..c577abf 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/HMerge.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/HMerge.java @@ -30,13 +30,12 @@ import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.RemoteExceptionHandler; -import org.apache.hadoop.hbase.TableNotDisabledException; import org.apache.hadoop.hbase.MetaTableAccessor; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.TableNotDisabledException; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.HBaseAdmin; @@ -49,6 +48,7 @@ import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.wal.WALFactory; +import org.apache.hadoop.ipc.RemoteException; /** * A non-instantiable class that has a static method capable of compacting @@ -155,7 +155,8 @@ class HMerge { this.rootDir = FSUtils.getRootDir(conf); Path tabledir = FSUtils.getTableDir(this.rootDir, tableName); - this.htd = FSTableDescriptors.getTableDescriptorFromFs(this.fs, tabledir); + this.htd = FSTableDescriptors.getTableDescriptorFromFs(this.fs, tabledir) + .getHTableDescriptor(); String logname = "merge_" + System.currentTimeMillis() + HConstants.HREGION_LOGDIR_NAME; final Configuration walConf = new Configuration(conf); @@ -267,7 +268,7 @@ class HMerge { } return region; } catch (IOException e) { - e = RemoteExceptionHandler.checkIOException(e); + e = e instanceof RemoteException ? 
((RemoteException) e).unwrapRemoteException() : e; LOG.error("meta scanner error", e); metaScanner.close(); throw e; diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/Merge.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/Merge.java index 783341e..6002f29 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/Merge.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/Merge.java @@ -29,6 +29,7 @@ import org.apache.hadoop.conf.Configured; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseInterfaceAudience; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; @@ -41,7 +42,6 @@ import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.io.WritableComparator; import org.apache.hadoop.util.GenericOptionsParser; import org.apache.hadoop.util.Tool; @@ -153,9 +153,9 @@ public class Merge extends Configured implements Tool { if (info2 == null) { throw new NullPointerException("info2 is null using key " + meta); } - HTableDescriptor htd = FSTableDescriptors.getTableDescriptorFromFs(FileSystem.get(getConf()), + TableDescriptor htd = FSTableDescriptors.getTableDescriptorFromFs(FileSystem.get(getConf()), this.rootdir, this.tableName); - HRegion merged = merge(htd, meta, info1, info2); + HRegion merged = merge(htd.getHTableDescriptor(), meta, info1, info2); LOG.info("Adding " + merged.getRegionInfo() + " to " + meta.getRegionInfo()); diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSizeCalculator.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSizeCalculator.java index 92c4410..4f7c0a5 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSizeCalculator.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSizeCalculator.java @@ -68,7 +68,7 @@ public class RegionSizeCalculator { public RegionSizeCalculator(HTable table) throws IOException { HBaseAdmin admin = new HBaseAdmin(table.getConfiguration()); try { - init(table, admin); + init(table.getRegionLocator(), admin); } finally { admin.close(); } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java index c926d54..977593d 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java @@ -1115,4 +1115,4 @@ public class RegionSplitter { + "," + rowToStr(lastRow()) + "]"; } } -} +} \ No newline at end of file diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/ServerRegionReplicaUtil.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/ServerRegionReplicaUtil.java index 237e316..cf87219 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/ServerRegionReplicaUtil.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/ServerRegionReplicaUtil.java @@ -25,9 +25,15 @@ import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.client.RegionReplicaUtil; +import 
org.apache.hadoop.hbase.client.replication.ReplicationAdmin; import org.apache.hadoop.hbase.io.HFileLink; +import org.apache.hadoop.hbase.io.Reference; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.StoreFileInfo; +import org.apache.hadoop.hbase.replication.ReplicationException; +import org.apache.hadoop.hbase.replication.ReplicationPeerConfig; +import org.apache.hadoop.hbase.replication.regionserver.RegionReplicaReplicationEndpoint; +import org.apache.hadoop.hbase.zookeeper.ZKUtil; /** * Similar to {@link RegionReplicaUtil} but for the server side @@ -35,6 +41,21 @@ import org.apache.hadoop.hbase.regionserver.StoreFileInfo; public class ServerRegionReplicaUtil extends RegionReplicaUtil { /** + * Whether asynchronous WAL replication to the secondary region replicas is enabled or not. + * If this is enabled, a replication peer named "region_replica_replication" will be created + * which will tail the logs and replicate the mutatations to region replicas for tables that + * have region replication > 1. If this is enabled once, disabling this replication also + * requires disabling the replication peer using shell or ReplicationAdmin java class. + * Replication to secondary region replicas works over standard inter-cluster replication.· + * So replication, if disabled explicitly, also has to be enabled by setting "hbase.replication"· + * to true for this feature to work. + */ + public static final String REGION_REPLICA_REPLICATION_CONF_KEY + = "hbase.region.replica.replication.enabled"; + private static final boolean DEFAULT_REGION_REPLICA_REPLICATION = false; + private static final String REGION_REPLICA_REPLICATION_PEER = "region_replica_replication"; + + /** * Returns the regionInfo object to use for interacting with the file system. * @return An HRegionInfo object to interact with the filesystem */ @@ -83,11 +104,46 @@ public class ServerRegionReplicaUtil extends RegionReplicaUtil { return new StoreFileInfo(conf, fs, status); } + if (StoreFileInfo.isReference(status.getPath())) { + Reference reference = Reference.read(fs, status.getPath()); + return new StoreFileInfo(conf, fs, status, reference); + } + // else create a store file link. The link file does not exists on filesystem though. - HFileLink link = new HFileLink(conf, - HFileLink.createPath(regionInfoForFs.getTable(), regionInfoForFs.getEncodedName() - , familyName, status.getPath().getName())); + HFileLink link = HFileLink.build(conf, regionInfoForFs.getTable(), + regionInfoForFs.getEncodedName(), familyName, status.getPath().getName()); return new StoreFileInfo(conf, fs, status, link); } + /** + * Create replication peer for replicating to region replicas if needed. 
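
Taken together with the constants above, enabling asynchronous WAL replication to secondary replicas amounts to a configuration change plus a table whose region replication is greater than one. A rough sketch, where the class and table names are hypothetical and the property keys come from the javadoc and constants above:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RegionReplicaReplicationSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Both flags matter: the feature rides on the standard inter-cluster replication
        // machinery, so "hbase.replication" must not be explicitly disabled.
        conf.setBoolean("hbase.replication", true);
        conf.setBoolean("hbase.region.replica.replication.enabled", true);

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
          // Hypothetical table; only tables with region replication > 1 get replicated copies.
          HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("replicated_table"));
          htd.addFamily(new HColumnDescriptor("f"));
          htd.setRegionReplication(2);
          admin.createTable(htd);
        }
      }
    }
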
+ * @param conf configuration to use + * @throws IOException + */ + public static void setupRegionReplicaReplication(Configuration conf) throws IOException { + if (!conf.getBoolean(REGION_REPLICA_REPLICATION_CONF_KEY, DEFAULT_REGION_REPLICA_REPLICATION)) { + return; + } + ReplicationAdmin repAdmin = new ReplicationAdmin(conf); + try { + if (repAdmin.getPeerConfig(REGION_REPLICA_REPLICATION_PEER) == null) { + ReplicationPeerConfig peerConfig = new ReplicationPeerConfig(); + peerConfig.setClusterKey(ZKUtil.getZooKeeperClusterKey(conf)); + peerConfig.setReplicationEndpointImpl(RegionReplicaReplicationEndpoint.class.getName()); + repAdmin.addPeer(REGION_REPLICA_REPLICATION_PEER, peerConfig, null); + } + } catch (ReplicationException ex) { + throw new IOException(ex); + } finally { + repAdmin.close(); + } + } + + /** + * Return the peer id used for replicating to secondary region replicas + */ + public static String getReplicationPeerId() { + return REGION_REPLICA_REPLICATION_PEER; + } + } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/util/ZKDataMigrator.java hbase-server/src/main/java/org/apache/hadoop/hbase/util/ZKDataMigrator.java index f773b06..85f8d6a 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/util/ZKDataMigrator.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/util/ZKDataMigrator.java @@ -17,252 +17,101 @@ */ package org.apache.hadoop.hbase.util; -import java.io.IOException; +import java.util.HashMap; import java.util.List; +import java.util.Map; +import com.google.protobuf.InvalidProtocolBufferException; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.conf.Configured; -import org.apache.hadoop.hbase.Abortable; -import org.apache.hadoop.hbase.HBaseConfiguration; -import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.master.snapshot.SnapshotManager; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.client.TableState; +import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.ReplicationPeer; -import org.apache.hadoop.hbase.replication.ReplicationStateZKBase; import org.apache.hadoop.hbase.zookeeper.ZKUtil; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.hadoop.util.Tool; -import org.apache.hadoop.util.ToolRunner; import org.apache.zookeeper.KeeperException; -import org.apache.zookeeper.KeeperException.NoNodeException; /** - * Tool to migrate zookeeper data of older hbase versions(<0.95.0) to PB. + * utlity method to migrate zookeeper data across HBase versions. */ -public class ZKDataMigrator extends Configured implements Tool { +@InterfaceAudience.Private +public class ZKDataMigrator { private static final Log LOG = LogFactory.getLog(ZKDataMigrator.class); - @Override - public int run(String[] as) throws Exception { - Configuration conf = getConf(); - ZooKeeperWatcher zkw = null; - try { - zkw = new ZooKeeperWatcher(getConf(), "Migrate ZK data to PB.", - new ZKDataMigratorAbortable()); - if (ZKUtil.checkExists(zkw, zkw.baseZNode) == -1) { - LOG.info("No hbase related data available in zookeeper. 
returning.."); - return 0; - } - List children = ZKUtil.listChildrenNoWatch(zkw, zkw.baseZNode); - if (children == null) { - LOG.info("No child nodes to mirgrate. returning.."); - return 0; - } - String childPath = null; - for (String child : children) { - childPath = ZKUtil.joinZNode(zkw.baseZNode, child); - if (child.equals(conf.get("zookeeper.znode.rootserver", "root-region-server"))) { - // -ROOT- region no longer present from 0.95.0, so we can remove this - // znode - ZKUtil.deleteNodeRecursively(zkw, childPath); - // TODO delete root table path from file system. - } else if (child.equals(conf.get("zookeeper.znode.rs", "rs"))) { - // Since there is no live region server instance during migration, we - // can remove this znode as well. - ZKUtil.deleteNodeRecursively(zkw, childPath); - } else if (child.equals(conf.get("zookeeper.znode.draining.rs", "draining"))) { - // If we want to migrate to 0.95.0 from older versions we need to stop - // the existing cluster. So there wont be any draining servers so we - // can - // remove it. - ZKUtil.deleteNodeRecursively(zkw, childPath); - } else if (child.equals(conf.get("zookeeper.znode.master", "master"))) { - // Since there is no live master instance during migration, we can - // remove this znode as well. - ZKUtil.deleteNodeRecursively(zkw, childPath); - } else if (child.equals(conf.get("zookeeper.znode.backup.masters", "backup-masters"))) { - // Since there is no live backup master instances during migration, we - // can remove this znode as well. - ZKUtil.deleteNodeRecursively(zkw, childPath); - } else if (child.equals(conf.get("zookeeper.znode.state", "shutdown"))) { - // shutdown node is not present from 0.95.0 onwards. Its renamed to - // "running". We can delete it. - ZKUtil.deleteNodeRecursively(zkw, childPath); - } else if (child.equals(conf.get("zookeeper.znode.unassigned", "unassigned"))) { - // Any way during clean cluster startup we will remove all unassigned - // region nodes. we can delete all children nodes as well. This znode - // is - // renamed to "regions-in-transition" from 0.95.0 onwards. - ZKUtil.deleteNodeRecursively(zkw, childPath); - } else if (child.equals(conf.get("zookeeper.znode.tableEnableDisable", "table")) - || child.equals(conf.get("zookeeper.znode.masterTableEnableDisable", "table"))) { - checkAndMigrateTableStatesToPB(zkw); - } else if (child.equals(conf.get("zookeeper.znode.masterTableEnableDisable92", - "table92"))) { - // This is replica of table states from tableZnode so we can remove - // this. - ZKUtil.deleteNodeRecursively(zkw, childPath); - } else if (child.equals(conf.get("zookeeper.znode.splitlog", "splitlog"))) { - // This znode no longer available from 0.95.0 onwards, we can remove - // it. - ZKUtil.deleteNodeRecursively(zkw, childPath); - } else if (child.equals(conf.get("zookeeper.znode.replication", "replication"))) { - checkAndMigrateReplicationNodesToPB(zkw); - } else if (child.equals(conf.get("zookeeper.znode.clusterId", "hbaseid"))) { - // it will be re-created by master. - ZKUtil.deleteNodeRecursively(zkw, childPath); - } else if (child.equals(SnapshotManager.ONLINE_SNAPSHOT_CONTROLLER_DESCRIPTION)) { - // not needed as it is transient. 
- ZKUtil.deleteNodeRecursively(zkw, childPath); - } else if (child.equals(conf.get("zookeeper.znode.acl.parent", "acl"))) { - // it will be re-created when hbase:acl is re-opened - ZKUtil.deleteNodeRecursively(zkw, childPath); - } - } - } catch (Exception e) { - LOG.error("Got exception while updating znodes ", e); - throw new IOException(e); - } finally { - if (zkw != null) { - zkw.close(); - } - } - return 0; - } - - private void checkAndMigrateTableStatesToPB(ZooKeeperWatcher zkw) throws KeeperException, - InterruptedException { - List tables = ZKUtil.listChildrenNoWatch(zkw, zkw.tableZNode); - if (tables == null) { - LOG.info("No table present to migrate table state to PB. returning.."); - return; - } - for (String table : tables) { - String znode = ZKUtil.joinZNode(zkw.tableZNode, table); - // Delete -ROOT- table state znode since its no longer present in 0.95.0 - // onwards. - if (table.equals("-ROOT-") || table.equals(".META.")) { - ZKUtil.deleteNode(zkw, znode); - continue; - } - byte[] data = ZKUtil.getData(zkw, znode); - if (ProtobufUtil.isPBMagicPrefix(data)) continue; - ZooKeeperProtos.Table.Builder builder = ZooKeeperProtos.Table.newBuilder(); - builder.setState(ZooKeeperProtos.Table.State.valueOf(Bytes.toString(data))); - data = ProtobufUtil.prependPBMagic(builder.build().toByteArray()); - ZKUtil.setData(zkw, znode, data); - } - } - - private void checkAndMigrateReplicationNodesToPB(ZooKeeperWatcher zkw) throws KeeperException, - InterruptedException { - String replicationZnodeName = getConf().get("zookeeper.znode.replication", "replication"); - String replicationPath = ZKUtil.joinZNode(zkw.baseZNode, replicationZnodeName); - List replicationZnodes = ZKUtil.listChildrenNoWatch(zkw, replicationPath); - if (replicationZnodes == null) { - LOG.info("No replication related znodes present to migrate. returning.."); - return; - } - for (String child : replicationZnodes) { - String znode = ZKUtil.joinZNode(replicationPath, child); - if (child.equals(getConf().get("zookeeper.znode.replication.peers", "peers"))) { - List peers = ZKUtil.listChildrenNoWatch(zkw, znode); - if (peers == null || peers.isEmpty()) { - LOG.info("No peers present to migrate. returning.."); - continue; - } - checkAndMigratePeerZnodesToPB(zkw, znode, peers); - } else if (child.equals(getConf().get("zookeeper.znode.replication.state", "state"))) { - // This is no longer used in >=0.95.x - ZKUtil.deleteNodeRecursively(zkw, znode); - } else if (child.equals(getConf().get("zookeeper.znode.replication.rs", "rs"))) { - List rsList = ZKUtil.listChildrenNoWatch(zkw, znode); - if (rsList == null || rsList.isEmpty()) continue; - for (String rs : rsList) { - checkAndMigrateQueuesToPB(zkw, znode, rs); + /** + * Method for table states migration. + * Used when upgrading from pre-2.0 to 2.0 + * Reading state from zk, applying them to internal state + * and delete. + * Used by master to clean migration from zk based states to + * table descriptor based states. 
+ */ + @Deprecated + public static Map queryForTableStates(ZooKeeperWatcher zkw) + throws KeeperException, InterruptedException { + Map rv = new HashMap<>(); + List children = ZKUtil.listChildrenNoWatch(zkw, zkw.tableZNode); + if (children == null) + return rv; + for (String child: children) { + TableName tableName = TableName.valueOf(child); + ZooKeeperProtos.DeprecatedTableState.State state = getTableState(zkw, tableName); + TableState.State newState = TableState.State.ENABLED; + if (state != null) { + switch (state) { + case ENABLED: + newState = TableState.State.ENABLED; + break; + case DISABLED: + newState = TableState.State.DISABLED; + break; + case DISABLING: + newState = TableState.State.DISABLING; + break; + case ENABLING: + newState = TableState.State.ENABLING; + break; + default: } } + rv.put(tableName, newState); } + return rv; } - private void checkAndMigrateQueuesToPB(ZooKeeperWatcher zkw, String znode, String rs) - throws KeeperException, NoNodeException, InterruptedException { - String rsPath = ZKUtil.joinZNode(znode, rs); - List peers = ZKUtil.listChildrenNoWatch(zkw, rsPath); - if (peers == null || peers.isEmpty()) return; - String peerPath = null; - for (String peer : peers) { - peerPath = ZKUtil.joinZNode(rsPath, peer); - List files = ZKUtil.listChildrenNoWatch(zkw, peerPath); - if (files == null || files.isEmpty()) continue; - String filePath = null; - for (String file : files) { - filePath = ZKUtil.joinZNode(peerPath, file); - byte[] data = ZKUtil.getData(zkw, filePath); - if (data == null || Bytes.equals(data, HConstants.EMPTY_BYTE_ARRAY)) continue; - if (ProtobufUtil.isPBMagicPrefix(data)) continue; - ZKUtil.setData(zkw, filePath, - ZKUtil.positionToByteArray(Long.parseLong(Bytes.toString(data)))); - } - } - } - - private void checkAndMigratePeerZnodesToPB(ZooKeeperWatcher zkw, String znode, - List peers) throws KeeperException, NoNodeException, InterruptedException { - for (String peer : peers) { - String peerZnode = ZKUtil.joinZNode(znode, peer); - byte[] data = ZKUtil.getData(zkw, peerZnode); - if (!ProtobufUtil.isPBMagicPrefix(data)) { - migrateClusterKeyToPB(zkw, peerZnode, data); - } - String peerStatePath = ZKUtil.joinZNode(peerZnode, - getConf().get("zookeeper.znode.replication.peers.state", "peer-state")); - if (ZKUtil.checkExists(zkw, peerStatePath) != -1) { - data = ZKUtil.getData(zkw, peerStatePath); - if (ProtobufUtil.isPBMagicPrefix(data)) continue; - migratePeerStateToPB(zkw, data, peerStatePath); - } - } - } - - private void migrateClusterKeyToPB(ZooKeeperWatcher zkw, String peerZnode, byte[] data) - throws KeeperException, NoNodeException { - ReplicationPeer peer = ZooKeeperProtos.ReplicationPeer.newBuilder() - .setClusterkey(Bytes.toString(data)).build(); - ZKUtil.setData(zkw, peerZnode, ProtobufUtil.prependPBMagic(peer.toByteArray())); - } - - private void migratePeerStateToPB(ZooKeeperWatcher zkw, byte[] data, - String peerStatePath) - throws KeeperException, NoNodeException { - String state = Bytes.toString(data); - if (ZooKeeperProtos.ReplicationState.State.ENABLED.name().equals(state)) { - ZKUtil.setData(zkw, peerStatePath, ReplicationStateZKBase.ENABLED_ZNODE_BYTES); - } else if (ZooKeeperProtos.ReplicationState.State.DISABLED.name().equals(state)) { - ZKUtil.setData(zkw, peerStatePath, ReplicationStateZKBase.DISABLED_ZNODE_BYTES); + /** + * Gets table state from ZK. + * @param zkw ZooKeeperWatcher instance to use + * @param tableName table we're checking + * @return Null or {@link ZooKeeperProtos.DeprecatedTableState.State} found in znode. 
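
A sketch of how a caller (for example the master during an upgrade) might consume the map returned by queryForTableStates above; per its javadoc the real migration applies each state to the table descriptor and then deletes the old znode, which is only hinted at by the print below. The class name is invented:

    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.Abortable;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.TableState;
    import org.apache.hadoop.hbase.util.ZKDataMigrator;
    import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;

    public class TableStateMigrationSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Abortable doNothing = new Abortable() {
          @Override public void abort(String why, Throwable e) { }
          @Override public boolean isAborted() { return false; }
        };
        ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "table-state-migration", doNothing);
        try {
          Map<TableName, TableState.State> states = ZKDataMigrator.queryForTableStates(zkw);
          for (Map.Entry<TableName, TableState.State> e : states.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
          }
        } finally {
          zkw.close();
        }
      }
    }
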
+ * @throws KeeperException + */ + @Deprecated + private static ZooKeeperProtos.DeprecatedTableState.State getTableState( + final ZooKeeperWatcher zkw, final TableName tableName) + throws KeeperException, InterruptedException { + String znode = ZKUtil.joinZNode(zkw.tableZNode, tableName.getNameAsString()); + byte [] data = ZKUtil.getData(zkw, znode); + if (data == null || data.length <= 0) return null; + try { + ProtobufUtil.expectPBMagicPrefix(data); + ZooKeeperProtos.DeprecatedTableState.Builder builder = + ZooKeeperProtos.DeprecatedTableState.newBuilder(); + int magicLen = ProtobufUtil.lengthOfPBMagic(); + ZooKeeperProtos.DeprecatedTableState t = builder.mergeFrom(data, + magicLen, data.length - magicLen).build(); + return t.getState(); + } catch (InvalidProtocolBufferException e) { + KeeperException ke = new KeeperException.DataInconsistencyException(); + ke.initCause(e); + throw ke; + } catch (DeserializationException e) { + throw ZKUtil.convert(e); } } - public static void main(String args[]) throws Exception { - System.exit(ToolRunner.run(HBaseConfiguration.create(), new ZKDataMigrator(), args)); - } - - static class ZKDataMigratorAbortable implements Abortable { - private boolean aborted = false; - - @Override - public void abort(String why, Throwable e) { - LOG.error("Got aborted with reason: " + why + ", and error: " + e); - this.aborted = true; - } - - @Override - public boolean isAborted() { - return this.aborted; - } - } } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedRegionGroupingProvider.java hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedRegionGroupingProvider.java new file mode 100644 index 0000000..478d5c3 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedRegionGroupingProvider.java @@ -0,0 +1,106 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.wal; + +import java.io.IOException; +import java.util.List; +import java.util.concurrent.atomic.AtomicInteger; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.conf.Configuration; + +// imports for classes still in regionserver.wal +import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener; + +/** + * A WAL Provider that pre-creates N WALProviders and then limits our grouping strategy to them. + * Control the number of delegate providers via "hbase.wal.regiongrouping.numgroups." Control + * the choice of delegate provider implementation and the grouping strategy the same as + * {@link RegionGroupingProvider}. 
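
The number of delegate providers above is bounded by "hbase.wal.regiongrouping.numgroups" (default 2), and WAL groups are spread over the delegates round-robin. A minimal configuration sketch; how the WAL provider itself is selected is handled by WALFactory and is not part of this hunk, so no provider key is shown:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalGroupingConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Key and default taken from BoundedRegionGroupingProvider above.
        conf.setInt("hbase.wal.regiongrouping.numgroups", 4);
        System.out.println("delegate WAL providers: "
            + Math.max(1, conf.getInt("hbase.wal.regiongrouping.numgroups", 2)));
      }
    }
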
+ */ +@InterfaceAudience.Private +class BoundedRegionGroupingProvider extends RegionGroupingProvider { + private static final Log LOG = LogFactory.getLog(BoundedRegionGroupingProvider.class); + + static final String NUM_REGION_GROUPS = "hbase.wal.regiongrouping.numgroups"; + static final int DEFAULT_NUM_REGION_GROUPS = 2; + private WALProvider[] delegates; + private AtomicInteger counter = new AtomicInteger(0); + + @Override + public void init(final WALFactory factory, final Configuration conf, + final List listeners, final String providerId) throws IOException { + super.init(factory, conf, listeners, providerId); + // no need to check for and close down old providers; our parent class will throw on re-invoke + delegates = new WALProvider[Math.max(1, conf.getInt(NUM_REGION_GROUPS, + DEFAULT_NUM_REGION_GROUPS))]; + for (int i = 0; i < delegates.length; i++) { + delegates[i] = factory.getProvider(DELEGATE_PROVIDER, DEFAULT_DELEGATE_PROVIDER, listeners, + providerId + i); + } + LOG.info("Configured to run with " + delegates.length + " delegate WAL providers."); + } + + @Override + WALProvider populateCache(final byte[] group) { + final WALProvider temp = delegates[counter.getAndIncrement() % delegates.length]; + final WALProvider extant = cached.putIfAbsent(group, temp); + // if someone else beat us to initializing, just take what they set. + // note that in such a case we skew load away from the provider we picked at first + return extant == null ? temp : extant; + } + + @Override + public void shutdown() throws IOException { + // save the last exception and rethrow + IOException failure = null; + for (WALProvider provider : delegates) { + try { + provider.shutdown(); + } catch (IOException exception) { + LOG.error("Problem shutting down provider '" + provider + "': " + exception.getMessage()); + LOG.debug("Details of problem shutting down provider '" + provider + "'", exception); + failure = exception; + } + } + if (failure != null) { + throw failure; + } + } + + @Override + public void close() throws IOException { + // save the last exception and rethrow + IOException failure = null; + for (WALProvider provider : delegates) { + try { + provider.close(); + } catch (IOException exception) { + LOG.error("Problem closing provider '" + provider + "': " + exception.getMessage()); + LOG.debug("Details of problem shutting down provider '" + provider + "'", exception); + failure = exception; + } + } + if (failure != null) { + throw failure; + } + } +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DefaultWALProvider.java hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DefaultWALProvider.java index b710059..f889672 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DefaultWALProvider.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DefaultWALProvider.java @@ -18,9 +18,6 @@ */ package org.apache.hadoop.hbase.wal; -import java.io.Closeable; -import java.io.DataInput; -import java.io.DataOutput; import java.io.IOException; import java.util.List; import java.util.regex.Pattern; diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DisabledWALProvider.java hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DisabledWALProvider.java index 70bfff1..5bffea5 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DisabledWALProvider.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/wal/DisabledWALProvider.java @@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.wal; import java.io.IOException; import java.util.List; 
+import java.util.Set; import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicLong; @@ -27,7 +28,6 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; @@ -56,10 +56,13 @@ class DisabledWALProvider implements WALProvider { @Override public void init(final WALFactory factory, final Configuration conf, - final List listeners, final String providerId) throws IOException { + final List listeners, String providerId) throws IOException { if (null != disabled) { throw new IllegalStateException("WALProvider.init should only be called once."); } + if (null == providerId) { + providerId = "defaultDisabled"; + } disabled = new DisabledWAL(new Path(FSUtils.getRootDir(conf), providerId), conf, null); } @@ -183,7 +186,7 @@ class DisabledWALProvider implements WALProvider { } @Override - public boolean startCacheFlush(final byte[] encodedRegionName) { + public boolean startCacheFlush(final byte[] encodedRegionName, Set flushedFamilyNames) { return !(closed.get()); } @@ -206,6 +209,11 @@ class DisabledWALProvider implements WALProvider { } @Override + public long getEarliestMemstoreSeqNum(byte[] encodedRegionName, byte[] familyName) { + return HConstants.NO_SEQNUM; + } + + @Override public String toString() { return "WAL disabled."; } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java new file mode 100644 index 0000000..eb2c426 --- /dev/null +++ hbase-server/src/main/java/org/apache/hadoop/hbase/wal/RegionGroupingProvider.java @@ -0,0 +1,212 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.wal; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; +import java.util.UUID; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.conf.Configuration; + +// imports for classes still in regionserver.wal +import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener; + +/** + * A WAL Provider that returns a WAL per group of regions. 
+ *
+ * Region grouping is handled via {@link RegionGroupingStrategy} and can be configured via the
+ * property "hbase.wal.regiongrouping.strategy". Current strategy choices are
+ * <ul>
+ *   <li><em>defaultStrategy</em> : Whatever strategy this version of HBase picks. currently
+ *                                  "identity".</li>
+ *   <li><em>identity</em> : each region belongs to its own group.</li>
+ * </ul>
          + * Optionally, a FQCN to a custom implementation may be given. + * + * WAL creation is delegated to another WALProvider, configured via the property + * "hbase.wal.regiongrouping.delegate". The property takes the same options as "hbase.wal.provider" + * (ref {@link WALFactory}) and defaults to the defaultProvider. + */ +@InterfaceAudience.Private +class RegionGroupingProvider implements WALProvider { + private static final Log LOG = LogFactory.getLog(RegionGroupingProvider.class); + + /** + * Map identifiers to a group number. + */ + public static interface RegionGroupingStrategy { + /** + * Given an identifier, pick a group. + * the byte[] returned for a given group must always use the same instance, since we + * will be using it as a hash key. + */ + byte[] group(final byte[] identifier); + void init(Configuration config); + } + + /** + * Maps between configuration names for strategies and implementation classes. + */ + static enum Strategies { + defaultStrategy(IdentityGroupingStrategy.class), + identity(IdentityGroupingStrategy.class); + + final Class clazz; + Strategies(Class clazz) { + this.clazz = clazz; + } + } + + /** + * instantiate a strategy from a config property. + * requires conf to have already been set (as well as anything the provider might need to read). + */ + RegionGroupingStrategy getStrategy(final Configuration conf, final String key, + final String defaultValue) throws IOException { + Class clazz; + try { + clazz = Strategies.valueOf(conf.get(key, defaultValue)).clazz; + } catch (IllegalArgumentException exception) { + // Fall back to them specifying a class name + // Note that the passed default class shouldn't actually be used, since the above only fails + // when there is a config value present. + clazz = conf.getClass(key, IdentityGroupingStrategy.class, RegionGroupingStrategy.class); + } + LOG.info("Instantiating RegionGroupingStrategy of type " + clazz); + try { + final RegionGroupingStrategy result = clazz.newInstance(); + result.init(conf); + return result; + } catch (InstantiationException exception) { + LOG.error("couldn't set up region grouping strategy, check config key " + + REGION_GROUPING_STRATEGY); + LOG.debug("Exception details for failure to load region grouping strategy.", exception); + throw new IOException("couldn't set up region grouping strategy", exception); + } catch (IllegalAccessException exception) { + LOG.error("couldn't set up region grouping strategy, check config key " + + REGION_GROUPING_STRATEGY); + LOG.debug("Exception details for failure to load region grouping strategy.", exception); + throw new IOException("couldn't set up region grouping strategy", exception); + } + } + + private static final String REGION_GROUPING_STRATEGY = "hbase.wal.regiongrouping.strategy"; + private static final String DEFAULT_REGION_GROUPING_STRATEGY = Strategies.defaultStrategy.name(); + + static final String DELEGATE_PROVIDER = "hbase.wal.regiongrouping.delegate"; + static final String DEFAULT_DELEGATE_PROVIDER = WALFactory.Providers.defaultProvider.name(); + + protected final ConcurrentMap cached = + new ConcurrentHashMap(); + + + protected RegionGroupingStrategy strategy = null; + private WALFactory factory = null; + private List listeners = null; + private String providerId = null; + + @Override + public void init(final WALFactory factory, final Configuration conf, + final List listeners, final String providerId) throws IOException { + if (null != strategy) { + throw new IllegalStateException("WALProvider.init should only be called 
once."); + } + this.factory = factory; + this.listeners = null == listeners ? null : Collections.unmodifiableList(listeners); + this.providerId = providerId; + this.strategy = getStrategy(conf, REGION_GROUPING_STRATEGY, DEFAULT_REGION_GROUPING_STRATEGY); + } + + /** + * Populate the cache for this group. + */ + WALProvider populateCache(final byte[] group) throws IOException { + final WALProvider temp = factory.getProvider(DELEGATE_PROVIDER, DEFAULT_DELEGATE_PROVIDER, + listeners, providerId + "-" + UUID.randomUUID()); + final WALProvider extant = cached.putIfAbsent(group, temp); + if (null != extant) { + // someone else beat us to initializing, just take what they set. + temp.close(); + return extant; + } + return temp; + } + + @Override + public WAL getWAL(final byte[] identifier) throws IOException { + final byte[] group = strategy.group(identifier); + WALProvider provider = cached.get(group); + if (null == provider) { + provider = populateCache(group); + } + return provider.getWAL(identifier); + } + + @Override + public void shutdown() throws IOException { + // save the last exception and rethrow + IOException failure = null; + for (WALProvider provider : cached.values()) { + try { + provider.shutdown(); + } catch (IOException exception) { + LOG.error("Problem shutting down provider '" + provider + "': " + exception.getMessage()); + LOG.debug("Details of problem shutting down provider '" + provider + "'", exception); + failure = exception; + } + } + if (failure != null) { + throw failure; + } + } + + @Override + public void close() throws IOException { + // save the last exception and rethrow + IOException failure = null; + for (WALProvider provider : cached.values()) { + try { + provider.close(); + } catch (IOException exception) { + LOG.error("Problem closing provider '" + provider + "': " + exception.getMessage()); + LOG.debug("Details of problem shutting down provider '" + provider + "'", exception); + failure = exception; + } + } + if (failure != null) { + throw failure; + } + } + + static class IdentityGroupingStrategy implements RegionGroupingStrategy { + @Override + public void init(Configuration config) {} + @Override + public byte[] group(final byte[] identifier) { + return identifier; + } + } + +} diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java index 23f8c9f..5a2b08d 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java @@ -22,6 +22,7 @@ package org.apache.hadoop.hbase.wal; import java.io.Closeable; import java.io.IOException; import java.util.List; +import java.util.Set; import java.util.concurrent.atomic.AtomicLong; import org.apache.hadoop.hbase.classification.InterfaceStability; @@ -152,7 +153,7 @@ public interface WAL { * @return true if the flush can proceed, false in case wal is closing (ususally, when server is * closing) and flush couldn't be started. */ - boolean startCacheFlush(final byte[] encodedRegionName); + boolean startCacheFlush(final byte[] encodedRegionName, Set flushedFamilyNames); /** * Complete the cache flush. @@ -182,6 +183,14 @@ public interface WAL { long getEarliestMemstoreSeqNum(byte[] encodedRegionName); /** + * Gets the earliest sequence number in the memstore for this particular region and store. + * @param encodedRegionName The region to get the number for. + * @param familyName The family to get the number for. 
+ * @return The number if present, HConstants.NO_SEQNUM if absent. + */ + long getEarliestMemstoreSeqNum(byte[] encodedRegionName, byte[] familyName); + + /** * Human readable identifying information about the state of this WAL. * Implementors are encouraged to include information appropriate for debugging. * Consumers are advised not to rely on the details of the returned String; it does diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java index 497a4d8..ba349e5 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALFactory.java @@ -35,7 +35,6 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FSDataInputStream; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.wal.WAL.Reader; import org.apache.hadoop.hbase.wal.WALProvider.Writer; import org.apache.hadoop.hbase.util.CancelableProgressable; @@ -55,7 +54,12 @@ import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener; * Configure which provider gets used with the configuration setting "hbase.wal.provider". Available * implementations: *
 * <ul>
- *   <li><em>defaultProvider</em> : whatever provider is standard for the hbase version.</li>
+ *   <li><em>defaultProvider</em> : whatever provider is standard for the hbase version. Currently
+ *                                  "filesystem"</li>
+ *   <li><em>filesystem</em> : a provider that will run on top of an implementation of the Hadoop
+ *                             FileSystem interface, normally HDFS.</li>
+ *   <li><em>multiwal</em> : a provider that will use multiple "filesystem" wal instances per region
+ *                           server.</li>
 * </ul>
          * * Alternatively, you may provide a custome implementation of {@link WALProvider} by class name. @@ -69,7 +73,9 @@ public class WALFactory { * Maps between configuration names for providers and implementation classes. */ static enum Providers { - defaultProvider(DefaultWALProvider.class); + defaultProvider(DefaultWALProvider.class), + filesystem(DefaultWALProvider.class), + multiwal(BoundedRegionGroupingProvider.class); Class clazz; Providers(Class clazz) { @@ -134,6 +140,7 @@ public class WALFactory { // when there is a config value present. clazz = conf.getClass(key, DefaultWALProvider.class, WALProvider.class); } + LOG.info("Instantiating WALProvider of type " + clazz); try { final WALProvider result = clazz.newInstance(); result.init(this, conf, listeners, providerId); @@ -356,7 +363,7 @@ public class WALFactory { private static final AtomicReference singleton = new AtomicReference(); private static final String SINGLETON_ID = WALFactory.class.getName(); - // public only for FSHLog and UpgradeTo96 + // public only for FSHLog public static WALFactory getInstance(Configuration configuration) { WALFactory factory = singleton.get(); if (null == factory) { diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALProvider.java hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALProvider.java index b27abf9..178c322 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALProvider.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALProvider.java @@ -23,7 +23,6 @@ import java.io.IOException; import java.util.List; import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.classification.InterfaceStability; import org.apache.hadoop.conf.Configuration; // imports for things that haven't moved from regionserver.wal yet. 
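For reference, the new WAL provider knobs introduced above ("hbase.wal.provider", the region grouping strategy/delegate, and the bounded number of groups) are all plain Configuration properties. The sketch below only illustrates how they might be set before a WALFactory is created; the class name MultiWalConfigSketch and the chosen values are invented for illustration and are not part of this patch.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MultiWalConfigSketch {
  public static Configuration multiWalConf() {
    Configuration conf = HBaseConfiguration.create();
    // Pick the "multiwal" provider (BoundedRegionGroupingProvider) instead of defaultProvider.
    conf.set("hbase.wal.provider", "multiwal");
    // Number of delegate WAL providers pre-created by BoundedRegionGroupingProvider.
    conf.setInt("hbase.wal.regiongrouping.numgroups", 4);
    // Grouping strategy and delegate provider, as documented on RegionGroupingProvider.
    conf.set("hbase.wal.regiongrouping.strategy", "identity");
    conf.set("hbase.wal.regiongrouping.delegate", "filesystem");
    return conf;
  }
}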
diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java index d7d4a61..2ddc9d1 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALSplitter.java @@ -65,17 +65,14 @@ import org.apache.hadoop.fs.PathFilter; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.CellUtil; -import org.apache.hadoop.hbase.CoordinatedStateException; import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; -import org.apache.hadoop.hbase.RemoteExceptionHandler; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotFoundException; -import org.apache.hadoop.hbase.TableStateManager; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.TagRewriteCell; import org.apache.hadoop.hbase.TagType; @@ -88,6 +85,7 @@ import org.apache.hadoop.hbase.client.Mutation; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.coordination.BaseCoordinatedStateManager; import org.apache.hadoop.hbase.coordination.ZKSplitLogManagerCoordination; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.exceptions.RegionOpeningException; import org.apache.hadoop.hbase.io.HeapSize; import org.apache.hadoop.hbase.master.SplitLogManager; @@ -102,7 +100,6 @@ import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.WALEntry; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto.MutationType; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; import org.apache.hadoop.hbase.protobuf.generated.WALProtos.CompactionDescriptor; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionStoreSequenceIds; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.SplitLogTask.RecoveryMode; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.StoreSequenceId; @@ -120,10 +117,7 @@ import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.util.Threads; import org.apache.hadoop.hbase.zookeeper.ZKSplitLog; import org.apache.hadoop.io.MultipleIOException; - -import com.google.common.base.Preconditions; -import com.google.common.collect.Lists; -import com.google.protobuf.ServiceException; +import org.apache.hadoop.ipc.RemoteException; // imports for things that haven't moved from regionserver.wal yet. import org.apache.hadoop.hbase.regionserver.wal.FSHLog; @@ -152,6 +146,7 @@ public class WALSplitter { // Major subcomponents of the split process. // These are separated into inner classes to make testing easier. + PipelineController controller; OutputSink outputSink; EntryBuffers entryBuffers; @@ -160,14 +155,6 @@ public class WALSplitter { private BaseCoordinatedStateManager csm; private final WALFactory walFactory; - // If an exception is thrown by one of the other threads, it will be - // stored here. 
- protected AtomicReference thrown = new AtomicReference(); - - // Wait/notify for when data has been produced by the reader thread, - // consumed by the reader thread, or an exception occurred - final Object dataAvailable = new Object(); - private MonitoredTask status; // For checking the latest flushed sequence id @@ -203,8 +190,9 @@ public class WALSplitter { this.sequenceIdChecker = idChecker; this.csm = (BaseCoordinatedStateManager)csm; this.walFactory = factory; + this.controller = new PipelineController(); - entryBuffers = new EntryBuffers( + entryBuffers = new EntryBuffers(controller, this.conf.getInt("hbase.regionserver.hlog.splitlog.buffersize", 128*1024*1024)); @@ -215,13 +203,13 @@ public class WALSplitter { this.numWriterThreads = this.conf.getInt("hbase.regionserver.hlog.splitlog.writer.threads", 3); if (csm != null && this.distributedLogReplay) { - outputSink = new LogReplayOutputSink(numWriterThreads); + outputSink = new LogReplayOutputSink(controller, entryBuffers, numWriterThreads); } else { if (this.distributedLogReplay) { LOG.info("ZooKeeperWatcher is passed in as NULL so disable distrubitedLogRepaly."); } this.distributedLogReplay = false; - outputSink = new LogRecoveredEditsOutputSink(numWriterThreads); + outputSink = new LogRecoveredEditsOutputSink(controller, entryBuffers, numWriterThreads); } } @@ -251,8 +239,9 @@ public class WALSplitter { // A wrapper to split one log folder using the method used by distributed // log splitting. Used by tools and unit tests. It should be package private. - // It is public only because UpgradeTo96 and TestWALObserver are in different packages, + // It is public only because TestWALObserver is in a different package, // which uses this method to do log splitting. + @VisibleForTesting public static List split(Path rootDir, Path logDir, Path oldLogDir, FileSystem fs, Configuration conf, final WALFactory factory) throws IOException { final FileStatus[] logfiles = SplitLogManager.getFileList(conf, @@ -319,13 +308,14 @@ public class WALSplitter { LOG.warn("Nothing to split in log file " + logPath); return true; } - if (csm != null) { - try { - TableStateManager tsm = csm.getTableStateManager(); - disablingOrDisabledTables = tsm.getTablesInStates( - ZooKeeperProtos.Table.State.DISABLED, ZooKeeperProtos.Table.State.DISABLING); - } catch (CoordinatedStateException e) { - throw new IOException("Can't get disabling/disabled tables", e); + if(csm != null) { + HConnection scc = csm.getServer().getConnection(); + TableName[] tables = scc.listTableNames(); + for (TableName table : tables) { + if (scc.getTableState(table) + .inStates(TableState.State.DISABLED, TableState.State.DISABLING)) { + disablingOrDisabledTables.add(table); + } } } int numOpenedFilesBeforeReporting = conf.getInt("hbase.splitlog.report.openedfiles", 3); @@ -387,7 +377,7 @@ public class WALSplitter { logfile.getPath().getName(), fs); isCorrupted = true; } catch (IOException e) { - e = RemoteExceptionHandler.checkIOException(e); + e = e instanceof RemoteException ? ((RemoteException) e).unwrapRemoteException() : e; throw e; } finally { LOG.debug("Finishing writing output logs and closing down."); @@ -829,22 +819,6 @@ public class WALSplitter { } } - private void writerThreadError(Throwable t) { - thrown.compareAndSet(null, t); - } - - /** - * Check for errors in the writer threads. If any is found, rethrow it. 
- */ - private void checkForErrors() throws IOException { - Throwable thrown = this.thrown.get(); - if (thrown == null) return; - if (thrown instanceof IOException) { - throw new IOException(thrown); - } else { - throw new RuntimeException(thrown); - } - } /** * Create a new {@link Writer} for writing log splits. * @return a new Writer instance, caller should close @@ -874,13 +848,45 @@ public class WALSplitter { } /** + * Contains some methods to control WAL-entries producer / consumer interactions + */ + public static class PipelineController { + // If an exception is thrown by one of the other threads, it will be + // stored here. + AtomicReference thrown = new AtomicReference(); + + // Wait/notify for when data has been produced by the writer thread, + // consumed by the reader thread, or an exception occurred + public final Object dataAvailable = new Object(); + + void writerThreadError(Throwable t) { + thrown.compareAndSet(null, t); + } + + /** + * Check for errors in the writer threads. If any is found, rethrow it. + */ + void checkForErrors() throws IOException { + Throwable thrown = this.thrown.get(); + if (thrown == null) return; + if (thrown instanceof IOException) { + throw new IOException(thrown); + } else { + throw new RuntimeException(thrown); + } + } + } + + /** * Class which accumulates edits and separates them into a buffer per region * while simultaneously accounting RAM usage. Blocks if the RAM usage crosses * a predefined threshold. * * Writer threads then pull region-specific buffers from this class. */ - class EntryBuffers { + public static class EntryBuffers { + PipelineController controller; + Map buffers = new TreeMap(Bytes.BYTES_COMPARATOR); @@ -892,7 +898,8 @@ public class WALSplitter { long totalBuffered = 0; long maxHeapUsage; - EntryBuffers(long maxHeapUsage) { + public EntryBuffers(PipelineController controller, long maxHeapUsage) { + this.controller = controller; this.maxHeapUsage = maxHeapUsage; } @@ -903,7 +910,7 @@ public class WALSplitter { * @throws InterruptedException * @throws IOException */ - void appendEntry(Entry entry) throws InterruptedException, IOException { + public void appendEntry(Entry entry) throws InterruptedException, IOException { WALKey key = entry.getKey(); RegionEntryBuffer buffer; @@ -918,15 +925,16 @@ public class WALSplitter { } // If we crossed the chunk threshold, wait for more space to be available - synchronized (dataAvailable) { + synchronized (controller.dataAvailable) { totalBuffered += incrHeap; - while (totalBuffered > maxHeapUsage && thrown.get() == null) { - LOG.debug("Used " + totalBuffered + " bytes of buffered edits, waiting for IO threads..."); - dataAvailable.wait(2000); + while (totalBuffered > maxHeapUsage && controller.thrown.get() == null) { + LOG.debug("Used " + totalBuffered + + " bytes of buffered edits, waiting for IO threads..."); + controller.dataAvailable.wait(2000); } - dataAvailable.notifyAll(); + controller.dataAvailable.notifyAll(); } - checkForErrors(); + controller.checkForErrors(); } /** @@ -959,16 +967,30 @@ public class WALSplitter { } long size = buffer.heapSize(); - synchronized (dataAvailable) { + synchronized (controller.dataAvailable) { totalBuffered -= size; // We may unblock writers - dataAvailable.notifyAll(); + controller.dataAvailable.notifyAll(); } } synchronized boolean isRegionCurrentlyWriting(byte[] region) { return currentlyWriting.contains(region); } + + public void waitUntilDrained() { + synchronized (controller.dataAvailable) { + while (totalBuffered > 0) { + try { + 
controller.dataAvailable.wait(2000); + } catch (InterruptedException e) { + LOG.warn("Got intrerrupted while waiting for EntryBuffers is drained"); + Thread.interrupted(); + break; + } + } + } + } } /** @@ -977,7 +999,7 @@ public class WALSplitter { * share a single byte array instance for the table and region name. * Also tracks memory usage of the accumulated edits. */ - static class RegionEntryBuffer implements HeapSize { + public static class RegionEntryBuffer implements HeapSize { long heapInBuffer = 0; List entryBuffer; TableName tableName; @@ -1009,14 +1031,30 @@ public class WALSplitter { public long heapSize() { return heapInBuffer; } + + public byte[] getEncodedRegionName() { + return encodedRegionName; + } + + public List getEntryBuffer() { + return entryBuffer; + } + + public TableName getTableName() { + return tableName; + } } - class WriterThread extends Thread { + public static class WriterThread extends Thread { private volatile boolean shouldStop = false; + private PipelineController controller; + private EntryBuffers entryBuffers; private OutputSink outputSink = null; - WriterThread(OutputSink sink, int i) { + WriterThread(PipelineController controller, EntryBuffers entryBuffers, OutputSink sink, int i){ super(Thread.currentThread().getName() + "-Writer-" + i); + this.controller = controller; + this.entryBuffers = entryBuffers; outputSink = sink; } @@ -1026,7 +1064,7 @@ public class WALSplitter { doRun(); } catch (Throwable t) { LOG.error("Exiting thread", t); - writerThreadError(t); + controller.writerThreadError(t); } } @@ -1036,12 +1074,12 @@ public class WALSplitter { RegionEntryBuffer buffer = entryBuffers.getChunkToWrite(); if (buffer == null) { // No data currently available, wait on some more to show up - synchronized (dataAvailable) { + synchronized (controller.dataAvailable) { if (shouldStop && !this.outputSink.flush()) { return; } try { - dataAvailable.wait(500); + controller.dataAvailable.wait(500); } catch (InterruptedException ie) { if (!shouldStop) { throw new RuntimeException(ie); @@ -1065,9 +1103,9 @@ public class WALSplitter { } void finish() { - synchronized (dataAvailable) { + synchronized (controller.dataAvailable) { shouldStop = true; - dataAvailable.notifyAll(); + controller.dataAvailable.notifyAll(); } } } @@ -1076,7 +1114,10 @@ public class WALSplitter { * The following class is an abstraction class to provide a common interface to support both * existing recovered edits file sink and region server WAL edits replay sink */ - abstract class OutputSink { + public static abstract class OutputSink { + + protected PipelineController controller; + protected EntryBuffers entryBuffers; protected Map writers = Collections .synchronizedMap(new TreeMap(Bytes.BYTES_COMPARATOR));; @@ -1102,8 +1143,10 @@ public class WALSplitter { protected List splits = null; - public OutputSink(int numWriters) { + public OutputSink(PipelineController controller, EntryBuffers entryBuffers, int numWriters) { numThreads = numWriters; + this.controller = controller; + this.entryBuffers = entryBuffers; } void setReporter(CancelableProgressable reporter) { @@ -1113,9 +1156,9 @@ public class WALSplitter { /** * Start the threads that will pump data from the entryBuffers to the output files. 
*/ - synchronized void startWriterThreads() { + public synchronized void startWriterThreads() { for (int i = 0; i < numThreads; i++) { - WriterThread t = new WriterThread(this, i); + WriterThread t = new WriterThread(controller, entryBuffers, this, i); t.start(); writerThreads.add(t); } @@ -1174,34 +1217,34 @@ public class WALSplitter { throw iie; } } - checkForErrors(); + controller.checkForErrors(); LOG.info("Split writers finished"); return (!progress_failed); } - abstract List finishWritingAndClose() throws IOException; + public abstract List finishWritingAndClose() throws IOException; /** * @return a map from encoded region ID to the number of edits written out for that region. */ - abstract Map getOutputCounts(); + public abstract Map getOutputCounts(); /** * @return number of regions we've recovered */ - abstract int getNumberOfRecoveredRegions(); + public abstract int getNumberOfRecoveredRegions(); /** * @param buffer A WAL Edit Entry * @throws IOException */ - abstract void append(RegionEntryBuffer buffer) throws IOException; + public abstract void append(RegionEntryBuffer buffer) throws IOException; /** * WriterThread call this function to help flush internal remaining edits in buffer before close * @return true when underlying sink has something to flush */ - protected boolean flush() throws IOException { + public boolean flush() throws IOException { return false; } } @@ -1211,13 +1254,14 @@ public class WALSplitter { */ class LogRecoveredEditsOutputSink extends OutputSink { - public LogRecoveredEditsOutputSink(int numWriters) { + public LogRecoveredEditsOutputSink(PipelineController controller, EntryBuffers entryBuffers, + int numWriters) { // More threads could potentially write faster at the expense // of causing more disk seeks as the logs are split. // 3. After a certain setting (probably around 3) the // process will be bound on the reader in the current // implementation anyway. - super(numWriters); + super(controller, entryBuffers, numWriters); } /** @@ -1225,7 +1269,7 @@ public class WALSplitter { * @throws IOException */ @Override - List finishWritingAndClose() throws IOException { + public List finishWritingAndClose() throws IOException { boolean isSuccessful = false; List result = null; try { @@ -1443,7 +1487,7 @@ public class WALSplitter { } @Override - void append(RegionEntryBuffer buffer) throws IOException { + public void append(RegionEntryBuffer buffer) throws IOException { List entries = buffer.entryBuffer; if (entries.isEmpty()) { LOG.warn("got an empty buffer, skipping"); @@ -1474,7 +1518,8 @@ public class WALSplitter { wap.incrementEdits(editsCount); wap.incrementNanoTime(System.nanoTime() - startTime); } catch (IOException e) { - e = RemoteExceptionHandler.checkIOException(e); + e = e instanceof RemoteException ? + ((RemoteException)e).unwrapRemoteException() : e; LOG.fatal(" Got while writing log entry to log", e); throw e; } @@ -1484,7 +1529,7 @@ public class WALSplitter { * @return a map from encoded region ID to the number of edits written out for that region. 
*/ @Override - Map getOutputCounts() { + public Map getOutputCounts() { TreeMap ret = new TreeMap(Bytes.BYTES_COMPARATOR); synchronized (writers) { for (Map.Entry entry : writers.entrySet()) { @@ -1495,7 +1540,7 @@ public class WALSplitter { } @Override - int getNumberOfRecoveredRegions() { + public int getNumberOfRecoveredRegions() { return writers.size(); } } @@ -1503,7 +1548,7 @@ public class WALSplitter { /** * Class wraps the actual writer which writes data out and related statistics */ - private abstract static class SinkWriter { + public abstract static class SinkWriter { /* Count of edits written to this path */ long editsWritten = 0; /* Number of nanos spent writing to this log */ @@ -1564,17 +1609,19 @@ public class WALSplitter { private LogRecoveredEditsOutputSink logRecoveredEditsOutputSink; private boolean hasEditsInDisablingOrDisabledTables = false; - public LogReplayOutputSink(int numWriters) { - super(numWriters); + public LogReplayOutputSink(PipelineController controller, EntryBuffers entryBuffers, + int numWriters) { + super(controller, entryBuffers, numWriters); this.waitRegionOnlineTimeOut = conf.getInt(HConstants.HBASE_SPLITLOG_MANAGER_TIMEOUT, ZKSplitLogManagerCoordination.DEFAULT_TIMEOUT); - this.logRecoveredEditsOutputSink = new LogRecoveredEditsOutputSink(numWriters); + this.logRecoveredEditsOutputSink = new LogRecoveredEditsOutputSink(controller, + entryBuffers, numWriters); this.logRecoveredEditsOutputSink.setReporter(reporter); } @Override - void append(RegionEntryBuffer buffer) throws IOException { + public void append(RegionEntryBuffer buffer) throws IOException { List entries = buffer.entryBuffer; if (entries.isEmpty()) { LOG.warn("got an empty buffer, skipping"); @@ -1824,7 +1871,7 @@ public class WALSplitter { rsw.incrementEdits(actions.size()); rsw.incrementNanoTime(System.nanoTime() - startTime); } catch (IOException e) { - e = RemoteExceptionHandler.checkIOException(e); + e = e instanceof RemoteException ? ((RemoteException) e).unwrapRemoteException() : e; LOG.fatal(" Got while writing log entry to log", e); throw e; } @@ -1890,7 +1937,7 @@ public class WALSplitter { } @Override - protected boolean flush() throws IOException { + public boolean flush() throws IOException { String curLoc = null; int curSize = 0; List> curQueue = null; @@ -1911,8 +1958,8 @@ public class WALSplitter { if (curSize > 0) { this.processWorkItems(curLoc, curQueue); // We should already have control of the monitor; ensure this is the case. 
- synchronized(dataAvailable) { - dataAvailable.notifyAll(); + synchronized(controller.dataAvailable) { + controller.dataAvailable.notifyAll(); } return true; } @@ -1924,7 +1971,7 @@ public class WALSplitter { } @Override - List finishWritingAndClose() throws IOException { + public List finishWritingAndClose() throws IOException { try { if (!finishWriting()) { return null; @@ -1999,7 +2046,7 @@ public class WALSplitter { } @Override - Map getOutputCounts() { + public Map getOutputCounts() { TreeMap ret = new TreeMap(Bytes.BYTES_COMPARATOR); synchronized (writers) { for (Map.Entry entry : writers.entrySet()) { @@ -2010,7 +2057,7 @@ public class WALSplitter { } @Override - int getNumberOfRecoveredRegions() { + public int getNumberOfRecoveredRegions() { return this.recoveredRegions.size(); } @@ -2120,7 +2167,8 @@ public class WALSplitter { * @throws IOException */ public static List getMutationsFromWALEntry(WALEntry entry, CellScanner cells, - Pair logEntry) throws IOException { + Pair logEntry, Durability durability) + throws IOException { if (entry == null) { // return an empty array @@ -2169,6 +2217,7 @@ public class WALSplitter { } else { ((Put) m).add(cell); } + m.setDurability(durability); previousCell = cell; } diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ClusterStatusTracker.java hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ClusterStatusTracker.java index b63b68b..ccfdf1d 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ClusterStatusTracker.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ClusterStatusTracker.java @@ -28,9 +28,9 @@ import org.apache.zookeeper.KeeperException; /** * Tracker on cluster settings up in zookeeper. - * This is not related to {@link org.apache.hadoop.hbase.ClusterStatus}. - * That class is a data structure that holds snapshot of current view on cluster. - * This class is about tracking cluster attributes up in zookeeper. + * This is not related to {@link org.apache.hadoop.hbase.ClusterStatus}. That class + * is a data structure that holds snapshot of current view on cluster. This class + * is about tracking cluster attributes up in zookeeper. * */ @InterfaceAudience.Private diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKSplitLog.java hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKSplitLog.java index 325fe0d..78b3eed 100644 --- hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKSplitLog.java +++ hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKSplitLog.java @@ -35,9 +35,8 @@ import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.RegionStoreSeq import org.apache.zookeeper.KeeperException; /** - * Common methods and attributes used by - * {@link org.apache.hadoop.hbase.master.SplitLogManager} and - * {@link org.apache.hadoop.hbase.regionserver.SplitLogWorker} + * Common methods and attributes used by {@link org.apache.hadoop.hbase.master.SplitLogManager} + * and {@link org.apache.hadoop.hbase.regionserver.SplitLogWorker} * running distributed splitting of WAL logs. 
*/ @InterfaceAudience.Private diff --git hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTableStateManager.java hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTableStateManager.java deleted file mode 100644 index 862ff8b..0000000 --- hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKTableStateManager.java +++ /dev/null @@ -1,330 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.zookeeper; - -import com.google.protobuf.InvalidProtocolBufferException; -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; -import org.apache.hadoop.hbase.CoordinatedStateException; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.TableStateManager; -import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.protobuf.ProtobufUtil; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; -import org.apache.zookeeper.KeeperException; - -import java.io.InterruptedIOException; -import java.util.Arrays; -import java.util.HashMap; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Set; - -/** - * Implementation of TableStateManager which reads, caches and sets state - * up in ZooKeeper. If multiple read/write clients, will make for confusion. - * Code running on client side without consensus context should use - * {@link ZKTableStateClientSideReader} instead. - * - *

          To save on trips to the zookeeper ensemble, internally we cache table - * state. - */ -@InterfaceAudience.Private -public class ZKTableStateManager implements TableStateManager { - // A znode will exist under the table directory if it is in any of the - // following states: {@link TableState#ENABLING} , {@link TableState#DISABLING}, - // or {@link TableState#DISABLED}. If {@link TableState#ENABLED}, there will - // be no entry for a table in zk. Thats how it currently works. - - private static final Log LOG = LogFactory.getLog(ZKTableStateManager.class); - private final ZooKeeperWatcher watcher; - - /** - * Cache of what we found in zookeeper so we don't have to go to zk ensemble - * for every query. Synchronize access rather than use concurrent Map because - * synchronization needs to span query of zk. - */ - private final Map cache = - new HashMap(); - - public ZKTableStateManager(final ZooKeeperWatcher zkw) throws KeeperException, - InterruptedException { - super(); - this.watcher = zkw; - populateTableStates(); - } - - /** - * Gets a list of all the tables set as disabled in zookeeper. - * @throws KeeperException, InterruptedException - */ - private void populateTableStates() throws KeeperException, InterruptedException { - synchronized (this.cache) { - List children = ZKUtil.listChildrenNoWatch(this.watcher, this.watcher.tableZNode); - if (children == null) return; - for (String child: children) { - TableName tableName = TableName.valueOf(child); - ZooKeeperProtos.Table.State state = getTableState(this.watcher, tableName); - if (state != null) this.cache.put(tableName, state); - } - } - } - - /** - * Sets table state in ZK. Sets no watches. - * - * {@inheritDoc} - */ - @Override - public void setTableState(TableName tableName, ZooKeeperProtos.Table.State state) - throws CoordinatedStateException { - synchronized (this.cache) { - LOG.warn("Moving table " + tableName + " state from " + this.cache.get(tableName) - + " to " + state); - try { - setTableStateInZK(tableName, state); - } catch (KeeperException e) { - throw new CoordinatedStateException(e); - } - } - } - - /** - * Checks and sets table state in ZK. Sets no watches. - * {@inheritDoc} - */ - @Override - public boolean setTableStateIfInStates(TableName tableName, - ZooKeeperProtos.Table.State newState, - ZooKeeperProtos.Table.State... states) - throws CoordinatedStateException { - synchronized (this.cache) { - // Transition ENABLED->DISABLING has to be performed with a hack, because - // we treat empty state as enabled in this case because 0.92- clusters. - if ( - (newState == ZooKeeperProtos.Table.State.DISABLING) && - this.cache.get(tableName) != null && !isTableState(tableName, states) || - (newState != ZooKeeperProtos.Table.State.DISABLING && - !isTableState(tableName, states) )) { - return false; - } - try { - setTableStateInZK(tableName, newState); - } catch (KeeperException e) { - throw new CoordinatedStateException(e); - } - return true; - } - } - - /** - * Checks and sets table state in ZK. Sets no watches. - * {@inheritDoc} - */ - @Override - public boolean setTableStateIfNotInStates(TableName tableName, - ZooKeeperProtos.Table.State newState, - ZooKeeperProtos.Table.State... 
states) - throws CoordinatedStateException { - synchronized (this.cache) { - if (isTableState(tableName, states)) { - return false; - } - try { - setTableStateInZK(tableName, newState); - } catch (KeeperException e) { - throw new CoordinatedStateException(e); - } - return true; - } - } - - private void setTableStateInZK(final TableName tableName, - final ZooKeeperProtos.Table.State state) - throws KeeperException { - String znode = ZKUtil.joinZNode(this.watcher.tableZNode, tableName.getNameAsString()); - if (ZKUtil.checkExists(this.watcher, znode) == -1) { - ZKUtil.createAndFailSilent(this.watcher, znode); - } - synchronized (this.cache) { - ZooKeeperProtos.Table.Builder builder = ZooKeeperProtos.Table.newBuilder(); - builder.setState(state); - byte [] data = ProtobufUtil.prependPBMagic(builder.build().toByteArray()); - ZKUtil.setData(this.watcher, znode, data); - this.cache.put(tableName, state); - } - } - - /** - * Checks if table is marked in specified state in ZK. - * - * {@inheritDoc} - */ - @Override - public boolean isTableState(final TableName tableName, - final ZooKeeperProtos.Table.State... states) { - synchronized (this.cache) { - ZooKeeperProtos.Table.State currentState = this.cache.get(tableName); - return isTableInState(Arrays.asList(states), currentState); - } - } - - /** - * Deletes the table in zookeeper. Fails silently if the - * table is not currently disabled in zookeeper. Sets no watches. - * - * {@inheritDoc} - */ - @Override - public void setDeletedTable(final TableName tableName) - throws CoordinatedStateException { - synchronized (this.cache) { - if (this.cache.remove(tableName) == null) { - LOG.warn("Moving table " + tableName + " state to deleted but was " + - "already deleted"); - } - try { - ZKUtil.deleteNodeFailSilent(this.watcher, - ZKUtil.joinZNode(this.watcher.tableZNode, tableName.getNameAsString())); - } catch (KeeperException e) { - throw new CoordinatedStateException(e); - } - } - } - - /** - * check if table is present. - * - * @param tableName table we're working on - * @return true if the table is present - */ - @Override - public boolean isTablePresent(final TableName tableName) { - synchronized (this.cache) { - ZooKeeperProtos.Table.State state = this.cache.get(tableName); - return !(state == null); - } - } - - /** - * Gets a list of all the tables set as disabling in zookeeper. - * @return Set of disabling tables, empty Set if none - * @throws CoordinatedStateException if error happened in underlying coordination engine - */ - @Override - public Set getTablesInStates(ZooKeeperProtos.Table.State... states) - throws InterruptedIOException, CoordinatedStateException { - try { - return getAllTables(states); - } catch (KeeperException e) { - throw new CoordinatedStateException(e); - } - } - - /** - * {@inheritDoc} - */ - @Override - public void checkAndRemoveTableState(TableName tableName, ZooKeeperProtos.Table.State states, - boolean deletePermanentState) - throws CoordinatedStateException { - synchronized (this.cache) { - if (isTableState(tableName, states)) { - this.cache.remove(tableName); - if (deletePermanentState) { - try { - ZKUtil.deleteNodeFailSilent(this.watcher, - ZKUtil.joinZNode(this.watcher.tableZNode, tableName.getNameAsString())); - } catch (KeeperException e) { - throw new CoordinatedStateException(e); - } - } - } - } - } - - /** - * Gets a list of all the tables of specified states in zookeeper. 
- * @return Set of tables of specified states, empty Set if none - * @throws KeeperException - */ - Set getAllTables(final ZooKeeperProtos.Table.State... states) - throws KeeperException, InterruptedIOException { - - Set allTables = new HashSet(); - List children = - ZKUtil.listChildrenNoWatch(watcher, watcher.tableZNode); - if(children == null) return allTables; - for (String child: children) { - TableName tableName = TableName.valueOf(child); - ZooKeeperProtos.Table.State state; - try { - state = getTableState(watcher, tableName); - } catch (InterruptedException e) { - throw new InterruptedIOException(); - } - for (ZooKeeperProtos.Table.State expectedState: states) { - if (state == expectedState) { - allTables.add(tableName); - break; - } - } - } - return allTables; - } - - /** - * Gets table state from ZK. - * @param zkw ZooKeeperWatcher instance to use - * @param tableName table we're checking - * @return Null or {@link ZooKeeperProtos.Table.State} found in znode. - * @throws KeeperException - */ - private ZooKeeperProtos.Table.State getTableState(final ZooKeeperWatcher zkw, - final TableName tableName) - throws KeeperException, InterruptedException { - String znode = ZKUtil.joinZNode(zkw.tableZNode, tableName.getNameAsString()); - byte [] data = ZKUtil.getData(zkw, znode); - if (data == null || data.length <= 0) return null; - try { - ProtobufUtil.expectPBMagicPrefix(data); - ZooKeeperProtos.Table.Builder builder = ZooKeeperProtos.Table.newBuilder(); - int magicLen = ProtobufUtil.lengthOfPBMagic(); - ZooKeeperProtos.Table t = builder.mergeFrom(data, magicLen, data.length - magicLen).build(); - return t.getState(); - } catch (InvalidProtocolBufferException e) { - KeeperException ke = new KeeperException.DataInconsistencyException(); - ke.initCause(e); - throw ke; - } catch (DeserializationException e) { - throw ZKUtil.convert(e); - } - } - - /** - * @return true if current state isn't null and is contained - * in the list of expected states. - */ - private boolean isTableInState(final List expectedStates, - final ZooKeeperProtos.Table.State currentState) { - return currentState != null && expectedStates.contains(currentState); - } -} diff --git hbase-server/src/main/javadoc/org/apache/hadoop/hbase/replication/package.html hbase-server/src/main/javadoc/org/apache/hadoop/hbase/replication/package.html index ea4a2af..2f2e24a 100644 --- hbase-server/src/main/javadoc/org/apache/hadoop/hbase/replication/package.html +++ hbase-server/src/main/javadoc/org/apache/hadoop/hbase/replication/package.html @@ -99,7 +99,7 @@ to another.

        • Run the following command in the master's shell while it's running
          add_peer 'ID' 'CLUSTER_KEY'
- The ID must be a short integer. To compose the CLUSTER_KEY, use the following template:
+ The ID is a string, which must not contain a hyphen. To compose the CLUSTER_KEY, use the following template:
          hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent
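As a concrete illustration of the CLUSTER_KEY template above, the sketch below builds the key from the three ZooKeeper settings of the peer cluster's configuration; the class name ClusterKeySketch and the peer id '1' mentioned in the comment are illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ClusterKeySketch {
  /** Builds "quorum:clientPort:znodeParent", e.g. "zk1,zk2,zk3:2181:/hbase". */
  public static String clusterKey(Configuration peerConf) {
    return peerConf.get("hbase.zookeeper.quorum") + ":"
        + peerConf.get("hbase.zookeeper.property.clientPort") + ":"
        + peerConf.get("zookeeper.znode.parent");
  }

  public static void main(String[] args) {
    // In the master's shell: add_peer '1', <the printed key>
    System.out.println(clusterKey(HBaseConfiguration.create()));
  }
}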
          This will show you the help to setup the replication stream between both clusters. If both clusters use the same Zookeeper cluster, you have diff --git hbase-server/src/main/resources/hbase-webapps/master/table.jsp hbase-server/src/main/resources/hbase-webapps/master/table.jsp index cd21fef..1f1871c 100644 --- hbase-server/src/main/resources/hbase-webapps/master/table.jsp +++ hbase-server/src/main/resources/hbase-webapps/master/table.jsp @@ -289,7 +289,8 @@ } %>
    - + <% if (addr != null) { String url = "//" + addr.getHostname() + ":" + master.getRegionServerInfoPort(addr) + "/"; @@ -304,8 +305,10 @@ <% } %> - - + + <% diff --git hbase-server/src/test/data/TestMetaMigrationConvertToPB.README hbase-server/src/test/data/TestMetaMigrationConvertToPB.README deleted file mode 100644 index e55a44c..0000000 --- hbase-server/src/test/data/TestMetaMigrationConvertToPB.README +++ /dev/null @@ -1,25 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -TestMetaMigrationConvertToPB uses the file TestMetaMigrationConvertToPB.tgz for testing -upgrade to 0.96 from 0.92/0.94 cluster data. The files are untarred to the local -filesystem, and copied over to a minidfscluster. However, since the directory -name hbase:meta causes problems on Windows, it has been renamed to -META- inside -the .tgz file. After untarring and copying the contents to minidfs, -TestMetaMigrationConvertToPB.setUpBeforeClass() renames the file back to hbase:meta -See https://issues.apache.org/jira/browse/HBASE-6821. 
diff --git hbase-server/src/test/data/TestMetaMigrationConvertToPB.tgz hbase-server/src/test/data/TestMetaMigrationConvertToPB.tgz deleted file mode 100644 index 8d6bff6..0000000 Binary files hbase-server/src/test/data/TestMetaMigrationConvertToPB.tgz and /dev/null differ diff --git hbase-server/src/test/data/TestNamespaceUpgrade.tgz hbase-server/src/test/data/TestNamespaceUpgrade.tgz deleted file mode 100644 index bd91ba2..0000000 Binary files hbase-server/src/test/data/TestNamespaceUpgrade.tgz and /dev/null differ diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseCluster.java index 9e7a0c4..adbdd69 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseCluster.java @@ -118,7 +118,7 @@ public abstract class HBaseCluster implements Closeable, Configurable { * @param hostname the hostname to start the regionserver on * @throws IOException if something goes wrong */ - public abstract void startRegionServer(String hostname) throws IOException; + public abstract void startRegionServer(String hostname, int port) throws IOException; /** * Kills the region server process if this is a distributed cluster, otherwise @@ -139,12 +139,12 @@ public abstract class HBaseCluster implements Closeable, Configurable { * @return whether the operation finished with success * @throws IOException if something goes wrong or timeout occurs */ - public void waitForRegionServerToStart(String hostname, long timeout) + public void waitForRegionServerToStart(String hostname, int port, long timeout) throws IOException { long start = System.currentTimeMillis(); while ((System.currentTimeMillis() - start) < timeout) { for (ServerName server : getClusterStatus().getServers()) { - if (server.getHostname().equals(hostname)) { + if (server.getHostname().equals(hostname) && server.getPort() == port) { return; } } @@ -169,7 +169,7 @@ public abstract class HBaseCluster implements Closeable, Configurable { * @return whether the operation finished with success * @throws IOException if something goes wrong */ - public abstract void startMaster(String hostname) throws IOException; + public abstract void startMaster(String hostname, int port) throws IOException; /** * Kills the master process if this is a distributed cluster, otherwise, diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java index 5b58bbe..3705f3b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java @@ -52,6 +52,7 @@ import org.apache.hadoop.hdfs.MiniDFSCluster; * like an HBaseConfiguration and filesystem. 
* @deprecated Write junit4 unit tests using {@link HBaseTestingUtility} */ +@Deprecated public abstract class HBaseTestCase extends TestCase { private static final Log LOG = LogFactory.getLog(HBaseTestCase.class); @@ -111,12 +112,12 @@ public abstract class HBaseTestCase extends TestCase { } try { if (localfs) { - this.testDir = getUnitTestdir(getName()); + testDir = getUnitTestdir(getName()); if (fs.exists(testDir)) { fs.delete(testDir, true); } } else { - this.testDir = FSUtils.getRootDir(conf); + testDir = FSUtils.getRootDir(conf); } } catch (Exception e) { LOG.fatal("error during setup", e); @@ -640,8 +641,8 @@ public abstract class HBaseTestCase extends TestCase { */ protected void createMetaRegion() throws IOException { FSTableDescriptors fsTableDescriptors = new FSTableDescriptors(conf); - meta = HRegion.createHRegion(HRegionInfo.FIRST_META_REGIONINFO, testDir, conf, - fsTableDescriptors.get(TableName.META_TABLE_NAME)); + meta = HRegion.createHRegion(HRegionInfo.FIRST_META_REGIONINFO, testDir, + conf, fsTableDescriptors.get(TableName.META_TABLE_NAME) ); } protected void closeRootAndMeta() throws IOException { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java index 5a72965..81e8d0b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java @@ -17,6 +17,7 @@ */ package org.apache.hadoop.hbase; +import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; @@ -104,7 +105,6 @@ import org.apache.hadoop.hbase.util.RetryCounter; import org.apache.hadoop.hbase.util.Threads; import org.apache.hadoop.hbase.zookeeper.EmptyWatcher; import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; import org.apache.hadoop.hbase.zookeeper.ZKConfig; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.hadoop.hdfs.DFSClient; @@ -114,8 +114,6 @@ import org.apache.hadoop.hdfs.server.namenode.EditLogFileOutputStream; import org.apache.hadoop.mapred.JobConf; import org.apache.hadoop.mapred.MiniMRCluster; import org.apache.hadoop.mapred.TaskLog; -import org.apache.zookeeper.KeeperException; -import org.apache.zookeeper.KeeperException.NodeExistsException; import org.apache.zookeeper.WatchedEvent; import org.apache.zookeeper.ZooKeeper; import org.apache.zookeeper.ZooKeeper.States; @@ -1086,7 +1084,7 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { } /** - * Returns the path to the default root dir the minicluster uses. If getNewDirPathIfExists + * Returns the path to the default root dir the minicluster uses. If create * is true, a new root directory path is fetched irrespective of whether it has been fetched * before or not. If false, previous path is used. * Note: this does not cause the root dir to be created. @@ -1102,8 +1100,8 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { } /** - * Same as {{@link HBaseTestingUtility#getDefaultRootDirPath(boolean getNewDirPathIfExists)} - * except that getNewDirPathIfExists flag is false. + * Same as {{@link HBaseTestingUtility#getDefaultRootDirPath(boolean create)} + * except that create flag is false. * Note: this does not cause the root dir to be created. 
* @return Fully qualified path for the default hbase root dir * @throws IOException @@ -1117,16 +1115,16 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { * version file. Normally you won't make use of this method. Root hbasedir * is created for you as part of mini cluster startup. You'd only use this * method if you were doing manual operation. - * @param getNewDirPathIfExists This flag decides whether to get a new + * @param create This flag decides whether to get a new * root or data directory path or not, if it has been fetched already. * Note : Directory will be made irrespective of whether path has been fetched or not. * If directory already exists, it will be overwritten * @return Fully qualified path to hbase root dir * @throws IOException */ - public Path createRootDir(boolean getNewDirPathIfExists) throws IOException { + public Path createRootDir(boolean create) throws IOException { FileSystem fs = FileSystem.get(this.conf); - Path hbaseRootdir = getDefaultRootDirPath(getNewDirPathIfExists); + Path hbaseRootdir = getDefaultRootDirPath(create); FSUtils.setRootDir(this.conf, hbaseRootdir); fs.mkdirs(hbaseRootdir); FSUtils.setVersion(fs, hbaseRootdir); @@ -1134,8 +1132,8 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { } /** - * Same as {@link HBaseTestingUtility#createRootDir(boolean getNewDirPathIfExists)} - * except that getNewDirPathIfExists flag is false. + * Same as {@link HBaseTestingUtility#createRootDir(boolean create)} + * except that create flag is false. * @return Fully qualified path to hbase root dir * @throws IOException */ @@ -1663,6 +1661,18 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { getHBaseAdmin().deleteTable(tableName); } + /** + * Drop an existing table + * @param tableName existing table + */ + public void deleteTableIfAny(TableName tableName) throws IOException { + try { + deleteTable(tableName); + } catch (TableNotFoundException e) { + // ignore + } + } + // ========================================================================== // Canned table and table descriptor creation // TODO replace HBaseTestCase @@ -1781,22 +1791,24 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { // ========================================================================== /** - * Provide an existing table name to truncate + * Provide an existing table name to truncate. + * Scans the table and issues a delete for each row read. * @param tableName existing table * @return HTable to that new table * @throws IOException */ - public HTable truncateTable(byte[] tableName) throws IOException { - return truncateTable(TableName.valueOf(tableName)); + public HTable deleteTableData(byte[] tableName) throws IOException { + return deleteTableData(TableName.valueOf(tableName)); } /** - * Provide an existing table name to truncate + * Provide an existing table name to truncate. + * Scans the table and issues a delete for each row read. * @param tableName existing table * @return HTable to that new table * @throws IOException */ - public HTable truncateTable(TableName tableName) throws IOException { + public HTable deleteTableData(TableName tableName) throws IOException { HTable table = new HTable(getConfiguration(), tableName); Scan scan = new Scan(); ResultScanner resScan = table.getScanner(scan); @@ -1810,6 +1822,56 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { } /** + * Truncate a table using the admin command. + * Effectively disables, deletes, and recreates the table. 
+ * @param tableName table which must exist. + * @param preserveRegions keep the existing split points + * @return HTable for the new table + */ + public HTable truncateTable(final TableName tableName, final boolean preserveRegions) throws IOException { + Admin admin = getHBaseAdmin(); + admin.truncateTable(tableName, preserveRegions); + return new HTable(getConfiguration(), tableName); + } + + /** + * Truncate a table using the admin command. + * Effectively disables, deletes, and recreates the table. + * For previous behavior of issuing row deletes, see + * deleteTableData. + * Expressly does not preserve regions of existing table. + * @param tableName table which must exist. + * @return HTable for the new table + */ + public HTable truncateTable(final TableName tableName) throws IOException { + return truncateTable(tableName, false); + } + + /** + * Truncate a table using the admin command. + * Effectively disables, deletes, and recreates the table. + * @param tableName table which must exist. + * @param preserveRegions keep the existing split points + * @return HTable for the new table + */ + public HTable truncateTable(final byte[] tableName, final boolean preserveRegions) throws IOException { + return truncateTable(TableName.valueOf(tableName), preserveRegions); + } + + /** + * Truncate a table using the admin command. + * Effectively disables, deletes, and recreates the table. + * For previous behavior of issuing row deletes, see + * deleteTableData. + * Expressly does not preserve regions of existing table. + * @param tableName table which must exist. + * @return HTable for the new table + */ + public HTable truncateTable(final byte[] tableName) throws IOException { + return truncateTable(tableName, false); + } + + /** * Load table with rows from 'aaa' to 'zzz'. 
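[Editorial aside, not part of the patch] The renaming above splits the old truncate helper in two: truncateTable() now goes through Admin.truncateTable() (disable, delete, recreate, optionally keeping the split points), while deleteTableData() keeps the old behaviour of scanning the table and deleting each row. A minimal sketch, with TEST_UTIL and TABLE_NAME standing in for the usual (assumed) test fixtures:

  // Sketch only: either call hands back an HTable on the now-empty table.
  HTable recreated = TEST_UTIL.truncateTable(TABLE_NAME, true);  // admin truncate, preserve regions
  HTable emptied   = TEST_UTIL.deleteTableData(TABLE_NAME);      // legacy row-by-row deletes
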
* @param t Table * @param f Family @@ -1977,7 +2039,8 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { return rowCount; } - public void loadNumericRows(final Table t, final byte[] f, int startRow, int endRow) throws IOException { + public void loadNumericRows(final Table t, final byte[] f, int startRow, int endRow) + throws IOException { for (int i = startRow; i < endRow; i++) { byte[] data = Bytes.toBytes(String.valueOf(i)); Put put = new Put(data); @@ -1986,7 +2049,23 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { } } - public void deleteNumericRows(final Table t, final byte[] f, int startRow, int endRow) throws IOException { + public void verifyNumericRows(HRegion region, final byte[] f, int startRow, int endRow) + throws IOException { + for (int i = startRow; i < endRow; i++) { + String failMsg = "Failed verification of row :" + i; + byte[] data = Bytes.toBytes(String.valueOf(i)); + Result result = region.get(new Get(data)); + assertTrue(failMsg, result.containsColumn(f, null)); + assertEquals(failMsg, result.getColumnCells(f, null).size(), 1); + Cell cell = result.getColumnLatestCell(f, null); + assertTrue(failMsg, + Bytes.equals(data, 0, data.length, cell.getValueArray(), cell.getValueOffset(), + cell.getValueLength())); + } + } + + public void deleteNumericRows(final Table t, final byte[] f, int startRow, int endRow) + throws IOException { for (int i = startRow; i < endRow; i++) { byte[] data = Bytes.toBytes(String.valueOf(i)); Delete delete = new Delete(data); @@ -2130,56 +2209,55 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { final byte[] columnFamily, byte [][] startKeys) throws IOException { Arrays.sort(startKeys, Bytes.BYTES_COMPARATOR); - Table meta = new HTable(c, TableName.META_TABLE_NAME); - HTableDescriptor htd = table.getTableDescriptor(); - if(!htd.hasFamily(columnFamily)) { - HColumnDescriptor hcd = new HColumnDescriptor(columnFamily); - htd.addFamily(hcd); - } - // remove empty region - this is tricky as the mini cluster during the test - // setup already has the ",,123456789" row with an empty start - // and end key. Adding the custom regions below adds those blindly, - // including the new start region from empty to "bbb". lg - List rows = getMetaTableRows(htd.getTableName()); - String regionToDeleteInFS = table - .getRegionsInRange(Bytes.toBytes(""), Bytes.toBytes("")).get(0) - .getRegionInfo().getEncodedName(); - List newRegions = new ArrayList(startKeys.length); - // add custom ones - int count = 0; - for (int i = 0; i < startKeys.length; i++) { - int j = (i + 1) % startKeys.length; - HRegionInfo hri = new HRegionInfo(table.getName(), - startKeys[i], startKeys[j]); - MetaTableAccessor.addRegionToMeta(meta, hri); - newRegions.add(hri); - count++; - } - // see comment above, remove "old" (or previous) single region - for (byte[] row : rows) { - LOG.info("createMultiRegions: deleting meta row -> " + - Bytes.toStringBinary(row)); - meta.delete(new Delete(row)); - } - // remove the "old" region from FS - Path tableDir = new Path(getDefaultRootDirPath().toString() - + System.getProperty("file.separator") + htd.getTableName() - + System.getProperty("file.separator") + regionToDeleteInFS); - FileSystem.get(c).delete(tableDir, true); - // flush cache of regions - HConnection conn = table.getConnection(); - conn.clearRegionCache(); - // assign all the new regions IF table is enabled. 
- Admin admin = getHBaseAdmin(); - if (admin.isTableEnabled(table.getName())) { - for(HRegionInfo hri : newRegions) { - admin.assign(hri.getRegionName()); + try (Table meta = new HTable(c, TableName.META_TABLE_NAME)) { + HTableDescriptor htd = table.getTableDescriptor(); + if(!htd.hasFamily(columnFamily)) { + HColumnDescriptor hcd = new HColumnDescriptor(columnFamily); + htd.addFamily(hcd); + } + // remove empty region - this is tricky as the mini cluster during the test + // setup already has the ",,123456789" row with an empty start + // and end key. Adding the custom regions below adds those blindly, + // including the new start region from empty to "bbb". lg + List rows = getMetaTableRows(htd.getTableName()); + String regionToDeleteInFS = table + .getRegionsInRange(Bytes.toBytes(""), Bytes.toBytes("")).get(0) + .getRegionInfo().getEncodedName(); + List newRegions = new ArrayList(startKeys.length); + // add custom ones + int count = 0; + for (int i = 0; i < startKeys.length; i++) { + int j = (i + 1) % startKeys.length; + HRegionInfo hri = new HRegionInfo(table.getName(), + startKeys[i], startKeys[j]); + MetaTableAccessor.addRegionToMeta(meta, hri); + newRegions.add(hri); + count++; + } + // see comment above, remove "old" (or previous) single region + for (byte[] row : rows) { + LOG.info("createMultiRegions: deleting meta row -> " + + Bytes.toStringBinary(row)); + meta.delete(new Delete(row)); + } + // remove the "old" region from FS + Path tableDir = new Path(getDefaultRootDirPath().toString() + + System.getProperty("file.separator") + htd.getTableName() + + System.getProperty("file.separator") + regionToDeleteInFS); + FileSystem.get(c).delete(tableDir, true); + // flush cache of regions + HConnection conn = table.getConnection(); + conn.clearRegionCache(); + // assign all the new regions IF table is enabled. + Admin admin = conn.getAdmin(); + if (admin.isTableEnabled(table.getName())) { + for(HRegionInfo hri : newRegions) { + admin.assign(hri.getRegionName()); + } } - } - - meta.close(); - return count; + return count; + } } /** @@ -2587,7 +2665,7 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { * Get a Connection to the cluster. * Not thread-safe (This class needs a lot of work to make it thread-safe). * @return A Connection that can be shared. Don't close. Will be closed on shutdown of cluster. - * @throws IOException + * @throws IOException */ public Connection getConnection() throws IOException { if (this.connection == null) { @@ -2862,6 +2940,48 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { } /** + * Waits for a table to be 'disabled'. Disabled means that table is set as 'disabled' + * Will timeout after default period (30 seconds) + * @param table Table to wait on. + * @throws InterruptedException + * @throws IOException + */ + public void waitTableDisabled(byte[] table) + throws InterruptedException, IOException { + waitTableDisabled(getHBaseAdmin(), table, 30000); + } + + public void waitTableDisabled(Admin admin, byte[] table) + throws InterruptedException, IOException { + waitTableDisabled(admin, table, 30000); + } + + /** + * Waits for a table to be 'disabled'. Disabled means that table is set as 'disabled' + * @see #waitTableAvailable(byte[]) + * @param table Table to wait on. + * @param timeoutMillis Time to wait on it being marked disabled. 
+ * @throws InterruptedException + * @throws IOException + */ + public void waitTableDisabled(byte[] table, long timeoutMillis) + throws InterruptedException, IOException { + waitTableDisabled(getHBaseAdmin(), table, timeoutMillis); + } + + public void waitTableDisabled(Admin admin, byte[] table, long timeoutMillis) + throws InterruptedException, IOException { + TableName tableName = TableName.valueOf(table); + long startWait = System.currentTimeMillis(); + while (!admin.isTableDisabled(tableName)) { + assertTrue("Timed out waiting for table to become disabled " + + Bytes.toStringBinary(table), + System.currentTimeMillis() - startWait < timeoutMillis); + Thread.sleep(200); + } + } + + /** * Make sure that at least the specified number of region servers * are running * @param num minimum number of region servers that should be running @@ -3146,30 +3266,6 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { return zkw; } - /** - * Creates a znode with OPENED state. - * @param TEST_UTIL - * @param region - * @param serverName - * @return - * @throws IOException - * @throws org.apache.hadoop.hbase.ZooKeeperConnectionException - * @throws KeeperException - * @throws NodeExistsException - */ - public static ZooKeeperWatcher createAndForceNodeToOpenedState( - HBaseTestingUtility TEST_UTIL, HRegion region, - ServerName serverName) throws ZooKeeperConnectionException, - IOException, KeeperException, NodeExistsException { - ZooKeeperWatcher zkw = getZooKeeperWatcher(TEST_UTIL); - ZKAssign.createNodeOffline(zkw, region.getRegionInfo(), serverName); - int version = ZKAssign.transitionNodeOpening(zkw, region - .getRegionInfo(), serverName); - ZKAssign.transitionNodeOpened(zkw, region.getRegionInfo(), serverName, - version); - return zkw; - } - public static void assertKVListsEqual(String additionalMsg, final List expected, final List actual) { @@ -3451,10 +3547,10 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { } public static int getMetaRSPort(Configuration conf) throws IOException { - RegionLocator table = new HTable(conf, TableName.META_TABLE_NAME); - HRegionLocation hloc = table.getRegionLocation(Bytes.toBytes("")); - table.close(); - return hloc.getPort(); + try (Connection c = ConnectionFactory.createConnection(); + RegionLocator locator = c.getRegionLocator(TableName.META_TABLE_NAME)) { + return locator.getRegionLocation(Bytes.toBytes("")).getPort(); + } } /** @@ -3578,6 +3674,16 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { } /** + * Wait until no regions in transition. + * @param timeout How long to wait. + * @throws Exception + */ + public void waitUntilNoRegionsInTransition( + final long timeout) throws Exception { + waitFor(timeout, predicateNoRegionsInTransition()); + } + + /** * Create a set of column descriptors with the combination of compression, * encoding, bloom codecs available. * @return the list of column descriptors @@ -3629,13 +3735,4 @@ public class HBaseTestingUtility extends HBaseCommonTestingUtility { } return supportedAlgos.toArray(new Algorithm[supportedAlgos.size()]); } - - /** - * Wait until no regions in transition. - * @param timeout How long to wait. 
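[Editorial aside, not part of the patch] A short sketch of the new waitTableDisabled helpers above, assuming the usual TEST_UTIL fixture and an existing table named "t1" (both placeholders):

  // Sketch only: disable through the Admin, then block (here up to 30s) until the
  // master actually reports the table as disabled.
  Admin admin = TEST_UTIL.getHBaseAdmin();
  admin.disableTable(TableName.valueOf("t1"));
  TEST_UTIL.waitTableDisabled(Bytes.toBytes("t1"), 30000);
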
- * @throws Exception - */ - public void waitUntilNoRegionsInTransition(final long timeout) throws Exception { - waitFor(timeout, predicateNoRegionsInTransition()); - } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java index 7672ac1..24b6e71 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java @@ -233,7 +233,7 @@ public class MiniHBaseCluster extends HBaseCluster { } @Override - public void startRegionServer(String hostname) throws IOException { + public void startRegionServer(String hostname, int port) throws IOException { this.startRegionServer(); } @@ -260,7 +260,7 @@ public class MiniHBaseCluster extends HBaseCluster { } @Override - public void startMaster(String hostname) throws IOException { + public void startMaster(String hostname, int port) throws IOException { this.startMaster(); } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java index 013f0ef..11f3a7a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/MockRegionServerServices.java @@ -22,6 +22,7 @@ import java.net.InetSocketAddress; import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.Set; import java.util.concurrent.ConcurrentSkipListMap; import org.apache.hadoop.conf.Configuration; @@ -33,6 +34,7 @@ import org.apache.hadoop.hbase.ipc.RpcServerInterface; import org.apache.hadoop.hbase.master.TableLockManager; import org.apache.hadoop.hbase.master.TableLockManager.NullTableLockManager; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCode; +import org.apache.hadoop.hbase.quotas.RegionServerQuotaManager; import org.apache.hadoop.hbase.regionserver.CompactionRequestor; import org.apache.hadoop.hbase.regionserver.FlushRequester; import org.apache.hadoop.hbase.regionserver.HRegion; @@ -85,11 +87,17 @@ class MockRegionServerServices implements RegionServerServices { return this.regions.get(encodedRegionName); } + @Override public List getOnlineRegions(TableName tableName) throws IOException { return null; } @Override + public Set getOnlineTables() { + return null; + } + + @Override public void addToOnlineRegions(HRegion r) { this.regions.put(r.getRegionInfo().getEncodedName(), r); } @@ -149,6 +157,7 @@ class MockRegionServerServices implements RegionServerServices { return null; } + @Override public RegionServerAccounting getRegionServerAccounting() { return null; } @@ -159,6 +168,11 @@ class MockRegionServerServices implements RegionServerServices { } @Override + public RegionServerQuotaManager getRegionServerQuotaManager() { + return null; + } + + @Override public ServerName getServerName() { return this.serverName; } @@ -252,4 +266,4 @@ class MockRegionServerServices implements RegionServerServices { // TODO Auto-generated method stub return false; } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java index 943d67d..7524d5c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java +++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java @@ -41,9 +41,6 @@ import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.Future; -import com.google.common.base.Objects; -import com.google.common.util.concurrent.ThreadFactoryBuilder; - import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; @@ -89,16 +86,17 @@ import org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer; import org.apache.hadoop.util.Tool; import org.apache.hadoop.util.ToolRunner; import org.codehaus.jackson.map.ObjectMapper; - -import com.yammer.metrics.core.Histogram; -import com.yammer.metrics.stats.UniformSample; -import com.yammer.metrics.stats.Snapshot; - import org.htrace.Sampler; import org.htrace.Trace; import org.htrace.TraceScope; import org.htrace.impl.ProbabilitySampler; +import com.google.common.base.Objects; +import com.google.common.util.concurrent.ThreadFactoryBuilder; +import com.yammer.metrics.core.Histogram; +import com.yammer.metrics.stats.Snapshot; +import com.yammer.metrics.stats.UniformSample; + /** * Script used evaluating HBase performance and scalability. Runs a HBase * client that steps through one of a set of hardcoded tests or 'experiments' @@ -489,34 +487,47 @@ public class PerformanceEvaluation extends Configured implements Tool { return job; } + /** + * Per client, how many tasks will we run? We divide number of rows by this number and have the + * client do the resulting count in a map task. + */ + static int TASKS_PER_CLIENT = 10; + + static String JOB_INPUT_FILENAME = "input.txt"; + /* * Write input file of offsets-per-client for the mapreduce job. * @param c Configuration - * @return Directory that contains file written. + * @return Directory that contains file written whose name is JOB_INPUT_FILENAME * @throws IOException */ - private static Path writeInputFile(final Configuration c, final TestOptions opts) throws IOException { + static Path writeInputFile(final Configuration c, final TestOptions opts) throws IOException { + return writeInputFile(c, opts, new Path(".")); + } + + static Path writeInputFile(final Configuration c, final TestOptions opts, final Path basedir) + throws IOException { SimpleDateFormat formatter = new SimpleDateFormat("yyyyMMddHHmmss"); - Path jobdir = new Path(PERF_EVAL_DIR, formatter.format(new Date())); + Path jobdir = new Path(new Path(basedir, PERF_EVAL_DIR), formatter.format(new Date())); Path inputDir = new Path(jobdir, "inputs"); FileSystem fs = FileSystem.get(c); fs.mkdirs(inputDir); - Path inputFile = new Path(inputDir, "input.txt"); + Path inputFile = new Path(inputDir, JOB_INPUT_FILENAME); PrintStream out = new PrintStream(fs.create(inputFile)); // Make input random. 
Map m = new TreeMap(); Hash h = MurmurHash.getInstance(); int perClientRows = (opts.totalRows / opts.numClientThreads); try { - for (int i = 0; i < 10; i++) { + for (int i = 0; i < TASKS_PER_CLIENT; i++) { for (int j = 0; j < opts.numClientThreads; j++) { TestOptions next = new TestOptions(opts); next.startRow = (j * perClientRows) + (i * (perClientRows/10)); next.perClientRunRows = perClientRows / 10; String s = MAPPER.writeValueAsString(next); - LOG.info("maptask input=" + s); + LOG.info("Client=" + j + ", maptask=" + i + ", input=" + s); int hash = h.hash(Bytes.toBytes(s)); m.put(hash, s); } @@ -573,6 +584,7 @@ public class PerformanceEvaluation extends Configured implements Tool { int perClientRunRows = DEFAULT_ROWS_PER_GB; int numClientThreads = 1; int totalRows = DEFAULT_ROWS_PER_GB; + int measureAfter = 0; float sampleRate = 1.0f; double traceRate = 0.0; String tableName = TABLE_NAME; @@ -596,6 +608,7 @@ public class PerformanceEvaluation extends Configured implements Tool { boolean valueZipf = false; int valueSize = DEFAULT_VALUE_LENGTH; int period = (this.perClientRunRows / 10) == 0? perClientRunRows: perClientRunRows / 10; + int cycles = 1; public TestOptions() {} @@ -605,6 +618,7 @@ public class PerformanceEvaluation extends Configured implements Tool { */ public TestOptions(TestOptions that) { this.cmdName = that.cmdName; + this.cycles = that.cycles; this.nomapred = that.nomapred; this.startRow = that.startRow; this.size = that.size; @@ -635,6 +649,15 @@ public class PerformanceEvaluation extends Configured implements Tool { this.valueSize = that.valueSize; this.period = that.period; this.randomSleep = that.randomSleep; + this.measureAfter = that.measureAfter; + } + + public int getCycles() { + return this.cycles; + } + + public void setCycles(final int cycles) { + this.cycles = cycles; } public boolean isValueZipf() { @@ -884,6 +907,14 @@ public class PerformanceEvaluation extends Configured implements Tool { public boolean isOneCon() { return oneCon; } + + public int getMeasureAfter() { + return measureAfter; + } + + public void setMeasureAfter(int measureAfter) { + this.measureAfter = measureAfter; + } } /* @@ -921,11 +952,11 @@ public class PerformanceEvaluation extends Configured implements Tool { */ Test(final Connection con, final TestOptions options, final Status status) { this.connection = con; - this.conf = con.getConfiguration(); + this.conf = con == null? null: this.connection.getConfiguration(); + this.receiverHost = this.conf == null? null: SpanReceiverHost.getInstance(conf); this.opts = options; this.status = status; this.testName = this.getClass().getSimpleName(); - receiverHost = SpanReceiverHost.getInstance(conf); if (options.traceRate >= 1.0) { this.traceSampler = Sampler.ALWAYS; } else if (options.traceRate > 0.0) { @@ -935,7 +966,7 @@ public class PerformanceEvaluation extends Configured implements Tool { } everyN = (int) (opts.totalRows / (opts.totalRows * opts.sampleRate)); if (options.isValueZipf()) { - this.zipf = new RandomDistribution.Zipf(this.rand, 1, options.getValueSize(), 1.1); + this.zipf = new RandomDistribution.Zipf(this.rand, 1, options.getValueSize(), 1.2); } LOG.info("Sampling 1 every " + everyN + " out of " + opts.perClientRunRows + " total rows."); } @@ -1031,18 +1062,23 @@ public class PerformanceEvaluation extends Configured implements Tool { void testTimed() throws IOException, InterruptedException { int lastRow = opts.startRow + opts.perClientRunRows; // Report on completion of 1/10th of total. 
- for (int i = opts.startRow; i < lastRow; i++) { - if (i % everyN != 0) continue; - long startTime = System.nanoTime(); - TraceScope scope = Trace.startSpan("test row", traceSampler); - try { - testRow(i); - } finally { - scope.close(); - } - latency.update((System.nanoTime() - startTime) / 1000); - if (status != null && i > 0 && (i % getReportingPeriod()) == 0) { - status.setStatus(generateStatus(opts.startRow, i, lastRow)); + for (int ii = 0; ii < opts.cycles; ii++) { + if (opts.cycles > 1) LOG.info("Cycle=" + ii + " of " + opts.cycles); + for (int i = opts.startRow; i < lastRow; i++) { + if (i % everyN != 0) continue; + long startTime = System.nanoTime(); + TraceScope scope = Trace.startSpan("test row", traceSampler); + try { + testRow(i); + } finally { + scope.close(); + } + if ( (i - opts.startRow) > opts.measureAfter) { + latency.update((System.nanoTime() - startTime) / 1000); + if (status != null && i > 0 && (i % getReportingPeriod()) == 0) { + status.setStatus(generateStatus(opts.startRow, i, lastRow)); + } + } } } } @@ -1588,6 +1624,8 @@ public class PerformanceEvaluation extends Configured implements Tool { + " there by not returning any thing back to the client. Helps to check the server side" + " performance. Uses FilterAllFilter internally. "); System.err.println(" latency Set to report operation latencies. Default: False"); + System.err.println(" measureAfter Start to measure the latency once 'measureAfter'" + + " rows have been treated. Default: 0"); System.err.println(" bloomFilter Bloom filter type, one of " + Arrays.toString(BloomType.values())); System.err.println(" valueSize Pass value size to use: Default: 1024"); System.err.println(" valueRandom Set if we should vary value size between 0 and " + @@ -1599,6 +1637,7 @@ public class PerformanceEvaluation extends Configured implements Tool { System.err.println(" multiGet Batch gets together into groups of N. Only supported " + "by randomRead. Default: disabled"); System.err.println(" replicas Enable region replica testing. Defaults: 1."); + System.err.println(" cycles How many times to cycle the test. Defaults: 1."); System.err.println(" splitPolicy Specify a custom RegionSplitPolicy for the table."); System.err.println(" randomSleep Do a random sleep before each get between 0 and entered value. Defaults: 0"); System.err.println(); @@ -1651,6 +1690,12 @@ public class PerformanceEvaluation extends Configured implements Tool { continue; } + final String cycles = "--cycles="; + if (cmd.startsWith(cycles)) { + opts.cycles = Integer.parseInt(cmd.substring(cycles.length())); + continue; + } + final String sampleRate = "--sampleRate="; if (cmd.startsWith(sampleRate)) { opts.sampleRate = Float.parseFloat(cmd.substring(sampleRate.length())); @@ -1762,6 +1807,7 @@ public class PerformanceEvaluation extends Configured implements Tool { final String size = "--size="; if (cmd.startsWith(size)) { opts.size = Float.parseFloat(cmd.substring(size.length())); + if (opts.size <= 1.0f) throw new IllegalStateException("Size must be > 1; i.e. 
1GB"); continue; } @@ -1777,6 +1823,12 @@ public class PerformanceEvaluation extends Configured implements Tool { continue; } + final String measureAfter = "--measureAfter="; + if (cmd.startsWith(measureAfter)) { + opts.measureAfter = Integer.parseInt(cmd.substring(measureAfter.length())); + continue; + } + final String bloomFilter = "--bloomFilter="; if (cmd.startsWith(bloomFilter)) { opts.bloomType = BloomType.valueOf(cmd.substring(bloomFilter.length())); @@ -1816,26 +1868,36 @@ public class PerformanceEvaluation extends Configured implements Tool { if (isCommandClass(cmd)) { opts.cmdName = cmd; opts.numClientThreads = Integer.parseInt(args.remove()); - int rowsPerGB = ONE_GB / (opts.valueRandom? opts.valueSize/2: opts.valueSize); if (opts.size != DEFAULT_OPTS.size && opts.perClientRunRows != DEFAULT_OPTS.perClientRunRows) { - throw new IllegalArgumentException(rows + " and " + size + " are mutually exclusive arguments."); - } - if (opts.size != DEFAULT_OPTS.size) { - // total size in GB specified - opts.totalRows = (int) opts.size * rowsPerGB; - opts.perClientRunRows = opts.totalRows / opts.numClientThreads; - } else if (opts.perClientRunRows != DEFAULT_OPTS.perClientRunRows) { - // number of rows specified - opts.totalRows = opts.perClientRunRows * opts.numClientThreads; - opts.size = opts.totalRows / rowsPerGB; + throw new IllegalArgumentException(rows + " and " + size + + " are mutually exclusive options"); } + opts = calculateRowsAndSize(opts); break; } } return opts; } + static TestOptions calculateRowsAndSize(final TestOptions opts) { + int rowsPerGB = getRowsPerGB(opts); + if (opts.size != DEFAULT_OPTS.size) { + // total size in GB specified + opts.totalRows = (int) opts.size * rowsPerGB; + opts.perClientRunRows = opts.totalRows / opts.numClientThreads; + } else if (opts.perClientRunRows != DEFAULT_OPTS.perClientRunRows) { + // number of rows specified + opts.totalRows = opts.perClientRunRows * opts.numClientThreads; + opts.size = opts.totalRows / rowsPerGB; + } + return opts; + } + + static int getRowsPerGB(final TestOptions opts) { + return ONE_GB / (opts.valueRandom? opts.valueSize/2: opts.valueSize); + } + @Override public int run(String[] args) throws Exception { // Process command-line args. 
TODO: Better cmd-line processing diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestAcidGuarantees.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestAcidGuarantees.java index 36da08d..3068fbf 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestAcidGuarantees.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestAcidGuarantees.java @@ -37,11 +37,13 @@ import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy; +import org.apache.hadoop.hbase.testclassification.FlakeyTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.util.StringUtils; import org.apache.hadoop.util.Tool; import org.apache.hadoop.util.ToolRunner; +import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -54,7 +56,7 @@ import com.google.common.collect.Lists; * This can run as a junit test, or with a main() function which runs against * a real cluster (eg for testing with failures, region movement, etc) */ -@Category(MediumTests.class) +@Category({FlakeyTests.class, MediumTests.class}) public class TestAcidGuarantees implements Tool { protected static final Log LOG = LogFactory.getLog(TestAcidGuarantees.class); public static final TableName TABLE_NAME = TableName.valueOf("TestAcidGuarantees"); @@ -87,10 +89,14 @@ public class TestAcidGuarantees implements Tool { conf.set(HConstants.HREGION_MEMSTORE_FLUSH_SIZE, String.valueOf(128*1024)); // prevent aggressive region split conf.set(HConstants.HBASE_REGION_SPLIT_POLICY_KEY, - ConstantSizeRegionSplitPolicy.class.getName()); + ConstantSizeRegionSplitPolicy.class.getName()); util = new HBaseTestingUtility(conf); } + public void setHBaseTestingUtil(HBaseTestingUtility util) { + this.util = util; + } + /** * Thread that does random full-row writes into a table. */ diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestCheckTestClasses.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestCheckTestClasses.java index d805755..06b98f7 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestCheckTestClasses.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestCheckTestClasses.java @@ -22,6 +22,7 @@ import static org.junit.Assert.assertTrue; import java.util.List; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -30,7 +31,7 @@ import org.junit.experimental.categories.Category; /** * Checks tests are categorized. */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestCheckTestClasses { /** * Throws an assertion if we find a test class without category (small/medium/large/integration). 
diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestClusterBootOrder.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestClusterBootOrder.java index f983f46..4097efb 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestClusterBootOrder.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestClusterBootOrder.java @@ -21,6 +21,7 @@ package org.apache.hadoop.hbase; import static org.junit.Assert.assertTrue; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread; import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread; import org.junit.After; @@ -31,7 +32,7 @@ import org.junit.experimental.categories.Category; /** * Tests the boot order indifference between regionserver and master */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestClusterBootOrder { private static final long SLEEP_INTERVAL = 1000; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestCompare.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestCompare.java index 4b42028..ed61350 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestCompare.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestCompare.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hbase; import junit.framework.TestCase; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.experimental.categories.Category; @@ -27,7 +28,7 @@ import org.junit.experimental.categories.Category; /** * Test comparing HBase objects. */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestCompare extends TestCase { /** diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestDrainingServer.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestDrainingServer.java deleted file mode 100644 index 6690990..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestDrainingServer.java +++ /dev/null @@ -1,306 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.apache.hadoop.hbase; - - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.executor.ExecutorService; -import org.apache.hadoop.hbase.executor.ExecutorType; -import org.apache.hadoop.hbase.master.AssignmentManager; -import org.apache.hadoop.hbase.master.HMaster; -import org.apache.hadoop.hbase.master.LoadBalancer; -import org.apache.hadoop.hbase.master.RegionPlan; -import org.apache.hadoop.hbase.master.RegionState; -import org.apache.hadoop.hbase.master.ServerManager; -import org.apache.hadoop.hbase.master.balancer.LoadBalancerFactory; -import org.apache.hadoop.hbase.regionserver.RegionOpeningState; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Test; -import org.junit.experimental.categories.Category; -import org.mockito.Mockito; - -import java.util.ArrayList; -import java.util.HashMap; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Map.Entry; -import java.util.Set; - -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotEquals; -import static org.junit.Assert.assertTrue; - - -/** - * Test the draining servers feature. - */ -@Category(MediumTests.class) -public class TestDrainingServer { - private static final Log LOG = LogFactory.getLog(TestDrainingServer.class); - private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); - private Abortable abortable = new Abortable() { - @Override - public boolean isAborted() { - return false; - } - - @Override - public void abort(String why, Throwable e) { - } - }; - - @AfterClass - public static void afterClass() throws Exception { - TEST_UTIL.shutdownMiniZKCluster(); - } - - @BeforeClass - public static void beforeClass() throws Exception { - TEST_UTIL.getConfiguration().setBoolean("hbase.assignment.usezk", true); - TEST_UTIL.startMiniZKCluster(); - } - - @Test - public void testAssignmentManagerDoesntUseDrainingServer() throws Exception { - AssignmentManager am; - Configuration conf = TEST_UTIL.getConfiguration(); - final HMaster master = Mockito.mock(HMaster.class); - final Server server = Mockito.mock(Server.class); - final ServerManager serverManager = Mockito.mock(ServerManager.class); - final ServerName SERVERNAME_A = ServerName.valueOf("mockserver_a.org", 1000, 8000); - final ServerName SERVERNAME_B = ServerName.valueOf("mockserver_b.org", 1001, 8000); - LoadBalancer balancer = LoadBalancerFactory.getLoadBalancer(conf); - final HRegionInfo REGIONINFO = new HRegionInfo(TableName.valueOf("table_test"), - HConstants.EMPTY_START_ROW, HConstants.EMPTY_START_ROW); - - ZooKeeperWatcher zkWatcher = new ZooKeeperWatcher(TEST_UTIL.getConfiguration(), - "zkWatcher-Test", abortable, true); - - Map onlineServers = new HashMap(); - - onlineServers.put(SERVERNAME_A, ServerLoad.EMPTY_SERVERLOAD); - onlineServers.put(SERVERNAME_B, ServerLoad.EMPTY_SERVERLOAD); - - Mockito.when(server.getConfiguration()).thenReturn(conf); - Mockito.when(server.getServerName()).thenReturn(ServerName.valueOf("masterMock,1,1")); - Mockito.when(server.getZooKeeper()).thenReturn(zkWatcher); - - CoordinatedStateManager cp = new 
ZkCoordinatedStateManager(); - cp.initialize(server); - cp.start(); - - Mockito.when(server.getCoordinatedStateManager()).thenReturn(cp); - - Mockito.when(serverManager.getOnlineServers()).thenReturn(onlineServers); - Mockito.when(serverManager.getOnlineServersList()) - .thenReturn(new ArrayList(onlineServers.keySet())); - - Mockito.when(serverManager.createDestinationServersList()) - .thenReturn(new ArrayList(onlineServers.keySet())); - Mockito.when(serverManager.createDestinationServersList(null)) - .thenReturn(new ArrayList(onlineServers.keySet())); - - for (ServerName sn : onlineServers.keySet()) { - Mockito.when(serverManager.isServerOnline(sn)).thenReturn(true); - Mockito.when(serverManager.sendRegionClose(sn, REGIONINFO, -1)).thenReturn(true); - Mockito.when(serverManager.sendRegionClose(sn, REGIONINFO, -1, null, false)).thenReturn(true); - Mockito.when(serverManager.sendRegionOpen(sn, REGIONINFO, -1, new ArrayList())) - .thenReturn(RegionOpeningState.OPENED); - Mockito.when(serverManager.sendRegionOpen(sn, REGIONINFO, -1, null)) - .thenReturn(RegionOpeningState.OPENED); - Mockito.when(serverManager.addServerToDrainList(sn)).thenReturn(true); - } - - Mockito.when(master.getServerManager()).thenReturn(serverManager); - - am = new AssignmentManager(server, serverManager, - balancer, startupMasterExecutor("mockExecutorService"), null, null); - - Mockito.when(master.getAssignmentManager()).thenReturn(am); - Mockito.when(master.getZooKeeper()).thenReturn(zkWatcher); - - am.addPlan(REGIONINFO.getEncodedName(), new RegionPlan(REGIONINFO, null, SERVERNAME_A)); - - zkWatcher.registerListenerFirst(am); - - addServerToDrainedList(SERVERNAME_A, onlineServers, serverManager); - - am.assign(REGIONINFO, true); - - setRegionOpenedOnZK(zkWatcher, SERVERNAME_A, REGIONINFO); - setRegionOpenedOnZK(zkWatcher, SERVERNAME_B, REGIONINFO); - - am.waitForAssignment(REGIONINFO); - - assertTrue(am.getRegionStates().isRegionOnline(REGIONINFO)); - assertNotEquals(am.getRegionStates().getRegionServerOfRegion(REGIONINFO), SERVERNAME_A); - } - - @Test - public void testAssignmentManagerDoesntUseDrainedServerWithBulkAssign() throws Exception { - Configuration conf = TEST_UTIL.getConfiguration(); - LoadBalancer balancer = LoadBalancerFactory.getLoadBalancer(conf); - AssignmentManager am; - final HMaster master = Mockito.mock(HMaster.class); - final Server server = Mockito.mock(Server.class); - final ServerManager serverManager = Mockito.mock(ServerManager.class); - final ServerName SERVERNAME_A = ServerName.valueOf("mockserverbulk_a.org", 1000, 8000); - final ServerName SERVERNAME_B = ServerName.valueOf("mockserverbulk_b.org", 1001, 8000); - final ServerName SERVERNAME_C = ServerName.valueOf("mockserverbulk_c.org", 1002, 8000); - final ServerName SERVERNAME_D = ServerName.valueOf("mockserverbulk_d.org", 1003, 8000); - final ServerName SERVERNAME_E = ServerName.valueOf("mockserverbulk_e.org", 1004, 8000); - final Map bulk = new HashMap(); - - Set bunchServersAssigned = new HashSet(); - - HRegionInfo REGIONINFO_A = new HRegionInfo(TableName.valueOf("table_A"), - HConstants.EMPTY_START_ROW, HConstants.EMPTY_START_ROW); - HRegionInfo REGIONINFO_B = new HRegionInfo(TableName.valueOf("table_B"), - HConstants.EMPTY_START_ROW, HConstants.EMPTY_START_ROW); - HRegionInfo REGIONINFO_C = new HRegionInfo(TableName.valueOf("table_C"), - HConstants.EMPTY_START_ROW, HConstants.EMPTY_START_ROW); - HRegionInfo REGIONINFO_D = new HRegionInfo(TableName.valueOf("table_D"), - HConstants.EMPTY_START_ROW, HConstants.EMPTY_START_ROW); - 
HRegionInfo REGIONINFO_E = new HRegionInfo(TableName.valueOf("table_E"), - HConstants.EMPTY_START_ROW, HConstants.EMPTY_START_ROW); - - Map onlineServers = new HashMap(); - List drainedServers = new ArrayList(); - - onlineServers.put(SERVERNAME_A, ServerLoad.EMPTY_SERVERLOAD); - onlineServers.put(SERVERNAME_B, ServerLoad.EMPTY_SERVERLOAD); - onlineServers.put(SERVERNAME_C, ServerLoad.EMPTY_SERVERLOAD); - onlineServers.put(SERVERNAME_D, ServerLoad.EMPTY_SERVERLOAD); - onlineServers.put(SERVERNAME_E, ServerLoad.EMPTY_SERVERLOAD); - - bulk.put(REGIONINFO_A, SERVERNAME_A); - bulk.put(REGIONINFO_B, SERVERNAME_B); - bulk.put(REGIONINFO_C, SERVERNAME_C); - bulk.put(REGIONINFO_D, SERVERNAME_D); - bulk.put(REGIONINFO_E, SERVERNAME_E); - - ZooKeeperWatcher zkWatcher = new ZooKeeperWatcher(TEST_UTIL.getConfiguration(), - "zkWatcher-BulkAssignTest", abortable, true); - - Mockito.when(server.getConfiguration()).thenReturn(conf); - Mockito.when(server.getServerName()).thenReturn(ServerName.valueOf("masterMock,1,1")); - Mockito.when(server.getZooKeeper()).thenReturn(zkWatcher); - - CoordinatedStateManager cp = new ZkCoordinatedStateManager(); - cp.initialize(server); - cp.start(); - - Mockito.when(server.getCoordinatedStateManager()).thenReturn(cp); - - Mockito.when(serverManager.getOnlineServers()).thenReturn(onlineServers); - Mockito.when(serverManager.getOnlineServersList()).thenReturn( - new ArrayList(onlineServers.keySet())); - - Mockito.when(serverManager.createDestinationServersList()).thenReturn( - new ArrayList(onlineServers.keySet())); - Mockito.when(serverManager.createDestinationServersList(null)).thenReturn( - new ArrayList(onlineServers.keySet())); - - for (Entry entry : bulk.entrySet()) { - Mockito.when(serverManager.isServerOnline(entry.getValue())).thenReturn(true); - Mockito.when(serverManager.sendRegionClose(entry.getValue(), - entry.getKey(), -1)).thenReturn(true); - Mockito.when(serverManager.sendRegionOpen(entry.getValue(), - entry.getKey(), -1, null)).thenReturn(RegionOpeningState.OPENED); - Mockito.when(serverManager.addServerToDrainList(entry.getValue())).thenReturn(true); - } - - Mockito.when(master.getServerManager()).thenReturn(serverManager); - - drainedServers.add(SERVERNAME_A); - drainedServers.add(SERVERNAME_B); - drainedServers.add(SERVERNAME_C); - drainedServers.add(SERVERNAME_D); - - am = new AssignmentManager(server, serverManager, - balancer, startupMasterExecutor("mockExecutorServiceBulk"), null, null); - - Mockito.when(master.getAssignmentManager()).thenReturn(am); - - zkWatcher.registerListener(am); - - for (ServerName drained : drainedServers) { - addServerToDrainedList(drained, onlineServers, serverManager); - } - - am.assign(bulk); - - Map regionsInTransition = am.getRegionStates().getRegionsInTransition(); - for (Entry entry : regionsInTransition.entrySet()) { - setRegionOpenedOnZK(zkWatcher, entry.getValue().getServerName(), - entry.getValue().getRegion()); - } - - am.waitForAssignment(REGIONINFO_A); - am.waitForAssignment(REGIONINFO_B); - am.waitForAssignment(REGIONINFO_C); - am.waitForAssignment(REGIONINFO_D); - am.waitForAssignment(REGIONINFO_E); - - Map regionAssignments = am.getRegionStates().getRegionAssignments(); - for (Entry entry : regionAssignments.entrySet()) { - LOG.info("Region Assignment: " - + entry.getKey().getRegionNameAsString() + " Server: " + entry.getValue()); - bunchServersAssigned.add(entry.getValue()); - } - - for (ServerName sn : drainedServers) { - assertFalse(bunchServersAssigned.contains(sn)); - } - } - - private void 
addServerToDrainedList(ServerName serverName, - Map onlineServers, ServerManager serverManager) { - onlineServers.remove(serverName); - List availableServers = new ArrayList(onlineServers.keySet()); - Mockito.when(serverManager.createDestinationServersList()).thenReturn(availableServers); - Mockito.when(serverManager.createDestinationServersList(null)).thenReturn(availableServers); - } - - private void setRegionOpenedOnZK(final ZooKeeperWatcher zkWatcher, final ServerName serverName, - HRegionInfo hregionInfo) throws Exception { - int version = ZKAssign.getVersion(zkWatcher, hregionInfo); - int versionTransition = ZKAssign.transitionNode(zkWatcher, - hregionInfo, serverName, EventType.M_ZK_REGION_OFFLINE, - EventType.RS_ZK_REGION_OPENING, version); - ZKAssign.transitionNodeOpened(zkWatcher, hregionInfo, serverName, versionTransition); - } - - private ExecutorService startupMasterExecutor(final String name) { - ExecutorService executor = new ExecutorService(name); - executor.startExecutorService(ExecutorType.MASTER_OPEN_REGION, 3); - executor.startExecutorService(ExecutorType.MASTER_CLOSE_REGION, 3); - executor.startExecutorService(ExecutorType.MASTER_SERVER_OPERATIONS, 3); - executor.startExecutorService(ExecutorType.MASTER_META_SERVER_OPERATIONS, 3); - return executor; - } -} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestFSTableDescriptorForceCreation.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestFSTableDescriptorForceCreation.java index f963461..07b9cbd 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestFSTableDescriptorForceCreation.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestFSTableDescriptorForceCreation.java @@ -25,12 +25,13 @@ import java.io.IOException; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.FSTableDescriptors; import org.junit.*; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestFSTableDescriptorForceCreation { private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestFullLogReconstruction.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestFullLogReconstruction.java index f9ab86b..59ddfd7 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestFullLogReconstruction.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestFullLogReconstruction.java @@ -24,6 +24,7 @@ import static org.junit.Assert.assertEquals; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; import org.junit.AfterClass; @@ -32,7 +33,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({MiscTests.class, LargeTests.class}) public class TestFullLogReconstruction { private final static HBaseTestingUtility diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestGlobalMemStoreSize.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestGlobalMemStoreSize.java index 6473350..7be5074 100644 --- 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestGlobalMemStoreSize.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestGlobalMemStoreSize.java @@ -31,6 +31,7 @@ import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.JVMClusterUtil; import org.apache.hadoop.hbase.util.Threads; @@ -41,7 +42,7 @@ import org.junit.experimental.categories.Category; * Test HBASE-3694 whether the GlobalMemStoreSize is the same as the summary * of all the online region's MemStoreSize */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestGlobalMemStoreSize { private final Log LOG = LogFactory.getLog(this.getClass().getName()); private static int regionServerNum = 4; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java index ecd7480..abbcb4c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestHBaseTestingUtility.java @@ -32,6 +32,7 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster; import org.apache.hadoop.hdfs.MiniDFSCluster; @@ -41,7 +42,7 @@ import org.junit.experimental.categories.Category; /** * Test our testing utility class */ -@Category(LargeTests.class) +@Category({MiscTests.class, LargeTests.class}) public class TestHBaseTestingUtility { private final Log LOG = LogFactory.getLog(this.getClass()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestHColumnDescriptorDefaultVersions.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestHColumnDescriptorDefaultVersions.java index 17ad36d..4fa945a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestHColumnDescriptorDefaultVersions.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestHColumnDescriptorDefaultVersions.java @@ -23,9 +23,10 @@ import static org.junit.Assert.assertEquals; import java.io.IOException; import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.master.MasterFileSystem; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSTableDescriptors; import org.apache.hadoop.hbase.util.FSUtils; @@ -42,7 +43,7 @@ import org.junit.rules.TestName; * Verify that the HColumnDescriptor version is set correctly by default, hbase-site.xml, and user * input */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestHColumnDescriptorDefaultVersions { @Rule @@ -147,8 +148,8 @@ public class TestHColumnDescriptorDefaultVersions { // Verify descriptor from HDFS MasterFileSystem mfs = 
TEST_UTIL.getMiniHBaseCluster().getMaster().getMasterFileSystem(); Path tableDir = FSUtils.getTableDir(mfs.getRootDir(), tableName); - htd = FSTableDescriptors.getTableDescriptorFromFs(mfs.getFileSystem(), tableDir); - hcds = htd.getColumnFamilies(); + TableDescriptor td = FSTableDescriptors.getTableDescriptorFromFs(mfs.getFileSystem(), tableDir); + hcds = td.getHTableDescriptor().getColumnFamilies(); verifyHColumnDescriptor(expected, hcds, tableName, families); } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestHDFSBlocksDistribution.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestHDFSBlocksDistribution.java index 2e12321..2329fc2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestHDFSBlocksDistribution.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestHDFSBlocksDistribution.java @@ -18,6 +18,7 @@ */ package org.apache.hadoop.hbase; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -27,7 +28,7 @@ import java.util.Map; import static junit.framework.Assert.assertEquals; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestHDFSBlocksDistribution { @Test public void testAddHostsAndBlockWeight() throws Exception { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestHRegionLocation.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestHRegionLocation.java index f6488d0..2ad5f9a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestHRegionLocation.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestHRegionLocation.java @@ -23,11 +23,12 @@ import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertNotSame; import static org.junit.Assert.assertTrue; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestHRegionLocation { /** * HRegionLocations are equal if they have the same 'location' -- i.e. host and diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestIOFencing.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestIOFencing.java index 48feb03..777ecea 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestIOFencing.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestIOFencing.java @@ -44,9 +44,10 @@ import org.apache.hadoop.hbase.regionserver.RegionServerServices; import org.apache.hadoop.hbase.regionserver.Store; import org.apache.hadoop.hbase.regionserver.StoreFile; import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.regionserver.wal.WALUtil; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread; import org.junit.Test; @@ -72,7 +73,7 @@ import com.google.common.collect.Lists; * has had some files removed because of a compaction. This sort of hurry's along and makes certain what is a chance * occurance. 
*/ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestIOFencing { static final Log LOG = LogFactory.getLog(TestIOFencing.class); static { @@ -242,9 +243,10 @@ public class TestIOFencing { c.setClass(HConstants.REGION_IMPL, regionClass, HRegion.class); c.setBoolean("dfs.support.append", true); // Encourage plenty of flushes - c.setLong("hbase.hregion.memstore.flush.size", 200000); + c.setLong("hbase.hregion.memstore.flush.size", 100000); c.set(HConstants.HBASE_REGION_SPLIT_POLICY_KEY, ConstantSizeRegionSplitPolicy.class.getName()); // Only run compaction when we tell it to + c.setInt("hbase.hstore.compaction.min",1); c.setInt("hbase.hstore.compactionThreshold", 1000); c.setLong("hbase.hstore.blockingStoreFiles", 1000); // Compact quickly after we tell it to! @@ -266,7 +268,7 @@ public class TestIOFencing { compactingRegion = (CompactionBlockerRegion)testRegions.get(0); LOG.info("Blocking compactions"); compactingRegion.stopCompactions(); - long lastFlushTime = compactingRegion.getLastFlushTime(); + long lastFlushTime = compactingRegion.getEarliestFlushTimeForAllStores(); // Load some rows TEST_UTIL.loadNumericRows(table, FAMILY, 0, FIRST_BATCH_COUNT); @@ -282,7 +284,7 @@ public class TestIOFencing { // Wait till flush has happened, otherwise there won't be multiple store files long startWaitTime = System.currentTimeMillis(); - while (compactingRegion.getLastFlushTime() <= lastFlushTime || + while (compactingRegion.getEarliestFlushTimeForAllStores() <= lastFlushTime || compactingRegion.countStoreFiles() <= 1) { LOG.info("Waiting for the region to flush " + compactingRegion.getRegionNameAsString()); Thread.sleep(1000); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestIPv6NIOServerSocketChannel.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestIPv6NIOServerSocketChannel.java index 0baf5de3..6b5ad98 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestIPv6NIOServerSocketChannel.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestIPv6NIOServerSocketChannel.java @@ -27,6 +27,7 @@ import java.nio.channels.ServerSocketChannel; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Assert; import org.junit.Test; @@ -43,7 +44,7 @@ import org.junit.experimental.categories.Category; * the test ensures that we are running with java.net.preferIPv4Stack=true, so * that ZK will not fail to bind to ipv6 address using ClientCnxnSocketNIO. 
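The TestIOFencing configuration tweaks above push the region toward frequent flushes while keeping compactions under the test's manual control. A minimal sketch of the same knobs on a plain configuration, assuming only the properties used in that hunk (the class and method names here are illustrative only):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy;

public class ConfigSketch {
  public static Configuration frequentFlushManualCompaction() {
    Configuration c = HBaseConfiguration.create();
    // Small memstore flush size so flushes happen often during the test.
    c.setLong("hbase.hregion.memstore.flush.size", 100000);
    // Keep regions from splitting on their own.
    c.set(HConstants.HBASE_REGION_SPLIT_POLICY_KEY,
        ConstantSizeRegionSplitPolicy.class.getName());
    // Allow a single-file compaction when explicitly requested, but set the
    // automatic triggers out of reach so compactions run only on demand.
    c.setInt("hbase.hstore.compaction.min", 1);
    c.setInt("hbase.hstore.compactionThreshold", 1000);
    c.setLong("hbase.hstore.blockingStoreFiles", 1000);
    return c;
  }
}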
*/ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestIPv6NIOServerSocketChannel { private static final Log LOG = LogFactory.getLog(TestIPv6NIOServerSocketChannel.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestInfoServers.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestInfoServers.java index bcac3de..62b00d8 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestInfoServers.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestInfoServers.java @@ -28,6 +28,7 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -38,7 +39,7 @@ import org.junit.experimental.categories.Category; * Testing, info servers are disabled. This test enables then and checks that * they serve pages. */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestInfoServers { static final Log LOG = LogFactory.getLog(TestInfoServers.class); private final static HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestJMXListener.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestJMXListener.java index af602a5..ed141a6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestJMXListener.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestJMXListener.java @@ -29,6 +29,7 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.junit.AfterClass; import org.junit.Assert; import org.junit.BeforeClass; @@ -39,7 +40,7 @@ import org.junit.rules.ExpectedException; -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestJMXListener { private static final Log LOG = LogFactory.getLog(TestJMXListener.class); private static HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestLocalHBaseCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestLocalHBaseCluster.java index da5b8d8..bbf4f32 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestLocalHBaseCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestLocalHBaseCluster.java @@ -24,12 +24,13 @@ import java.io.IOException; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.zookeeper.KeeperException; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestLocalHBaseCluster { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaMigrationConvertingToPB.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaMigrationConvertingToPB.java deleted file mode 100644 index 3845bcd..0000000 --- 
hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaMigrationConvertingToPB.java +++ /dev/null @@ -1,433 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.hbase; - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; - -import java.io.File; -import java.io.IOException; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.List; - -import junit.framework.Assert; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.fs.FileUtil; -import org.apache.hadoop.fs.FsShell; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.client.HConnection; -import org.apache.hadoop.hbase.migration.NamespaceUpgrade; -import org.apache.hadoop.hbase.client.HTable; -import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.client.ResultScanner; -import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.client.Durability; -import org.apache.hadoop.hbase.master.HMaster; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.io.DataOutputBuffer; -import org.apache.hadoop.util.ToolRunner; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Test; -import org.junit.experimental.categories.Category; - -/** - * Test migration that changes HRI serialization into PB. Tests by bringing up a cluster from actual - * data from a 0.92 cluster, as well as manually downgrading and then upgrading the hbase:meta info. - * @deprecated Remove after 0.96 - */ -@Category(MediumTests.class) -@Deprecated -public class TestMetaMigrationConvertingToPB { - static final Log LOG = LogFactory.getLog(TestMetaMigrationConvertingToPB.class); - private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); - - private final static String TESTTABLE = "TestTable"; - - private final static int ROW_COUNT = 100; - private final static int REGION_COUNT = 9; //initial number of regions of the TestTable - - private static final int META_VERSION_092 = 0; - - /* - * This test uses a tgz file named "TestMetaMigrationConvertingToPB.tgz" under - * hbase-server/src/test/data which contains file data from a 0.92 cluster. - * The cluster has a table named "TestTable", which has 100 rows. 0.94 has same - * hbase:meta structure, so it should be the same. 
- * - * hbase(main):001:0> create 'TestTable', 'f1' - * hbase(main):002:0> for i in 1..100 - * hbase(main):003:1> put 'TestTable', "row#{i}", "f1:c1", i - * hbase(main):004:1> end - * - * There are 9 regions in the table - */ - - @BeforeClass - public static void setUpBeforeClass() throws Exception { - // Start up our mini cluster on top of an 0.92 root.dir that has data from - // a 0.92 hbase run -- it has a table with 100 rows in it -- and see if - // we can migrate from 0.92 - TEST_UTIL.startMiniZKCluster(); - TEST_UTIL.startMiniDFSCluster(1); - Path testdir = TEST_UTIL.getDataTestDir("TestMetaMigrationConvertToPB"); - // Untar our test dir. - File untar = untar(new File(testdir.toString())); - // Now copy the untar up into hdfs so when we start hbase, we'll run from it. - Configuration conf = TEST_UTIL.getConfiguration(); - FsShell shell = new FsShell(conf); - FileSystem fs = FileSystem.get(conf); - // find where hbase will root itself, so we can copy filesystem there - Path hbaseRootDir = TEST_UTIL.getDefaultRootDirPath(); - if (!fs.isDirectory(hbaseRootDir.getParent())) { - // mkdir at first - fs.mkdirs(hbaseRootDir.getParent()); - } - doFsCommand(shell, - new String [] {"-put", untar.toURI().toString(), hbaseRootDir.toString()}); - - // windows fix: tgz file has hbase:meta directory renamed as -META- since the original - // is an illegal name under windows. So we rename it back. - // See src/test/data//TestMetaMigrationConvertingToPB.README and - // https://issues.apache.org/jira/browse/HBASE-6821 - doFsCommand(shell, new String [] {"-mv", new Path(hbaseRootDir, "-META-").toString(), - new Path(hbaseRootDir, ".META.").toString()}); - // See whats in minihdfs. - doFsCommand(shell, new String [] {"-lsr", "/"}); - - //upgrade to namespace as well - Configuration toolConf = TEST_UTIL.getConfiguration(); - conf.set(HConstants.HBASE_DIR, TEST_UTIL.getDefaultRootDirPath().toString()); - ToolRunner.run(toolConf, new NamespaceUpgrade(), new String[]{"--upgrade"}); - - TEST_UTIL.startMiniHBaseCluster(1, 1); - // Assert we are running against the copied-up filesystem. The copied-up - // rootdir should have had a table named 'TestTable' in it. Assert it - // present. - HTable t = new HTable(TEST_UTIL.getConfiguration(), TESTTABLE); - ResultScanner scanner = t.getScanner(new Scan()); - int count = 0; - while (scanner.next() != null) { - count++; - } - // Assert that we find all 100 rows that are in the data we loaded. If - // so then we must have migrated it from 0.90 to 0.92. - Assert.assertEquals(ROW_COUNT, count); - scanner.close(); - t.close(); - } - - private static File untar(final File testdir) throws IOException { - // Find the src data under src/test/data - final String datafile = "TestMetaMigrationConvertToPB"; - String srcTarFile = - System.getProperty("project.build.testSourceDirectory", "src/test") + - File.separator + "data" + File.separator + datafile + ".tgz"; - File homedir = new File(testdir.toString()); - File tgtUntarDir = new File(homedir, datafile); - if (tgtUntarDir.exists()) { - if (!FileUtil.fullyDelete(tgtUntarDir)) { - throw new IOException("Failed delete of " + tgtUntarDir.toString()); - } - } - LOG.info("Untarring " + srcTarFile + " into " + homedir.toString()); - FileUtil.unTar(new File(srcTarFile), homedir); - Assert.assertTrue(tgtUntarDir.exists()); - return tgtUntarDir; - } - - private static void doFsCommand(final FsShell shell, final String [] args) - throws Exception { - // Run the 'put' command. 
- int errcode = shell.run(args); - if (errcode != 0) throw new IOException("Failed put; errcode=" + errcode); - } - - /** - * @throws java.lang.Exception - */ - @AfterClass - public static void tearDownAfterClass() throws Exception { - TEST_UTIL.shutdownMiniCluster(); - } - - @Test - public void testMetaUpdatedFlagInROOT() throws Exception { - HMaster master = TEST_UTIL.getMiniHBaseCluster().getMaster(); - boolean metaUpdated = MetaMigrationConvertingToPB. - isMetaTableUpdated(master.getConnection()); - assertEquals(true, metaUpdated); - verifyMetaRowsAreUpdated(master.getConnection()); - } - - @Test - public void testMetaMigration() throws Exception { - LOG.info("Starting testMetaMigration"); - final byte [] FAMILY = Bytes.toBytes("family"); - HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("testMetaMigration")); - HColumnDescriptor hcd = new HColumnDescriptor(FAMILY); - htd.addFamily(hcd); - Configuration conf = TEST_UTIL.getConfiguration(); - byte[][] regionNames = new byte[][]{ - HConstants.EMPTY_START_ROW, - Bytes.toBytes("region_a"), - Bytes.toBytes("region_b")}; - createMultiRegionsWithWritableSerialization(conf, - htd.getTableName().getName(), - regionNames); - HConnection masterHConnection = - TEST_UTIL.getMiniHBaseCluster().getMaster().getConnection(); - // Erase the current version of root meta for this test. - undoVersionInRoot(); - MetaTableAccessor.fullScanMetaAndPrint(masterHConnection); - LOG.info("Meta Print completed.testMetaMigration"); - - long numMigratedRows = MetaMigrationConvertingToPB.updateMeta( - TEST_UTIL.getHBaseCluster().getMaster()); - MetaTableAccessor.fullScanMetaAndPrint(masterHConnection); - - // Should be one entry only and it should be for the table we just added. - assertEquals(regionNames.length, numMigratedRows); - - // Assert that the flag in ROOT is updated to reflect the correct status - boolean metaUpdated = MetaMigrationConvertingToPB.isMetaTableUpdated(masterHConnection); - assertEquals(true, metaUpdated); - verifyMetaRowsAreUpdated(masterHConnection); - } - - /** - * This test assumes a master crash/failure during the meta migration process - * and attempts to continue the meta migration process when a new master takes over. - * When a master dies during the meta migration we will have some rows of - * META.CatalogFamily updated with PB serialization and some - * still hanging with writable serialization. When the backup master/ or - * fresh start of master attempts the migration it will encounter some rows of META - * already updated with new HRI and some still legacy. This test will simulate this - * scenario and validates that the migration process can safely skip the updated - * rows and migrate any pending rows at startup. - * @throws Exception - */ - @Test - public void testMasterCrashDuringMetaMigration() throws Exception { - final byte[] FAMILY = Bytes.toBytes("family"); - HTableDescriptor htd = new HTableDescriptor(TableName.valueOf - ("testMasterCrashDuringMetaMigration")); - HColumnDescriptor hcd = new HColumnDescriptor(FAMILY); - htd.addFamily(hcd); - Configuration conf = TEST_UTIL.getConfiguration(); - // Create 10 New regions. - createMultiRegionsWithPBSerialization(conf, htd.getTableName().getName(), 10); - // Create 10 Legacy regions. - createMultiRegionsWithWritableSerialization(conf, - htd.getTableName().getName(), 10); - HConnection masterHConnection = - TEST_UTIL.getMiniHBaseCluster().getMaster().getConnection(); - // Erase the current version of root meta for this test. 
- undoVersionInRoot(); - - MetaTableAccessor.fullScanMetaAndPrint(masterHConnection); - LOG.info("Meta Print completed.testUpdatesOnMetaWithLegacyHRI"); - - long numMigratedRows = - MetaMigrationConvertingToPB.updateMetaIfNecessary( - TEST_UTIL.getHBaseCluster().getMaster()); - assertEquals(numMigratedRows, 10); - - // Assert that the flag in ROOT is updated to reflect the correct status - boolean metaUpdated = MetaMigrationConvertingToPB.isMetaTableUpdated(masterHConnection); - assertEquals(true, metaUpdated); - - verifyMetaRowsAreUpdated(masterHConnection); - - LOG.info("END testMasterCrashDuringMetaMigration"); - } - - /** - * Verify that every hbase:meta row is updated - */ - void verifyMetaRowsAreUpdated(HConnection hConnection) - throws IOException { - List results = MetaTableAccessor.fullScan(hConnection); - assertTrue(results.size() >= REGION_COUNT); - - for (Result result : results) { - byte[] hriBytes = result.getValue(HConstants.CATALOG_FAMILY, - HConstants.REGIONINFO_QUALIFIER); - assertTrue(hriBytes != null && hriBytes.length > 0); - assertTrue(MetaMigrationConvertingToPB.isMigrated(hriBytes)); - - byte[] splitA = result.getValue(HConstants.CATALOG_FAMILY, - HConstants.SPLITA_QUALIFIER); - if (splitA != null && splitA.length > 0) { - assertTrue(MetaMigrationConvertingToPB.isMigrated(splitA)); - } - - byte[] splitB = result.getValue(HConstants.CATALOG_FAMILY, - HConstants.SPLITB_QUALIFIER); - if (splitB != null && splitB.length > 0) { - assertTrue(MetaMigrationConvertingToPB.isMigrated(splitB)); - } - } - } - - /** Changes the version of hbase:meta to 0 to simulate 0.92 and 0.94 clusters*/ - private void undoVersionInRoot() throws IOException { - Put p = new Put(HRegionInfo.FIRST_META_REGIONINFO.getRegionName()); - - p.add(HConstants.CATALOG_FAMILY, HConstants.META_VERSION_QUALIFIER, - Bytes.toBytes(META_VERSION_092)); - - // TODO wire this MetaEditor.putToRootTable(ct, p); - LOG.info("Downgraded -ROOT- meta version=" + META_VERSION_092); - } - - /** - * Inserts multiple regions into hbase:meta using Writable serialization instead of PB - */ - public int createMultiRegionsWithWritableSerialization(final Configuration c, - final byte[] tableName, int numRegions) throws IOException { - if (numRegions < 3) throw new IOException("Must create at least 3 regions"); - byte [] startKey = Bytes.toBytes("aaaaa"); - byte [] endKey = Bytes.toBytes("zzzzz"); - byte [][] splitKeys = Bytes.split(startKey, endKey, numRegions - 3); - byte [][] regionStartKeys = new byte[splitKeys.length+1][]; - for (int i=0;i newRegions - = new ArrayList(startKeys.length); - int count = 0; - for (int i = 0; i < startKeys.length; i++) { - int j = (i + 1) % startKeys.length; - HRegionInfo hri = new HRegionInfo(tableName, startKeys[i], startKeys[j]); - Put put = new Put(hri.getRegionName()); - put.setDurability(Durability.SKIP_WAL); - put.add(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER, - getBytes(hri)); //this is the old Writable serialization - - //also add the region as it's daughters - put.add(HConstants.CATALOG_FAMILY, HConstants.SPLITA_QUALIFIER, - getBytes(hri)); //this is the old Writable serialization - - put.add(HConstants.CATALOG_FAMILY, HConstants.SPLITB_QUALIFIER, - getBytes(hri)); //this is the old Writable serialization - - meta.put(put); - LOG.info("createMultiRegionsWithWritableSerialization: PUT inserted " + hri.toString()); - - newRegions.add(hri); - count++; - } - meta.close(); - return count; - } - - @Deprecated - private byte[] getBytes(HRegionInfo hri) throws IOException { - 
DataOutputBuffer out = new DataOutputBuffer(); - try { - hri.write(out); - return out.getData(); - } finally { - if (out != null) { - out.close(); - } - } - } - - /** - * Inserts multiple regions into hbase:meta using PB serialization - */ - int createMultiRegionsWithPBSerialization(final Configuration c, - final byte[] tableName, int numRegions) - throws IOException { - if (numRegions < 3) throw new IOException("Must create at least 3 regions"); - byte [] startKey = Bytes.toBytes("aaaaa"); - byte [] endKey = Bytes.toBytes("zzzzz"); - byte [][] splitKeys = Bytes.split(startKey, endKey, numRegions - 3); - byte [][] regionStartKeys = new byte[splitKeys.length+1][]; - for (int i=0;i newRegions - = new ArrayList(startKeys.length); - int count = 0; - for (int i = 0; i < startKeys.length; i++) { - int j = (i + 1) % startKeys.length; - HRegionInfo hri = new HRegionInfo(tableName, startKeys[i], startKeys[j]); - Put put = MetaTableAccessor.makePutFromRegionInfo(hri); - put.setDurability(Durability.SKIP_WAL); - meta.put(put); - LOG.info("createMultiRegionsWithPBSerialization: PUT inserted " + hri.toString()); - - newRegions.add(hri); - count++; - } - meta.close(); - return count; - } - - -} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableAccessor.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableAccessor.java index 517316f..eadebd3 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableAccessor.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableAccessor.java @@ -26,19 +26,23 @@ import static org.junit.Assert.assertTrue; import java.io.IOException; import java.util.List; import java.util.Random; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.client.Admin; -import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.HConnectionManager; +import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; +import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; import org.junit.AfterClass; import org.junit.Assert; import org.junit.BeforeClass; @@ -48,7 +52,8 @@ import org.junit.experimental.categories.Category; /** * Test {@link org.apache.hadoop.hbase.MetaTableAccessor}. */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) +@SuppressWarnings("deprecation") public class TestMetaTableAccessor { private static final Log LOG = LogFactory.getLog(TestMetaTableAccessor.class); private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); @@ -62,10 +67,11 @@ public class TestMetaTableAccessor { // responsive. 1 second is default as is ten retries. 
c.setLong("hbase.client.pause", 1000); c.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 10); - connection = HConnectionManager.getConnection(c); + connection = ConnectionFactory.createConnection(c); } @AfterClass public static void afterClass() throws Exception { + connection.close(); UTIL.shutdownMiniCluster(); } @@ -195,14 +201,13 @@ public class TestMetaTableAccessor { abstract void metaTask() throws Throwable; } - @Test public void testGetRegionsCatalogTables() + @Test public void testGetRegionsFromMetaTable() throws IOException, InterruptedException { List regions = - MetaTableAccessor.getTableRegions(UTIL.getZooKeeperWatcher(), - connection, TableName.META_TABLE_NAME); + new MetaTableLocator().getMetaRegions(UTIL.getZooKeeperWatcher()); assertTrue(regions.size() >= 1); - assertTrue(MetaTableAccessor.getTableRegionsAndLocations(UTIL.getZooKeeperWatcher(), - connection,TableName.META_TABLE_NAME).size() >= 1); + assertTrue(new MetaTableLocator().getMetaRegionsAndLocations( + UTIL.getZooKeeperWatcher()).size() >= 1); } @Test public void testTableExists() throws IOException { @@ -249,17 +254,14 @@ public class TestMetaTableAccessor { // Now make sure we only get the regions from 1 of the tables at a time - assertEquals(1, MetaTableAccessor.getTableRegions(UTIL.getZooKeeperWatcher(), - connection, name).size()); - assertEquals(1, MetaTableAccessor.getTableRegions(UTIL.getZooKeeperWatcher(), - connection, greaterName).size()); + assertEquals(1, MetaTableAccessor.getTableRegions(connection, name).size()); + assertEquals(1, MetaTableAccessor.getTableRegions(connection, greaterName).size()); } private static List testGettingTableRegions(final Connection connection, final TableName name, final int regionCount) throws IOException, InterruptedException { - List regions = MetaTableAccessor.getTableRegions(UTIL.getZooKeeperWatcher(), - connection, name); + List regions = MetaTableAccessor.getTableRegions(connection, name); assertEquals(regionCount, regions.size()); Pair pair = MetaTableAccessor.getRegion(connection, regions.get(0).getRegionName()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableAccessorNoCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableAccessorNoCluster.java index 3bf91a4..f70a0d7 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableAccessorNoCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableAccessorNoCluster.java @@ -37,6 +37,7 @@ import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanRequest; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanResponse; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.After; @@ -54,7 +55,7 @@ import com.google.protobuf.ServiceException; * Test MetaTableAccessor but without spinning up a cluster. * We mock regionserver back and forth (we do spin up a zk cluster). 
*/ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestMetaTableAccessorNoCluster { private static final Log LOG = LogFactory.getLog(TestMetaTableAccessorNoCluster.class); private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableLocator.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableLocator.java index e2bdc7b..9943749 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableLocator.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestMetaTableLocator.java @@ -39,6 +39,7 @@ import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.GetRequest; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.GetResponse; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Threads; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; @@ -57,7 +58,7 @@ import com.google.protobuf.ServiceException; /** * Test {@link org.apache.hadoop.hbase.zookeeper.MetaTableLocator} */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestMetaTableLocator { private static final Log LOG = LogFactory.getLog(TestMetaTableLocator.class); private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestMultiVersions.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestMultiVersions.java index 3a8f089..3491d72 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestMultiVersions.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestMultiVersions.java @@ -24,6 +24,7 @@ import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertTrue; import java.io.IOException; +import java.util.Map; import java.util.NavigableMap; import org.apache.commons.logging.Log; @@ -42,6 +43,7 @@ import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; import org.junit.AfterClass; @@ -54,7 +56,7 @@ import org.junit.experimental.categories.Category; * Port of old TestScanMultipleVersions, TestTimestamp and TestGetRowVersions * from old testing framework to {@link HBaseTestingUtility}. 
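As the TestMetaTableAccessor changes above show, user-table region lookups no longer take a ZooKeeper watcher, while hbase:meta's own regions are resolved through MetaTableLocator. A sketch under those assumptions (the wrapper class, method names, and HRegionInfo element type are taken from the surrounding tests, not guaranteed signatures):

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.MetaTableAccessor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.zookeeper.MetaTableLocator;
import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher;

public class RegionLookupSketch {
  // Regions of a user table come straight from the Connection.
  static List<HRegionInfo> userRegions(Connection connection, TableName table)
      throws IOException {
    return MetaTableAccessor.getTableRegions(connection, table);
  }

  // hbase:meta's own regions are located via ZooKeeper.
  static List<HRegionInfo> metaRegions(ZooKeeperWatcher zkw)
      throws IOException, InterruptedException {
    return new MetaTableLocator().getMetaRegions(zkw);
  }
}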
*/ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestMultiVersions { private static final Log LOG = LogFactory.getLog(TestMultiVersions.class); private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestNamespace.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestNamespace.java index 273fdfa..baa43fa 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestNamespace.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestNamespace.java @@ -40,6 +40,7 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.zookeeper.ZKUtil; @@ -54,7 +55,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.Sets; -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestNamespace { protected static final Log LOG = LogFactory.getLog(TestNamespace.class); private static HMaster master; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestNodeHealthCheckChore.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestNodeHealthCheckChore.java index 642b5c5..9360b1f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestNodeHealthCheckChore.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestNodeHealthCheckChore.java @@ -33,13 +33,14 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HealthChecker.HealthCheckerExitStatus; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.util.Shell; import org.junit.After; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestNodeHealthCheckChore { private static final Log LOG = LogFactory.getLog(TestNodeHealthCheckChore.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestPerformanceEvaluation.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestPerformanceEvaluation.java index 3414e0a..e35fc08 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestPerformanceEvaluation.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestPerformanceEvaluation.java @@ -17,19 +17,37 @@ */ package org.apache.hadoop.hbase; -import static org.junit.Assert.assertTrue; +import static org.junit.Assert.*; +import java.io.BufferedReader; +import java.io.ByteArrayInputStream; import java.io.IOException; +import java.io.InputStreamReader; +import java.lang.reflect.Constructor; +import java.lang.reflect.InvocationTargetException; +import org.apache.hadoop.fs.FSDataInputStream; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.PerformanceEvaluation.RandomReadTest; +import org.apache.hadoop.hbase.PerformanceEvaluation.TestOptions; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.codehaus.jackson.JsonGenerationException; import 
org.codehaus.jackson.map.JsonMappingException; import org.codehaus.jackson.map.ObjectMapper; +import org.junit.Ignore; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +import com.yammer.metrics.core.Histogram; +import com.yammer.metrics.stats.Snapshot; +import com.yammer.metrics.stats.UniformSample; + +@Category({MiscTests.class, SmallTests.class}) public class TestPerformanceEvaluation { + private static final HBaseTestingUtility HTU = new HBaseTestingUtility(); + @Test public void testSerialization() throws JsonGenerationException, JsonMappingException, IOException { @@ -42,4 +60,82 @@ public class TestPerformanceEvaluation { mapper.readValue(optionsString, PerformanceEvaluation.TestOptions.class); assertTrue(optionsDeserialized.isAutoFlush()); } + + /** + * Exercise the mr spec writing. Simple assertions to make sure it is basically working. + * @throws IOException + */ + @Ignore @Test + public void testWriteInputFile() throws IOException { + TestOptions opts = new PerformanceEvaluation.TestOptions(); + final int clients = 10; + opts.setNumClientThreads(clients); + opts.setPerClientRunRows(10); + Path dir = + PerformanceEvaluation.writeInputFile(HTU.getConfiguration(), opts, HTU.getDataTestDir()); + FileSystem fs = FileSystem.get(HTU.getConfiguration()); + Path p = new Path(dir, PerformanceEvaluation.JOB_INPUT_FILENAME); + long len = fs.getFileStatus(p).getLen(); + assertTrue(len > 0); + byte [] content = new byte[(int)len]; + FSDataInputStream dis = fs.open(p); + try { + dis.readFully(content); + BufferedReader br = + new BufferedReader(new InputStreamReader(new ByteArrayInputStream(content))); + int count = 0; + while (br.readLine() != null) { + count++; + } + assertEquals(clients * PerformanceEvaluation.TASKS_PER_CLIENT, count); + } finally { + dis.close(); + } + } + + @Test + public void testSizeCalculation() { + TestOptions opts = new PerformanceEvaluation.TestOptions(); + opts = PerformanceEvaluation.calculateRowsAndSize(opts); + int rows = opts.getPerClientRunRows(); + // Default row count + final int defaultPerClientRunRows = 1024 * 1024; + assertEquals(defaultPerClientRunRows, rows); + // If size is 2G, then twice the row count. + opts.setSize(2.0f); + opts = PerformanceEvaluation.calculateRowsAndSize(opts); + assertEquals(defaultPerClientRunRows * 2, opts.getPerClientRunRows()); + // If two clients, then they get half the rows each. + opts.setNumClientThreads(2); + opts = PerformanceEvaluation.calculateRowsAndSize(opts); + assertEquals(defaultPerClientRunRows, opts.getPerClientRunRows()); + // What if valueSize is 'random'? Then half of the valueSize so twice the rows. 
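The testSizeCalculation assertions above pin down the arithmetic without spelling it out: rows scale with the requested size, are divided among client threads, and double when values are random (random values average half the configured size). A back-of-the-envelope sketch that is consistent with those assertions; the real logic lives in PerformanceEvaluation.calculateRowsAndSize and may differ in detail:

// Reconstruction inferred from the assertions above, not the actual
// PerformanceEvaluation implementation.
public class RowCountSketch {
  // Matches defaultPerClientRunRows in the test above (rows per 1 GB, one client).
  static final int DEFAULT_ROWS_PER_GB = 1024 * 1024;

  static int perClientRows(float sizeInGb, int clientThreads, boolean valueRandom) {
    long rows = (long) (sizeInGb * DEFAULT_ROWS_PER_GB);
    if (valueRandom) {
      rows *= 2; // random values average half the configured value size, so twice the rows
    }
    return (int) (rows / clientThreads);
  }

  public static void main(String[] args) {
    System.out.println(perClientRows(1.0f, 1, false)); // 1048576 (the default)
    System.out.println(perClientRows(2.0f, 1, false)); // 2097152 (2 GB doubles the rows)
    System.out.println(perClientRows(2.0f, 2, false)); // 1048576 (two clients split the rows)
    System.out.println(perClientRows(2.0f, 2, true));  // 2097152 (random values double them again)
  }
}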
+ opts.valueRandom = true; + opts = PerformanceEvaluation.calculateRowsAndSize(opts); + assertEquals(defaultPerClientRunRows * 2, opts.getPerClientRunRows()); + } + + @Test + public void testZipfian() + throws NoSuchMethodException, SecurityException, InstantiationException, IllegalAccessException, + IllegalArgumentException, InvocationTargetException { + TestOptions opts = new PerformanceEvaluation.TestOptions(); + opts.setValueZipf(true); + final int valueSize = 1024; + opts.setValueSize(valueSize); + RandomReadTest rrt = new RandomReadTest(null, opts, null); + Constructor ctor = + Histogram.class.getDeclaredConstructor(com.yammer.metrics.stats.Sample.class); + ctor.setAccessible(true); + Histogram histogram = (Histogram)ctor.newInstance(new UniformSample(1024 * 500)); + for (int i = 0; i < 100; i++) { + histogram.update(rrt.getValueLength(null)); + } + double stddev = histogram.stdDev(); + assertTrue(stddev != 0 && stddev != 1.0); + assertTrue(histogram.stdDev() != 0); + Snapshot snapshot = histogram.getSnapshot(); + double median = snapshot.getMedian(); + assertTrue(median != 0 && median != 1 && median != valueSize); + } } \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java index 1b94a06..93e74e8 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestRegionRebalancing.java @@ -21,20 +21,16 @@ package org.apache.hadoop.hbase; import static org.junit.Assert.assertEquals; import static org.junit.Assert.fail; -import java.io.IOException; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collection; -import java.util.List; - import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.client.HBaseAdmin; -import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.RegionLocator; import org.apache.hadoop.hbase.master.RegionStates; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.FlakeyTests; import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.JVMClusterUtil; @@ -47,10 +43,16 @@ import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collection; +import java.util.List; + /** * Test whether region re-balancing works. 
(HBASE-71) */ -@Category(LargeTests.class) +@Category({FlakeyTests.class, LargeTests.class}) @RunWith(value = Parameterized.class) public class TestRegionRebalancing { @@ -65,7 +67,7 @@ public class TestRegionRebalancing { private static final byte[] FAMILY_NAME = Bytes.toBytes("col"); public static final Log LOG = LogFactory.getLog(TestRegionRebalancing.class); private final HBaseTestingUtility UTIL = new HBaseTestingUtility(); - private RegionLocator table; + private RegionLocator regionLocator; private HTableDescriptor desc; private String balancerName; @@ -97,58 +99,59 @@ public class TestRegionRebalancing { @SuppressWarnings("deprecation") public void testRebalanceOnRegionServerNumberChange() throws IOException, InterruptedException { - HBaseAdmin admin = new HBaseAdmin(UTIL.getConfiguration()); - admin.createTable(this.desc, Arrays.copyOfRange(HBaseTestingUtility.KEYS, - 1, HBaseTestingUtility.KEYS.length)); - this.table = new HTable(UTIL.getConfiguration(), this.desc.getTableName()); - - MetaTableAccessor.fullScanMetaAndPrint(admin.getConnection()); - - assertEquals("Test table should have right number of regions", - HBaseTestingUtility.KEYS.length, - this.table.getStartKeys().length); - - // verify that the region assignments are balanced to start out - assertRegionsAreBalanced(); - - // add a region server - total of 2 - LOG.info("Started second server=" + - UTIL.getHBaseCluster().startRegionServer().getRegionServer().getServerName()); - UTIL.getHBaseCluster().getMaster().balance(); - assertRegionsAreBalanced(); - - // On a balanced cluster, calling balance() should return true - assert(UTIL.getHBaseCluster().getMaster().balance() == true); - - // if we add a server, then the balance() call should return true - // add a region server - total of 3 - LOG.info("Started third server=" + + try(Connection connection = ConnectionFactory.createConnection(UTIL.getConfiguration()); + Admin admin = connection.getAdmin()) { + admin.createTable(this.desc, Arrays.copyOfRange(HBaseTestingUtility.KEYS, + 1, HBaseTestingUtility.KEYS.length)); + this.regionLocator = connection.getRegionLocator(this.desc.getTableName()); + + MetaTableAccessor.fullScanMetaAndPrint(admin.getConnection()); + + assertEquals("Test table should have right number of regions", + HBaseTestingUtility.KEYS.length, + this.regionLocator.getStartKeys().length); + + // verify that the region assignments are balanced to start out + assertRegionsAreBalanced(); + + // add a region server - total of 2 + LOG.info("Started second server=" + UTIL.getHBaseCluster().startRegionServer().getRegionServer().getServerName()); - assert(UTIL.getHBaseCluster().getMaster().balance() == true); - assertRegionsAreBalanced(); - - // kill a region server - total of 2 - LOG.info("Stopped third server=" + UTIL.getHBaseCluster().stopRegionServer(2, false)); - UTIL.getHBaseCluster().waitOnRegionServer(2); - UTIL.getHBaseCluster().getMaster().balance(); - assertRegionsAreBalanced(); - - // start two more region servers - total of 4 - LOG.info("Readding third server=" + - UTIL.getHBaseCluster().startRegionServer().getRegionServer().getServerName()); - LOG.info("Added fourth server=" + - UTIL.getHBaseCluster().startRegionServer().getRegionServer().getServerName()); - assert(UTIL.getHBaseCluster().getMaster().balance() == true); - assertRegionsAreBalanced(); - - for (int i = 0; i < 6; i++){ - LOG.info("Adding " + (i + 5) + "th region server"); - UTIL.getHBaseCluster().startRegionServer(); + UTIL.getHBaseCluster().getMaster().balance(); + 
assertRegionsAreBalanced(); + + // On a balanced cluster, calling balance() should return true + assert(UTIL.getHBaseCluster().getMaster().balance() == true); + + // if we add a server, then the balance() call should return true + // add a region server - total of 3 + LOG.info("Started third server=" + + UTIL.getHBaseCluster().startRegionServer().getRegionServer().getServerName()); + assert(UTIL.getHBaseCluster().getMaster().balance() == true); + assertRegionsAreBalanced(); + + // kill a region server - total of 2 + LOG.info("Stopped third server=" + UTIL.getHBaseCluster().stopRegionServer(2, false)); + UTIL.getHBaseCluster().waitOnRegionServer(2); + UTIL.getHBaseCluster().getMaster().balance(); + assertRegionsAreBalanced(); + + // start two more region servers - total of 4 + LOG.info("Readding third server=" + + UTIL.getHBaseCluster().startRegionServer().getRegionServer().getServerName()); + LOG.info("Added fourth server=" + + UTIL.getHBaseCluster().startRegionServer().getRegionServer().getServerName()); + assert(UTIL.getHBaseCluster().getMaster().balance() == true); + assertRegionsAreBalanced(); + + for (int i = 0; i < 6; i++){ + LOG.info("Adding " + (i + 5) + "th region server"); + UTIL.getHBaseCluster().startRegionServer(); + } + assert(UTIL.getHBaseCluster().getMaster().balance() == true); + assertRegionsAreBalanced(); + regionLocator.close(); } - assert(UTIL.getHBaseCluster().getMaster().balance() == true); - assertRegionsAreBalanced(); - table.close(); - admin.close(); } /** diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestSerialization.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestSerialization.java index 510d672..c29a460 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestSerialization.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestSerialization.java @@ -46,6 +46,7 @@ import org.apache.hadoop.hbase.io.TimeRange; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.SplitLogTask.RecoveryMode; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Writables; @@ -56,7 +57,7 @@ import org.junit.experimental.categories.Category; /** * Test HBase Writables serializations */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestSerialization { @Test public void testKeyValue() throws Exception { final String name = "testKeyValue2"; @@ -140,9 +141,8 @@ public class TestSerialization { @Test public void testTableDescriptor() throws Exception { final String name = "testTableDescriptor"; HTableDescriptor htd = createTableDescriptor(name); - byte [] mb = Writables.getBytes(htd); - HTableDescriptor deserializedHtd = - (HTableDescriptor)Writables.getWritable(mb, new HTableDescriptor()); + byte [] mb = htd.toByteArray(); + HTableDescriptor deserializedHtd = HTableDescriptor.parseFrom(mb); assertEquals(htd.getTableName(), deserializedHtd.getTableName()); } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestServerLoad.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestServerLoad.java index 4dcfe24..97b518a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestServerLoad.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestServerLoad.java @@ -24,13 +24,14 @@ import static 
org.junit.Assert.*; import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; import com.google.protobuf.ByteString; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestServerLoad { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestServerName.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestServerName.java index ec89f1c..e5125c6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestServerName.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestServerName.java @@ -24,13 +24,14 @@ import static org.junit.Assert.assertTrue; import java.util.regex.Pattern; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Addressing; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestServerName { @Test public void testGetHostNameMinusDomain() { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestTableDescriptor.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestTableDescriptor.java new file mode 100644 index 0000000..a179c47 --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestTableDescriptor.java @@ -0,0 +1,57 @@ +/** + * Copyright The Apache Software Foundation + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase; + +import java.io.IOException; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.client.Durability; +import org.apache.hadoop.hbase.client.TableState; +import org.apache.hadoop.hbase.exceptions.DeserializationException; +import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +import static org.junit.Assert.assertEquals; + +/** + * Test setting values in the descriptor + */ +@Category(SmallTests.class) +public class TestTableDescriptor { + final static Log LOG = LogFactory.getLog(TestTableDescriptor.class); + + @Test + public void testPb() throws DeserializationException, IOException { + HTableDescriptor htd = new HTableDescriptor(TableName.META_TABLE_NAME); + final int v = 123; + htd.setMaxFileSize(v); + htd.setDurability(Durability.ASYNC_WAL); + htd.setReadOnly(true); + htd.setRegionReplication(2); + TableDescriptor td = new TableDescriptor(htd, TableState.State.ENABLED); + byte[] bytes = td.toByteArray(); + TableDescriptor deserializedTd = TableDescriptor.parseFrom(bytes); + assertEquals(td, deserializedTd); + assertEquals(td.getHTableDescriptor(), deserializedTd.getHTableDescriptor()); + assertEquals(td.getTableState(), deserializedTd.getTableState()); + } +} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java index cdee735..1944b61 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/TestZooKeeper.java @@ -49,11 +49,11 @@ import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.master.LoadBalancer; import org.apache.hadoop.hbase.master.balancer.SimpleLoadBalancer; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.Threads; import org.apache.hadoop.hbase.zookeeper.EmptyWatcher; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; import org.apache.hadoop.hbase.zookeeper.ZKConfig; import org.apache.hadoop.hbase.zookeeper.ZKUtil; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; @@ -74,7 +74,7 @@ import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({MiscTests.class, LargeTests.class}) public class TestZooKeeper { private final Log LOG = LogFactory.getLog(this.getClass()); @@ -503,8 +503,7 @@ public class TestZooKeeper { HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(tableName)); htd.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY)); admin.createTable(htd, SPLIT_KEYS); - ZooKeeperWatcher zooKeeperWatcher = HBaseTestingUtility.getZooKeeperWatcher(TEST_UTIL); - ZKAssign.blockUntilNoRIT(zooKeeperWatcher); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); m.getZooKeeper().close(); MockLoadBalancer.retainAssignCalled = false; m.abort("Test recovery from zk session expired", @@ -527,8 +526,7 @@ public class TestZooKeeper { * RS goes down. 
*/ @Test(timeout = 300000) - public void testLogSplittingAfterMasterRecoveryDueToZKExpiry() throws IOException, - KeeperException, InterruptedException { + public void testLogSplittingAfterMasterRecoveryDueToZKExpiry() throws Exception { MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); cluster.startRegionServer(); HMaster m = cluster.getMaster(); @@ -544,8 +542,7 @@ public class TestZooKeeper { HColumnDescriptor hcd = new HColumnDescriptor("col"); htd.addFamily(hcd); admin.createTable(htd, SPLIT_KEYS); - ZooKeeperWatcher zooKeeperWatcher = HBaseTestingUtility.getZooKeeperWatcher(TEST_UTIL); - ZKAssign.blockUntilNoRIT(zooKeeperWatcher); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); table = new HTable(TEST_UTIL.getConfiguration(), htd.getTableName()); Put p; int numberOfPuts; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/backup/TestHFileArchiving.java hbase-server/src/test/java/org/apache/hadoop/hbase/backup/TestHFileArchiving.java index ae78a3e..8af6016 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/backup/TestHFileArchiving.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/backup/TestHFileArchiving.java @@ -36,6 +36,7 @@ import org.apache.hadoop.fs.PathFilter; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.Stoppable; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; @@ -60,7 +61,7 @@ import org.junit.experimental.categories.Category; * Test that the {@link HFileArchiver} correctly removes all the parts of a region when cleaning up * a region */ -@Category(MediumTests.class) +@Category({MediumTests.class, MiscTests.class}) public class TestHFileArchiving { private static final Log LOG = LogFactory.getLog(TestHFileArchiving.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/backup/example/TestZooKeeperTableArchiveClient.java hbase-server/src/test/java/org/apache/hadoop/hbase/backup/example/TestZooKeeperTableArchiveClient.java index 52b0f40..1757804 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/backup/example/TestZooKeeperTableArchiveClient.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/backup/example/TestZooKeeperTableArchiveClient.java @@ -35,7 +35,6 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Stoppable; import org.apache.hadoop.hbase.client.ClusterConnection; import org.apache.hadoop.hbase.client.ConnectionFactory; @@ -44,6 +43,8 @@ import org.apache.hadoop.hbase.master.cleaner.BaseHFileCleanerDelegate; import org.apache.hadoop.hbase.master.cleaner.HFileCleaner; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.Store; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.HFileArchiveUtil; @@ -64,7 +65,7 @@ import org.mockito.stubbing.Answer; * Spin up a small cluster and check that the hfiles of region are properly long-term archived as * specified via the {@link ZKTableArchiveClient}. 
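Both TestZooKeeper hunks above replace the ZKAssign.blockUntilNoRIT wait with the testing utility's own regions-in-transition wait. A minimal sketch of that pattern, assuming a mini cluster already running behind the HBaseTestingUtility instance (the class, method, table, and family names here are illustrative):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class WaitOnRitSketch {
  public static void createAndWait(HBaseTestingUtility testUtil) throws Exception {
    try (Admin admin = testUtil.getConnection().getAdmin()) {
      HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("sketchTable"));
      htd.addFamily(new HColumnDescriptor("fam"));
      admin.createTable(htd);
      // Wait on the master's view of regions in transition instead of polling
      // the old ZKAssign unassigned znodes.
      testUtil.waitUntilNoRegionsInTransition(60000);
    }
  }
}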
*/ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestZooKeeperTableArchiveClient { private static final Log LOG = LogFactory.getLog(TestZooKeeperTableArchiveClient.class); @@ -418,4 +419,4 @@ public class TestZooKeeperTableArchiveClient { // stop the cleaner stop.stop(""); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/HConnectionTestingUtility.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/HConnectionTestingUtility.java index 86c8e7a..998cdf0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/HConnectionTestingUtility.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/HConnectionTestingUtility.java @@ -134,9 +134,11 @@ public class HConnectionTestingUtility { Mockito.doNothing().when(c).decCount(); Mockito.when(c.getNewRpcRetryingCallerFactory(conf)).thenReturn( RpcRetryingCallerFactory.instantiate(conf, - RetryingCallerInterceptorFactory.NO_OP_INTERCEPTOR)); + RetryingCallerInterceptorFactory.NO_OP_INTERCEPTOR, null)); HTableInterface t = Mockito.mock(HTableInterface.class); Mockito.when(c.getTable((TableName)Mockito.any())).thenReturn(t); + ResultScanner rs = Mockito.mock(ResultScanner.class); + Mockito.when(t.getScanner((Scan)Mockito.any())).thenReturn(rs); return c; } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java index 62fe4df..bc36fa9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin1.java @@ -30,7 +30,6 @@ import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Set; -import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import org.apache.commons.logging.Log; @@ -41,16 +40,25 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.InvalidFamilyOperationException; -import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.MasterNotRunningException; +import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotDisabledException; import org.apache.hadoop.hbase.TableNotEnabledException; import org.apache.hadoop.hbase.TableNotFoundException; -import org.apache.hadoop.hbase.executor.EventHandler; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; +import org.apache.hadoop.hbase.ZooKeeperConnectionException; +import org.apache.hadoop.hbase.exceptions.MergeRegionException; +import org.apache.hadoop.hbase.master.HMaster; +import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.protobuf.RequestConverter; +import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.AdminService; +import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.DispatchMergingRegionsRequest; +import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.zookeeper.ZKTableStateClientSideReader; +import org.apache.hadoop.hbase.util.Pair; import 
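The change repeated throughout this patch pairs a component category (here MiscTests) with the existing size category so the suite can be sliced along either axis at build time. A minimal standalone sketch, assuming JUnit 4 and the testclassification classes imported above (the class and test method names are illustrative):

import static org.junit.Assert.assertTrue;

import org.apache.hadoop.hbase.testclassification.MediumTests;
import org.apache.hadoop.hbase.testclassification.MiscTests;
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Two categories on one class: MiscTests says which component owns the test,
// MediumTests says how heavy it is to run.
@Category({MiscTests.class, MediumTests.class})
public class ExampleCategorizedTest {
  @Test
  public void testSomething() {
    assertTrue(true); // placeholder body
  }
}

Because JUnit's @Category accepts an array of marker classes, the size category already present on each test is kept and the component category is simply added alongside it.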
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.After; import org.junit.AfterClass; @@ -59,12 +67,14 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; +import com.google.protobuf.ServiceException; + /** * Class to test HBaseAdmin. * Spins up the minicluster once at test start and then takes it down afterward. * Add any testing of HBaseAdmin functionality here. */ -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestAdmin1 { final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @@ -238,7 +248,7 @@ public class TestAdmin1 { this.admin.disableTable(ht.getName()); assertTrue("Table must be disabled.", TEST_UTIL.getHBaseCluster() .getMaster().getAssignmentManager().getTableStateManager().isTableState( - ht.getName(), ZooKeeperProtos.Table.State.DISABLED)); + ht.getName(), TableState.State.DISABLED)); // Test that table is disabled get = new Get(row); @@ -265,7 +275,7 @@ this.admin.enableTable(table); assertTrue("Table must be enabled.", TEST_UTIL.getHBaseCluster() .getMaster().getAssignmentManager().getTableStateManager().isTableState( - ht.getName(), ZooKeeperProtos.Table.State.ENABLED)); + ht.getName(), TableState.State.ENABLED)); // Test that table is enabled try { @@ -337,7 +347,7 @@ assertEquals(numTables + 1, tables.length); assertTrue("Table must be enabled.", TEST_UTIL.getHBaseCluster() .getMaster().getAssignmentManager().getTableStateManager().isTableState( - TableName.valueOf("testCreateTable"), ZooKeeperProtos.Table.State.ENABLED)); + TableName.valueOf("testCreateTable"), TableState.State.ENABLED)); } @Test (timeout=300000) @@ -542,32 +552,6 @@ "hbase.online.schema.update.enable", true); } - /** - * Listens for when an event is done in Master. - */ - static class DoneListener implements EventHandler.EventHandlerListener { - private final AtomicBoolean done; - - DoneListener(final AtomicBoolean done) { - super(); - this.done = done; - } - - @Override - public void afterProcess(EventHandler event) { - this.done.set(true); - synchronized (this.done) { - // Wake anyone waiting on this value to change. - this.done.notifyAll(); - } - } - - @Override - public void beforeProcess(EventHandler event) { - // continue - } - } - @SuppressWarnings("deprecation") protected void verifyRoundRobinDistribution(HTable ht, int expectedRegions) throws IOException { int numRS = ht.getConnection().getCurrentNrHRS(); @@ -582,6 +566,11 @@ } regs.add(entry.getKey()); } + if (numRS >= 2) { + // Ignore the master region server, + // which intentionally contains fewer regions. + numRS--; + } float average = (float) expectedRegions/numRS; int min = (int)Math.floor(average); int max = (int)Math.ceil(average); @@ -1093,6 +1082,126 @@ table.close(); } + @Test + public void testSplitAndMergeWithReplicaTable() throws Exception { + // The test tries to directly split replica regions and directly merge replica regions. These + // are not allowed. The test validates that. Then the test does a valid split/merge of allowed + // regions. 
+ // Set up a table with 3 regions and replication set to 3 + TableName tableName = TableName.valueOf("testSplitAndMergeWithReplicaTable"); + HTableDescriptor desc = new HTableDescriptor(tableName); + desc.setRegionReplication(3); + byte[] cf = "f".getBytes(); + HColumnDescriptor hcd = new HColumnDescriptor(cf); + desc.addFamily(hcd); + byte[][] splitRows = new byte[2][]; + splitRows[0] = new byte[]{(byte)'4'}; + splitRows[1] = new byte[]{(byte)'7'}; + TEST_UTIL.getHBaseAdmin().createTable(desc, splitRows); + List oldRegions; + do { + oldRegions = TEST_UTIL.getHBaseCluster().getRegions(tableName); + Thread.sleep(10); + } while (oldRegions.size() != 9); //3 regions * 3 replicas + // write some data to the table + HTable ht = new HTable(TEST_UTIL.getConfiguration(), tableName); + List puts = new ArrayList(); + byte[] qualifier = "c".getBytes(); + Put put = new Put(new byte[]{(byte)'1'}); + put.add(cf, qualifier, "100".getBytes()); + puts.add(put); + put = new Put(new byte[]{(byte)'6'}); + put.add(cf, qualifier, "100".getBytes()); + puts.add(put); + put = new Put(new byte[]{(byte)'8'}); + put.add(cf, qualifier, "100".getBytes()); + puts.add(put); + ht.put(puts); + ht.flushCommits(); + ht.close(); + List> regions = + MetaTableAccessor.getTableRegionsAndLocations(TEST_UTIL.getConnection(), tableName); + boolean gotException = false; + // the element at index 1 would be a replica (since the metareader gives us ordered + // regions). Try splitting that region via the split API . Should fail + try { + TEST_UTIL.getHBaseAdmin().split(regions.get(1).getFirst().getRegionName()); + } catch (IllegalArgumentException ex) { + gotException = true; + } + assertTrue(gotException); + gotException = false; + // the element at index 1 would be a replica (since the metareader gives us ordered + // regions). Try splitting that region via a different split API (the difference is + // this API goes direct to the regionserver skipping any checks in the admin). Should fail + try { + TEST_UTIL.getHBaseAdmin().split(regions.get(1).getSecond(), regions.get(1).getFirst(), + new byte[]{(byte)'1'}); + } catch (IOException ex) { + gotException = true; + } + assertTrue(gotException); + gotException = false; + // Try merging a replica with another. Should fail. 
+ try { + TEST_UTIL.getHBaseAdmin().mergeRegions(regions.get(1).getFirst().getEncodedNameAsBytes(), + regions.get(2).getFirst().getEncodedNameAsBytes(), true); + } catch (IllegalArgumentException m) { + gotException = true; + } + assertTrue(gotException); + // Try going to the master directly (that will skip the check in admin) + try { + DispatchMergingRegionsRequest request = RequestConverter + .buildDispatchMergingRegionsRequest(regions.get(1).getFirst().getEncodedNameAsBytes(), + regions.get(2).getFirst().getEncodedNameAsBytes(), true); + TEST_UTIL.getHBaseAdmin().getConnection().getMaster().dispatchMergingRegions(null, request); + } catch (ServiceException m) { + Throwable t = m.getCause(); + do { + if (t instanceof MergeRegionException) { + gotException = true; + break; + } + t = t.getCause(); + } while (t != null); + } + assertTrue(gotException); + gotException = false; + // Try going to the regionservers directly + // first move the region to the same regionserver + if (!regions.get(2).getSecond().equals(regions.get(1).getSecond())) { + moveRegionAndWait(regions.get(2).getFirst(), regions.get(1).getSecond()); + } + try { + AdminService.BlockingInterface admin = TEST_UTIL.getHBaseAdmin().getConnection() + .getAdmin(regions.get(1).getSecond()); + ProtobufUtil.mergeRegions(admin, regions.get(1).getFirst(), regions.get(2).getFirst(), true); + } catch (MergeRegionException mm) { + gotException = true; + } + assertTrue(gotException); + } + + private void moveRegionAndWait(HRegionInfo destRegion, ServerName destServer) + throws InterruptedException, MasterNotRunningException, + ZooKeeperConnectionException, IOException { + HMaster master = TEST_UTIL.getMiniHBaseCluster().getMaster(); + TEST_UTIL.getHBaseAdmin().move( + destRegion.getEncodedNameAsBytes(), + Bytes.toBytes(destServer.getServerName())); + while (true) { + ServerName serverName = master.getAssignmentManager() + .getRegionStates().getRegionServerOfRegion(destRegion); + if (serverName != null && serverName.equals(destServer)) { + TEST_UTIL.assertRegionOnServer( + destRegion, serverName, 200); + break; + } + Thread.sleep(10); + } + } + /** * HADOOP-2156 * @throws IOException @@ -1113,8 +1222,7 @@ public class TestAdmin1 { ZooKeeperWatcher zkw = HBaseTestingUtility.getZooKeeperWatcher(TEST_UTIL); TableName tableName = TableName.valueOf("testMasterAdmin"); TEST_UTIL.createTable(tableName, HConstants.CATALOG_FAMILY).close(); - while (!ZKTableStateClientSideReader.isEnabledTable(zkw, - TableName.valueOf("testMasterAdmin"))) { + while (!this.admin.isTableEnabled(TableName.valueOf("testMasterAdmin"))) { Thread.sleep(10); } this.admin.disableTable(tableName); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin2.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin2.java index f4739a7..9daf685 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin2.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin2.java @@ -37,7 +37,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MasterNotRunningException; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.NotServingRegionException; @@ -54,6 +53,8 @@ import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; 
import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.wal.DefaultWALProvider; @@ -73,7 +74,7 @@ import com.google.protobuf.ServiceException; * Spins up the minicluster once at test start and then takes it down afterward. * Add any testing of HBaseAdmin functionality here. */ -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestAdmin2 { final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @@ -302,7 +303,6 @@ public class TestAdmin2 { ht.get(new Get("e".getBytes())); } - @Test (timeout=300000) public void testShouldCloseTheRegionBasedOnTheEncodedRegionName() throws Exception { @@ -415,7 +415,6 @@ public class TestAdmin2 { } } - @Test (timeout=300000) public void testCloseRegionWhenServerNameIsEmpty() throws Exception { byte[] TABLENAME = Bytes.toBytes("TestHBACloseRegionWhenServerNameIsEmpty"); @@ -516,10 +515,26 @@ public class TestAdmin2 { assertEquals("Tried to create " + expectedRegions + " regions " + "but only found " + RegionInfos.size(), expectedRegions, RegionInfos.size()); - } @Test (timeout=300000) + public void testMoveToPreviouslyAssignedRS() throws IOException, InterruptedException { + byte[] tableName = Bytes.toBytes("testMoveToPreviouslyAssignedRS"); + MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); + HMaster master = cluster.getMaster(); + HBaseAdmin localAdmin = createTable(tableName); + List tableRegions = localAdmin.getTableRegions(tableName); + HRegionInfo hri = tableRegions.get(0); + AssignmentManager am = master.getAssignmentManager(); + assertTrue("Region " + hri.getRegionNameAsString() + + " should be assigned properly", am.waitForAssignment(hri)); + ServerName server = am.getRegionStates().getRegionServerOfRegion(hri); + localAdmin.move(hri.getEncodedNameAsBytes(), Bytes.toBytes(server.getServerName())); + assertEquals("Current region server and region server before move should be same.", server, + am.getRegionStates().getRegionServerOfRegion(hri)); + } + + @Test (timeout=300000) public void testWALRollWriting() throws Exception { setUpforLogRolling(); String className = this.getClass().getName(); @@ -546,24 +561,6 @@ public class TestAdmin2 { assertTrue(("actual count: " + count), count <= 2); } - @Test (timeout=300000) - public void testMoveToPreviouslyAssignedRS() throws IOException, InterruptedException { - byte[] tableName = Bytes.toBytes("testMoveToPreviouslyAssignedRS"); - MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); - HMaster master = cluster.getMaster(); - HBaseAdmin localAdmin = createTable(tableName); - List tableRegions = localAdmin.getTableRegions(tableName); - HRegionInfo hri = tableRegions.get(0); - AssignmentManager am = master.getAssignmentManager(); - assertTrue("Region " + hri.getRegionNameAsString() - + " should be assigned properly", am.waitForAssignment(hri)); - ServerName server = am.getRegionStates().getRegionServerOfRegion(hri); - localAdmin.move(hri.getEncodedNameAsBytes(), Bytes.toBytes(server.getServerName())); - assertEquals("Current region server and region server before move should be same.", server, - am.getRegionStates().getRegionServerOfRegion(hri)); - } - - private void setUpforLogRolling() { // Force a 
region split after every 768KB TEST_UTIL.getConfiguration().setLong(HConstants.HREGION_MAX_FILESIZE, diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestCheckAndMutate.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestCheckAndMutate.java index 2e48aba..a8c4abd 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestCheckAndMutate.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestCheckAndMutate.java @@ -19,9 +19,9 @@ package org.apache.hadoop.hbase.client; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.filter.CompareFilter; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientOperationInterrupt.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientOperationInterrupt.java index 72b74fb..6d58d03 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientOperationInterrupt.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientOperationInterrupt.java @@ -25,12 +25,13 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.coprocessor.ObserverContext; import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Threads; import org.junit.AfterClass; @@ -46,7 +47,7 @@ import java.util.ArrayList; import java.util.List; import java.util.concurrent.atomic.AtomicInteger; -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) public class TestClientOperationInterrupt { private static final Log LOG = LogFactory.getLog(TestClientOperationInterrupt.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java new file mode 100644 index 0000000..d8bf57b --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientPushback.java @@ -0,0 +1,105 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.client; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.*; +import org.apache.hadoop.hbase.client.backoff.ServerStatistics; +import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.util.Bytes; +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotNull; + +/** + * Test that we can actually send and use region metrics to slow down client writes + */ +@Category(MediumTests.class) +public class TestClientPushback { + + private static final Log LOG = LogFactory.getLog(TestClientPushback.class); + private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); + + private static final byte[] tableName = Bytes.toBytes("client-pushback"); + private static final byte[] family = Bytes.toBytes("f"); + private static final byte[] qualifier = Bytes.toBytes("q"); + private static long flushSizeBytes = 1024; + + @BeforeClass + public static void setupCluster() throws Exception{ + Configuration conf = UTIL.getConfiguration(); + // enable backpressure + conf.setBoolean(HConstants.ENABLE_CLIENT_BACKPRESSURE, true); + // turn the memstore size way down so we don't need to write a lot to see changes in memstore + // load + conf.setLong(HConstants.HREGION_MEMSTORE_FLUSH_SIZE, flushSizeBytes); + // ensure we block the flushes when we are double that flushsize + conf.setLong("hbase.hregion.memstore.block.multiplier", 2); + + UTIL.startMiniCluster(1); + UTIL.createTable(tableName, family); + } + + @AfterClass + public static void teardownCluster() throws Exception{ + UTIL.shutdownMiniCluster(); + } + + @Test + public void testClientTracksServerPushback() throws Exception{ + Configuration conf = UTIL.getConfiguration(); + TableName tablename = TableName.valueOf(tableName); + Connection conn = ConnectionFactory.createConnection(conf); + HTable table = (HTable) conn.getTable(tablename); + + HRegionServer rs = UTIL.getHBaseCluster().getRegionServer(0); + HRegion region = rs.getOnlineRegions(tablename).get(0); + + LOG.debug("Writing some data to "+tablename); + // write some data + Put p = new Put(Bytes.toBytes("row")); + p.add(family, qualifier, Bytes.toBytes("value1")); + table.put(p); + table.flushCommits(); + + // get the current load on RS. 
Hopefully memstore isn't flushed since we wrote the data + int load = (int)((region.addAndGetGlobalMemstoreSize(0) * 100) / flushSizeBytes); + LOG.debug("Done writing some data to "+tablename); + + // get the stats for the region hosting our table + ClusterConnection connection = table.connection; + ServerStatisticTracker stats = connection.getStatisticsTracker(); + assertNotNull( "No stats configured for the client!", stats); + // get the names so we can query the stats + ServerName server = rs.getServerName(); + byte[] regionName = region.getRegionName(); + + // check to see we found some load on the memstore + ServerStatistics serverStats = stats.getServerStatsForTesting(server); + ServerStatistics.RegionStatistics regionStats = serverStats.getStatsForRegion(regionName); + assertEquals(load, regionStats.getMemstoreLoadPercent()); + } +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientScannerRPCTimeout.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientScannerRPCTimeout.java index eafac63..65483c9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientScannerRPCTimeout.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientScannerRPCTimeout.java @@ -27,7 +27,6 @@ import org.apache.commons.logging.impl.Log4JLogger; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster.MiniHBaseClusterRegionServer; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.CoordinatedStateManager; @@ -37,6 +36,8 @@ import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanRequest; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanResponse; import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.RSRpcServices; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.log4j.Level; import org.junit.AfterClass; @@ -51,7 +52,7 @@ import com.google.protobuf.ServiceException; * Test the scenario where a HRegionServer#scan() call, while scanning, timeout at client side and * getting retried. This scenario should not result in some data being skipped at RS side. 
*/ -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) public class TestClientScannerRPCTimeout { final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientTimeouts.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientTimeouts.java index bf48e02..c04c4f2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientTimeouts.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientTimeouts.java @@ -34,13 +34,14 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.MasterNotRunningException; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.ipc.AbstractRpcClient; import org.apache.hadoop.hbase.ipc.RpcClient; import org.apache.hadoop.hbase.ipc.RpcClientFactory; import org.apache.hadoop.hbase.ipc.RpcClientImpl; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; @@ -52,7 +53,7 @@ import com.google.protobuf.Message; import com.google.protobuf.RpcController; import com.google.protobuf.ServiceException; -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) public class TestClientTimeouts { final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestCloneSnapshotFromClient.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestCloneSnapshotFromClient.java index b6502c5..1e87e6e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestCloneSnapshotFromClient.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestCloneSnapshotFromClient.java @@ -17,21 +17,20 @@ */ package org.apache.hadoop.hbase.client; -import static org.junit.Assert.assertEquals; - import java.io.IOException; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.NamespaceNotFoundException; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.master.snapshot.SnapshotManager; import org.apache.hadoop.hbase.snapshot.SnapshotDoesNotExistException; import org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; import org.junit.AfterClass; @@ -43,7 +42,7 @@ import org.junit.experimental.categories.Category; /** * Test clone snapshots from the client */ -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestCloneSnapshotFromClient { final Log LOG = LogFactory.getLog(getClass()); diff --git 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestCloneSnapshotFromClientWithRegionReplicas.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestCloneSnapshotFromClientWithRegionReplicas.java index a426005..5c2eca9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestCloneSnapshotFromClientWithRegionReplicas.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestCloneSnapshotFromClientWithRegionReplicas.java @@ -17,10 +17,11 @@ */ package org.apache.hadoop.hbase.client; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.LargeTests; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestCloneSnapshotFromClientWithRegionReplicas extends TestCloneSnapshotFromClient { @Override diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestConnectionUtils.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestConnectionUtils.java index 649d674..ac0a0bd 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestConnectionUtils.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestConnectionUtils.java @@ -19,6 +19,7 @@ */ package org.apache.hadoop.hbase.client; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -28,7 +29,7 @@ import java.util.TreeSet; import static org.junit.Assert.assertTrue; -@Category(SmallTests.class) +@Category({SmallTests.class, ClientTests.class}) public class TestConnectionUtils { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFastFail.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFastFail.java index 2bc72b9..709e94b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFastFail.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFastFail.java @@ -41,6 +41,7 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.exceptions.PreemptiveFastFailException; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.test.LoadTestKVGenerator; @@ -51,7 +52,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category({MediumTests.class}) +@Category({MediumTests.class, ClientTests.class}) public class TestFastFail { final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java index 8511f88..fa69c47 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java @@ -26,24 +26,6 @@ import static org.junit.Assert.assertNull; import static org.junit.Assert.assertSame; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; -import static org.mockito.Mockito.spy; -import static 
org.mockito.Mockito.when; - -import java.io.IOException; -import java.lang.reflect.Method; -import java.util.ArrayList; -import java.util.Arrays; -import java.util.Collections; -import java.util.HashSet; -import java.util.Iterator; -import java.util.List; -import java.util.Map; -import java.util.NavigableMap; -import java.util.UUID; -import java.util.concurrent.Callable; -import java.util.concurrent.ExecutorService; -import java.util.concurrent.Executors; -import java.util.concurrent.atomic.AtomicReference; import org.apache.commons.lang.ArrayUtils; import org.apache.commons.logging.Log; @@ -60,7 +42,6 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.ServerName; @@ -95,6 +76,8 @@ import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException; import org.apache.hadoop.hbase.regionserver.Store; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.Pair; @@ -107,12 +90,28 @@ import org.junit.Ignore; import org.junit.Test; import org.junit.experimental.categories.Category; +import java.io.IOException; +import java.lang.reflect.Method; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.NavigableMap; +import java.util.UUID; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; +import java.util.concurrent.atomic.AtomicReference; + /** * Run tests that use the HBase clients; {@link HTable}. * Sets up the HBase mini cluster once at start and runs through all client tests. * Each creates a table named for the method and does its stuff against that. 
*/ -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) @SuppressWarnings ("deprecation") public class TestFromClientSide { final Log LOG = LogFactory.getLog(getClass()); @@ -4179,7 +4178,7 @@ public class TestFromClientSide { // HBaseAdmin and can connect to the new master; HBaseAdmin newAdmin = new HBaseAdmin(conn); assertTrue(newAdmin.tableExists(tableName)); - assertTrue(newAdmin.getClusterStatus().getServersSize() == SLAVES); + assertTrue(newAdmin.getClusterStatus().getServersSize() == SLAVES + 1); } @Test @@ -4276,7 +4275,7 @@ public class TestFromClientSide { new byte[][] { HConstants.CATALOG_FAMILY, Bytes.toBytes("info2") }, 1, 1024); // set block size to 64 to making 2 kvs into one block, bypassing the walkForwardInSingleRow // in Store.rowAtOrBeforeFromStoreFile - table.setAutoFlush(true); + table.setAutoFlushTo(true); String regionName = table.getRegionLocations().firstKey().getEncodedName(); HRegion region = TEST_UTIL.getRSForFirstRegionInTable(tableAname).getFromOnlineRegions(regionName); @@ -4460,8 +4459,8 @@ public class TestFromClientSide { assertEquals(0, Bytes.compareTo(Bytes.add(v2,v1), r.getValue(FAMILY, QUALIFIERS[1]))); // QUALIFIERS[2] previously not exist, verify both value and timestamp are correct assertEquals(0, Bytes.compareTo(v2, r.getValue(FAMILY, QUALIFIERS[2]))); - assertEquals(r.getColumnLatest(FAMILY, QUALIFIERS[0]).getTimestamp(), - r.getColumnLatest(FAMILY, QUALIFIERS[2]).getTimestamp()); + assertEquals(r.getColumnLatestCell(FAMILY, QUALIFIERS[0]).getTimestamp(), + r.getColumnLatestCell(FAMILY, QUALIFIERS[2]).getTimestamp()); } @Test @@ -5203,40 +5202,41 @@ public class TestFromClientSide { TableName TABLE = TableName.valueOf("testNonCachedGetRegionLocation"); byte [] family1 = Bytes.toBytes("f1"); byte [] family2 = Bytes.toBytes("f2"); - HTable table = TEST_UTIL.createTable(TABLE, new byte[][] {family1, family2}, 10); - Admin admin = new HBaseAdmin(TEST_UTIL.getConfiguration()); - Map regionsMap = table.getRegionLocations(); - assertEquals(1, regionsMap.size()); - HRegionInfo regionInfo = regionsMap.keySet().iterator().next(); - ServerName addrBefore = regionsMap.get(regionInfo); - // Verify region location before move. - HRegionLocation addrCache = table.getRegionLocation(regionInfo.getStartKey(), false); - HRegionLocation addrNoCache = table.getRegionLocation(regionInfo.getStartKey(), true); - - assertEquals(addrBefore.getPort(), addrCache.getPort()); - assertEquals(addrBefore.getPort(), addrNoCache.getPort()); - - ServerName addrAfter = null; - // Now move the region to a different server. - for (int i = 0; i < SLAVES; i++) { - HRegionServer regionServer = TEST_UTIL.getHBaseCluster().getRegionServer(i); - ServerName addr = regionServer.getServerName(); - if (addr.getPort() != addrBefore.getPort()) { - admin.move(regionInfo.getEncodedNameAsBytes(), - Bytes.toBytes(addr.toString())); - // Wait for the region to move. - Thread.sleep(5000); - addrAfter = addr; - break; + try (HTable table = TEST_UTIL.createTable(TABLE, new byte[][] {family1, family2}, 10); + Admin admin = new HBaseAdmin(TEST_UTIL.getConfiguration())) { + Map regionsMap = table.getRegionLocations(); + assertEquals(1, regionsMap.size()); + HRegionInfo regionInfo = regionsMap.keySet().iterator().next(); + ServerName addrBefore = regionsMap.get(regionInfo); + // Verify region location before move. 
+ HRegionLocation addrCache = table.getRegionLocation(regionInfo.getStartKey(), false); + HRegionLocation addrNoCache = table.getRegionLocation(regionInfo.getStartKey(), true); + + assertEquals(addrBefore.getPort(), addrCache.getPort()); + assertEquals(addrBefore.getPort(), addrNoCache.getPort()); + + ServerName addrAfter = null; + // Now move the region to a different server. + for (int i = 0; i < SLAVES; i++) { + HRegionServer regionServer = TEST_UTIL.getHBaseCluster().getRegionServer(i); + ServerName addr = regionServer.getServerName(); + if (addr.getPort() != addrBefore.getPort()) { + admin.move(regionInfo.getEncodedNameAsBytes(), + Bytes.toBytes(addr.toString())); + // Wait for the region to move. + Thread.sleep(5000); + addrAfter = addr; + break; + } } + + // Verify the region was moved. + addrCache = table.getRegionLocation(regionInfo.getStartKey(), false); + addrNoCache = table.getRegionLocation(regionInfo.getStartKey(), true); + assertNotNull(addrAfter); + assertTrue(addrAfter.getPort() != addrCache.getPort()); + assertEquals(addrAfter.getPort(), addrNoCache.getPort()); } - - // Verify the region was moved. - addrCache = table.getRegionLocation(regionInfo.getStartKey(), false); - addrNoCache = table.getRegionLocation(regionInfo.getStartKey(), true); - assertNotNull(addrAfter); - assertTrue(addrAfter.getPort() != addrCache.getPort()); - assertEquals(addrAfter.getPort(), addrNoCache.getPort()); } @Test @@ -5684,8 +5684,8 @@ public class TestFromClientSide { int expectedIndex = 5; for (Result result : scanner) { assertEquals(result.size(), 1); - assertTrue(Bytes.equals(result.raw()[0].getRow(), ROWS[expectedIndex])); - assertTrue(Bytes.equals(result.raw()[0].getQualifier(), + assertTrue(Bytes.equals(result.rawCells()[0].getRow(), ROWS[expectedIndex])); + assertTrue(Bytes.equals(result.rawCells()[0].getQualifier(), QUALIFIERS[expectedIndex])); expectedIndex--; } @@ -5723,8 +5723,8 @@ public class TestFromClientSide { int count = 0; for (Result result : ht.getScanner(scan)) { assertEquals(result.size(), 1); - assertEquals(result.raw()[0].getValueLength(), Bytes.SIZEOF_INT); - assertEquals(Bytes.toInt(result.raw()[0].getValue()), VALUE.length); + assertEquals(result.rawCells()[0].getValueLength(), Bytes.SIZEOF_INT); + assertEquals(Bytes.toInt(result.rawCells()[0].getValue()), VALUE.length); count++; } assertEquals(count, 10); @@ -6006,15 +6006,15 @@ public class TestFromClientSide { result = scanner.next(); assertTrue("Expected 2 keys but received " + result.size(), result.size() == 2); - assertTrue(Bytes.equals(result.raw()[0].getRow(), ROWS[4])); - assertTrue(Bytes.equals(result.raw()[1].getRow(), ROWS[4])); - assertTrue(Bytes.equals(result.raw()[0].getValue(), VALUES[1])); - assertTrue(Bytes.equals(result.raw()[1].getValue(), VALUES[2])); + assertTrue(Bytes.equals(result.rawCells()[0].getRow(), ROWS[4])); + assertTrue(Bytes.equals(result.rawCells()[1].getRow(), ROWS[4])); + assertTrue(Bytes.equals(result.rawCells()[0].getValue(), VALUES[1])); + assertTrue(Bytes.equals(result.rawCells()[1].getValue(), VALUES[2])); result = scanner.next(); assertTrue("Expected 1 key but received " + result.size(), result.size() == 1); - assertTrue(Bytes.equals(result.raw()[0].getRow(), ROWS[3])); - assertTrue(Bytes.equals(result.raw()[0].getValue(), VALUES[0])); + assertTrue(Bytes.equals(result.rawCells()[0].getRow(), ROWS[3])); + assertTrue(Bytes.equals(result.rawCells()[0].getValue(), VALUES[0])); scanner.close(); ht.close(); } @@ -6247,10 +6247,13 @@ public class TestFromClientSide { 
HColumnDescriptor fam = new HColumnDescriptor(FAMILY); htd.addFamily(fam); byte[][] KEYS = HBaseTestingUtility.KEYS_FOR_HBA_CREATE_TABLE; - TEST_UTIL.getHBaseAdmin().createTable(htd, KEYS); - List regions = TEST_UTIL.getHBaseAdmin().getTableRegions(htd.getTableName()); + HBaseAdmin admin = TEST_UTIL.getHBaseAdmin(); + admin.createTable(htd, KEYS); + List regions = admin.getTableRegions(htd.getTableName()); - for (int regionReplication = 1; regionReplication < 4 ; regionReplication++) { + HRegionLocator locator = + (HRegionLocator) admin.getConnection().getRegionLocator(htd.getTableName()); + for (int regionReplication = 1; regionReplication < 4; regionReplication++) { List regionLocations = new ArrayList(); // mock region locations coming from meta with multiple replicas @@ -6262,10 +6265,7 @@ public class TestFromClientSide { regionLocations.add(new RegionLocations(arr)); } - HTable table = spy(new HTable(TEST_UTIL.getConfiguration(), htd.getTableName())); - when(table.listRegionLocations()).thenReturn(regionLocations); - - Pair startEndKeys = table.getStartEndKeys(); + Pair startEndKeys = locator.getStartEndKeys(regionLocations); assertEquals(KEYS.length + 1, startEndKeys.getFirst().length); @@ -6275,9 +6275,6 @@ public class TestFromClientSide { assertArrayEquals(startKey, startEndKeys.getFirst()[i]); assertArrayEquals(endKey, startEndKeys.getSecond()[i]); } - - table.close(); } } - } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java index 75cfb3a..0e97684 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide3.java @@ -36,9 +36,10 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; import org.junit.After; @@ -48,7 +49,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestFromClientSide3 { final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideNoCodec.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideNoCodec.java index ae96849..f5807c2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideNoCodec.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideNoCodec.java @@ -25,8 +25,9 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ipc.AbstractRpcClient; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import 
org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -37,7 +38,7 @@ import org.junit.experimental.categories.Category; * Do some ops and prove that client and server can work w/o codecs; that we can pb all the time. * Good for third-party clients or simple scripts that want to talk direct to hbase. */ -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) public class TestFromClientSideNoCodec { protected final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); /** @@ -99,4 +100,4 @@ public class TestFromClientSideNoCodec { String codec = AbstractRpcClient.getDefaultCodec(c); assertTrue(codec == null || codec.length() == 0); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithCoprocessor.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithCoprocessor.java index 2671af7..e832590 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithCoprocessor.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideWithCoprocessor.java @@ -18,6 +18,7 @@ package org.apache.hadoop.hbase.client; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint; @@ -29,7 +30,7 @@ import org.junit.experimental.categories.Category; * Test all client operations with a coprocessor that * just implements the default flush/compact/scan policy */ -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestFromClientSideWithCoprocessor extends TestFromClientSide { @BeforeClass public static void setUpBeforeClass() throws Exception { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHBaseAdminNoCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHBaseAdminNoCluster.java index ecbf885..fbca881 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHBaseAdminNoCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHBaseAdminNoCluster.java @@ -25,11 +25,12 @@ import java.io.IOException; import java.util.ArrayList; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.MasterNotRunningException; import org.apache.hadoop.hbase.PleaseHoldException; import org.apache.hadoop.hbase.TableName; @@ -47,6 +48,7 @@ import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.RunCatalogScanReq import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.SetBalancerRunningRequest; import org.junit.Test; import org.junit.experimental.categories.Category; +import org.mockito.Matchers; import org.mockito.Mockito; import org.mockito.invocation.InvocationOnMock; import org.mockito.stubbing.Answer; @@ -55,7 +57,7 @@ import org.mortbay.log.Log; import 
com.google.protobuf.RpcController; import com.google.protobuf.ServiceException; -@Category(SmallTests.class) +@Category({SmallTests.class, ClientTests.class}) public class TestHBaseAdminNoCluster { /** * Verify that PleaseHoldException gets retried. @@ -259,7 +261,6 @@ public class TestHBaseAdminNoCluster { (IsCatalogJanitorEnabledRequest)Mockito.any()); } }); - // Admin.mergeRegions() testMasterOperationIsRetried(new MethodCaller() { @Override @@ -303,8 +304,10 @@ public class TestHBaseAdminNoCluster { Admin admin = null; try { - admin = new HBaseAdmin(connection); - + admin = Mockito.spy(new HBaseAdmin(connection)); + // mock the call to getRegion since in the absence of a cluster (which means the meta + // is not assigned), getRegion can't function + Mockito.doReturn(null).when(((HBaseAdmin)admin)).getRegion(Matchers.any()); try { caller.call(admin); // invoke the HBaseAdmin method fail(); @@ -317,4 +320,4 @@ public class TestHBaseAdminNoCluster { if (admin != null) {admin.close();} } } -} \ No newline at end of file +} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java index 1a7866f..82a5c76 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHCM.java @@ -52,6 +52,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.testclassification.FlakeyTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.ServerName; @@ -89,7 +90,7 @@ import com.google.common.collect.Lists; /** * This class is for testing HBaseConnectionManager features */ -@Category(MediumTests.class) +@Category({MediumTests.class, FlakeyTests.class}) public class TestHCM { private static final Log LOG = LogFactory.getLog(TestHCM.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableMultiplexer.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableMultiplexer.java index c2353c9..26fe485 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableMultiplexer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableMultiplexer.java @@ -28,15 +28,16 @@ import java.util.List; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestHTableMultiplexer { final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableMultiplexerFlushCache.java 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableMultiplexerFlushCache.java index e907549..2898369 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableMultiplexerFlushCache.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableMultiplexerFlushCache.java @@ -26,16 +26,17 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HRegionLocation; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({ LargeTests.class, ClientTests.class }) public class TestHTableMultiplexerFlushCache { final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTablePool.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTablePool.java deleted file mode 100644 index a34144e..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTablePool.java +++ /dev/null @@ -1,363 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.client; - -import java.io.IOException; - -import org.apache.hadoop.hbase.*; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.PoolMap.PoolType; -import org.junit.*; -import org.junit.experimental.categories.Category; -import org.junit.runner.RunWith; -import org.junit.runners.Suite; - -/** - * Tests HTablePool. 
- */ -@RunWith(Suite.class) -@Suite.SuiteClasses({TestHTablePool.TestHTableReusablePool.class, TestHTablePool.TestHTableThreadLocalPool.class}) -@Category(MediumTests.class) -public class TestHTablePool { - private static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); - private final static String TABLENAME = "TestHTablePool"; - - public abstract static class TestHTablePoolType { - - @BeforeClass - public static void setUpBeforeClass() throws Exception { - TEST_UTIL.startMiniCluster(1); - TEST_UTIL.createTable(TableName.valueOf(TABLENAME), HConstants.CATALOG_FAMILY); - } - - @AfterClass - public static void tearDownAfterClass() throws Exception { - TEST_UTIL.shutdownMiniCluster(); - } - - protected abstract PoolType getPoolType(); - - @Test - public void testTableWithStringName() throws Exception { - HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), - Integer.MAX_VALUE, getPoolType()); - String tableName = TABLENAME; - - // Request a table from an empty pool - Table table = pool.getTable(tableName); - Assert.assertNotNull(table); - - // Close table (returns table to the pool) - table.close(); - - // Request a table of the same name - Table sameTable = pool.getTable(tableName); - Assert.assertSame( - ((HTablePool.PooledHTable) table).getWrappedTable(), - ((HTablePool.PooledHTable) sameTable).getWrappedTable()); - } - - @Test - public void testTableWithByteArrayName() throws IOException { - HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), - Integer.MAX_VALUE, getPoolType()); - - // Request a table from an empty pool - Table table = pool.getTable(TABLENAME); - Assert.assertNotNull(table); - - // Close table (returns table to the pool) - table.close(); - - // Request a table of the same name - Table sameTable = pool.getTable(TABLENAME); - Assert.assertSame( - ((HTablePool.PooledHTable) table).getWrappedTable(), - ((HTablePool.PooledHTable) sameTable).getWrappedTable()); - } - - @Test - public void testTablesWithDifferentNames() throws IOException { - HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), - Integer.MAX_VALUE, getPoolType()); - // We add the class to the table name as the HBase cluster is reused - // during the tests: this gives naming unicity. 
- byte[] otherTable = Bytes.toBytes( - "OtherTable_" + getClass().getSimpleName() - ); - TEST_UTIL.createTable(otherTable, HConstants.CATALOG_FAMILY); - - // Request a table from an empty pool - Table table1 = pool.getTable(TABLENAME); - Table table2 = pool.getTable(otherTable); - Assert.assertNotNull(table2); - - // Close tables (returns tables to the pool) - table1.close(); - table2.close(); - - // Request tables of the same names - Table sameTable1 = pool.getTable(TABLENAME); - Table sameTable2 = pool.getTable(otherTable); - Assert.assertSame( - ((HTablePool.PooledHTable) table1).getWrappedTable(), - ((HTablePool.PooledHTable) sameTable1).getWrappedTable()); - Assert.assertSame( - ((HTablePool.PooledHTable) table2).getWrappedTable(), - ((HTablePool.PooledHTable) sameTable2).getWrappedTable()); - } - @Test - public void testProxyImplementationReturned() { - HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), - Integer.MAX_VALUE); - String tableName = TABLENAME;// Request a table from - // an - // empty pool - Table table = pool.getTable(tableName); - - // Test if proxy implementation is returned - Assert.assertTrue(table instanceof HTablePool.PooledHTable); - } - - @Test - public void testDeprecatedUsagePattern() throws IOException { - HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), - Integer.MAX_VALUE); - String tableName = TABLENAME;// Request a table from - // an - // empty pool - - // get table will return proxy implementation - HTableInterface table = pool.getTable(tableName); - - // put back the proxy implementation instead of closing it - pool.putTable(table); - - // Request a table of the same name - Table sameTable = pool.getTable(tableName); - - // test no proxy over proxy created - Assert.assertSame(((HTablePool.PooledHTable) table).getWrappedTable(), - ((HTablePool.PooledHTable) sameTable).getWrappedTable()); - } - - @Test - public void testReturnDifferentTable() throws IOException { - HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), - Integer.MAX_VALUE); - String tableName = TABLENAME;// Request a table from - // an - // empty pool - - // get table will return proxy implementation - final Table table = pool.getTable(tableName); - HTableInterface alienTable = new HTable(TEST_UTIL.getConfiguration(), - TableName.valueOf(TABLENAME)) { - // implementation doesn't matter as long the table is not from - // pool - }; - try { - // put the wrong table in pool - pool.putTable(alienTable); - Assert.fail("alien table accepted in pool"); - } catch (IllegalArgumentException e) { - Assert.assertTrue("alien table rejected", true); - } - } - - @Test - public void testHTablePoolCloseTwice() throws Exception { - HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), - Integer.MAX_VALUE, getPoolType()); - String tableName = TABLENAME; - - // Request a table from an empty pool - Table table = pool.getTable(tableName); - Assert.assertNotNull(table); - Assert.assertTrue(((HTablePool.PooledHTable) table).isOpen()); - // Close table (returns table to the pool) - table.close(); - // check if the table is closed - Assert.assertFalse(((HTablePool.PooledHTable) table).isOpen()); - try { - table.close(); - Assert.fail("Should not allow table to be closed twice"); - } catch (IllegalStateException ex) { - Assert.assertTrue("table cannot be closed twice", true); - } finally { - pool.close(); - } - - } - - } - - @Category(MediumTests.class) - public static class TestHTableReusablePool extends TestHTablePoolType { - @Override - protected PoolType getPoolType() { 
- return PoolType.Reusable; - } - - @Test - public void testTableWithMaxSize() throws Exception { - HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), 2, - getPoolType()); - - // Request tables from an empty pool - Table table1 = pool.getTable(TABLENAME); - Table table2 = pool.getTable(TABLENAME); - Table table3 = pool.getTable(TABLENAME); - - // Close tables (returns tables to the pool) - table1.close(); - table2.close(); - // The pool should reject this one since it is already full - table3.close(); - - // Request tables of the same name - Table sameTable1 = pool.getTable(TABLENAME); - Table sameTable2 = pool.getTable(TABLENAME); - Table sameTable3 = pool.getTable(TABLENAME); - Assert.assertSame( - ((HTablePool.PooledHTable) table1).getWrappedTable(), - ((HTablePool.PooledHTable) sameTable1).getWrappedTable()); - Assert.assertSame( - ((HTablePool.PooledHTable) table2).getWrappedTable(), - ((HTablePool.PooledHTable) sameTable2).getWrappedTable()); - Assert.assertNotSame( - ((HTablePool.PooledHTable) table3).getWrappedTable(), - ((HTablePool.PooledHTable) sameTable3).getWrappedTable()); - } - - @Test - public void testCloseTablePool() throws IOException { - HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), 4, - getPoolType()); - HBaseAdmin admin = new HBaseAdmin(TEST_UTIL.getConfiguration()); - - if (admin.tableExists(TABLENAME)) { - admin.disableTable(TABLENAME); - admin.deleteTable(TABLENAME); - } - - HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf(TABLENAME)); - tableDescriptor.addFamily(new HColumnDescriptor("randomFamily")); - admin.createTable(tableDescriptor); - - // Request tables from an empty pool - Table[] tables = new Table[4]; - for (int i = 0; i < 4; ++i) { - tables[i] = pool.getTable(TABLENAME); - } - - pool.closeTablePool(TABLENAME); - - for (int i = 0; i < 4; ++i) { - tables[i].close(); - } - - Assert.assertEquals(4, - pool.getCurrentPoolSize(TABLENAME)); - - pool.closeTablePool(TABLENAME); - - Assert.assertEquals(0, - pool.getCurrentPoolSize(TABLENAME)); - } - } - - @Category(MediumTests.class) - public static class TestHTableThreadLocalPool extends TestHTablePoolType { - @Override - protected PoolType getPoolType() { - return PoolType.ThreadLocal; - } - - @Test - public void testTableWithMaxSize() throws Exception { - HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), 2, - getPoolType()); - - // Request tables from an empty pool - Table table1 = pool.getTable(TABLENAME); - Table table2 = pool.getTable(TABLENAME); - Table table3 = pool.getTable(TABLENAME); - - // Close tables (returns tables to the pool) - table1.close(); - table2.close(); - // The pool should not reject this one since the number of threads - // <= 2 - table3.close(); - - // Request tables of the same name - Table sameTable1 = pool.getTable(TABLENAME); - Table sameTable2 = pool.getTable(TABLENAME); - Table sameTable3 = pool.getTable(TABLENAME); - Assert.assertSame( - ((HTablePool.PooledHTable) table3).getWrappedTable(), - ((HTablePool.PooledHTable) sameTable1).getWrappedTable()); - Assert.assertSame( - ((HTablePool.PooledHTable) table3).getWrappedTable(), - ((HTablePool.PooledHTable) sameTable2).getWrappedTable()); - Assert.assertSame( - ((HTablePool.PooledHTable) table3).getWrappedTable(), - ((HTablePool.PooledHTable) sameTable3).getWrappedTable()); - } - - @Test - public void testCloseTablePool() throws IOException { - HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), 4, - getPoolType()); - HBaseAdmin admin = new 
HBaseAdmin(TEST_UTIL.getConfiguration()); - - if (admin.tableExists(TABLENAME)) { - admin.disableTable(TABLENAME); - admin.deleteTable(TABLENAME); - } - - HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf(TABLENAME)); - tableDescriptor.addFamily(new HColumnDescriptor("randomFamily")); - admin.createTable(tableDescriptor); - - // Request tables from an empty pool - Table[] tables = new Table[4]; - for (int i = 0; i < 4; ++i) { - tables[i] = pool.getTable(TABLENAME); - } - - pool.closeTablePool(TABLENAME); - - for (int i = 0; i < 4; ++i) { - tables[i].close(); - } - - Assert.assertEquals(1, - pool.getCurrentPoolSize(TABLENAME)); - - pool.closeTablePool(TABLENAME); - - Assert.assertEquals(0, - pool.getCurrentPoolSize(TABLENAME)); - } - } - -} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableUtil.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableUtil.java deleted file mode 100644 index 409a609..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableUtil.java +++ /dev/null @@ -1,130 +0,0 @@ -/* - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.apache.hadoop.hbase.client; - -import static org.junit.Assert.assertEquals; - -import java.util.ArrayList; -import java.util.List; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.util.Bytes; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Test; -import org.junit.experimental.categories.Category; - -/** - * This class provides tests for the {@link HTableUtil} class - * - */ -@Category(MediumTests.class) -public class TestHTableUtil { - final Log LOG = LogFactory.getLog(getClass()); - private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); - private static byte [] FAMILY = Bytes.toBytes("testFamily"); - private static byte [] QUALIFIER = Bytes.toBytes("testQualifier"); - private static byte [] VALUE = Bytes.toBytes("testValue"); - - /** - * @throws java.lang.Exception - */ - @BeforeClass - public static void setUpBeforeClass() throws Exception { - TEST_UTIL.startMiniCluster(); - } - - /** - * @throws java.lang.Exception - */ - @AfterClass - public static void tearDownAfterClass() throws Exception { - TEST_UTIL.shutdownMiniCluster(); - } - - /** - * - * @throws Exception - */ - @Test - public void testBucketPut() throws Exception { - byte [] TABLE = Bytes.toBytes("testBucketPut"); - HTable ht = TEST_UTIL.createTable(TABLE, FAMILY); - ht.setAutoFlushTo(false); - - List puts = new ArrayList(); - puts.add( createPut("row1") ); - puts.add( createPut("row2") ); - puts.add( createPut("row3") ); - puts.add( createPut("row4") ); - - HTableUtil.bucketRsPut( ht, puts ); - - Scan scan = new Scan(); - scan.addColumn(FAMILY, QUALIFIER); - int count = 0; - for(Result result : ht.getScanner(scan)) { - count++; - } - LOG.info("bucket put count=" + count); - assertEquals(count, puts.size()); - ht.close(); - } - - private Put createPut(String row) { - Put put = new Put( Bytes.toBytes(row)); - put.add(FAMILY, QUALIFIER, VALUE); - return put; - } - - /** - * - * @throws Exception - */ - @Test - public void testBucketBatch() throws Exception { - byte [] TABLE = Bytes.toBytes("testBucketBatch"); - HTable ht = TEST_UTIL.createTable(TABLE, FAMILY); - - List rows = new ArrayList(); - rows.add( createPut("row1") ); - rows.add( createPut("row2") ); - rows.add( createPut("row3") ); - rows.add( createPut("row4") ); - - HTableUtil.bucketRsBatch( ht, rows ); - - Scan scan = new Scan(); - scan.addColumn(FAMILY, QUALIFIER); - - int count = 0; - for(Result result : ht.getScanner(scan)) { - count++; - } - LOG.info("bucket batch count=" + count); - assertEquals(count, rows.size()); - ht.close(); - } - - -} - diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestIntraRowPagination.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestIntraRowPagination.java index c459a20..add8221 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestIntraRowPagination.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestIntraRowPagination.java @@ -26,17 +26,18 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.HTestConst; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.regionserver.HRegion; import 
org.apache.hadoop.hbase.regionserver.RegionScanner; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; /** * Test scan/get offset and limit settings within one row through HRegion API. */ -@Category(SmallTests.class) +@Category({SmallTests.class, ClientTests.class}) public class TestIntraRowPagination { private static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaScanner.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaScanner.java index c5e0570..70e2c33 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaScanner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaScanner.java @@ -38,8 +38,9 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.StoppableImplementation; import org.apache.hadoop.hbase.util.Threads; @@ -49,13 +50,15 @@ import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) public class TestMetaScanner { final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); + private Connection connection; public void setUp() throws Exception { TEST_UTIL.startMiniCluster(1); + this.connection = TEST_UTIL.getConnection(); } @After @@ -66,13 +69,13 @@ public class TestMetaScanner { @Test public void testMetaScanner() throws Exception { LOG.info("Starting testMetaScanner"); + setUp(); - final TableName TABLENAME = - TableName.valueOf("testMetaScanner"); + final TableName TABLENAME = TableName.valueOf("testMetaScanner"); final byte[] FAMILY = Bytes.toBytes("family"); TEST_UTIL.createTable(TABLENAME, FAMILY); Configuration conf = TEST_UTIL.getConfiguration(); - HTable table = new HTable(conf, TABLENAME); + HTable table = (HTable) connection.getTable(TABLENAME); TEST_UTIL.createMultiRegions(conf, table, FAMILY, new byte[][]{ HConstants.EMPTY_START_ROW, @@ -86,28 +89,28 @@ public class TestMetaScanner { doReturn(true).when(visitor).processRow((Result)anyObject()); // Scanning the entire table should give us three rows - MetaScanner.metaScan(conf, null, visitor, TABLENAME); + MetaScanner.metaScan(connection, visitor, TABLENAME); verify(visitor, times(3)).processRow((Result)anyObject()); // Scanning the table with a specified empty start row should also // give us three hbase:meta rows reset(visitor); doReturn(true).when(visitor).processRow((Result)anyObject()); - MetaScanner.metaScan(conf, visitor, TABLENAME, HConstants.EMPTY_BYTE_ARRAY, 1000); + MetaScanner.metaScan(connection, visitor, TABLENAME, HConstants.EMPTY_BYTE_ARRAY, 1000); verify(visitor, times(3)).processRow((Result)anyObject()); // Scanning the table starting in the middle should give us two rows: // region_a and region_b reset(visitor); doReturn(true).when(visitor).processRow((Result)anyObject()); - 
MetaScanner.metaScan(conf, visitor, TABLENAME, Bytes.toBytes("region_ac"), 1000); + MetaScanner.metaScan(connection, visitor, TABLENAME, Bytes.toBytes("region_ac"), 1000); verify(visitor, times(2)).processRow((Result)anyObject()); // Scanning with a limit of 1 should only give us one row reset(visitor); - doReturn(true).when(visitor).processRow((Result)anyObject()); - MetaScanner.metaScan(conf, visitor, TABLENAME, Bytes.toBytes("region_ac"), 1); - verify(visitor, times(1)).processRow((Result)anyObject()); + doReturn(true).when(visitor).processRow((Result) anyObject()); + MetaScanner.metaScan(connection, visitor, TABLENAME, Bytes.toBytes("region_ac"), 1); + verify(visitor, times(1)).processRow((Result) anyObject()); table.close(); } @@ -134,8 +137,8 @@ public class TestMetaScanner { public void run() { while (!isStopped()) { try { - List regions = MetaScanner.listAllRegions( - TEST_UTIL.getConfiguration(), false); + List regions = MetaScanner.listAllRegions(TEST_UTIL.getConfiguration(), + connection, false); //select a random region HRegionInfo parent = regions.get(random.nextInt(regions.size())); @@ -166,7 +169,7 @@ public class TestMetaScanner { Bytes.toBytes(midKey), end); - MetaTableAccessor.splitRegion(TEST_UTIL.getHBaseAdmin().getConnection(), + MetaTableAccessor.splitRegion(connection, parent, splita, splitb, ServerName.valueOf("fooserver", 1, 0)); Threads.sleep(random.nextInt(200)); @@ -189,7 +192,7 @@ public class TestMetaScanner { while(!isStopped()) { try { NavigableMap regions = - MetaScanner.allTableRegions(TEST_UTIL.getConfiguration(), null, TABLENAME); + MetaScanner.allTableRegions(connection, TABLENAME); LOG.info("-------"); byte[] lastEndKey = HConstants.EMPTY_START_ROW; @@ -242,4 +245,3 @@ public class TestMetaScanner { } } - diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java index 428c637..47bb569 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiParallel.java @@ -36,11 +36,12 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HRegionLocation; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.Waiter; import org.apache.hadoop.hbase.exceptions.OperationConflictException; +import org.apache.hadoop.hbase.testclassification.FlakeyTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.JVMClusterUtil; import org.apache.hadoop.hbase.util.Threads; @@ -51,7 +52,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MediumTests.class, FlakeyTests.class}) public class TestMultiParallel { private static final Log LOG = LogFactory.getLog(TestMultiParallel.class); @@ -261,7 +262,7 @@ public class TestMultiParallel { private void doTestFlushCommits(boolean doAbort) throws Exception { // Load the data LOG.info("get new table"); - Table table = new HTable(UTIL.getConfiguration(), TEST_TABLE); + HTable table = new HTable(UTIL.getConfiguration(), TEST_TABLE); table.setAutoFlushTo(false); table.setWriteBufferSize(10 * 1024 * 1024); @@ 
-315,8 +316,9 @@ public class TestMultiParallel { UTIL.waitFor(15 * 1000, new Waiter.Predicate() { @Override public boolean evaluate() throws Exception { + // Master is also a regionserver, so the count is liveRScount return UTIL.getMiniHBaseCluster().getMaster() - .getClusterStatus().getServersSize() == (liveRScount - 1); + .getClusterStatus().getServersSize() == liveRScount; } }); UTIL.waitFor(15 * 1000, UTIL.predicateNoRegionsInTransition()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultipleTimestamps.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultipleTimestamps.java index 79c5912..abb919f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultipleTimestamps.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultipleTimestamps.java @@ -28,6 +28,7 @@ import java.util.List; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.*; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; @@ -42,7 +43,7 @@ import org.junit.experimental.categories.Category; * Sets up the HBase mini cluster once at start. Each creates a table * named for the method and does its stuff against that. */ -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestMultipleTimestamps { final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestPutDeleteEtcCellIteration.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestPutDeleteEtcCellIteration.java index 6f7d03e..c46056d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestPutDeleteEtcCellIteration.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestPutDeleteEtcCellIteration.java @@ -28,6 +28,7 @@ import java.util.ConcurrentModificationException; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; @@ -36,7 +37,7 @@ import org.junit.experimental.categories.Category; /** * Test that I can Iterate Client Actions that hold Cells (Get does not have Cells). 
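Aside, not part of the patch: the cell-iteration behaviour that TestPutDeleteEtcCellIteration covers can be seen in isolation with a sketch like the one below. The class name and the row/family/qualifier/value bytes are invented for illustration; only the Put.add(...) and CellScanner calls mirror what the test exercises.

import java.io.IOException;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellScanner;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PutCellIterationSketch {
  public static void main(String[] args) throws IOException {
    Put put = new Put(Bytes.toBytes("r"));
    // Same add(family, qualifier, value) style used by the tests in this patch.
    put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));

    // Put is CellScannable, so its cells can be walked without touching KeyValue directly.
    CellScanner scanner = put.cellScanner();
    while (scanner.advance()) {
      Cell cell = scanner.current();
      System.out.println(Bytes.toString(CellUtil.cloneQualifier(cell)));
    }
  }
}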
*/ -@Category(SmallTests.class) +@Category({SmallTests.class, ClientTests.class}) public class TestPutDeleteEtcCellIteration { private static final byte [] ROW = new byte [] {'r'}; private static final long TIMESTAMP = System.currentTimeMillis(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestPutWithDelete.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestPutWithDelete.java index d006686..0e819bb 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestPutWithDelete.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestPutWithDelete.java @@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.client; import org.apache.hadoop.hbase.*; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; @@ -28,7 +29,7 @@ import org.junit.experimental.categories.Category; import static org.junit.Assert.assertTrue; -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) public class TestPutWithDelete { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java index 03b3dda..ca1254d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicaWithCluster.java @@ -30,7 +30,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Waiter; import org.apache.hadoop.hbase.client.replication.ReplicationAdmin; import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver; @@ -40,6 +39,8 @@ import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.BulkLoadHFileRequ import org.apache.hadoop.hbase.protobuf.RequestConverter; import org.apache.hadoop.hbase.regionserver.StorefileRefresherChore; import org.apache.hadoop.hbase.regionserver.TestHRegionServerBulkLoad; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster; @@ -58,7 +59,7 @@ import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import java.util.concurrent.atomic.AtomicReference; -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) public class TestReplicaWithCluster { private static final Log LOG = LogFactory.getLog(TestReplicaWithCluster.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicasClient.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicasClient.java index 9697caa..bb2d4db 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicasClient.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestReplicasClient.java @@ -19,6 +19,18 @@ package org.apache.hadoop.hbase.client; +import java.io.IOException; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.Random; 
+import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicLong; +import java.util.concurrent.atomic.AtomicReference; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.commons.logging.impl.Log4JLogger; @@ -28,7 +40,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.NotServingRegionException; import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.TableNotFoundException; @@ -42,8 +53,9 @@ import org.apache.hadoop.hbase.regionserver.InternalScanner; import org.apache.hadoop.hbase.regionserver.RegionScanner; import org.apache.hadoop.hbase.regionserver.StorefileRefresherChore; import org.apache.hadoop.hbase.regionserver.TestRegionServerNoMaster; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; import org.apache.log4j.Level; import org.apache.zookeeper.KeeperException; import org.junit.After; @@ -54,28 +66,17 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -import java.io.IOException; -import java.util.HashMap; -import java.util.Iterator; -import java.util.List; -import java.util.Random; -import java.util.concurrent.CountDownLatch; -import java.util.concurrent.TimeUnit; -import java.util.concurrent.atomic.AtomicBoolean; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicLong; -import java.util.concurrent.atomic.AtomicReference; - /** * Tests for region replicas. Sad that we cannot isolate these without bringing up a whole * cluster. See {@link org.apache.hadoop.hbase.regionserver.TestRegionServerNoMaster}. 
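Aside, not part of the patch: the replica tests above drive TIMELINE-consistency reads against a secondary region. Stripped of the test scaffolding, the client-side pattern they exercise looks roughly like this; the table name, row key and 1.0-style ConnectionFactory setup are assumptions for the sketch, not taken from the patch.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TimelineReadSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("someTable"))) {
      Get get = new Get(Bytes.toBytes("someRow"));
      // TIMELINE lets a secondary region replica answer the read; the Result
      // then reports whether the returned data may lag the primary.
      get.setConsistency(Consistency.TIMELINE);
      Result result = table.get(get);
      if (result.isStale()) {
        System.out.println("Served by a secondary replica; possibly stale.");
      }
    }
  }
}

The tests below assert exactly this isStale() flag after delaying the primary with the coprocessor latch.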
*/ -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) +@SuppressWarnings("deprecation") public class TestReplicasClient { private static final Log LOG = LogFactory.getLog(TestReplicasClient.class); static { - ((Log4JLogger)RpcRetryingCaller.LOG).getLogger().setLevel(Level.ALL); + ((Log4JLogger)RpcRetryingCallerImpl.LOG).getLogger().setLevel(Level.ALL); } private static final int NB_SERVERS = 1; @@ -97,7 +98,7 @@ public class TestReplicasClient { static final AtomicLong sleepTime = new AtomicLong(0); static final AtomicBoolean slowDownNext = new AtomicBoolean(false); static final AtomicInteger countOfNext = new AtomicInteger(0); - static final AtomicReference cdl = + private static final AtomicReference cdl = new AtomicReference(new CountDownLatch(0)); Random r = new Random(); public SlowMeCopro() { @@ -134,7 +135,7 @@ public class TestReplicasClient { private void slowdownCode(final ObserverContext e) { if (e.getEnvironment().getRegion().getRegionInfo().getReplicaId() == 0) { - CountDownLatch latch = cdl.get(); + CountDownLatch latch = getCdl().get(); try { if (sleepTime.get() > 0) { LOG.info("Sleeping for " + sleepTime.get() + " ms"); @@ -153,6 +154,10 @@ public class TestReplicasClient { LOG.info("We're not the primary replicas."); } } + + public static AtomicReference getCdl() { + return cdl; + } } @BeforeClass @@ -185,6 +190,7 @@ public class TestReplicasClient { @AfterClass public static void afterClass() throws Exception { + HRegionServer.TEST_SKIP_REPORTING_TRANSITION = false; if (table != null) table.close(); HTU.shutdownMiniCluster(); } @@ -212,8 +218,6 @@ public class TestReplicasClient { closeRegion(hriPrimary); } catch (Exception ignored) { } - ZKAssign.deleteNodeFailSilent(HTU.getZooKeeperWatcher(), hriPrimary); - ZKAssign.deleteNodeFailSilent(HTU.getZooKeeperWatcher(), hriSecondary); HTU.getHBaseAdmin().getConnection().clearRegionCache(); } @@ -226,10 +230,9 @@ public class TestReplicasClient { try { if (isRegionOpened(hri)) return; } catch (Exception e){} - ZKAssign.createNodeOffline(HTU.getZooKeeperWatcher(), hri, getRS().getServerName()); // first version is '0' AdminProtos.OpenRegionRequest orr = RequestConverter.buildOpenRegionRequest( - getRS().getServerName(), hri, 0, null, null); + getRS().getServerName(), hri, null, null); AdminProtos.OpenRegionResponse responseOpen = getRS().getRSRpcServices().openRegion(null, orr); Assert.assertEquals(responseOpen.getOpeningStateCount(), 1); Assert.assertEquals(responseOpen.getOpeningState(0), @@ -238,27 +241,19 @@ public class TestReplicasClient { } private void closeRegion(HRegionInfo hri) throws Exception { - ZKAssign.createNodeClosing(HTU.getZooKeeperWatcher(), hri, getRS().getServerName()); - AdminProtos.CloseRegionRequest crr = RequestConverter.buildCloseRegionRequest( - getRS().getServerName(), hri.getEncodedName(), true); + getRS().getServerName(), hri.getEncodedName()); AdminProtos.CloseRegionResponse responseClose = getRS() .getRSRpcServices().closeRegion(null, crr); Assert.assertTrue(responseClose.getClosed()); checkRegionIsClosed(hri.getEncodedName()); - - ZKAssign.deleteClosedNode(HTU.getZooKeeperWatcher(), hri.getEncodedName(), null); } private void checkRegionIsOpened(HRegionInfo hri) throws Exception { - while (!getRS().getRegionsInTransitionInRS().isEmpty()) { Thread.sleep(1); } - - Assert.assertTrue( - ZKAssign.deleteOpenedNode(HTU.getZooKeeperWatcher(), hri.getEncodedName(), null)); } private boolean isRegionOpened(HRegionInfo hri) throws Exception { @@ -288,7 +283,7 @@ public class 
TestReplicasClient { public void testUseRegionWithoutReplica() throws Exception { byte[] b1 = "testUseRegionWithoutReplica".getBytes(); openRegion(hriSecondary); - SlowMeCopro.cdl.set(new CountDownLatch(0)); + SlowMeCopro.getCdl().set(new CountDownLatch(0)); try { Get g = new Get(b1); Result r = table.get(g); @@ -344,14 +339,14 @@ public class TestReplicasClient { byte[] b1 = "testGetNoResultStaleRegionWithReplica".getBytes(); openRegion(hriSecondary); - SlowMeCopro.cdl.set(new CountDownLatch(1)); + SlowMeCopro.getCdl().set(new CountDownLatch(1)); try { Get g = new Get(b1); g.setConsistency(Consistency.TIMELINE); Result r = table.get(g); Assert.assertTrue(r.isStale()); } finally { - SlowMeCopro.cdl.get().countDown(); + SlowMeCopro.getCdl().get().countDown(); closeRegion(hriSecondary); } } @@ -462,13 +457,13 @@ public class TestReplicasClient { LOG.info("sleep and is not stale done"); // But if we ask for stale we will get it - SlowMeCopro.cdl.set(new CountDownLatch(1)); + SlowMeCopro.getCdl().set(new CountDownLatch(1)); g = new Get(b1); g.setConsistency(Consistency.TIMELINE); r = table.get(g); Assert.assertTrue(r.isStale()); Assert.assertTrue(r.getColumnCells(f, b1).isEmpty()); - SlowMeCopro.cdl.get().countDown(); + SlowMeCopro.getCdl().get().countDown(); LOG.info("stale done"); @@ -481,14 +476,14 @@ public class TestReplicasClient { LOG.info("exists not stale done"); // exists works on stale but don't see the put - SlowMeCopro.cdl.set(new CountDownLatch(1)); + SlowMeCopro.getCdl().set(new CountDownLatch(1)); g = new Get(b1); g.setCheckExistenceOnly(true); g.setConsistency(Consistency.TIMELINE); r = table.get(g); Assert.assertTrue(r.isStale()); Assert.assertFalse("The secondary has stale data", r.getExists()); - SlowMeCopro.cdl.get().countDown(); + SlowMeCopro.getCdl().get().countDown(); LOG.info("exists stale before flush done"); flushRegion(hriPrimary); @@ -497,28 +492,28 @@ public class TestReplicasClient { Thread.sleep(1000 + REFRESH_PERIOD * 2); // get works and is not stale - SlowMeCopro.cdl.set(new CountDownLatch(1)); + SlowMeCopro.getCdl().set(new CountDownLatch(1)); g = new Get(b1); g.setConsistency(Consistency.TIMELINE); r = table.get(g); Assert.assertTrue(r.isStale()); Assert.assertFalse(r.isEmpty()); - SlowMeCopro.cdl.get().countDown(); + SlowMeCopro.getCdl().get().countDown(); LOG.info("stale done"); // exists works on stale and we see the put after the flush - SlowMeCopro.cdl.set(new CountDownLatch(1)); + SlowMeCopro.getCdl().set(new CountDownLatch(1)); g = new Get(b1); g.setCheckExistenceOnly(true); g.setConsistency(Consistency.TIMELINE); r = table.get(g); Assert.assertTrue(r.isStale()); Assert.assertTrue(r.getExists()); - SlowMeCopro.cdl.get().countDown(); + SlowMeCopro.getCdl().get().countDown(); LOG.info("exists stale after flush done"); } finally { - SlowMeCopro.cdl.get().countDown(); + SlowMeCopro.getCdl().get().countDown(); SlowMeCopro.sleepTime.set(0); Delete d = new Delete(b1); table.delete(d); @@ -544,6 +539,54 @@ public class TestReplicasClient { runMultipleScansOfOneType(true, false); } + @Test + public void testCancelOfScan() throws Exception { + openRegion(hriSecondary); + int NUMROWS = 100; + try { + for (int i = 0; i < NUMROWS; i++) { + byte[] b1 = Bytes.toBytes("testUseRegionWithReplica" + i); + Put p = new Put(b1); + p.add(f, b1, b1); + table.put(p); + } + LOG.debug("PUT done"); + int caching = 20; + byte[] start; + start = Bytes.toBytes("testUseRegionWithReplica" + 0); + + flushRegion(hriPrimary); + LOG.info("flush done"); + Thread.sleep(1000 + 
REFRESH_PERIOD * 2); + + // now make some 'next' calls slow + SlowMeCopro.slowDownNext.set(true); + SlowMeCopro.countOfNext.set(0); + SlowMeCopro.sleepTime.set(5000); + + Scan scan = new Scan(start); + scan.setCaching(caching); + scan.setConsistency(Consistency.TIMELINE); + ResultScanner scanner = table.getScanner(scan); + Iterator iter = scanner.iterator(); + iter.next(); + Assert.assertTrue(((ClientScanner)scanner).isAnyRPCcancelled()); + SlowMeCopro.slowDownNext.set(false); + SlowMeCopro.countOfNext.set(0); + } finally { + SlowMeCopro.getCdl().get().countDown(); + SlowMeCopro.sleepTime.set(0); + SlowMeCopro.slowDownNext.set(false); + SlowMeCopro.countOfNext.set(0); + for (int i = 0; i < NUMROWS; i++) { + byte[] b1 = Bytes.toBytes("testUseRegionWithReplica" + i); + Delete d = new Delete(b1); + table.delete(d); + } + closeRegion(hriSecondary); + } + } + private void runMultipleScansOfOneType(boolean reversed, boolean small) throws Exception { openRegion(hriSecondary); int NUMROWS = 100; @@ -584,7 +627,7 @@ public class TestReplicasClient { SlowMeCopro.slowDownNext.set(false); SlowMeCopro.countOfNext.set(0); } finally { - SlowMeCopro.cdl.get().countDown(); + SlowMeCopro.getCdl().get().countDown(); SlowMeCopro.sleepTime.set(0); SlowMeCopro.slowDownNext.set(false); SlowMeCopro.countOfNext.set(0); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRestoreSnapshotFromClient.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRestoreSnapshotFromClient.java index 0eec477..5bbd8be 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRestoreSnapshotFromClient.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRestoreSnapshotFromClient.java @@ -33,12 +33,13 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.master.MasterFileSystem; import org.apache.hadoop.hbase.master.snapshot.SnapshotManager; import org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException; import org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException; import org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.junit.After; @@ -51,7 +52,7 @@ import org.junit.experimental.categories.Category; /** * Test restore snapshots from the client */ -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestRestoreSnapshotFromClient { final Log LOG = LogFactory.getLog(getClass()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRestoreSnapshotFromClientWithRegionReplicas.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRestoreSnapshotFromClientWithRegionReplicas.java index 27ff447..94cf44d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRestoreSnapshotFromClientWithRegionReplicas.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRestoreSnapshotFromClientWithRegionReplicas.java @@ -17,10 +17,11 @@ */ package org.apache.hadoop.hbase.client; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.LargeTests; import 
org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestRestoreSnapshotFromClientWithRegionReplicas extends TestRestoreSnapshotFromClient { @Override diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestResult.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestResult.java index f631a20..fd4b01a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestResult.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestResult.java @@ -34,11 +34,12 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({SmallTests.class, ClientTests.class}) public class TestResult extends TestCase { private static final Log LOG = LogFactory.getLog(TestResult.class.getName()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRpcControllerFactory.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRpcControllerFactory.java index 3db2d9f..1740cc8 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRpcControllerFactory.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRpcControllerFactory.java @@ -29,13 +29,14 @@ import org.apache.hadoop.hbase.CellScannable; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.coprocessor.ProtobufCoprocessorService; import org.apache.hadoop.hbase.ipc.DelegatingPayloadCarryingRpcController; import org.apache.hadoop.hbase.ipc.PayloadCarryingRpcController; import org.apache.hadoop.hbase.ipc.RpcControllerFactory; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -44,7 +45,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.Lists; -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) public class TestRpcControllerFactory { public static class StaticRpcControllerFactory extends RpcControllerFactory { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java index 6bbe2de..b46312f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java @@ -26,10 +26,11 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.TableName; import 
org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.Before; @@ -40,7 +41,7 @@ import org.junit.experimental.categories.Category; /** * Test various scanner timeout issues. */ -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestScannerTimeout { private final static HBaseTestingUtility @@ -64,8 +65,6 @@ public class TestScannerTimeout { Configuration c = TEST_UTIL.getConfiguration(); c.setInt(HConstants.HBASE_CLIENT_SCANNER_TIMEOUT_PERIOD, SCANNER_TIMEOUT); c.setInt(HConstants.THREAD_WAKE_FREQUENCY, THREAD_WAKE_FREQUENCY); - // Put meta on master to avoid meta server shutdown handling - c.set("hbase.balancer.tablesOnMaster", "hbase:meta"); // We need more than one region server for this test TEST_UTIL.startMiniCluster(2); Table table = TEST_UTIL.createTable(TABLE_NAME, SOME_BYTES); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannersFromClientSide.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannersFromClientSide.java index 62b5a8b..a6c1cfe 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannersFromClientSide.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannersFromClientSide.java @@ -30,7 +30,6 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTestConst; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.filter.ColumnPrefixFilter; @@ -40,11 +39,10 @@ import org.apache.hadoop.hbase.master.RegionState.State; import org.apache.hadoop.hbase.master.RegionStates; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.ConfigUtil; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.After; import org.junit.AfterClass; import org.junit.Before; @@ -55,7 +53,7 @@ import org.junit.experimental.categories.Category; /** * A client-side test, mostly testing scanners with various parameters. */ -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) public class TestScannersFromClientSide { private static final Log LOG = LogFactory.getLog(TestScannersFromClientSide.class); @@ -475,7 +473,7 @@ public class TestScannersFromClientSide { int i = cluster.getServerWith(regionName); HRegionServer rs = cluster.getRegionServer(i); ProtobufUtil.closeRegion( - rs.getRSRpcServices(), rs.getServerName(), regionName, false); + rs.getRSRpcServices(), rs.getServerName(), regionName); long startTime = EnvironmentEdgeManager.currentTime(); long timeOut = 300000; while (true) { @@ -488,27 +486,19 @@ public class TestScannersFromClientSide { } // Now open the region again. 
- ZooKeeperWatcher zkw = TEST_UTIL.getZooKeeperWatcher(); - try { - HMaster master = cluster.getMaster(); - RegionStates states = master.getAssignmentManager().getRegionStates(); - states.regionOffline(hri); - states.updateRegionState(hri, State.OPENING); - if (ConfigUtil.useZKForAssignment(TEST_UTIL.getConfiguration())) { - ZKAssign.createNodeOffline(zkw, hri, loc.getServerName()); - } - ProtobufUtil.openRegion(rs.getRSRpcServices(), rs.getServerName(), hri); - startTime = EnvironmentEdgeManager.currentTime(); - while (true) { - if (rs.getOnlineRegion(regionName) != null) { - break; - } - assertTrue("Timed out in open the testing region", - EnvironmentEdgeManager.currentTime() < startTime + timeOut); - Thread.sleep(500); + HMaster master = cluster.getMaster(); + RegionStates states = master.getAssignmentManager().getRegionStates(); + states.regionOffline(hri); + states.updateRegionState(hri, State.OPENING); + ProtobufUtil.openRegion(rs.getRSRpcServices(), rs.getServerName(), hri); + startTime = EnvironmentEdgeManager.currentTime(); + while (true) { + if (rs.getOnlineRegion(regionName) != null) { + break; } - } finally { - ZKAssign.deleteNodeFailSilent(zkw, hri); + assertTrue("Timed out in open the testing region", + EnvironmentEdgeManager.currentTime() < startTime + timeOut); + Thread.sleep(500); } // c0:0, c1:1 diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotCloneIndependence.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotCloneIndependence.java index d57654f..1d9ff1e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotCloneIndependence.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotCloneIndependence.java @@ -31,10 +31,11 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.master.snapshot.SnapshotManager; import org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy; import org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; import org.junit.AfterClass; @@ -47,7 +48,7 @@ import org.junit.experimental.categories.Category; /** * Test to verify that the cloned table is independent of the table from which it was cloned */ -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestSnapshotCloneIndependence { private static final Log LOG = LogFactory.getLog(TestSnapshotCloneIndependence.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromClient.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromClient.java index c87305a..c17da6d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromClient.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromClient.java @@ -31,7 +31,6 @@ import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableNotFoundException; import 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; @@ -39,6 +38,8 @@ import org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy; import org.apache.hadoop.hbase.snapshot.SnapshotCreationException; import org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils; import org.apache.hadoop.hbase.snapshot.SnapshotManifestV1; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.junit.After; @@ -55,7 +56,7 @@ import com.google.common.collect.Lists; *

    * This is an end-to-end test for the snapshot utility */ -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestSnapshotFromClient { private static final Log LOG = LogFactory.getLog(TestSnapshotFromClient.class); protected static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromClientWithRegionReplicas.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromClientWithRegionReplicas.java index cd4caff..9f8cc3e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromClientWithRegionReplicas.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotFromClientWithRegionReplicas.java @@ -17,10 +17,11 @@ */ package org.apache.hadoop.hbase.client; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.LargeTests; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestSnapshotFromClientWithRegionReplicas extends TestSnapshotFromClient { @Override diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotMetadata.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotMetadata.java index 7aa2b32..6f39d3b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotMetadata.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSnapshotMetadata.java @@ -34,13 +34,14 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; import org.apache.hadoop.hbase.master.snapshot.SnapshotManager; import org.apache.hadoop.hbase.regionserver.BloomType; import org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy; import org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; import org.junit.AfterClass; @@ -52,7 +53,7 @@ import org.junit.experimental.categories.Category; /** * Test class to verify that metadata is consistent before and after a snapshot attempt. 
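Aside, not part of the patch: the recurring edit in this stretch is replacing a single size category with a size category plus a component category. For reference, the resulting annotation shape on a hypothetical test class is:

import org.apache.hadoop.hbase.testclassification.ClientTests;
import org.apache.hadoop.hbase.testclassification.MediumTests;
import org.junit.Test;
import org.junit.experimental.categories.Category;

// One size category (SmallTests/MediumTests/LargeTests) plus one component
// category (ClientTests, MiscTests, CoprocessorTests, FlakeyTests, ...).
@Category({MediumTests.class, ClientTests.class})
public class ExampleClientSideTest {
  @Test
  public void placeholder() {
    // real assertions would go here
  }
}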
*/ -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) public class TestSnapshotMetadata { private static final Log LOG = LogFactory.getLog(TestSnapshotMetadata.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java index 3e915e1..1e3d1cf 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTableSnapshotScanner.java @@ -29,11 +29,12 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.mapreduce.TestTableSnapshotInputFormat; import org.apache.hadoop.hbase.master.snapshot.SnapshotManager; import org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.junit.After; @@ -41,7 +42,7 @@ import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({LargeTests.class, ClientTests.class}) public class TestTableSnapshotScanner { private static final Log LOG = LogFactory.getLog(TestTableSnapshotInputFormat.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTimestampsFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTimestampsFilter.java index 1a67bea..4843715 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTimestampsFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestTimestampsFilter.java @@ -30,6 +30,7 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.filter.Filter; import org.apache.hadoop.hbase.filter.TimestampsFilter; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; @@ -44,7 +45,7 @@ import org.junit.experimental.categories.Category; * Sets up the HBase mini cluster once at start. Each creates a table * named for the method and does its stuff against that. 
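Aside, not part of the patch: several classes above share the mini-cluster lifecycle their javadoc describes (start once per class, create a table per test method). A stripped-down sketch of that lifecycle, with invented class, table and family names, looks like this:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    TEST_UTIL.startMiniCluster();   // in-process HDFS, ZooKeeper and HBase
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void createsATablePerMethod() throws Exception {
    // Mirrors the "table named for the method" convention in the javadoc above.
    Table table = TEST_UTIL.createTable(
        TableName.valueOf("createsATablePerMethod"), Bytes.toBytes("family"));
    table.close();
  }
}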
*/ -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) public class TestTimestampsFilter { final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestUpdateConfiguration.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestUpdateConfiguration.java index e2af1ab..73e493b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestUpdateConfiguration.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestUpdateConfiguration.java @@ -40,7 +40,7 @@ import org.junit.experimental.categories.Category; public class TestUpdateConfiguration { private static final Log LOG = LogFactory.getLog(TestUpdateConfiguration.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); - + @BeforeClass public static void setup() throws Exception { TEST_UTIL.startMiniCluster(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java index c0c662f..4db646e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/client/replication/TestReplicationAdmin.java @@ -26,6 +26,7 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.replication.ReplicationException; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.junit.BeforeClass; import org.junit.Test; @@ -41,7 +42,7 @@ import static org.junit.Assert.assertFalse; /** * Unit testing of ReplicationAdmin */ -@Category(MediumTests.class) +@Category({MediumTests.class, ClientTests.class}) public class TestReplicationAdmin { private static final Log LOG = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/codec/TestCellMessageCodec.java hbase-server/src/test/java/org/apache/hadoop/hbase/codec/TestCellMessageCodec.java index 2433268..b51de80 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/codec/TestCellMessageCodec.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/codec/TestCellMessageCodec.java @@ -32,6 +32,7 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellComparator; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; @@ -40,7 +41,7 @@ import org.junit.experimental.categories.Category; import com.google.common.io.CountingInputStream; import com.google.common.io.CountingOutputStream; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestCellMessageCodec { public static final Log LOG = LogFactory.getLog(TestCellMessageCodec.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/conf/TestConfigurationManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/conf/TestConfigurationManager.java index 32f13bb..fe56344 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/conf/TestConfigurationManager.java +++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/conf/TestConfigurationManager.java @@ -24,11 +24,12 @@ import static org.junit.Assert.assertTrue; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({SmallTests.class, ClientTests.class}) public class TestConfigurationManager { public static final Log LOG = LogFactory.getLog(TestConfigurationManager.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/constraint/TestConstraint.java hbase-server/src/test/java/org/apache/hadoop/hbase/constraint/TestConstraint.java index ccd7eb2..f4ad44c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/constraint/TestConstraint.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/constraint/TestConstraint.java @@ -30,6 +30,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; @@ -46,7 +47,7 @@ import org.junit.experimental.categories.Category; /** * Do the complex testing of constraints against a minicluster */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestConstraint { private static final Log LOG = LogFactory .getLog(TestConstraint.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/constraint/TestConstraints.java hbase-server/src/test/java/org/apache/hadoop/hbase/constraint/TestConstraints.java index bb7fee8..afd55bb 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/constraint/TestConstraints.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/constraint/TestConstraints.java @@ -26,6 +26,7 @@ import java.util.List; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Put; @@ -38,7 +39,7 @@ import org.junit.experimental.categories.Category; /** * Test reading/writing the constraints into the {@link HTableDescriptor} */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestConstraints { @SuppressWarnings("unchecked") diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestAggregateProtocol.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestAggregateProtocol.java index 1175963..e1a7a8d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestAggregateProtocol.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestAggregateProtocol.java @@ -34,6 +34,7 @@ import org.apache.hadoop.hbase.filter.Filter; import org.apache.hadoop.hbase.filter.PrefixFilter; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.EmptyMsg; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.LongMsg; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; 
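Aside, not part of the patch: the component annotations being added here (ClientTests, MiscTests, CoprocessorTests) are ordinary JUnit categories, so any category-aware runner can select by them. Purely as an illustration, and not as the project's actual build wiring, a standard JUnit Categories suite over two of the classes touched in this patch could look like:

package org.apache.hadoop.hbase.coprocessor;

import org.apache.hadoop.hbase.testclassification.CoprocessorTests;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Hypothetical suite; runs only the listed members tagged with CoprocessorTests.
@RunWith(Categories.class)
@IncludeCategory(CoprocessorTests.class)
@SuiteClasses({ TestAggregateProtocol.class, TestCoprocessorEndpoint.class })
public class CoprocessorCategorySuiteSketch {
}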
import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; @@ -45,7 +46,7 @@ import org.junit.experimental.categories.Category; * A test class to cover aggregate functions, that can be implemented using * Coprocessors. */ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestAggregateProtocol { protected static Log myLog = LogFactory.getLog(TestAggregateProtocol.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestBatchCoprocessorEndpoint.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestBatchCoprocessorEndpoint.java index 938446e..c99a1f6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestBatchCoprocessorEndpoint.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestBatchCoprocessorEndpoint.java @@ -27,6 +27,8 @@ import java.util.TreeMap; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; @@ -34,7 +36,6 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.HTable; @@ -56,7 +57,7 @@ import com.google.protobuf.ServiceException; /** * TestEndpoint: test cases to verify the batch execution of coprocessor Endpoint */ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestBatchCoprocessorEndpoint { private static final Log LOG = LogFactory.getLog(TestBatchCoprocessorEndpoint.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestBigDecimalColumnInterpreter.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestBigDecimalColumnInterpreter.java index d770250..7e2d96e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestBigDecimalColumnInterpreter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestBigDecimalColumnInterpreter.java @@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.filter.Filter; import org.apache.hadoop.hbase.filter.PrefixFilter; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.BigDecimalMsg; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.EmptyMsg; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; @@ -43,7 +44,7 @@ import org.junit.experimental.categories.Category; /** * A test class to test BigDecimalColumnInterpreter for AggregationsProtocol */ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestBigDecimalColumnInterpreter { protected static Log myLog = LogFactory.getLog(TestBigDecimalColumnInterpreter.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestClassLoading.java 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestClassLoading.java index b2f6685..140c3b9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestClassLoading.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestClassLoading.java @@ -26,6 +26,7 @@ import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.TestServerCustomProtocol; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.ClassLoaderTestHelper; import org.apache.hadoop.hbase.util.CoprocessorClassLoader; @@ -49,7 +50,7 @@ import static org.junit.Assert.assertFalse; /** * Test coprocessors class loading. */ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestClassLoading { private static final Log LOG = LogFactory.getLog(TestClassLoading.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorEndpoint.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorEndpoint.java index 2bc08d8..64732b0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorEndpoint.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorEndpoint.java @@ -32,6 +32,8 @@ import java.util.TreeMap; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; @@ -41,7 +43,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HBaseAdmin; @@ -66,7 +67,7 @@ import com.google.protobuf.ServiceException; /** * TestEndpoint: test cases to verify coprocessor Endpoint */ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestCoprocessorEndpoint { private static final Log LOG = LogFactory.getLog(TestCoprocessorEndpoint.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorInterface.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorInterface.java index 439782e..c3791c3 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorInterface.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorInterface.java @@ -49,7 +49,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.Server; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Scan; @@ -61,6 +60,8 @@ import 
org.apache.hadoop.hbase.regionserver.ScanType; import org.apache.hadoop.hbase.regionserver.SplitTransaction; import org.apache.hadoop.hbase.regionserver.Store; import org.apache.hadoop.hbase.regionserver.StoreFile; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.PairOfSameType; import org.junit.Rule; import org.junit.Test; @@ -68,7 +69,7 @@ import org.junit.experimental.categories.Category; import org.junit.rules.TestName; import org.mockito.Mockito; -@Category(SmallTests.class) +@Category({CoprocessorTests.class, SmallTests.class}) public class TestCoprocessorInterface { @Rule public TestName name = new TestName(); static final Log LOG = LogFactory.getLog(TestCoprocessorInterface.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorStop.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorStop.java index 6e6353d..2ef13f7 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorStop.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestCoprocessorStop.java @@ -27,6 +27,7 @@ import org.apache.hadoop.hbase.*; import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -39,7 +40,7 @@ import static org.junit.Assert.assertTrue; * Tests for master and regionserver coprocessor stop method * */ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestCoprocessorStop { private static final Log LOG = LogFactory.getLog(TestCoprocessorStop.class); private static HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestDoubleColumnInterpreter.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestDoubleColumnInterpreter.java index 3ddda46..8669a6c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestDoubleColumnInterpreter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestDoubleColumnInterpreter.java @@ -25,7 +25,6 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.HTable; @@ -37,6 +36,8 @@ import org.apache.hadoop.hbase.filter.Filter; import org.apache.hadoop.hbase.filter.PrefixFilter; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.DoubleMsg; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.EmptyMsg; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -46,7 +47,7 @@ import org.junit.experimental.categories.Category; /** * A test class to test DoubleColumnInterpreter for AggregateProtocol */ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestDoubleColumnInterpreter { protected static 
Log myLog = LogFactory.getLog(TestDoubleColumnInterpreter.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestHTableWrapper.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestHTableWrapper.java index 1c6bf8e..4649961 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestHTableWrapper.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestHTableWrapper.java @@ -27,7 +27,6 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Coprocessor; import org.apache.hadoop.hbase.CoprocessorEnvironment; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Append; import org.apache.hadoop.hbase.client.Delete; @@ -45,6 +44,8 @@ import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.client.coprocessor.Batch; import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel; import org.apache.hadoop.hbase.master.MasterCoprocessorHost; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.VersionInfo; import org.junit.After; @@ -60,7 +61,7 @@ import static org.junit.Assert.*; * Tests class {@link org.apache.hadoop.hbase.client.HTableWrapper} * by invoking its methods and briefly asserting the result is reasonable. */ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestHTableWrapper { private static final HBaseTestingUtility util = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithAbort.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithAbort.java index 54544dd..061068c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithAbort.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithAbort.java @@ -34,12 +34,13 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.master.MasterCoprocessorHost; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; @@ -54,7 +55,7 @@ import org.junit.experimental.categories.Category; * error message describing the set of its loaded coprocessors for crash diagnosis. * (HBASE-4014). 
*/ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestMasterCoprocessorExceptionWithAbort { public static class MasterTracker extends ZooKeeperNodeTracker { @@ -197,7 +198,7 @@ public class TestMasterCoprocessorExceptionWithAbort { // Test (part of the) output that should have been printed by master when it aborts: // (namely the part that shows the set of loaded coprocessors). // In this test, there is only a single coprocessor (BuggyMasterObserver). - assertTrue(master.getLoadedCoprocessors(). + assertTrue(HMaster.getLoadedCoprocessors(). contains(TestMasterCoprocessorExceptionWithAbort.BuggyMasterObserver.class.getName())); CreateTableThread createTableThread = new CreateTableThread(UTIL); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithRemove.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithRemove.java index 08d1131..5048c73 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithRemove.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithRemove.java @@ -32,12 +32,13 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.master.MasterCoprocessorHost; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; @@ -53,7 +54,7 @@ import org.junit.experimental.categories.Category; * back to the client. * (HBASE-4014). 
*/ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestMasterCoprocessorExceptionWithRemove { public static class MasterTracker extends ZooKeeperNodeTracker { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java index 419d7f4..3f5fe9c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java @@ -39,7 +39,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.ServerName; @@ -56,7 +55,10 @@ import org.apache.hadoop.hbase.protobuf.RequestConverter; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableDescriptorsRequest; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.GetTableNamesRequest; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Threads; import org.junit.AfterClass; @@ -68,7 +70,7 @@ import org.junit.experimental.categories.Category; * Tests invocation of the {@link org.apache.hadoop.hbase.coprocessor.MasterObserver} * interface hooks at all appropriate times during normal HMaster operations. 
*/ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestMasterObserver { private static final Log LOG = LogFactory.getLog(TestMasterObserver.class); @@ -125,6 +127,8 @@ public class TestMasterObserver { private boolean stopCalled; private boolean preSnapshotCalled; private boolean postSnapshotCalled; + private boolean preListSnapshotCalled; + private boolean postListSnapshotCalled; private boolean preCloneSnapshotCalled; private boolean postCloneSnapshotCalled; private boolean preRestoreSnapshotCalled; @@ -201,6 +205,8 @@ public class TestMasterObserver { postBalanceSwitchCalled = false; preSnapshotCalled = false; postSnapshotCalled = false; + preListSnapshotCalled = false; + postListSnapshotCalled = false; preCloneSnapshotCalled = false; postCloneSnapshotCalled = false; preRestoreSnapshotCalled = false; @@ -757,6 +763,22 @@ public class TestMasterObserver { } @Override + public void preListSnapshot(final ObserverContext ctx, + final SnapshotDescription snapshot) throws IOException { + preListSnapshotCalled = true; + } + + @Override + public void postListSnapshot(final ObserverContext ctx, + final SnapshotDescription snapshot) throws IOException { + postListSnapshotCalled = true; + } + + public boolean wasListSnapshotCalled() { + return preListSnapshotCalled && postListSnapshotCalled; + } + + @Override public void preCloneSnapshot(final ObserverContext ctx, final SnapshotDescription snapshot, final HTableDescriptor hTableDescriptor) throws IOException { @@ -1037,18 +1059,8 @@ public class TestMasterObserver { @Override public void preGetTableDescriptors(ObserverContext ctx, - List tableNamesList, List descriptors) throws IOException { - } - - @Override - public void postGetTableDescriptors(ObserverContext ctx, - List descriptors) throws IOException { - } - - @Override - public void preGetTableDescriptors(ObserverContext ctx, - List tableNamesList, List descriptors, - String regex) throws IOException { + List tableNamesList, List descriptors, String regex) + throws IOException { preGetTableDescriptorsCalled = true; } @@ -1088,6 +1100,56 @@ public class TestMasterObserver { public void postTableFlush(ObserverContext ctx, TableName tableName) throws IOException { } + + @Override + public void preSetUserQuota(final ObserverContext ctx, + final String userName, final Quotas quotas) throws IOException { + } + + @Override + public void postSetUserQuota(final ObserverContext ctx, + final String userName, final Quotas quotas) throws IOException { + } + + @Override + public void preSetUserQuota(final ObserverContext ctx, + final String userName, final TableName tableName, final Quotas quotas) throws IOException { + } + + @Override + public void postSetUserQuota(final ObserverContext ctx, + final String userName, final TableName tableName, final Quotas quotas) throws IOException { + } + + @Override + public void preSetUserQuota(final ObserverContext ctx, + final String userName, final String namespace, final Quotas quotas) throws IOException { + } + + @Override + public void postSetUserQuota(final ObserverContext ctx, + final String userName, final String namespace, final Quotas quotas) throws IOException { + } + + @Override + public void preSetTableQuota(final ObserverContext ctx, + final TableName tableName, final Quotas quotas) throws IOException { + } + + @Override + public void postSetTableQuota(final ObserverContext ctx, + final TableName tableName, final Quotas quotas) throws IOException { + } + + @Override + public void 
preSetNamespaceQuota(final ObserverContext ctx, + final String namespace, final Quotas quotas) throws IOException { + } + + @Override + public void postSetNamespaceQuota(final ObserverContext ctx, + final String namespace, final Quotas quotas) throws IOException { + } } private static HBaseTestingUtility UTIL = new HBaseTestingUtility(); @@ -1333,6 +1395,11 @@ public class TestMasterObserver { assertTrue("Coprocessor should have been called on snapshot", cp.wasSnapshotCalled()); + //Test list operation + admin.listSnapshots(); + assertTrue("Coprocessor should have been called on snapshot list", + cp.wasListSnapshotCalled()); + // Test clone operation admin.cloneSnapshot(TEST_SNAPSHOT, TEST_CLONE); assertTrue("Coprocessor should have been called on snapshot clone", diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestOpenTableInCoprocessor.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestOpenTableInCoprocessor.java index fc60c80..57db176 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestOpenTableInCoprocessor.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestOpenTableInCoprocessor.java @@ -31,7 +31,6 @@ import java.util.concurrent.TimeUnit; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Durability; @@ -42,6 +41,8 @@ import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Threads; import org.junit.After; import org.junit.AfterClass; @@ -52,7 +53,7 @@ import org.junit.experimental.categories.Category; /** * Test that a coprocessor can open a connection and write to another table, inside a hook. 
*/ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestOpenTableInCoprocessor { private static final TableName otherTable = TableName.valueOf("otherTable"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverBypass.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverBypass.java index 4da317b..3e41859 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverBypass.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverBypass.java @@ -29,7 +29,6 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Delete; @@ -40,6 +39,8 @@ import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.EnvironmentEdgeManagerTestHelper; @@ -50,7 +51,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestRegionObserverBypass { private static HBaseTestingUtility util; private static final TableName tableName = TableName.valueOf("test"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverInterface.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverInterface.java index 388a15f..882aece 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverInterface.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverInterface.java @@ -42,7 +42,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; @@ -71,6 +70,8 @@ import org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost; import org.apache.hadoop.hbase.regionserver.ScanType; import org.apache.hadoop.hbase.regionserver.Store; import org.apache.hadoop.hbase.regionserver.StoreFile; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.JVMClusterUtil; @@ -80,7 +81,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestRegionObserverInterface { static final Log LOG = 
LogFactory.getLog(TestRegionObserverInterface.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java index ea1b660..76e3209 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java @@ -39,7 +39,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Get; @@ -62,11 +61,13 @@ import org.apache.hadoop.hbase.regionserver.Store; import org.apache.hadoop.hbase.regionserver.StoreScanner; import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext; import org.apache.hadoop.hbase.wal.WAL; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestRegionObserverScannerOpenHook { private static HBaseTestingUtility UTIL = new HBaseTestingUtility(); static final Path DIR = UTIL.getDataTestDir(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverStacking.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverStacking.java index 53c234e..126a2d2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverStacking.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverStacking.java @@ -31,17 +31,18 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({CoprocessorTests.class, SmallTests.class}) public class TestRegionObserverStacking extends TestCase { private static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorEndpoint.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorEndpoint.java index ef75040..df85004 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorEndpoint.java +++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorEndpoint.java @@ -25,9 +25,9 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Coprocessor; import org.apache.hadoop.hbase.CoprocessorEnvironment; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.client.HBaseAdmin; +import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.coprocessor.protobuf.generated.DummyRegionServerEndpointProtos; import org.apache.hadoop.hbase.coprocessor.protobuf.generated.DummyRegionServerEndpointProtos.DummyRequest; import org.apache.hadoop.hbase.coprocessor.protobuf.generated.DummyRegionServerEndpointProtos.DummyResponse; @@ -35,6 +35,8 @@ import org.apache.hadoop.hbase.coprocessor.protobuf.generated.DummyRegionServerE import org.apache.hadoop.hbase.ipc.BlockingRpcCallback; import org.apache.hadoop.hbase.ipc.ServerRpcController; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; @@ -43,7 +45,7 @@ import com.google.protobuf.RpcCallback; import com.google.protobuf.RpcController; import com.google.protobuf.Service; -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestRegionServerCoprocessorEndpoint { private static HBaseTestingUtility TEST_UTIL = null; private static Configuration CONF = null; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorExceptionWithAbort.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorExceptionWithAbort.java index b6bfa1a..469dd4e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorExceptionWithAbort.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorExceptionWithAbort.java @@ -30,6 +30,7 @@ import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; @@ -45,7 +46,7 @@ import static org.junit.Assert.*; * error message describing the set of its loaded coprocessors for crash * diagnosis. (HBASE-4014). 
*/ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestRegionServerCoprocessorExceptionWithAbort { static final Log LOG = LogFactory.getLog(TestRegionServerCoprocessorExceptionWithAbort.class); private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorExceptionWithRemove.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorExceptionWithRemove.java index b2de6d2..af1cd59 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorExceptionWithRemove.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerCoprocessorExceptionWithRemove.java @@ -27,6 +27,7 @@ import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; @@ -44,7 +45,7 @@ import static org.junit.Assert.*; * back to the client. * (HBASE-4014). */ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestRegionServerCoprocessorExceptionWithRemove { public static class BuggyRegionObserver extends SimpleRegionObserver { @SuppressWarnings("null") diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerObserver.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerObserver.java index 638321c..0c30bb2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerObserver.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionServerObserver.java @@ -32,7 +32,6 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.MetaTableAccessor; @@ -43,6 +42,8 @@ import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.RegionMergeTransaction; import org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -51,7 +52,7 @@ import org.junit.experimental.categories.Category; * Tests invocation of the {@link org.apache.hadoop.hbase.coprocessor.RegionServerObserver} * interface hooks at all appropriate times during normal HMaster operations. 
*/ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestRegionServerObserver { private static final Log LOG = LogFactory.getLog(TestRegionServerObserver.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRowProcessorEndpoint.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRowProcessorEndpoint.java index 29e96bb..2136c3c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRowProcessorEndpoint.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRowProcessorEndpoint.java @@ -35,6 +35,8 @@ import java.util.concurrent.atomic.AtomicLong; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; @@ -42,7 +44,6 @@ import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.IsolationLevel; @@ -81,7 +82,7 @@ import org.apache.commons.logging.LogFactory; * Verifies ProcessEndpoint works. * The tested RowProcessor performs two scans and a read-modify-write. */ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestRowProcessorEndpoint { static final Log LOG = LogFactory.getLog(TestRowProcessorEndpoint.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestWALObserver.java hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestWALObserver.java index aee1b1f..cdcdeed 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestWALObserver.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestWALObserver.java @@ -45,7 +45,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.regionserver.HRegion; @@ -58,6 +57,8 @@ import org.apache.hadoop.hbase.wal.WALFactory; import org.apache.hadoop.hbase.wal.WALKey; import org.apache.hadoop.hbase.wal.WALSplitter; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.CoprocessorTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdge; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; @@ -76,7 +77,7 @@ import org.junit.experimental.categories.Category; * {@link org.apache.hadoop.hbase.coprocessor.MasterObserver} interface hooks at * all appropriate times during normal HMaster operations. 
*/ -@Category(MediumTests.class) +@Category({CoprocessorTests.class, MediumTests.class}) public class TestWALObserver { private static final Log LOG = LogFactory.getLog(TestWALObserver.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/errorhandling/TestForeignExceptionDispatcher.java hbase-server/src/test/java/org/apache/hadoop/hbase/errorhandling/TestForeignExceptionDispatcher.java index 812d81c..229b170 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/errorhandling/TestForeignExceptionDispatcher.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/errorhandling/TestForeignExceptionDispatcher.java @@ -22,6 +22,7 @@ import static org.junit.Assert.fail; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -31,7 +32,7 @@ import org.mockito.Mockito; /** * Test that we propagate errors through a dispatcher exactly once via different failure * injection mechanisms. */ -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestForeignExceptionDispatcher { private static final Log LOG = LogFactory.getLog(TestForeignExceptionDispatcher.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/errorhandling/TestForeignExceptionSerialization.java hbase-server/src/test/java/org/apache/hadoop/hbase/errorhandling/TestForeignExceptionSerialization.java index 791e0b1..f893555 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/errorhandling/TestForeignExceptionSerialization.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/errorhandling/TestForeignExceptionSerialization.java @@ -22,6 +22,7 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertTrue; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -31,7 +32,7 @@ import com.google.protobuf.InvalidProtocolBufferException; /** * Test that we correctly serialize exceptions from a remote source */ -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestForeignExceptionSerialization { private static final String srcName = "someNode"; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/errorhandling/TestTimeoutExceptionInjector.java hbase-server/src/test/java/org/apache/hadoop/hbase/errorhandling/TestTimeoutExceptionInjector.java index a835e9e..49f6164 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/errorhandling/TestTimeoutExceptionInjector.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/errorhandling/TestTimeoutExceptionInjector.java @@ -21,6 +21,7 @@ import static org.junit.Assert.fail; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -29,7 +30,7 @@ import org.mockito.Mockito; /** * Test the {@link TimeoutExceptionInjector} to ensure we fulfill contracts */ -@Category(SmallTests.class) 
+@Category({MasterTests.class, SmallTests.class}) public class TestTimeoutExceptionInjector { private static final Log LOG = LogFactory.getLog(TestTimeoutExceptionInjector.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/executor/TestExecutorService.java hbase-server/src/test/java/org/apache/hadoop/hbase/executor/TestExecutorService.java index acb7ecf..0561ac4 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/executor/TestExecutorService.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/executor/TestExecutorService.java @@ -32,13 +32,14 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.executor.ExecutorService.Executor; import org.apache.hadoop.hbase.executor.ExecutorService.ExecutorStatus; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; import static org.mockito.Mockito.*; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestExecutorService { private static final Log LOG = LogFactory.getLog(TestExecutorService.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/FilterAllFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/FilterAllFilter.java index 27b6590..a104def 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/FilterAllFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/FilterAllFilter.java @@ -32,13 +32,6 @@ public class FilterAllFilter extends FilterBase { return ReturnCode.SKIP; } - // Override here explicitly as the method in super class FilterBase might do a KeyValue recreate. - // See HBASE-12068 - @Override - public Cell transformCell(Cell v) { - return v; - } - @Override public boolean hasFilterRow() { return true; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/FilterTestingCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/FilterTestingCluster.java index 4f94599..76290fb 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/FilterTestingCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/FilterTestingCluster.java @@ -43,6 +43,7 @@ import org.apache.hadoop.hbase.client.ScannerCallable; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.ipc.AbstractRpcClient; import org.apache.hadoop.hbase.ipc.RpcServer; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.log4j.Level; @@ -54,7 +55,7 @@ import org.junit.experimental.categories.Category; * By using this class as the super class of a set of tests you will have a HBase testing * cluster available that is very suitable for writing tests for scanning and filtering against. 
*/ -@Category({MediumTests.class}) +@Category({FilterTests.class, MediumTests.class}) public class FilterTestingCluster { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private static Configuration conf = null; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestBitComparator.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestBitComparator.java index 8018371..21414f0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestBitComparator.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestBitComparator.java @@ -15,6 +15,7 @@ */ package org.apache.hadoop.hbase.filter; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -24,7 +25,7 @@ import static org.junit.Assert.assertEquals; /** * Tests for the bit comparator */ -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestBitComparator { private static byte[] zeros = new byte[]{0, 0, 0, 0, 0, 0}; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPaginationFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPaginationFilter.java index f4accb3..4d0329b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPaginationFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPaginationFilter.java @@ -21,9 +21,10 @@ package org.apache.hadoop.hbase.filter; import static org.junit.Assert.assertTrue; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.FilterProtos; +import org.apache.hadoop.hbase.testclassification.FilterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Before; import org.junit.Test; @@ -33,7 +34,7 @@ import org.junit.experimental.categories.Category; * Test for the ColumnPaginationFilter, used mainly to test the successful serialization of the filter. 
* More test functionality can be found within {@link org.apache.hadoop.hbase.filter.TestFilter#testColumnPaginationFilter()} */ -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestColumnPaginationFilter { private static final byte[] ROW = Bytes.toBytes("row_1_test"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPrefixFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPrefixFilter.java index bc66d7d..0fbad42 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPrefixFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnPrefixFilter.java @@ -33,12 +33,13 @@ import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.InternalScanner; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestColumnPrefixFilter { private final static HBaseTestingUtility TEST_UTIL = new diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnRangeFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnRangeFilter.java index 88b4308..1c81adf 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnRangeFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestColumnRangeFilter.java @@ -35,6 +35,7 @@ import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; @@ -116,7 +117,7 @@ class StringRange { } -@Category(MediumTests.class) +@Category({FilterTests.class, MediumTests.class}) public class TestColumnRangeFilter { private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestComparatorSerialization.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestComparatorSerialization.java index 693f041..223416f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestComparatorSerialization.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestComparatorSerialization.java @@ -24,13 +24,14 @@ import static org.junit.Assert.assertTrue; import java.util.regex.Pattern; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestComparatorSerialization { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestDependentColumnFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestDependentColumnFilter.java index bd1f7ab..06e0260 100644 --- 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestDependentColumnFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestDependentColumnFilter.java @@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; import org.apache.hadoop.hbase.filter.Filter.ReturnCode; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.InternalScanner; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; @@ -45,7 +46,7 @@ import static org.junit.Assert.assertTrue; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestDependentColumnFilter { private final Log LOG = LogFactory.getLog(this.getClass()); private static final byte[][] ROWS = { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilter.java index ed6e6de..3396587 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilter.java @@ -38,7 +38,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Durability; @@ -50,6 +49,8 @@ import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.InternalScanner; import org.apache.hadoop.hbase.regionserver.RegionScanner; import org.apache.hadoop.hbase.wal.WAL; +import org.apache.hadoop.hbase.testclassification.FilterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; import org.junit.Assert; @@ -62,7 +63,7 @@ import com.google.common.base.Throwables; /** * Test filters at the HRegion doorstep. 
*/ -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestFilter { private final static Log LOG = LogFactory.getLog(TestFilter.class); private HRegion region; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java index e719a2a..759435b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterList.java @@ -18,26 +18,26 @@ */ package org.apache.hadoop.hbase.filter; +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertNull; +import static org.junit.Assert.assertTrue; + import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; import java.util.List; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.assertNull; - import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; import org.apache.hadoop.hbase.filter.Filter.ReturnCode; import org.apache.hadoop.hbase.filter.FilterList.Operator; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.testclassification.FilterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -48,7 +48,7 @@ import com.google.common.collect.Lists; * Tests filter sets * */ -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestFilterList { static final int MAX_PAGES = 2; static final char FIRST_CHAR = 'a'; @@ -445,7 +445,7 @@ public class TestFilterList { } @Override - public Cell getNextCellHint(Cell currentKV) { + public Cell getNextCellHint(Cell cell) { return new KeyValue(Bytes.toBytes(Long.MAX_VALUE), null, null); } @@ -458,22 +458,22 @@ public class TestFilterList { // Should take the min if given two hints FilterList filterList = new FilterList(Operator.MUST_PASS_ONE, Arrays.asList(new Filter [] { filterMinHint, filterMaxHint } )); - assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextKeyHint(null), + assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextCellHint(null), minKeyValue)); // Should have no hint if any filter has no hint filterList = new FilterList(Operator.MUST_PASS_ONE, Arrays.asList( new Filter [] { filterMinHint, filterMaxHint, filterNoHint } )); - assertNull(filterList.getNextKeyHint(null)); + assertNull(filterList.getNextCellHint(null)); filterList = new FilterList(Operator.MUST_PASS_ONE, Arrays.asList(new Filter [] { filterNoHint, filterMaxHint } )); - assertNull(filterList.getNextKeyHint(null)); + assertNull(filterList.getNextCellHint(null)); // Should give max hint if its the only one filterList = new FilterList(Operator.MUST_PASS_ONE, Arrays.asList(new Filter [] { filterMaxHint, filterMaxHint } )); - assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextKeyHint(null), + assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextCellHint(null), 
maxKeyValue)); // MUST PASS ALL @@ -482,13 +482,13 @@ public class TestFilterList { filterList = new FilterList(Operator.MUST_PASS_ALL, Arrays.asList(new Filter [] { filterMinHint, filterMaxHint } )); filterList.filterKeyValue(null); - assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextKeyHint(null), + assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextCellHint(null), minKeyValue)); filterList = new FilterList(Operator.MUST_PASS_ALL, Arrays.asList(new Filter [] { filterMaxHint, filterMinHint } )); filterList.filterKeyValue(null); - assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextKeyHint(null), + assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextCellHint(null), maxKeyValue)); // Should have first hint even if a filter has no hint @@ -496,17 +496,17 @@ public class TestFilterList { Arrays.asList( new Filter [] { filterNoHint, filterMinHint, filterMaxHint } )); filterList.filterKeyValue(null); - assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextKeyHint(null), + assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextCellHint(null), minKeyValue)); filterList = new FilterList(Operator.MUST_PASS_ALL, Arrays.asList(new Filter [] { filterNoHint, filterMaxHint } )); filterList.filterKeyValue(null); - assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextKeyHint(null), + assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextCellHint(null), maxKeyValue)); filterList = new FilterList(Operator.MUST_PASS_ALL, Arrays.asList(new Filter [] { filterNoHint, filterMinHint } )); filterList.filterKeyValue(null); - assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextKeyHint(null), + assertEquals(0, KeyValue.COMPARATOR.compare(filterList.getNextCellHint(null), minKeyValue)); } @@ -539,12 +539,12 @@ public class TestFilterList { // Value for fam:qual1 should be stripped: assertEquals(Filter.ReturnCode.INCLUDE, flist.filterKeyValue(kvQual1)); - final KeyValue transformedQual1 = KeyValueUtil.ensureKeyValue(flist.transform(kvQual1)); + final KeyValue transformedQual1 = KeyValueUtil.ensureKeyValue(flist.transformCell(kvQual1)); assertEquals(0, transformedQual1.getValue().length); // Value for fam:qual2 should not be stripped: assertEquals(Filter.ReturnCode.INCLUDE, flist.filterKeyValue(kvQual2)); - final KeyValue transformedQual2 = KeyValueUtil.ensureKeyValue(flist.transform(kvQual2)); + final KeyValue transformedQual2 = KeyValueUtil.ensureKeyValue(flist.transformCell(kvQual2)); assertEquals("value", Bytes.toString(transformedQual2.getValue())); // Other keys should be skipped: diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterSerialization.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterSerialization.java index 9afbc5d..08ce3d5 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterSerialization.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterSerialization.java @@ -24,6 +24,7 @@ import static org.junit.Assert.assertTrue; import java.util.LinkedList; import java.util.TreeSet; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; @@ -32,7 +33,7 @@ import org.apache.hadoop.hbase.util.Pair; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) 
public class TestFilterSerialization { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterWithScanLimits.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterWithScanLimits.java index de428a8..142b15a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterWithScanLimits.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterWithScanLimits.java @@ -30,12 +30,13 @@ import java.util.List; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.Cell; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.FilterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.BeforeClass; import org.junit.Test; @@ -44,7 +45,7 @@ import org.junit.experimental.categories.Category; /** * Test if Filter is incompatible with scan-limits */ -@Category(MediumTests.class) +@Category({FilterTests.class, MediumTests.class}) public class TestFilterWithScanLimits extends FilterTestingCluster { private static final Log LOG = LogFactory .getLog(TestFilterWithScanLimits.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterWrapper.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterWrapper.java index 93bf75c..1cffe1d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterWrapper.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilterWrapper.java @@ -43,6 +43,8 @@ import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.FilterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; @@ -50,14 +52,13 @@ import org.junit.BeforeClass; import org.junit.Test; import static org.junit.Assert.*; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.junit.experimental.categories.Category; /** * Test if the FilterWrapper retains the same semantics defined in the * {@link org.apache.hadoop.hbase.filter.Filter} */ -@Category(MediumTests.class) +@Category({FilterTests.class, MediumTests.class}) public class TestFilterWrapper { private static final Log LOG = LogFactory.getLog(TestFilterWrapper.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFirstKeyValueMatchingQualifiersFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFirstKeyValueMatchingQualifiersFilter.java index b51adb9..fb384a7 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFirstKeyValueMatchingQualifiersFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFirstKeyValueMatchingQualifiersFilter.java @@ -23,11 +23,12 @@ import java.util.TreeSet; import junit.framework.TestCase; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import 
org.apache.hadoop.hbase.util.Bytes; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestFirstKeyValueMatchingQualifiersFilter extends TestCase { private static final byte[] ROW = Bytes.toBytes("test"); private static final byte[] COLUMN_FAMILY = Bytes.toBytes("test"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowAndColumnRangeFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowAndColumnRangeFilter.java index 51167fa..565c7db 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowAndColumnRangeFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowAndColumnRangeFilter.java @@ -27,7 +27,6 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Put; @@ -35,6 +34,8 @@ import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.FilterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; import org.junit.After; @@ -48,7 +49,7 @@ import com.google.common.collect.Lists; /** */ -@Category(MediumTests.class) +@Category({FilterTests.class, MediumTests.class}) public class TestFuzzyRowAndColumnRangeFilter { private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private final Log LOG = LogFactory.getLog(this.getClass()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowFilter.java index 395c4ad..3ec1351 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowFilter.java @@ -17,13 +17,14 @@ */ package org.apache.hadoop.hbase.filter; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestFuzzyRowFilter { @Test public void testSatisfiesForward() { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInclusiveStopFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInclusiveStopFilter.java index e5e7317..e527ca8 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInclusiveStopFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInclusiveStopFilter.java @@ -18,6 +18,7 @@ */ package org.apache.hadoop.hbase.filter; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; @@ -26,12 +27,11 @@ import org.junit.Test; import org.junit.experimental.categories.Category; 
import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertTrue; /** * Tests the inclusive stop row filter */ -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestInclusiveStopFilter { private final byte [] STOP_ROW = Bytes.toBytes("stop_row"); private final byte [] GOOD_ROW = Bytes.toBytes("good_row"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java index 946f948..70ef51f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java @@ -28,7 +28,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Put; @@ -36,6 +35,8 @@ import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.InternalScanner; import org.apache.hadoop.hbase.wal.WAL; +import org.apache.hadoop.hbase.testclassification.FilterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; import org.junit.Assert; @@ -47,7 +48,7 @@ import org.junit.experimental.categories.Category; * Test the invocation logic of the filters. A filter must be invoked only for * the columns that are requested for. 
*/ -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestInvocationRecordFilter { private static final byte[] TABLE_NAME_BYTES = Bytes diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestMultipleColumnPrefixFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestMultipleColumnPrefixFilter.java index 90d6991..0db5ecf 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestMultipleColumnPrefixFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestMultipleColumnPrefixFilter.java @@ -33,12 +33,13 @@ import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.InternalScanner; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestMultipleColumnPrefixFilter { private final static HBaseTestingUtility TEST_UTIL = new diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestNullComparator.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestNullComparator.java index 2263979..2f13da1 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestNullComparator.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestNullComparator.java @@ -16,12 +16,13 @@ package org.apache.hadoop.hbase.filter; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestNullComparator { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestPageFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestPageFilter.java index 087b148..139bf6f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestPageFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestPageFilter.java @@ -24,6 +24,7 @@ import static org.junit.Assert.assertTrue; import java.io.IOException; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -31,7 +32,7 @@ import org.junit.experimental.categories.Category; /** * Tests for the page filter */ -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestPageFilter { static final int ROW_LIMIT = 3; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java index 9a4b386..4b2df33 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestParseFilter.java @@ -24,6 +24,7 @@ import java.io.IOException; import java.util.ArrayList; import java.util.List; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import 
org.apache.hadoop.hbase.util.Bytes; import org.junit.After; @@ -36,7 +37,7 @@ import org.junit.experimental.categories.Category; * It tests the entire work flow from when a string is given by the user * and how it is parsed to construct the corresponding Filter object */ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestParseFilter { ParseFilter f; @@ -614,7 +615,7 @@ public class TestParseFilter { @Test public void testUnescapedQuote3 () throws IOException { - String filterString = " InclusiveStopFilter ('''')"; + String filterString = " InclusiveStopFilter ('''')"; InclusiveStopFilter inclusiveStopFilter = doTestFilter(filterString, InclusiveStopFilter.class); byte [] stopRowKey = inclusiveStopFilter.getStopRowKey(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestPrefixFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestPrefixFilter.java index a9f218b..02a55ba 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestPrefixFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestPrefixFilter.java @@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.filter; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Before; @@ -27,7 +28,7 @@ import org.junit.experimental.categories.Category; import static org.junit.Assert.*; -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestPrefixFilter { Filter mainFilter; static final char FIRST_CHAR = 'a'; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestRandomRowFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestRandomRowFilter.java index 2351498..8effca5 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestRandomRowFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestRandomRowFilter.java @@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.filter; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Before; @@ -27,7 +28,7 @@ import org.junit.experimental.categories.Category; import static org.junit.Assert.*; -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestRandomRowFilter { protected RandomRowFilter quarterChanceFilter; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestRegexComparator.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestRegexComparator.java index a8ca243..9dbe432 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestRegexComparator.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestRegexComparator.java @@ -22,13 +22,14 @@ import static org.junit.Assert.*; import java.util.regex.Pattern; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.filter.RegexStringComparator.EngineType; +import org.apache.hadoop.hbase.testclassification.FilterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestRegexComparator { @Test diff --git 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestScanRowPrefix.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestScanRowPrefix.java index 2a0e085..100f26d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestScanRowPrefix.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestScanRowPrefix.java @@ -27,6 +27,7 @@ import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.FilterTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.junit.Assert; import org.junit.Test; @@ -40,7 +41,7 @@ import java.util.List; /** * Test if Scan.setRowPrefixFilter works as intended. */ -@Category({MediumTests.class}) +@Category({FilterTests.class, MediumTests.class}) public class TestScanRowPrefix extends FilterTestingCluster { private static final Log LOG = LogFactory .getLog(TestScanRowPrefix.class); @@ -80,7 +81,7 @@ public class TestScanRowPrefix extends FilterTestingCluster { byte[] prefix0 = {}; List expected0 = new ArrayList<>(16); expected0.addAll(Arrays.asList(rowIds)); // Expect all rows - + byte[] prefix1 = {(byte) 0x12, (byte) 0x23}; List expected1 = new ArrayList<>(16); expected1.add(rowIds[2]); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueExcludeFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueExcludeFilter.java index 170fbe2..7aa298c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueExcludeFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueExcludeFilter.java @@ -20,8 +20,9 @@ package org.apache.hadoop.hbase.filter; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; +import org.apache.hadoop.hbase.testclassification.FilterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -37,7 +38,7 @@ import java.util.ArrayList; * tested. That is, method filterKeyValue(KeyValue). 
* */ -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestSingleColumnValueExcludeFilter { private static final byte[] ROW = Bytes.toBytes("test"); private static final byte[] COLUMN_FAMILY = Bytes.toBytes("test"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java index 18f3c19..b4e364d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestSingleColumnValueFilter.java @@ -25,8 +25,9 @@ import java.io.IOException; import java.util.regex.Pattern; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; +import org.apache.hadoop.hbase.testclassification.FilterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Before; import org.junit.Test; @@ -35,7 +36,7 @@ import org.junit.experimental.categories.Category; /** * Tests the value filter */ -@Category(SmallTests.class) +@Category({FilterTests.class, SmallTests.class}) public class TestSingleColumnValueFilter { private static final byte[] ROW = Bytes.toBytes("test"); private static final byte[] COLUMN_FAMILY = Bytes.toBytes("test"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java index 3379dff..db751b2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/fs/TestBlockReorder.java @@ -41,7 +41,6 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.ipc.RemoteException; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Put; @@ -50,6 +49,8 @@ import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener; import org.apache.hadoop.hbase.wal.DefaultWALProvider; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hdfs.DFSClient; import org.apache.hadoop.hdfs.DistributedFileSystem; @@ -71,7 +72,7 @@ import org.junit.experimental.categories.Category; /** * Tests for the hdfs fix from HBASE-6435. */ -@Category(LargeTests.class) +@Category({MiscTests.class, LargeTests.class}) public class TestBlockReorder { private static final Log LOG = LogFactory.getLog(TestBlockReorder.class); @@ -252,8 +253,7 @@ public class TestBlockReorder { MiniHBaseCluster hbm = htu.startMiniHBaseCluster(1, 1); hbm.waitForActiveAndReadyMaster(); - hbm.getRegionServer(0).waitForServerOnline(); - HRegionServer targetRs = hbm.getRegionServer(0); + HRegionServer targetRs = hbm.getMaster(); // We want to have a datanode with the same name as the region server, so // we're going to get the regionservername, and start a new datanode with this name. 
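Note on the recurring change in the hunks above and below: each test class keeps its existing size category (SmallTests, MediumTests, LargeTests) and gains a component-level category (FilterTests, MiscTests, IOTests, RegionServerTests) in the same @Category annotation. The following is a minimal illustrative sketch of the resulting annotation shape only; the class name TestExampleFilterCategory and its test body are hypothetical and not part of this patch.

  package org.apache.hadoop.hbase.filter;

  import org.apache.hadoop.hbase.testclassification.FilterTests;
  import org.apache.hadoop.hbase.testclassification.SmallTests;
  import org.junit.Test;
  import org.junit.experimental.categories.Category;

  // Component category (FilterTests) alongside the size category (SmallTests),
  // mirroring the pattern applied to the test classes in this patch.
  @Category({FilterTests.class, SmallTests.class})
  public class TestExampleFilterCategory {
    @Test
    public void testPlaceholder() {
      // stub body; the real tests in this package exercise filter behaviour
    }
  }

Carrying both categories lets a build slice the suite either by component or by expected runtime, without touching the tests again.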
diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestGlobalFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestGlobalFilter.java index 0effaa8..b06dea1 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestGlobalFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestGlobalFilter.java @@ -36,12 +36,13 @@ import javax.servlet.http.HttpServletRequest; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.net.NetUtils; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestGlobalFilter extends HttpServerFunctionalTest { static final Log LOG = LogFactory.getLog(HttpServer.class); static final Set RECORDS = new TreeSet(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHtmlQuoting.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHtmlQuoting.java index fd378cd..82fbe04 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHtmlQuoting.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHtmlQuoting.java @@ -21,12 +21,13 @@ import static org.junit.Assert.*; import javax.servlet.http.HttpServletRequest; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestHtmlQuoting { @Test public void testNeedsQuoting() throws Exception { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpRequestLog.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpRequestLog.java index de24aa9..8fea254 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpRequestLog.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpRequestLog.java @@ -17,6 +17,7 @@ */ package org.apache.hadoop.hbase.http; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.log4j.Logger; import org.junit.Test; @@ -28,7 +29,7 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertNull; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestHttpRequestLog { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpRequestLogAppender.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpRequestLogAppender.java index 88a3a98..a17b9e9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpRequestLogAppender.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpRequestLogAppender.java @@ -17,13 +17,14 @@ */ package org.apache.hadoop.hbase.http; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; import static org.junit.Assert.assertEquals; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public 
class TestHttpRequestLogAppender { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpServer.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpServer.java index bec59df..ffb924c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpServer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpServer.java @@ -54,6 +54,7 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeys; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.http.HttpServer.QuotingInputFilter.RequestQuoter; import org.apache.hadoop.hbase.http.resource.JerseyResource; @@ -72,7 +73,7 @@ import org.mockito.internal.util.reflection.Whitebox; import org.mortbay.jetty.Connector; import org.mortbay.util.ajax.JSON; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestHttpServer extends HttpServerFunctionalTest { static final Log LOG = LogFactory.getLog(TestHttpServer.class); private static HttpServer server; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpServerLifecycle.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpServerLifecycle.java index b71db0e..2fb51ea 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpServerLifecycle.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpServerLifecycle.java @@ -17,11 +17,13 @@ */ package org.apache.hadoop.hbase.http; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.log4j.Logger; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestHttpServerLifecycle extends HttpServerFunctionalTest { /** diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpServerWebapps.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpServerWebapps.java index 51fd845..db394a8 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpServerWebapps.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestHttpServerWebapps.java @@ -17,18 +17,19 @@ */ package org.apache.hadoop.hbase.http; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; import org.apache.commons.logging.LogFactory; import org.apache.commons.logging.Log; -import org.apache.hadoop.hbase.testclassification.SmallTests; import java.io.FileNotFoundException; /** * Test webapp loading */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestHttpServerWebapps extends HttpServerFunctionalTest { private static final Log log = LogFactory.getLog(TestHttpServerWebapps.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestPathFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestPathFilter.java index 3da9d98..5854ea2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestPathFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestPathFilter.java @@ -36,12 +36,13 @@ import 
javax.servlet.http.HttpServletRequest; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.net.NetUtils; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestPathFilter extends HttpServerFunctionalTest { static final Log LOG = LogFactory.getLog(HttpServer.class); static final Set RECORDS = new TreeSet(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestSSLHttpServer.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestSSLHttpServer.java index bdfeb68..1b79aff 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestSSLHttpServer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestSSLHttpServer.java @@ -29,6 +29,7 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileUtil; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.http.ssl.KeyStoreTestUtil; import org.apache.hadoop.io.IOUtils; @@ -44,7 +45,7 @@ import org.junit.experimental.categories.Category; * HTTPS using the created certficates and calls an echo servlet using the * corresponding HTTPS URL. */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestSSLHttpServer extends HttpServerFunctionalTest { private static final String BASEDIR = System.getProperty("test.build.dir", "target/test-dir") + "/" + TestSSLHttpServer.class.getSimpleName(); @@ -96,7 +97,7 @@ public class TestSSLHttpServer extends HttpServerFunctionalTest { @AfterClass public static void cleanup() throws Exception { - if (server != null) server.stop(); + server.stop(); FileUtil.fullyDelete(new File(BASEDIR)); KeyStoreTestUtil.cleanupSSLConfig(keystoresDir, sslConfDir); clientSslFactory.destroy(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestServletFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestServletFilter.java index 7ffaeb3..f9857e4 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestServletFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/TestServletFilter.java @@ -36,13 +36,14 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.GenericTestUtils; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.net.NetUtils; import org.junit.Ignore; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestServletFilter extends HttpServerFunctionalTest { static final Log LOG = LogFactory.getLog(HttpServer.class); static volatile String uri = null; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/conf/TestConfServlet.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/conf/TestConfServlet.java index 84103ea..0385355 100644 --- 
hbase-server/src/test/java/org/apache/hadoop/hbase/http/conf/TestConfServlet.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/conf/TestConfServlet.java @@ -27,6 +27,7 @@ import javax.xml.parsers.DocumentBuilderFactory; import junit.framework.TestCase; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -41,7 +42,7 @@ import org.xml.sax.InputSource; * Basic test case that the ConfServlet can write configuration * to its output in XML and JSON format. */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestConfServlet extends TestCase { private static final String TEST_KEY = "testconfservlet.key"; private static final String TEST_VAL = "testval"; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/jmx/TestJMXJsonServlet.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/jmx/TestJMXJsonServlet.java index dd345fb..031ddce 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/jmx/TestJMXJsonServlet.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/jmx/TestJMXJsonServlet.java @@ -23,6 +23,7 @@ import java.util.regex.Pattern; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.http.HttpServer; import org.apache.hadoop.hbase.http.HttpServerFunctionalTest; @@ -31,7 +32,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestJMXJsonServlet extends HttpServerFunctionalTest { private static final Log LOG = LogFactory.getLog(TestJMXJsonServlet.class); private static HttpServer server; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/lib/TestStaticUserWebFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/lib/TestStaticUserWebFilter.java index 62c8696..81bcbd5 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/lib/TestStaticUserWebFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/http/lib/TestStaticUserWebFilter.java @@ -28,6 +28,7 @@ import javax.servlet.http.HttpServletRequestWrapper; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeys; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.http.ServerConfigurationKeys; import org.apache.hadoop.hbase.http.lib.StaticUserWebFilter.StaticUserFilter; @@ -36,7 +37,7 @@ import org.junit.experimental.categories.Category; import org.mockito.ArgumentCaptor; import org.mockito.Mockito; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestStaticUserWebFilter { private FilterConfig mockConfig(String username) { FilterConfig mock = Mockito.mock(FilterConfig.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/http/log/TestLogLevel.java hbase-server/src/test/java/org/apache/hadoop/hbase/http/log/TestLogLevel.java index 15efb71..d7942d1 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/http/log/TestLogLevel.java +++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/http/log/TestLogLevel.java @@ -22,6 +22,7 @@ import static org.junit.Assert.assertTrue; import java.io.*; import java.net.*; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.http.HttpServer; import org.apache.hadoop.net.NetUtils; @@ -31,7 +32,7 @@ import org.apache.log4j.*; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestLogLevel { static final PrintStream out = System.out; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestFileLink.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestFileLink.java index c0d62fd..777b3cd 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestFileLink.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestFileLink.java @@ -33,6 +33,7 @@ import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hdfs.MiniDFSCluster; @@ -43,7 +44,7 @@ import org.junit.experimental.categories.Category; * Test that FileLink switches between alternate locations * when the current location moves or gets deleted. */ -@Category(MediumTests.class) +@Category({IOTests.class, MediumTests.class}) public class TestFileLink { /** * Test, on HDFS, that the FileLink is still readable diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHFileLink.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHFileLink.java index 044975d..f2b26c1 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHFileLink.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHFileLink.java @@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.io; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.regionserver.HRegion; @@ -38,7 +39,7 @@ import static org.junit.Assert.assertTrue; * Test that FileLink switches between alternate locations * when the current location moves or gets deleted. 
*/ -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestHFileLink { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java index dab5ac0..18595a8 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHalfStoreFileReader.java @@ -34,19 +34,20 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.hfile.CacheConfig; import org.apache.hadoop.hbase.io.hfile.HFile; import org.apache.hadoop.hbase.io.hfile.HFileContext; import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder; import org.apache.hadoop.hbase.io.hfile.HFileScanner; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestHalfStoreFileReader { private static HBaseTestingUtility TEST_UTIL; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java index 687f9b1..d6423e8 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java @@ -38,8 +38,8 @@ import java.util.concurrent.locks.ReentrantReadWriteLock; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.Mutation; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.io.hfile.BlockCacheKey; import org.apache.hadoop.hbase.io.hfile.LruCachedBlock; @@ -49,6 +49,8 @@ import org.apache.hadoop.hbase.regionserver.DefaultMemStore; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.HStore; import org.apache.hadoop.hbase.regionserver.TimeRangeTracker; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.ClassSize; import org.junit.BeforeClass; import org.junit.Test; @@ -60,7 +62,7 @@ import static org.junit.Assert.assertEquals; * Testing the sizing that HeapSize offers and compares to the size given by * ClassSize. 
*/ -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestHeapSize { static final Log LOG = LogFactory.getLog(TestHeapSize.class); // List of classes implementing HeapSize @@ -371,7 +373,7 @@ public class TestHeapSize { byte[] row = new byte[] { 0 }; cl = Put.class; - actual = new Put(row).MUTATION_OVERHEAD + ClassSize.align(ClassSize.ARRAY); + actual = Mutation.MUTATION_OVERHEAD + ClassSize.align(ClassSize.ARRAY); expected = ClassSize.estimateBase(cl, false); //The actual TreeMap is not included in the above calculation expected += ClassSize.align(ClassSize.TREEMAP); @@ -381,7 +383,7 @@ public class TestHeapSize { } cl = Delete.class; - actual = new Delete(row).MUTATION_OVERHEAD + ClassSize.align(ClassSize.ARRAY); + actual = Mutation.MUTATION_OVERHEAD + ClassSize.align(ClassSize.ARRAY); expected = ClassSize.estimateBase(cl, false); //The actual TreeMap is not included in the above calculation expected += ClassSize.align(ClassSize.TREEMAP); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestImmutableBytesWritable.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestImmutableBytesWritable.java index 71da577..5716197 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestImmutableBytesWritable.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestImmutableBytesWritable.java @@ -21,6 +21,7 @@ package org.apache.hadoop.hbase.io; import junit.framework.TestCase; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.experimental.categories.Category; @@ -29,7 +30,7 @@ import java.io.ByteArrayOutputStream; import java.io.DataOutputStream; import java.io.IOException; -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestImmutableBytesWritable extends TestCase { public void testHash() throws Exception { assertEquals( diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestReference.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestReference.java index 62fec53..80295ff 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestReference.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestReference.java @@ -23,6 +23,7 @@ import java.io.IOException; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -30,7 +31,7 @@ import org.junit.experimental.categories.Category; /** * Reference tests that run on local fs. 
*/ -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestReference { private final HBaseTestingUtility HTU = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java index 5d9e56a..9330cea 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestBufferedDataBlockEncoder.java @@ -19,10 +19,11 @@ package org.apache.hadoop.hbase.io.encoding; import static org.junit.Assert.assertEquals; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({IOTests.class, MediumTests.class}) public class TestBufferedDataBlockEncoder { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java index 0e0c15d..e002b8b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestChangingEncoding.java @@ -34,7 +34,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Durability; @@ -45,9 +44,10 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Threads; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; @@ -56,7 +56,7 @@ import org.junit.experimental.categories.Category; /** * Tests changing data block encoding settings of a column family. */ -@Category(LargeTests.class) +@Category({IOTests.class, LargeTests.class}) public class TestChangingEncoding { private static final Log LOG = LogFactory.getLog(TestChangingEncoding.class); static final String CF = "EncodingTestCF"; @@ -187,7 +187,7 @@ public class TestChangingEncoding { // wait for regions out of transition. Otherwise, for online // encoding change, verification phase may be flaky because // regions could be still in transition. 
- ZKAssign.blockUntilNoRIT(TEST_UTIL.getZooKeeperWatcher()); + TEST_UTIL.waitUntilNoRegionsInTransition(TIMEOUT_MS); } @Test(timeout=TIMEOUT_MS) diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java index 1f3525d..cabb67f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java @@ -35,12 +35,13 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValue.Type; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.hfile.HFileBlock.Writer.BufferGrabbingByteArrayOutputStream; import org.apache.hadoop.hbase.io.hfile.HFileContext; import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.test.RedundantKVGenerator; import org.junit.Test; @@ -53,7 +54,7 @@ import org.junit.runners.Parameterized.Parameters; * Test all of the data block encoding algorithms for correctness. Most of the * class generate data which will test different branches in code. */ -@Category(LargeTests.class) +@Category({IOTests.class, LargeTests.class}) @RunWith(Parameterized.class) public class TestDataBlockEncoders { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java index 8eed93e..e087457 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestEncodedSeekers.java @@ -29,7 +29,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Get; @@ -40,6 +39,8 @@ import org.apache.hadoop.hbase.io.hfile.HFile; import org.apache.hadoop.hbase.io.hfile.LruBlockCache; import org.apache.hadoop.hbase.regionserver.BloomType; import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Strings; import org.apache.hadoop.hbase.util.test.LoadTestKVGenerator; @@ -52,7 +53,7 @@ import org.junit.runners.Parameterized.Parameters; /** * Tests encoded seekers by loading and reading values. 
*/ -@Category(MediumTests.class) +@Category({IOTests.class, MediumTests.class}) @RunWith(Parameterized.class) public class TestEncodedSeekers { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestLoadAndSwitchEncodeOnDisk.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestLoadAndSwitchEncodeOnDisk.java index 7b98548..26183ac 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestLoadAndSwitchEncodeOnDisk.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestLoadAndSwitchEncodeOnDisk.java @@ -24,6 +24,7 @@ import java.util.NavigableMap; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.client.HBaseAdmin; @@ -43,7 +44,7 @@ import org.junit.runners.Parameterized.Parameters; /** * Uses the load tester */ -@Category(MediumTests.class) +@Category({IOTests.class, MediumTests.class}) public class TestLoadAndSwitchEncodeOnDisk extends TestMiniClusterLoadSequential { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTree.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTree.java index aca9b78..cc74498 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTree.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTree.java @@ -26,7 +26,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HTable; @@ -35,6 +34,8 @@ import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -43,7 +44,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.rules.TestName; -@Category(MediumTests.class) +@Category({IOTests.class, MediumTests.class}) public class TestPrefixTree { private static final String row4 = "a-b-B-2-1402397300-1402416535"; private static final byte[] row4_bytes = Bytes.toBytes(row4); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTreeEncoding.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTreeEncoding.java index 1736faa..ee664bd 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTreeEncoding.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTreeEncoding.java @@ -38,13 +38,14 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.codec.prefixtree.PrefixTreeCodec; 
import org.apache.hadoop.hbase.io.compress.Compression.Algorithm; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoder.EncodedSeeker; import org.apache.hadoop.hbase.io.hfile.HFileContext; import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.CollectionBackedScanner; import org.junit.Assert; @@ -59,7 +60,7 @@ import org.junit.runners.Parameterized.Parameters; * Tests scanning/seeking data with PrefixTree Encoding. */ @RunWith(Parameterized.class) -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestPrefixTreeEncoding { private static final Log LOG = LogFactory.getLog(TestPrefixTreeEncoding.class); private static final String CF = "EncodingTestCF"; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekToBlockWithEncoders.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekToBlockWithEncoders.java index ec2befe..c053449 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekToBlockWithEncoders.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekToBlockWithEncoders.java @@ -27,15 +27,16 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.hfile.HFileContext; import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestSeekToBlockWithEncoders { /** diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/KeySampler.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/KeySampler.java index f999a25..a4c1a9b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/KeySampler.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/KeySampler.java @@ -18,8 +18,8 @@ package org.apache.hadoop.hbase.io.hfile; import java.util.Random; -import org.apache.hadoop.io.BytesWritable; import org.apache.hadoop.hbase.io.hfile.RandomDistribution.DiscreteRNG; +import org.apache.hadoop.io.BytesWritable; /* *

    diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java index 3b9161c..4080249 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestBlockCacheReporting.java @@ -28,6 +28,7 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.hfile.TestCacheConfig.DataCacheEntry; import org.apache.hadoop.hbase.io.hfile.TestCacheConfig.IndexCacheEntry; @@ -38,7 +39,7 @@ import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestBlockCacheReporting { private static final Log LOG = LogFactory.getLog(TestBlockCacheReporting.class); private Configuration conf; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheConfig.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheConfig.java index 4d547c7..c5fcc3c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheConfig.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheConfig.java @@ -35,6 +35,7 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache; import org.apache.hadoop.hbase.util.Threads; @@ -50,7 +51,7 @@ import org.junit.experimental.categories.Category; // (seconds). It is large because it depends on being able to reset the global // blockcache instance which is in a global variable. Experience has it that // tests clash on the global variable if this test is run as small sized test. 
-@Category(LargeTests.class) +@Category({IOTests.class, LargeTests.class}) public class TestCacheConfig { private static final Log LOG = LogFactory.getLog(TestCacheConfig.class); private Configuration conf; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java index 0ffd004..b13c076 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java @@ -37,20 +37,23 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.fs.HFileSystem; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; +import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache; import org.apache.hadoop.hbase.regionserver.BloomType; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.StoreFile; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.BloomFilterFactory; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.ChecksumType; @@ -68,7 +71,7 @@ import org.junit.runners.Parameterized.Parameters; * types: data blocks, non-root index blocks, and Bloom filter blocks. 
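// The TestCacheOnWrite hunks that follow turn the block cache itself into a test parameter:
// besides the default cache obtained from CacheConfig, every cache-on-write combination also
// runs against an explicit on-heap LruBlockCache and a file-backed BucketCache. A minimal
// sketch of that parameterization pattern is shown here; the class name and sizes are
// illustrative, and only the CacheConfig and LruBlockCache calls mirror what the patch uses.

import static org.junit.Assert.assertNotNull;

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.BlockCache;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.LruBlockCache;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ExampleBlockCacheParameterizedTest {
  private final BlockCache cache;

  public ExampleBlockCacheParameterizedTest(BlockCache cache) {
    this.cache = cache;                                    // one cache implementation per run
  }

  @Parameters
  public static Collection<Object[]> caches() {
    Configuration conf = HBaseConfiguration.create();
    List<Object[]> params = new ArrayList<Object[]>();
    params.add(new Object[] { new CacheConfig(conf).getBlockCache() });                // default
    params.add(new Object[] { new LruBlockCache(32 * 1024 * 1024, 64 * 1024, conf) }); // on-heap LRU
    return params;
  }

  @Test
  public void cacheWasInjected() {
    assertNotNull(cache);    // every @Test method runs once per cache in the parameter list
  }
}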
*/ @RunWith(Parameterized.class) -@Category(MediumTests.class) +@Category({IOTests.class, MediumTests.class}) public class TestCacheOnWrite { private static final Log LOG = LogFactory.getLog(TestCacheOnWrite.class); @@ -158,27 +161,48 @@ public class TestCacheOnWrite { } public TestCacheOnWrite(CacheOnWriteType cowType, Compression.Algorithm compress, - BlockEncoderTestType encoderType, boolean cacheCompressedData) { + BlockEncoderTestType encoderType, boolean cacheCompressedData, BlockCache blockCache) { this.cowType = cowType; this.compress = compress; this.encoderType = encoderType; this.encoder = encoderType.getEncoder(); this.cacheCompressedData = cacheCompressedData; + this.blockCache = blockCache; testDescription = "[cacheOnWrite=" + cowType + ", compress=" + compress + ", encoderType=" + encoderType + ", cacheCompressedData=" + cacheCompressedData + "]"; System.out.println(testDescription); } + private static List getBlockCaches() throws IOException { + Configuration conf = TEST_UTIL.getConfiguration(); + List blockcaches = new ArrayList(); + // default + blockcaches.add(new CacheConfig(conf).getBlockCache()); + + // memory + BlockCache lru = new LruBlockCache(128 * 1024 * 1024, 64 * 1024, TEST_UTIL.getConfiguration()); + blockcaches.add(lru); + + // bucket cache + FileSystem.get(conf).mkdirs(TEST_UTIL.getDataTestDir()); + int[] bucketSizes = {INDEX_BLOCK_SIZE, DATA_BLOCK_SIZE, BLOOM_BLOCK_SIZE, 64 * 1024 }; + BlockCache bucketcache = + new BucketCache("file:" + TEST_UTIL.getDataTestDir() + "/bucket.data", + 128 * 1024 * 1024, 64 * 1024, bucketSizes, 5, 64 * 100, null); + blockcaches.add(bucketcache); + return blockcaches; + } + @Parameters - public static Collection getParameters() { + public static Collection getParameters() throws IOException { List cowTypes = new ArrayList(); - for (CacheOnWriteType cowType : CacheOnWriteType.values()) { - for (Compression.Algorithm compress : - HBaseTestingUtility.COMPRESSION_ALGORITHMS) { - for (BlockEncoderTestType encoderType : - BlockEncoderTestType.values()) { - for (boolean cacheCompressedData : new boolean[] { false, true }) { - cowTypes.add(new Object[] { cowType, compress, encoderType, cacheCompressedData }); + for (BlockCache blockache : getBlockCaches()) { + for (CacheOnWriteType cowType : CacheOnWriteType.values()) { + for (Compression.Algorithm compress : HBaseTestingUtility.COMPRESSION_ALGORITHMS) { + for (BlockEncoderTestType encoderType : BlockEncoderTestType.values()) { + for (boolean cacheCompressedData : new boolean[] { false, true }) { + cowTypes.add(new Object[] { cowType, compress, encoderType, cacheCompressedData, blockache}); + } } } } @@ -194,17 +218,13 @@ public class TestCacheOnWrite { conf.setInt(HFileBlockIndex.MAX_CHUNK_SIZE_KEY, INDEX_BLOCK_SIZE); conf.setInt(BloomFilterFactory.IO_STOREFILE_BLOOM_BLOCK_SIZE, BLOOM_BLOCK_SIZE); - conf.setBoolean(CacheConfig.CACHE_BLOCKS_ON_WRITE_KEY, - cowType.shouldBeCached(BlockType.DATA)); - conf.setBoolean(CacheConfig.CACHE_INDEX_BLOCKS_ON_WRITE_KEY, - cowType.shouldBeCached(BlockType.LEAF_INDEX)); - conf.setBoolean(CacheConfig.CACHE_BLOOM_BLOCKS_ON_WRITE_KEY, - cowType.shouldBeCached(BlockType.BLOOM_CHUNK)); conf.setBoolean(CacheConfig.CACHE_DATA_BLOCKS_COMPRESSED_KEY, cacheCompressedData); cowType.modifyConf(conf); fs = HFileSystem.get(conf); - cacheConf = new CacheConfig(conf); - blockCache = cacheConf.getBlockCache(); + cacheConf = + new CacheConfig(blockCache, true, true, cowType.shouldBeCached(BlockType.DATA), + cowType.shouldBeCached(BlockType.LEAF_INDEX), + 
cowType.shouldBeCached(BlockType.BLOOM_CHUNK), false, cacheCompressedData, true, false); } @After @@ -308,6 +328,11 @@ public class TestCacheOnWrite { assertEquals("{" + cachedDataBlockType + "=1379, LEAF_INDEX=154, BLOOM_CHUNK=9, INTERMEDIATE_INDEX=18}", countByType); } + + // iterate all the keyvalue from hfile + while (scanner.next()) { + Cell cell = scanner.getKeyValue(); + } reader.close(); } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCachedBlockQueue.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCachedBlockQueue.java index 7cc3378..600b407 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCachedBlockQueue.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCachedBlockQueue.java @@ -21,10 +21,11 @@ package org.apache.hadoop.hbase.io.hfile; import java.nio.ByteBuffer; import junit.framework.TestCase; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestCachedBlockQueue extends TestCase { public void testQueue() throws Exception { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java index 011ddbf..80266af 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestChecksum.java @@ -37,6 +37,7 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.fs.HFileSystem; import org.apache.hadoop.hbase.io.FSDataInputStreamWrapper; @@ -46,7 +47,7 @@ import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestChecksum { private static final Log LOG = LogFactory.getLog(TestHFileBlock.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java index 29adb9c..1b6731a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java @@ -28,6 +28,7 @@ import java.util.Collection; import java.util.List; import org.apache.hadoop.hbase.*; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Before; @@ -47,7 +48,7 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; @RunWith(Parameterized.class) -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestFixedFileTrailer { private static final Log LOG = LogFactory.getLog(TestFixedFileTrailer.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestForceCacheImportantBlocks.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestForceCacheImportantBlocks.java index 3115ef4..2af3a6e 
100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestForceCacheImportantBlocks.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestForceCacheImportantBlocks.java @@ -25,6 +25,7 @@ import java.util.Collection; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Put; @@ -49,7 +50,7 @@ import org.junit.runners.Parameterized.Parameters; * need to reveal more about what is being cached whether DATA or INDEX blocks and then we could * do more verification in this test. */ -@Category(MediumTests.class) +@Category({IOTests.class, MediumTests.class}) @RunWith(Parameterized.class) public class TestForceCacheImportantBlocks { private final HBaseTestingUtility TEST_UTIL = HBaseTestingUtility.createLocalHTU(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java index d198274..3855629 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFile.java @@ -37,13 +37,17 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValue.Type; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.hfile.HFile.Reader; import org.apache.hadoop.hbase.io.hfile.HFile.Writer; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.io.Writable; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; import org.junit.experimental.categories.Category; /** @@ -54,7 +58,7 @@ import org.junit.experimental.categories.Category; * Remove after tfile is committed and use the tfile version of this class * instead.
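// Besides the category change, the TestHFile hunks below migrate the class from JUnit 3
// conventions inherited through HBaseTestCase (name-based discovery, @Override on
// setUp/tearDown) to explicit JUnit 4 annotations. A minimal, self-contained sketch of that
// lifecycle idiom; the class and method names are illustrative and not taken from the patch.

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ExampleLifecycleTest {
  @Before
  public void setUp() throws Exception {
    // per-test fixture setup; runs before every @Test method
  }

  @After
  public void tearDown() throws Exception {
    // per-test cleanup; runs after every @Test method
  }

  @Test
  public void testSomething() throws Exception {
    // discovered via the @Test annotation rather than a "test*" naming convention
  }
}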

    */ -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestHFile extends HBaseTestCase { static final Log LOG = LogFactory.getLog(TestHFile.class); @@ -66,12 +70,12 @@ public class TestHFile extends HBaseTestCase { private static CacheConfig cacheConf = null; private Map startingMetrics; - @Override + @Before public void setUp() throws Exception { super.setUp(); } - @Override + @After public void tearDown() throws Exception { super.tearDown(); } @@ -82,6 +86,7 @@ public class TestHFile extends HBaseTestCase { * Test all features work reasonably when hfile is empty of entries. * @throws IOException */ + @Test public void testEmptyHFile() throws IOException { if (cacheConf == null) cacheConf = new CacheConfig(conf); Path f = new Path(ROOT_DIR, getName()); @@ -98,6 +103,7 @@ public class TestHFile extends HBaseTestCase { /** * Create 0-length hfile and show that it fails */ + @Test public void testCorrupt0LengthHFile() throws IOException { if (cacheConf == null) cacheConf = new CacheConfig(conf); Path f = new Path(ROOT_DIR, getName()); @@ -131,6 +137,7 @@ public class TestHFile extends HBaseTestCase { /** * Create a truncated hfile and verify that exception thrown. */ + @Test public void testCorruptTruncatedHFile() throws IOException { if (cacheConf == null) cacheConf = new CacheConfig(conf); Path f = new Path(ROOT_DIR, getName()); @@ -280,11 +287,13 @@ public class TestHFile extends HBaseTestCase { fs.delete(ncTFile, true); } + @Test public void testTFileFeatures() throws IOException { testTFilefeaturesInternals(false); testTFilefeaturesInternals(true); } + @Test protected void testTFilefeaturesInternals(boolean useTags) throws IOException { basicWithSomeCodec("none", useTags); basicWithSomeCodec("gz", useTags); @@ -352,11 +361,13 @@ public class TestHFile extends HBaseTestCase { } // test meta blocks for tfiles + @Test public void testMetaBlocks() throws Exception { metablocks("none"); metablocks("gz"); } + @Test public void testNullMetaBlocks() throws Exception { if (cacheConf == null) cacheConf = new CacheConfig(conf); for (Compression.Algorithm compressAlgo : diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java index 766ddf9..eb1f1bb 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java @@ -50,12 +50,13 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.fs.HFileSystem; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.compress.Compression.Algorithm; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.ChecksumType; import org.apache.hadoop.hbase.util.ClassSize; @@ -69,7 +70,7 @@ import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; import org.mockito.Mockito; -@Category(MediumTests.class) +@Category({IOTests.class, MediumTests.class}) @RunWith(Parameterized.class) public class 
TestHFileBlock { // change this value to activate more logs diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java index 52596f4..fc44f3c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java @@ -41,7 +41,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.fs.HFileSystem; import org.apache.hadoop.hbase.io.FSDataInputStreamWrapper; import org.apache.hadoop.hbase.io.compress.Compression; @@ -49,6 +48,8 @@ import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; import org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultEncodingContext; import org.apache.hadoop.hbase.io.encoding.HFileBlockEncodingContext; import org.apache.hadoop.hbase.io.hfile.HFileBlock.BlockWritable; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.ChecksumType; import org.apache.hadoop.io.WritableUtils; @@ -66,7 +67,7 @@ import com.google.common.base.Preconditions; * This class has unit tests to prove that older versions of * HFiles (without checksums) are compatible with current readers. */ -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) @RunWith(Parameterized.class) public class TestHFileBlockCompatibility { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java index 9808dbe..c0f2fed 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java @@ -45,12 +45,13 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.fs.HFileSystem; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; import org.apache.hadoop.hbase.io.hfile.HFileBlockIndex.BlockIndexChunk; import org.apache.hadoop.hbase.io.hfile.HFileBlockIndex.BlockIndexReader; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.ClassSize; import org.junit.Before; @@ -61,7 +62,7 @@ import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; @RunWith(Parameterized.class) -@Category(MediumTests.class) +@Category({IOTests.class, MediumTests.class}) public class TestHFileBlockIndex { @Parameters diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java index 50ed33d..3cdc92b 100644 --- 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java @@ -29,13 +29,14 @@ import java.util.List; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.HeapSize; import org.apache.hadoop.hbase.io.compress.Compression.Algorithm; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; import org.apache.hadoop.hbase.io.encoding.HFileBlockDefaultEncodingContext; import org.apache.hadoop.hbase.io.encoding.HFileBlockEncodingContext; import org.apache.hadoop.hbase.io.hfile.HFileBlock.Writer.BufferGrabbingByteArrayOutputStream; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.ChecksumType; import org.apache.hadoop.hbase.util.test.RedundantKVGenerator; import org.junit.Test; @@ -45,7 +46,7 @@ import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; @RunWith(Parameterized.class) -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestHFileDataBlockEncoder { private HFileDataBlockEncoder blockEncoder; private RedundantKVGenerator generator = new RedundantKVGenerator(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java index bf6770b..0cb3c3c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileEncryption.java @@ -36,12 +36,13 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.crypto.Cipher; import org.apache.hadoop.hbase.io.crypto.Encryption; import org.apache.hadoop.hbase.io.crypto.KeyProviderForTesting; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.test.RedundantKVGenerator; import org.junit.BeforeClass; @@ -50,7 +51,7 @@ import org.junit.experimental.categories.Category; import static org.junit.Assert.*; -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestHFileEncryption { private static final Log LOG = LogFactory.getLog(TestHFileEncryption.class); private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java index d07a6de..c0683f8 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java @@ -24,6 +24,7 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import 
org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; @@ -38,13 +39,13 @@ import org.junit.experimental.categories.Category; * the configured chunk size, and split it into a number of intermediate index blocks that should * really be leaf-level blocks. If more keys were added, we would flush the leaf-level block, add * another entry to the root-level block, and that would prevent us from upgrading the leaf-level - * chunk to the root chunk, thus not triggering the bug. + * chunk to the root chunk, thus not triggering the bug. */ -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestHFileInlineToRootChunkConversion { private final HBaseTestingUtility testUtil = new HBaseTestingUtility(); private final Configuration conf = testUtil.getConfiguration(); - + @Test public void testWriteHFile() throws Exception { Path hfPath = new Path(testUtil.getDataTestDir(), @@ -52,7 +53,7 @@ public class TestHFileInlineToRootChunkConversion { int maxChunkSize = 1024; FileSystem fs = FileSystem.get(conf); CacheConfig cacheConf = new CacheConfig(conf); - conf.setInt(HFileBlockIndex.MAX_CHUNK_SIZE_KEY, maxChunkSize); + conf.setInt(HFileBlockIndex.MAX_CHUNK_SIZE_KEY, maxChunkSize); HFileContext context = new HFileContextBuilder().withBlockSize(16).build(); HFileWriterV2 hfw = (HFileWriterV2) new HFileWriterV2.WriterFactoryV2(conf, cacheConf) diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java index 5f6a593..8f62639 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFilePerformance.java @@ -174,68 +174,68 @@ public class TestHFilePerformance extends AbstractHBaseTool { FSDataOutputStream fout = createFSOutput(path); if ("HFile".equals(fileType)){ - HFileContextBuilder builder = new HFileContextBuilder() - .withCompression(AbstractHFileWriter.compressionByName(codecName)) - .withBlockSize(minBlockSize); - if (cipherName != "none") { - byte[] cipherKey = new byte[AES.KEY_LENGTH]; - new SecureRandom().nextBytes(cipherKey); - builder.withEncryptionContext( - Encryption.newContext(conf) - .setCipher(Encryption.getCipher(conf, cipherName)) - .setKey(cipherKey)); - } - HFileContext context = builder.build(); - System.out.println("HFile write method: "); - HFile.Writer writer = HFile.getWriterFactoryNoCache(conf) - .withOutputStream(fout) - .withFileContext(context) - .withComparator(new KeyValue.RawBytesComparator()) - .create(); - - // Writing value in one shot. 
- for (long l=0; l */ -@Category(MediumTests.class) +@Category({IOTests.class, MediumTests.class}) public class TestHFileSeek extends TestCase { private static final byte[] CF = "f1".getBytes(); private static final byte[] QUAL = "q1".getBytes(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java index bdf2ecc..42e918a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java @@ -41,10 +41,11 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValue.KVComparator; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.compress.Compression.Algorithm; import org.apache.hadoop.hbase.io.hfile.HFile.FileInfo; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Writables; import org.apache.hadoop.io.Text; @@ -57,7 +58,7 @@ import org.junit.experimental.categories.Category; * Testing writing a version 2 {@link HFile}. This is a low-level test written * during the development of {@link HFileWriterV2}. */ -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestHFileWriterV2 { private static final Log LOG = LogFactory.getLog(TestHFileWriterV2.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV3.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV3.java index 76da924..f96e8ef 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV3.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV3.java @@ -42,11 +42,12 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValue.KVComparator; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.compress.Compression.Algorithm; import org.apache.hadoop.hbase.io.hfile.HFile.FileInfo; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Writables; import org.apache.hadoop.io.Text; @@ -63,7 +64,7 @@ import org.junit.runners.Parameterized.Parameters; * during the development of {@link HFileWriterV3}. 
*/ @RunWith(Parameterized.class) -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestHFileWriterV3 { private static final Log LOG = LogFactory.getLog(TestHFileWriterV3.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLazyDataBlockDecompression.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLazyDataBlockDecompression.java index b7f7fa1..2fd3684 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLazyDataBlockDecompression.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLazyDataBlockDecompression.java @@ -27,9 +27,10 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.FSDataInputStreamWrapper; import org.apache.hadoop.hbase.io.compress.Compression; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; import org.junit.Before; @@ -51,7 +52,7 @@ import static org.junit.Assert.*; * A kind of integration test at the intersection of {@link HFileBlock}, {@link CacheConfig}, * and {@link LruBlockCache}. */ -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) @RunWith(Parameterized.class) public class TestLazyDataBlockDecompression { private static final Log LOG = LogFactory.getLog(TestLazyDataBlockDecompression.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java index ec8d31d..ec60bcd 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java @@ -24,6 +24,7 @@ import static org.junit.Assert.assertTrue; import java.nio.ByteBuffer; import java.util.Random; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.HeapSize; import org.apache.hadoop.hbase.io.hfile.LruBlockCache.EvictionThread; @@ -38,7 +39,7 @@ import org.junit.experimental.categories.Category; * evictions run when they're supposed to and do what they should, * and that cached blocks are accessible when expected to be. 
*/ -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestLruBlockCache { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruCachedBlock.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruCachedBlock.java index 2f15bae..141c95b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruCachedBlock.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruCachedBlock.java @@ -20,13 +20,14 @@ package org.apache.hadoop.hbase.io.hfile; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNotEquals; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestLruCachedBlock { LruCachedBlock block; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java index a70e5ec..4ceafb4 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java @@ -29,15 +29,16 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.fs.HFileSystem; import org.apache.hadoop.hbase.regionserver.StoreFile; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestPrefetch { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestReseekTo.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestReseekTo.java index 54801e4..3a0fdf7 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestReseekTo.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestReseekTo.java @@ -29,8 +29,9 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Tag; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -38,7 +39,7 @@ import org.junit.experimental.categories.Category; /** * Test {@link HFileScanner#reseekTo(byte[])} */ -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestReseekTo { private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingKeyRange.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingKeyRange.java 
index e96b394..55aa97b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingKeyRange.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingKeyRange.java @@ -32,6 +32,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Put; @@ -51,7 +52,7 @@ import org.junit.runners.Parameterized.Parameters; * Test the optimization that does not scan files where all key ranges are excluded. */ @RunWith(Parameterized.class) -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestScannerSelectionUsingKeyRange { private static final HBaseTestingUtility TEST_UTIL = HBaseTestingUtility.createLocalHTU(); private static TableName TABLE = TableName.valueOf("myTable"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java index 640ac6e..c1a5061 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java @@ -33,12 +33,13 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.HStore; import org.apache.hadoop.hbase.regionserver.InternalScanner; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.Threads; @@ -53,14 +54,13 @@ import org.junit.runners.Parameterized.Parameters; * expired. 
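// TestScannerSelectionUsingTTL, whose hunks continue below, relies on the column family's
// time-to-live so that store files whose cells have all expired can be skipped by scans.
// A minimal sketch of configuring such a TTL; the table and family names mirror the TABLE
// and FAMILY constants visible in the test below, while the ten-second value is arbitrary
// and not taken from the patch.

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class ExampleTtlSetup {
  public static HTableDescriptor describeTable() {
    HColumnDescriptor hcd = new HColumnDescriptor("myCF");
    hcd.setTimeToLive(10);                        // seconds; older cells are treated as expired
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("myTable"));
    htd.addFamily(hcd);
    return htd;
  }
}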
*/ @RunWith(Parameterized.class) -@Category(MediumTests.class) +@Category({IOTests.class, MediumTests.class}) public class TestScannerSelectionUsingTTL { private static final Log LOG = LogFactory.getLog(TestScannerSelectionUsingTTL.class); - private static final HBaseTestingUtility TEST_UTIL = - new HBaseTestingUtility().createLocalHTU(); + private static final HBaseTestingUtility TEST_UTIL = HBaseTestingUtility.createLocalHTU(); private static TableName TABLE = TableName.valueOf("myTable"); private static String FAMILY = "myCF"; private static byte[] FAMILY_BYTES = Bytes.toBytes(FAMILY); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java index a642e8d..b9a126f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java @@ -27,15 +27,17 @@ import org.apache.hadoop.hbase.HBaseTestCase; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Tag; +import org.apache.hadoop.hbase.testclassification.IOTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; +import org.junit.Test; import org.junit.experimental.categories.Category; /** * Test {@link HFileScanner#seekTo(byte[])} and its variants. */ -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestSeekTo extends HBaseTestCase { static boolean switchKVs = false; @@ -70,7 +72,7 @@ public class TestSeekTo extends HBaseTestCase { } Path makeNewFile(TagUsage tagUsage) throws IOException { - Path ncTFile = new Path(this.testDir, "basic.hfile"); + Path ncTFile = new Path(testDir, "basic.hfile"); if (tagUsage != TagUsage.NO_TAG) { conf.setInt("hfile.format.version", 3); } else { @@ -96,6 +98,7 @@ public class TestSeekTo extends HBaseTestCase { return ncTFile; } + @Test public void testSeekBefore() throws Exception { testSeekBeforeInternals(TagUsage.NO_TAG); testSeekBeforeInternals(TagUsage.ONLY_TAG); @@ -137,6 +140,7 @@ public class TestSeekTo extends HBaseTestCase { reader.close(); } + @Test public void testSeekBeforeWithReSeekTo() throws Exception { testSeekBeforeWithReSeekToInternals(TagUsage.NO_TAG); testSeekBeforeWithReSeekToInternals(TagUsage.ONLY_TAG); @@ -226,6 +230,7 @@ public class TestSeekTo extends HBaseTestCase { assertEquals("k", toRowStr(scanner.getKeyValue())); } + @Test public void testSeekTo() throws Exception { testSeekToInternals(TagUsage.NO_TAG); testSeekToInternals(TagUsage.ONLY_TAG); @@ -254,6 +259,8 @@ public class TestSeekTo extends HBaseTestCase { reader.close(); } + + @Test public void testBlockContainingKey() throws Exception { testBlockContainingKeyInternals(TagUsage.NO_TAG); testBlockContainingKeyInternals(TagUsage.ONLY_TAG); @@ -289,4 +296,4 @@ public class TestSeekTo extends HBaseTestCase { toKV("l", tagUsage))); reader.close(); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java index 5bd7781..d29be01 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java +++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java @@ -27,6 +27,7 @@ import java.util.Arrays; import java.util.List; import java.util.Random; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.hfile.BlockCacheKey; import org.apache.hadoop.hbase.io.hfile.CacheTestUtils; @@ -47,7 +48,7 @@ import org.junit.runners.Parameterized; * concurrency */ @RunWith(Parameterized.class) -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestBucketCache { private static final Random RAND = new Random(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketWriterThread.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketWriterThread.java index 0d8ffbc..4d3f550 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketWriterThread.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketWriterThread.java @@ -18,6 +18,7 @@ */ package org.apache.hadoop.hbase.io.hfile.bucket; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.hfile.BlockCacheKey; import org.apache.hadoop.hbase.io.hfile.Cacheable; @@ -41,7 +42,7 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertThat; import static org.junit.Assert.assertTrue; -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestBucketWriterThread { private BucketCache bc; private BucketCache.WriterThread wt; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestByteBufferIOEngine.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestByteBufferIOEngine.java index 5e45b60..511f942 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestByteBufferIOEngine.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestByteBufferIOEngine.java @@ -22,6 +22,7 @@ import static org.junit.Assert.assertTrue; import java.nio.ByteBuffer; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -29,7 +30,7 @@ import org.junit.experimental.categories.Category; /** * Basic test for {@link ByteBufferIOEngine} */ -@Category(SmallTests.class) +@Category({IOTests.class, SmallTests.class}) public class TestByteBufferIOEngine { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestFileIOEngine.java hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestFileIOEngine.java index 5f46681..8306114 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestFileIOEngine.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestFileIOEngine.java @@ -24,6 +24,7 @@ import java.io.File; import java.io.IOException; import java.nio.ByteBuffer; +import org.apache.hadoop.hbase.testclassification.IOTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -31,7 +32,7 @@ import org.junit.experimental.categories.Category; /** * Basic test for {@link FileIOEngine} */ -@Category(SmallTests.class) +@Category({IOTests.class, 
SmallTests.class}) public class TestFileIOEngine { @Test public void testFileIOEngine() throws IOException { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestBufferChain.java hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestBufferChain.java index 12c84fc..e8f6464 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestBufferChain.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestBufferChain.java @@ -25,6 +25,7 @@ import java.io.IOException; import java.nio.ByteBuffer; import java.nio.channels.FileChannel; +import org.apache.hadoop.hbase.testclassification.RPCTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; @@ -36,7 +37,7 @@ import org.mockito.Mockito; import com.google.common.base.Charsets; import com.google.common.io.Files; -@Category(SmallTests.class) +@Category({RPCTests.class, SmallTests.class}) public class TestBufferChain { private File tmpFile; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestCallRunner.java hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestCallRunner.java index 8cbef91..be16529 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestCallRunner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestCallRunner.java @@ -17,13 +17,14 @@ */ package org.apache.hadoop.hbase.ipc; +import org.apache.hadoop.hbase.testclassification.RPCTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.security.UserProvider; import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; -@Category(SmallTests.class) +@Category({RPCTests.class, SmallTests.class}) public class TestCallRunner { /** * Does nothing but exercise a {@link CallRunner} outside of {@link RpcServer} context. diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestDelayedRpc.java hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestDelayedRpc.java index deee717..961001f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestDelayedRpc.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestDelayedRpc.java @@ -34,6 +34,7 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RPCTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.ipc.protobuf.generated.TestDelayedRpcProtos; import org.apache.hadoop.hbase.ipc.protobuf.generated.TestDelayedRpcProtos.TestArg; @@ -57,7 +58,7 @@ import com.google.protobuf.ServiceException; * be delayed. Check that the last two, which are undelayed, return before the * first one. 
*/ -@Category(MediumTests.class) // Fails sometimes with small tests +@Category({RPCTests.class, MediumTests.class}) // Fails sometimes with small tests public class TestDelayedRpc { private static final Log LOG = LogFactory.getLog(TestDelayedRpc.class); public static RpcServerInterface rpcServer; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestHBaseClient.java hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestHBaseClient.java index ec3d761..26488cf 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestHBaseClient.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestHBaseClient.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.ipc; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RPCTests; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.ManualEnvironmentEdge; import org.junit.Assert; @@ -28,7 +29,7 @@ import org.junit.experimental.categories.Category; import java.net.InetSocketAddress; -@Category(MediumTests.class) // Can't be small, we're playing with the EnvironmentEdge +@Category({RPCTests.class, MediumTests.class}) // Can't be small, we're playing with the EnvironmentEdge public class TestHBaseClient { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestIPC.java hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestIPC.java index 2c70eb4..081b5dd 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestIPC.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestIPC.java @@ -48,6 +48,7 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.testclassification.RPCTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.RowMutations; @@ -88,7 +89,7 @@ import com.google.protobuf.ServiceException; /** * Some basic ipc tests. 
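// The RPCTests marker used in these ipc hunks (like IOTests and MapReduceTests elsewhere in
// the patch) is just a JUnit category key. A minimal sketch of the assumed shape of such a
// marker, not copied from the HBase sources: an empty interface in the testclassification
// package whose only role is to be referenced from @Category annotations.

package org.apache.hadoop.hbase.testclassification;

public interface RPCTests {
}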
*/ -@Category(SmallTests.class) +@Category({RPCTests.class, SmallTests.class}) public class TestIPC { public static final Log LOG = LogFactory.getLog(TestIPC.class); static byte [] CELL_BYTES = Bytes.toBytes("xyz"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestProtoBufRpc.java hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestProtoBufRpc.java index fc2734f..cee459f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestProtoBufRpc.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestProtoBufRpc.java @@ -25,6 +25,7 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RPCTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.ipc.protobuf.generated.TestProtos; import org.apache.hadoop.hbase.ipc.protobuf.generated.TestRpcServiceProtos; @@ -51,7 +52,7 @@ import com.google.protobuf.ServiceException; * This test depends on test.proto definition of types in src/test/protobuf/test.proto * and protobuf service definition from src/test/protobuf/test_rpc_service.proto */ -@Category(MediumTests.class) +@Category({RPCTests.class, MediumTests.class}) public class TestProtoBufRpc { public final static String ADDRESS = "localhost"; public static int PORT = 0; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java index 47140a3..443ec78 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcMetrics.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.ipc; import org.apache.hadoop.hbase.CompatibilityFactory; +import org.apache.hadoop.hbase.testclassification.RPCTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.test.MetricsAssertHelper; import org.junit.Test; @@ -28,7 +29,7 @@ import org.junit.experimental.categories.Category; import static org.junit.Assert.*; -@Category(SmallTests.class) +@Category({RPCTests.class, SmallTests.class}) public class TestRpcMetrics { public MetricsAssertHelper HELPER = CompatibilityFactory.getInstance(MetricsAssertHelper.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestSimpleRpcScheduler.java hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestSimpleRpcScheduler.java index 815ae1d..11ac43f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestSimpleRpcScheduler.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestSimpleRpcScheduler.java @@ -27,6 +27,7 @@ import com.google.protobuf.Message; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.testclassification.RPCTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.ipc.RpcServer.Call; import org.apache.hadoop.hbase.protobuf.generated.RPCProtos; @@ -57,7 +58,7 @@ import static org.mockito.Mockito.timeout; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; -@Category(SmallTests.class) +@Category({RPCTests.class, SmallTests.class}) public class TestSimpleRpcScheduler { public static final Log LOG = 
LogFactory.getLog(TestSimpleRpcScheduler.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestDriver.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestDriver.java index 843d550..ab6a86d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestDriver.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestDriver.java @@ -18,6 +18,7 @@ */ package org.apache.hadoop.hbase.mapred; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.util.ProgramDriver; import org.junit.Test; @@ -27,7 +28,7 @@ import org.mockito.Mockito; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.verify; -@Category(SmallTests.class) +@Category({MapReduceTests.class, SmallTests.class}) public class TestDriver { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestGroupingTableMap.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestGroupingTableMap.java index f2220fe..90ed73b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestGroupingTableMap.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestGroupingTableMap.java @@ -35,6 +35,7 @@ import java.util.concurrent.atomic.AtomicBoolean; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; @@ -48,7 +49,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.ImmutableList; -@Category(SmallTests.class) +@Category({MapReduceTests.class, SmallTests.class}) public class TestGroupingTableMap { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestIdentityTableMap.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestIdentityTableMap.java index fe75e82..3fad1fe 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestIdentityTableMap.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestIdentityTableMap.java @@ -24,6 +24,7 @@ import static org.mockito.Mockito.verify; import java.io.IOException; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; @@ -33,7 +34,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; -@Category(SmallTests.class) +@Category({MapReduceTests.class, SmallTests.class}) public class TestIdentityTableMap { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestRowCounter.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestRowCounter.java index e8bcd51..6c7e445 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestRowCounter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestRowCounter.java @@ -31,6 +31,7 @@ import java.io.IOException; import java.io.PrintStream; import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Result; 
import org.apache.hadoop.hbase.io.ImmutableBytesWritable; @@ -44,7 +45,7 @@ import org.mockito.Mockito; import com.google.common.base.Joiner; -@Category(SmallTests.class) +@Category({MapReduceTests.class, SmallTests.class}) public class TestRowCounter { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestSplitTable.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestSplitTable.java index 4a37f88..216041d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestSplitTable.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestSplitTable.java @@ -22,14 +22,15 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNotEquals; import static org.junit.Assert.assertTrue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MapReduceTests.class, SmallTests.class}) public class TestSplitTable { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java index 3438b6d..9fb4eb8 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java @@ -42,6 +42,7 @@ import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.Before; @@ -55,7 +56,7 @@ import org.mockito.stubbing.Answer; * This tests the TableInputFormat and its recovery semantics * */ -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestTableInputFormat { private static final Log LOG = LogFactory.getLog(TestTableInputFormat.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java index f5179bc..107837e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduce.java @@ -27,6 +27,7 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; @@ -46,7 +47,7 @@ import org.junit.experimental.categories.Category; * on our tables is simple - take every row in the table, reverse the value of * a particular cell, and write it back to the table. 
*/ -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) @SuppressWarnings("deprecation") public class TestTableMapReduce extends TestTableMapReduceBase { private static final Log LOG = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduceUtil.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduceUtil.java index 8ed7772..628bb96 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduceUtil.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableMapReduceUtil.java @@ -33,8 +33,9 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Table; @@ -56,7 +57,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.ImmutableMap; import com.google.common.collect.ImmutableSet; -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestTableMapReduceUtil { private static final Log LOG = LogFactory diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java index 7707c19..eabedec 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapred/TestTableSnapshotInputFormat.java @@ -28,6 +28,7 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatTestBase; +import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.io.NullWritable; import org.apache.hadoop.mapred.InputSplit; @@ -47,7 +48,7 @@ import org.junit.experimental.categories.Category; import java.io.IOException; import java.util.Iterator; -@Category(LargeTests.class) +@Category({VerySlowMapReduceTests.class, LargeTests.class}) public class TestTableSnapshotInputFormat extends TableSnapshotInputFormatTestBase { private static final byte[] aaa = Bytes.toBytes("aaa"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestCellCounter.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestCellCounter.java index e3d03b8..22bc330 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestCellCounter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestCellCounter.java @@ -23,14 +23,17 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.fs.LocalFileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Table; +import 
org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.LauncherSecurityManager; import org.apache.hadoop.mapreduce.Job; import org.apache.hadoop.util.GenericOptionsParser; +import org.apache.hadoop.util.ToolRunner; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; @@ -42,7 +45,7 @@ import static org.junit.Assert.assertTrue; import static org.junit.Assert.assertEquals; import static org.junit.Assert.fail; -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestCellCounter { private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); private static final byte[] ROW1 = Bytes.toBytes("row1"); @@ -318,30 +321,10 @@ public class TestCellCounter { } } - @Test (timeout=300000) + @Test public void TestCellCounterWithoutOutputDir() throws Exception { - PrintStream oldPrintStream = System.err; - SecurityManager SECURITY_MANAGER = System.getSecurityManager(); - LauncherSecurityManager newSecurityManager= new LauncherSecurityManager(); - System.setSecurityManager(newSecurityManager); - ByteArrayOutputStream data = new ByteArrayOutputStream(); - String[] args = {"tableName"}; - System.setErr(new PrintStream(data)); - try { - System.setErr(new PrintStream(data)); - try { - CellCounter.main(args); - fail("should be SecurityException"); - } catch (SecurityException e) { - assertEquals(-1, newSecurityManager.getExitCode()); - assertTrue(data.toString().contains("ERROR: Wrong number of parameters:")); - // should be information about usage - assertTrue(data.toString().contains("Usage:")); - } - - } finally { - System.setErr(oldPrintStream); - System.setSecurityManager(SECURITY_MANAGER); - } + String[] args = new String[] { "tableName" }; + assertEquals("CellCounter should exit with -1 as output directory is not specified.", -1, + ToolRunner.run(HBaseConfiguration.create(), new CellCounter(), args)); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestCopyTable.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestCopyTable.java index 5492938..4b11abb 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestCopyTable.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestCopyTable.java @@ -30,12 +30,13 @@ import java.io.PrintStream; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.LauncherSecurityManager; import org.apache.hadoop.mapreduce.Job; @@ -48,7 +49,7 @@ import org.junit.experimental.categories.Category; /** * Basic test for the CopyTable M/R tool */ -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestCopyTable { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private static final byte[] ROW1 = 
Bytes.toBytes("row1"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestGroupingTableMapper.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestGroupingTableMapper.java index ba33420..fc7b102 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestGroupingTableMapper.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestGroupingTableMapper.java @@ -21,6 +21,7 @@ import java.util.List; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; @@ -31,7 +32,7 @@ import org.junit.experimental.categories.Category; import static org.mockito.Mockito.*; -@Category(SmallTests.class) +@Category({MapReduceTests.class, SmallTests.class}) public class TestGroupingTableMapper { /** diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java index 8ed8464..c447895 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java @@ -52,10 +52,11 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.HadoopShims; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.PerformanceEvaluation; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HBaseAdmin; +import org.apache.hadoop.hbase.client.HRegionLocator; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.RegionLocator; @@ -74,6 +75,8 @@ import org.apache.hadoop.hbase.regionserver.BloomType; import org.apache.hadoop.hbase.regionserver.HStore; import org.apache.hadoop.hbase.regionserver.StoreFile; import org.apache.hadoop.hbase.regionserver.TimeRangeTracker; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.Threads; @@ -95,7 +98,7 @@ import org.mockito.Mockito; * Creates a few inner classes to implement splits and an inputformat that * emits keys and values like those of {@link PerformanceEvaluation}. 
*/ -@Category(LargeTests.class) +@Category({VerySlowMapReduceTests.class, LargeTests.class}) public class TestHFileOutputFormat { private final static int ROWSPERSPLIT = 1024; @@ -335,9 +338,10 @@ public class TestHFileOutputFormat { public void testJobConfiguration() throws Exception { Job job = new Job(util.getConfiguration()); job.setWorkingDirectory(util.getDataTestDir("testJobConfiguration")); - HTable table = Mockito.mock(HTable.class); - setupMockStartKeys(table); - HFileOutputFormat.configureIncrementalLoad(job, table); + HTableDescriptor tableDescriptor = Mockito.mock(HTableDescriptor.class); + RegionLocator regionLocator = Mockito.mock(RegionLocator.class); + setupMockStartKeys(regionLocator); + HFileOutputFormat2.configureIncrementalLoad(job, tableDescriptor, regionLocator); assertEquals(job.getNumReduceTasks(), 4); } @@ -467,12 +471,13 @@ public class TestHFileOutputFormat { MutationSerialization.class.getName(), ResultSerialization.class.getName(), KeyValueSerialization.class.getName()); setupRandomGeneratorMapper(job); - HFileOutputFormat.configureIncrementalLoad(job, table); + HFileOutputFormat2.configureIncrementalLoad(job, table.getTableDescriptor(), + table.getRegionLocator()); FileOutputFormat.setOutputPath(job, outDir); Assert.assertFalse( util.getTestFileSystem().exists(outDir)) ; - assertEquals(table.getRegionLocations().size(), job.getNumReduceTasks()); + assertEquals(table.getRegionLocator().getAllRegionLocations().size(), job.getNumReduceTasks()); assertTrue(job.waitForCompletion(true)); } @@ -769,14 +774,14 @@ public class TestHFileOutputFormat { return familyToDataBlockEncoding; } - private void setupMockStartKeys(RegionLocator table) throws IOException { + private void setupMockStartKeys(RegionLocator regionLocator) throws IOException { byte[][] mockKeys = new byte[][] { HConstants.EMPTY_BYTE_ARRAY, Bytes.toBytes("aaa"), Bytes.toBytes("ggg"), Bytes.toBytes("zzz") }; - Mockito.doReturn(mockKeys).when(table).getStartKeys(); + Mockito.doReturn(mockKeys).when(regionLocator).getStartKeys(); } /** @@ -791,15 +796,16 @@ public class TestHFileOutputFormat { Path dir = util.getDataTestDir("testColumnFamilySettings"); // Setup table descriptor - HTable table = Mockito.mock(HTable.class); + Table table = Mockito.mock(Table.class); + RegionLocator regionLocator = Mockito.mock(RegionLocator.class); HTableDescriptor htd = new HTableDescriptor(TABLE_NAME); Mockito.doReturn(htd).when(table).getTableDescriptor(); - for (HColumnDescriptor hcd: this.util.generateColumnDescriptors()) { + for (HColumnDescriptor hcd: HBaseTestingUtility.generateColumnDescriptors()) { htd.addFamily(hcd); } // set up the table to return some mock keys - setupMockStartKeys(table); + setupMockStartKeys(regionLocator); try { // partial map red setup to get an operational writer for testing @@ -809,7 +815,7 @@ public class TestHFileOutputFormat { Job job = new Job(conf, "testLocalMRIncrementalLoad"); job.setWorkingDirectory(util.getDataTestDirOnTestFS("testColumnFamilySettings")); setupRandomGeneratorMapper(job); - HFileOutputFormat.configureIncrementalLoad(job, table); + HFileOutputFormat2.configureIncrementalLoad(job, table.getTableDescriptor(), regionLocator); FileOutputFormat.setOutputPath(job, dir); context = createTestTaskAttemptContext(job); HFileOutputFormat hof = new HFileOutputFormat(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java index 2a780d4..dcfa185 
100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java @@ -25,15 +25,6 @@ import static org.junit.Assert.assertNotSame; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; -import java.io.IOException; -import java.util.Arrays; -import java.util.HashMap; -import java.util.Map; -import java.util.Map.Entry; -import java.util.Random; -import java.util.Set; -import java.util.concurrent.Callable; - import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; @@ -50,10 +41,11 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.HadoopShims; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.PerformanceEvaluation; import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.client.HBaseAdmin; +import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.RegionLocator; @@ -71,6 +63,8 @@ import org.apache.hadoop.hbase.io.hfile.HFile.Reader; import org.apache.hadoop.hbase.regionserver.BloomType; import org.apache.hadoop.hbase.regionserver.StoreFile; import org.apache.hadoop.hbase.regionserver.TimeRangeTracker; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.Threads; @@ -86,13 +80,23 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; +import java.io.IOException; +import java.io.UnsupportedEncodingException; +import java.util.Arrays; +import java.util.HashMap; +import java.util.Map; +import java.util.Map.Entry; +import java.util.Random; +import java.util.Set; +import java.util.concurrent.Callable; + /** * Simple test for {@link CellSortReducer} and {@link HFileOutputFormat2}. * Sets up and runs a mapreduce job that writes hfile output. * Creates a few inner classes to implement splits and an inputformat that * emits keys and values like those of {@link PerformanceEvaluation}. 
*/ -@Category(LargeTests.class) +@Category({VerySlowMapReduceTests.class, LargeTests.class}) public class TestHFileOutputFormat2 { private final static int ROWSPERSPLIT = 1024; @@ -334,9 +338,10 @@ public class TestHFileOutputFormat2 { public void testJobConfiguration() throws Exception { Job job = new Job(util.getConfiguration()); job.setWorkingDirectory(util.getDataTestDir("testJobConfiguration")); - HTable table = Mockito.mock(HTable.class); - setupMockStartKeys(table); - HFileOutputFormat2.configureIncrementalLoad(job, table, table); + Table table = Mockito.mock(Table.class); + RegionLocator regionLocator = Mockito.mock(RegionLocator.class); + setupMockStartKeys(regionLocator); + HFileOutputFormat2.configureIncrementalLoad(job, table.getTableDescriptor(), regionLocator); assertEquals(job.getNumReduceTasks(), 4); } @@ -369,12 +374,10 @@ public class TestHFileOutputFormat2 { util = new HBaseTestingUtility(); Configuration conf = util.getConfiguration(); byte[][] startKeys = generateRandomStartKeys(5); - HBaseAdmin admin = null; - try { - util.startMiniCluster(); - Path testDir = util.getDataTestDirOnTestFS("testLocalMRIncrementalLoad"); - admin = new HBaseAdmin(conf); - HTable table = util.createTable(TABLE_NAME, FAMILIES); + util.startMiniCluster(); + Path testDir = util.getDataTestDirOnTestFS("testLocalMRIncrementalLoad"); + try (HTable table = util.createTable(TABLE_NAME, FAMILIES); + Admin admin = table.getConnection().getAdmin()) { assertEquals("Should start with empty table", 0, util.countRows(table)); int numRegions = util.createMultiRegions( @@ -383,7 +386,7 @@ public class TestHFileOutputFormat2 { // Generate the bulk load files util.startMiniMapReduceCluster(); - runIncrementalPELoad(conf, table, testDir); + runIncrementalPELoad(conf, table.getTableDescriptor(), table.getRegionLocator(), testDir); // This doesn't write into the table, just makes files assertEquals("HFOF should not touch actual table", 0, util.countRows(table)); @@ -403,7 +406,7 @@ public class TestHFileOutputFormat2 { // handle the split case if (shouldChangeRegions) { LOG.info("Changing regions in table"); - admin.disableTable(table.getTableName()); + admin.disableTable(table.getName()); while(util.getMiniHBaseCluster().getMaster().getAssignmentManager(). 
getRegionStates().isRegionsInTransition()) { Threads.sleep(200); @@ -412,9 +415,9 @@ public class TestHFileOutputFormat2 { byte[][] newStartKeys = generateRandomStartKeys(15); util.createMultiRegions( util.getConfiguration(), table, FAMILIES[0], newStartKeys); - admin.enableTable(table.getTableName()); + admin.enableTable(table.getName()); while (table.getRegionLocations().size() != 15 || - !admin.isTableAvailable(table.getTableName())) { + !admin.isTableAvailable(table.getName())) { Thread.sleep(200); LOG.info("Waiting for new region assignment to happen"); } @@ -451,27 +454,26 @@ public class TestHFileOutputFormat2 { assertEquals("Data should remain after reopening of regions", tableDigestBefore, util.checksumRows(table)); } finally { - if (admin != null) admin.close(); util.shutdownMiniMapReduceCluster(); util.shutdownMiniCluster(); } } - private void runIncrementalPELoad( - Configuration conf, HTable table, Path outDir) - throws Exception { + private void runIncrementalPELoad(Configuration conf, HTableDescriptor tableDescriptor, + RegionLocator regionLocator, Path outDir) throws IOException, UnsupportedEncodingException, + InterruptedException, ClassNotFoundException { Job job = new Job(conf, "testLocalMRIncrementalLoad"); job.setWorkingDirectory(util.getDataTestDirOnTestFS("runIncrementalPELoad")); job.getConfiguration().setStrings("io.serializations", conf.get("io.serializations"), MutationSerialization.class.getName(), ResultSerialization.class.getName(), KeyValueSerialization.class.getName()); setupRandomGeneratorMapper(job); - HFileOutputFormat2.configureIncrementalLoad(job, table, table); + HFileOutputFormat2.configureIncrementalLoad(job, tableDescriptor, regionLocator); FileOutputFormat.setOutputPath(job, outDir); assertFalse(util.getTestFileSystem().exists(outDir)) ; - assertEquals(table.getRegionLocations().size(), job.getNumReduceTasks()); + assertEquals(regionLocator.getAllRegionLocations().size(), job.getNumReduceTasks()); assertTrue(job.waitForCompletion(true)); } @@ -493,7 +495,7 @@ public class TestHFileOutputFormat2 { getMockColumnFamiliesForCompression(numCfs); Table table = Mockito.mock(HTable.class); setupMockColumnFamiliesForCompression(table, familyToCompression); - HFileOutputFormat2.configureCompression(table, conf); + HFileOutputFormat2.configureCompression(conf, table.getTableDescriptor()); // read back family specific compression setting from the configuration Map retrievedFamilyToCompressionMap = HFileOutputFormat2 @@ -565,7 +567,7 @@ public class TestHFileOutputFormat2 { Table table = Mockito.mock(HTable.class); setupMockColumnFamiliesForBloomType(table, familyToBloomType); - HFileOutputFormat2.configureBloomType(table, conf); + HFileOutputFormat2.configureBloomType(table.getTableDescriptor(), conf); // read back family specific data block encoding settings from the // configuration @@ -636,7 +638,7 @@ public class TestHFileOutputFormat2 { Table table = Mockito.mock(HTable.class); setupMockColumnFamiliesForBlockSize(table, familyToBlockSize); - HFileOutputFormat2.configureBlockSize(table, conf); + HFileOutputFormat2.configureBlockSize(table.getTableDescriptor(), conf); // read back family specific data block encoding settings from the // configuration @@ -694,10 +696,9 @@ public class TestHFileOutputFormat2 { return familyToBlockSize; } - /** - * Test for {@link HFileOutputFormat2#configureDataBlockEncoding(org.apache.hadoop.hbase.client.Table, - * Configuration)} and {@link HFileOutputFormat2#createFamilyDataBlockEncodingMap - * (Configuration)}. 
+ /** + * Test for {@link HFileOutputFormat2#configureDataBlockEncoding(HTableDescriptor, Configuration) + * and {@link HFileOutputFormat2#createFamilyDataBlockEncodingMap(Configuration)}. * Tests that the compression map is correctly serialized into * and deserialized from configuration * @@ -712,7 +713,8 @@ public class TestHFileOutputFormat2 { Table table = Mockito.mock(HTable.class); setupMockColumnFamiliesForDataBlockEncoding(table, familyToDataBlockEncoding); - HFileOutputFormat2.configureDataBlockEncoding(table, conf); + HTableDescriptor tableDescriptor = table.getTableDescriptor(); + HFileOutputFormat2.configureDataBlockEncoding(tableDescriptor, conf); // read back family specific data block encoding settings from the // configuration @@ -791,7 +793,8 @@ public class TestHFileOutputFormat2 { Path dir = util.getDataTestDir("testColumnFamilySettings"); // Setup table descriptor - HTable table = Mockito.mock(HTable.class); + Table table = Mockito.mock(Table.class); + RegionLocator regionLocator = Mockito.mock(RegionLocator.class); HTableDescriptor htd = new HTableDescriptor(TABLE_NAME); Mockito.doReturn(htd).when(table).getTableDescriptor(); for (HColumnDescriptor hcd: HBaseTestingUtility.generateColumnDescriptors()) { @@ -799,7 +802,7 @@ public class TestHFileOutputFormat2 { } // set up the table to return some mock keys - setupMockStartKeys(table); + setupMockStartKeys(regionLocator); try { // partial map red setup to get an operational writer for testing @@ -809,7 +812,7 @@ public class TestHFileOutputFormat2 { Job job = new Job(conf, "testLocalMRIncrementalLoad"); job.setWorkingDirectory(util.getDataTestDirOnTestFS("testColumnFamilySettings")); setupRandomGeneratorMapper(job); - HFileOutputFormat2.configureIncrementalLoad(job, table, table); + HFileOutputFormat2.configureIncrementalLoad(job, table.getTableDescriptor(), regionLocator); FileOutputFormat.setOutputPath(job, dir); context = createTestTaskAttemptContext(job); HFileOutputFormat2 hof = new HFileOutputFormat2(); @@ -890,10 +893,10 @@ public class TestHFileOutputFormat2 { conf.setInt("hbase.hstore.compaction.min", 2); generateRandomStartKeys(5); - try { - util.startMiniCluster(); + util.startMiniCluster(); + try (Connection conn = ConnectionFactory.createConnection(); + Admin admin = conn.getAdmin()) { final FileSystem fs = util.getDFSCluster().getFileSystem(); - HBaseAdmin admin = new HBaseAdmin(conf); HTable table = util.createTable(TABLE_NAME, FAMILIES); assertEquals("Should start with empty table", 0, util.countRows(table)); @@ -911,7 +914,8 @@ public class TestHFileOutputFormat2 { for (int i = 0; i < 2; i++) { Path testDir = util.getDataTestDirOnTestFS("testExcludeAllFromMinorCompaction_" + i); - runIncrementalPELoad(conf, table, testDir); + runIncrementalPELoad(conf, table.getTableDescriptor(), conn.getRegionLocator(TABLE_NAME), + testDir); // Perform the actual load new LoadIncrementalHFiles(conf).doBulkLoad(testDir, table); } @@ -925,7 +929,7 @@ public class TestHFileOutputFormat2 { assertEquals(2, fs.listStatus(storePath).length); // minor compactions shouldn't get rid of the file - admin.compact(TABLE_NAME.getName()); + admin.compact(TABLE_NAME); try { quickPoll(new Callable() { public Boolean call() throws Exception { @@ -938,7 +942,7 @@ public class TestHFileOutputFormat2 { } // a major compaction should work though - admin.majorCompact(TABLE_NAME.getName()); + admin.majorCompact(TABLE_NAME); quickPoll(new Callable() { public Boolean call() throws Exception { return fs.listStatus(storePath).length == 1; @@ 
-957,12 +961,12 @@ public class TestHFileOutputFormat2 { conf.setInt("hbase.hstore.compaction.min", 2); generateRandomStartKeys(5); - try { + try (Connection conn = ConnectionFactory.createConnection(conf); + Admin admin = conn.getAdmin()){ util.startMiniCluster(); Path testDir = util.getDataTestDirOnTestFS("testExcludeMinorCompaction"); final FileSystem fs = util.getDFSCluster().getFileSystem(); - HBaseAdmin admin = new HBaseAdmin(conf); - HTable table = util.createTable(TABLE_NAME, FAMILIES); + Table table = util.createTable(TABLE_NAME, FAMILIES); assertEquals("Should start with empty table", 0, util.countRows(table)); // deep inspection: get the StoreFile dir @@ -976,7 +980,7 @@ public class TestHFileOutputFormat2 { Put p = new Put(Bytes.toBytes("test")); p.add(FAMILIES[0], Bytes.toBytes("1"), Bytes.toBytes("1")); table.put(p); - admin.flush(TABLE_NAME.getName()); + admin.flush(TABLE_NAME); assertEquals(1, util.countRows(table)); quickPoll(new Callable() { public Boolean call() throws Exception { @@ -988,10 +992,12 @@ public class TestHFileOutputFormat2 { conf.setBoolean("hbase.mapreduce.hfileoutputformat.compaction.exclude", true); util.startMiniMapReduceCluster(); - runIncrementalPELoad(conf, table, testDir); + + RegionLocator regionLocator = conn.getRegionLocator(TABLE_NAME); + runIncrementalPELoad(conf, table.getTableDescriptor(), regionLocator, testDir); // Perform the actual load - new LoadIncrementalHFiles(conf).doBulkLoad(testDir, table); + new LoadIncrementalHFiles(conf).doBulkLoad(testDir, admin, table, regionLocator); // Ensure data shows up int expectedRows = NMapInputFormat.getNumMapTasks(conf) * ROWSPERSPLIT; @@ -1002,7 +1008,7 @@ public class TestHFileOutputFormat2 { assertEquals(2, fs.listStatus(storePath).length); // minor compactions shouldn't get rid of the file - admin.compact(TABLE_NAME.getName()); + admin.compact(TABLE_NAME); try { quickPoll(new Callable() { public Boolean call() throws Exception { @@ -1015,7 +1021,7 @@ public class TestHFileOutputFormat2 { } // a major compaction should work though - admin.majorCompact(TABLE_NAME.getName()); + admin.majorCompact(TABLE_NAME); quickPoll(new Callable() { public Boolean call() throws Exception { return fs.listStatus(storePath).length == 1; @@ -1048,18 +1054,22 @@ public class TestHFileOutputFormat2 { Configuration conf = HBaseConfiguration.create(); util = new HBaseTestingUtility(conf); if ("newtable".equals(args[0])) { - byte[] tname = args[1].getBytes(); - HTable table = util.createTable(tname, FAMILIES); - HBaseAdmin admin = new HBaseAdmin(conf); - admin.disableTable(tname); - byte[][] startKeys = generateRandomStartKeys(5); - util.createMultiRegions(conf, table, FAMILIES[0], startKeys); - admin.enableTable(tname); + TableName tname = TableName.valueOf(args[1]); + try (HTable table = util.createTable(tname, FAMILIES); + Admin admin = table.getConnection().getAdmin()) { + admin.disableTable(tname); + byte[][] startKeys = generateRandomStartKeys(5); + util.createMultiRegions(conf, table, FAMILIES[0], startKeys); + admin.enableTable(tname); + } } else if ("incremental".equals(args[0])) { TableName tname = TableName.valueOf(args[1]); - HTable table = new HTable(conf, tname); - Path outDir = new Path("incremental-out"); - runIncrementalPELoad(conf, table, outDir); + try(Connection c = ConnectionFactory.createConnection(conf); + Admin admin = c.getAdmin(); + RegionLocator regionLocator = c.getRegionLocator(tname)) { + Path outDir = new Path("incremental-out"); + runIncrementalPELoad(conf, admin.getTableDescriptor(tname), 
regionLocator, outDir); + } } else { throw new RuntimeException( "usage: TestHFileOutputFormat2 newtable | incremental"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java index dd47325..d82f36b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHLogRecordReader.java @@ -17,17 +17,18 @@ */ package org.apache.hadoop.hbase.mapreduce; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.mapreduce.WALInputFormat.WALRecordReader; import org.apache.hadoop.hbase.mapreduce.HLogInputFormat.HLogKeyRecordReader; import org.apache.hadoop.hbase.regionserver.wal.HLogKey; import org.apache.hadoop.hbase.wal.WALKey; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.junit.experimental.categories.Category; /** * JUnit tests for the record reader in HLogInputFormat */ -@Category(MediumTests.class) +@Category({MapReduceTests.class, MediumTests.class}) public class TestHLogRecordReader extends TestWALRecordReader { @Override diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHRegionPartitioner.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHRegionPartitioner.java index 220bc02..33d0e74 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHRegionPartitioner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHRegionPartitioner.java @@ -17,8 +17,9 @@ package org.apache.hadoop.hbase.mapreduce; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; @@ -28,7 +29,7 @@ import org.junit.experimental.categories.Category; import static org.junit.Assert.assertEquals; -@Category(MediumTests.class) +@Category({MapReduceTests.class, MediumTests.class}) public class TestHRegionPartitioner { private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java index 9448d30..a2df105 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java @@ -40,9 +40,9 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; +import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Durability; @@ -62,6 +62,8 @@ import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener; import 
org.apache.hadoop.hbase.regionserver.wal.WALEdit; import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.wal.WALKey; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.LauncherSecurityManager; import org.apache.hadoop.mapreduce.Job; @@ -80,7 +82,7 @@ import org.mockito.stubbing.Answer; /** * Tests the table import and table export MR job functionality */ -@Category(MediumTests.class) +@Category({VerySlowMapReduceTests.class, MediumTests.class}) public class TestImportExport { private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); private static final byte[] ROW1 = Bytes.toBytes("row1"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithOperationAttributes.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithOperationAttributes.java index eb54dee..eddee5a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithOperationAttributes.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithOperationAttributes.java @@ -37,9 +37,10 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.HTable; @@ -61,7 +62,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestImportTSVWithOperationAttributes implements Configurable { protected static final Log LOG = LogFactory.getLog(TestImportTSVWithOperationAttributes.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithTTLs.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithTTLs.java index 5cb4fa2..a5cceb0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithTTLs.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithTTLs.java @@ -34,6 +34,8 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver; @@ -41,7 +43,6 @@ import org.apache.hadoop.hbase.coprocessor.ObserverContext; import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.util.Tool; import 
org.apache.hadoop.util.ToolRunner; @@ -51,7 +52,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestImportTSVWithTTLs implements Configurable { protected static final Log LOG = LogFactory.getLog(TestImportTSVWithTTLs.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithVisibilityLabels.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithVisibilityLabels.java index e196698..0ca0f8f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithVisibilityLabels.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithVisibilityLabels.java @@ -41,9 +41,10 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.HTable; @@ -70,7 +71,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestImportTSVWithVisibilityLabels implements Configurable { protected static final Log LOG = LogFactory.getLog(TestImportTSVWithVisibilityLabels.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java index 96eea13..3844a64 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsv.java @@ -41,7 +41,6 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotFoundException; import org.apache.hadoop.hbase.client.HTable; @@ -49,6 +48,8 @@ import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.io.Text; import org.apache.hadoop.mapred.Utils.OutputFileUtils.OutputFilesFilter; @@ -61,7 +62,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({VerySlowMapReduceTests.class, LargeTests.class}) public class TestImportTsv implements Configurable { protected static final Log LOG = LogFactory.getLog(TestImportTsv.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsvParser.java 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsvParser.java index c7593e4..81e0a70 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsvParser.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTsvParser.java @@ -27,6 +27,7 @@ import static org.junit.Assert.fail; import java.util.ArrayList; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser; import org.apache.hadoop.hbase.mapreduce.ImportTsv.TsvParser.BadTsvLineException; @@ -43,7 +44,7 @@ import com.google.common.collect.Iterables; /** * Tests for {@link TsvParser}. */ -@Category(SmallTests.class) +@Category({MapReduceTests.class, SmallTests.class}) public class TestImportTsvParser { private void assertBytesEquals(byte[] a, byte[] b) { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java index 49be5c8..fff0200 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java @@ -33,8 +33,9 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableNotFoundException; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; @@ -57,7 +58,7 @@ import org.apache.hadoop.hbase.security.SecureBulkLoadUtil; * functionality. These tests run faster than the full MR cluster * tests in TestHFileOutputFormat */ -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestLoadIncrementalHFiles { private static final byte[] QUALIFIER = Bytes.toBytes("myqual"); private static final byte[] FAMILY = Bytes.toBytes("myfam"); @@ -130,7 +131,7 @@ public class TestLoadIncrementalHFiles { new byte[][]{ Bytes.toBytes("fff"), Bytes.toBytes("zzz") }, }); } - + /** * Test loading into a column family that has a ROWCOL bloom filter. 
*/ @@ -353,7 +354,7 @@ public class TestLoadIncrementalHFiles { map.put(last, value-1); } - @Test + @Test public void testInferBoundaries() { TreeMap map = new TreeMap(Bytes.BYTES_COMPARATOR); @@ -363,8 +364,8 @@ public class TestLoadIncrementalHFiles { * * Should be inferred as: * a-----------------k m-------------q r--------------t u---------x - * - * The output should be (m,r,u) + * + * The output should be (m,r,u) */ String first; @@ -372,7 +373,7 @@ public class TestLoadIncrementalHFiles { first = "a"; last = "e"; addStartEndKeysForTest(map, first.getBytes(), last.getBytes()); - + first = "r"; last = "s"; addStartEndKeysForTest(map, first.getBytes(), last.getBytes()); @@ -393,14 +394,14 @@ public class TestLoadIncrementalHFiles { first = "s"; last = "t"; addStartEndKeysForTest(map, first.getBytes(), last.getBytes()); - + first = "u"; last = "w"; addStartEndKeysForTest(map, first.getBytes(), last.getBytes()); byte[][] keysArray = LoadIncrementalHFiles.inferBoundaries(map); byte[][] compare = new byte[3][]; compare[0] = "m".getBytes(); - compare[1] = "r".getBytes(); + compare[1] = "r".getBytes(); compare[2] = "u".getBytes(); assertEquals(keysArray.length, 3); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java index e7ee0ab..4d4043b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java @@ -60,9 +60,10 @@ import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.BulkLoadHFileRequest; import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.TestHRegionServerBulkLoad; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; @@ -76,7 +77,7 @@ import com.google.protobuf.ServiceException; /** * Test cases for the atomic load error handling of the bulk load functionality. */ -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestLoadIncrementalHFilesSplitRecovery { final static Log LOG = LogFactory.getLog(TestHRegionServerBulkLoad.class); @@ -274,7 +275,8 @@ public class TestLoadIncrementalHFilesSplitRecovery { try (Connection connection = ConnectionFactory.createConnection(this.util.getConfiguration())) { setupTable(connection, table, 10); LoadIncrementalHFiles lih = new LoadIncrementalHFiles(util.getConfiguration()) { - protected List tryAtomicRegionLoad(final HConnection conn, + @Override + protected List tryAtomicRegionLoad(final Connection conn, TableName tableName, final byte[] first, Collection lqis) throws IOException { int i = attmptedCalls.incrementAndGet(); @@ -348,7 +350,8 @@ public class TestLoadIncrementalHFilesSplitRecovery { // files to fail when attempt to atomically import. This is recoverable. 
final AtomicInteger attemptedCalls = new AtomicInteger(); LoadIncrementalHFiles lih2 = new LoadIncrementalHFiles(util.getConfiguration()) { - protected void bulkLoadPhase(final Table htable, final HConnection conn, + @Override + protected void bulkLoadPhase(final Table htable, final Connection conn, ExecutorService pool, Deque queue, final Multimap regionGroups) throws IOException { int i = attemptedCalls.incrementAndGet(); @@ -390,9 +393,10 @@ public class TestLoadIncrementalHFilesSplitRecovery { final AtomicInteger countedLqis= new AtomicInteger(); LoadIncrementalHFiles lih = new LoadIncrementalHFiles( util.getConfiguration()) { + @Override protected List groupOrSplit( Multimap regionGroups, - final LoadQueueItem item, final HTable htable, + final LoadQueueItem item, final Table htable, final Pair startEndKeys) throws IOException { List lqis = super.groupOrSplit(regionGroups, item, htable, startEndKeys); if (lqis != null) { @@ -426,9 +430,10 @@ public class TestLoadIncrementalHFilesSplitRecovery { util.getConfiguration()) { int i = 0; + @Override protected List groupOrSplit( Multimap regionGroups, - final LoadQueueItem item, final HTable table, + final LoadQueueItem item, final Table table, final Pair startEndKeys) throws IOException { i++; @@ -489,8 +494,7 @@ public class TestLoadIncrementalHFilesSplitRecovery { dir = buildBulkFiles(tableName, 3); // Mess it up by leaving a hole in the hbase:meta - List regionInfos = MetaTableAccessor.getTableRegions(util.getZooKeeperWatcher(), - connection, tableName); + List regionInfos = MetaTableAccessor.getTableRegions(connection, tableName); for (HRegionInfo regionInfo : regionInfos) { if (Bytes.equals(regionInfo.getStartKey(), HConstants.EMPTY_BYTE_ARRAY)) { MetaTableAccessor.deleteRegion(connection, regionInfo); @@ -508,8 +512,7 @@ public class TestLoadIncrementalHFilesSplitRecovery { table.close(); // Make sure at least the one region that still exists can be found. 
- regionInfos = MetaTableAccessor.getTableRegions(util.getZooKeeperWatcher(), - connection, tableName); + regionInfos = MetaTableAccessor.getTableRegions(connection, tableName); assertTrue(regionInfos.size() >= 1); this.assertExpectedTable(connection, tableName, ROWCOUNT, 2); @@ -546,4 +549,4 @@ public class TestLoadIncrementalHFilesSplitRecovery { if (t != null) t.close(); } } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableInputFormat.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableInputFormat.java index d8f6b24..a46e76a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableInputFormat.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableInputFormat.java @@ -33,12 +33,13 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.io.NullWritable; import org.apache.hadoop.mapreduce.Job; @@ -55,7 +56,7 @@ import org.junit.experimental.categories.Category; * tested in a MapReduce job to see if that is handed over and done properly * too. */ -@Category(LargeTests.class) +@Category({VerySlowMapReduceTests.class, LargeTests.class}) public class TestMultiTableInputFormat { static final Log LOG = LogFactory.getLog(TestMultiTableInputFormat.class); @@ -75,10 +76,11 @@ public class TestMultiTableInputFormat { TEST_UTIL.startMiniCluster(3); // create and fill table for (int i = 0; i < 3; i++) { - HTable table = - TEST_UTIL.createTable(TableName.valueOf(TABLE_NAME + String.valueOf(i)), INPUT_FAMILY); - TEST_UTIL.createMultiRegions(TEST_UTIL.getConfiguration(), table, INPUT_FAMILY, 4); - TEST_UTIL.loadTable(table, INPUT_FAMILY, false); + try (HTable table = + TEST_UTIL.createTable(TableName.valueOf(TABLE_NAME + String.valueOf(i)), INPUT_FAMILY)) { + TEST_UTIL.createMultiRegions(TEST_UTIL.getConfiguration(), table, INPUT_FAMILY, 4); + TEST_UTIL.loadTable(table, INPUT_FAMILY, false); + } } // start MR cluster TEST_UTIL.startMiniMapReduceCluster(); @@ -138,6 +140,7 @@ public class TestMultiTableInputFormat { private String first = null; private String last = null; + @Override protected void reduce(ImmutableBytesWritable key, Iterable values, Context context) throws IOException, InterruptedException { @@ -153,6 +156,7 @@ public class TestMultiTableInputFormat { assertEquals(3, count); } + @Override protected void cleanup(Context context) throws IOException, InterruptedException { Configuration c = context.getConfiguration(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java index f33ac13..20d577d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java +++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java @@ -37,6 +37,7 @@ import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.mapreduce.Job; import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; @@ -53,7 +54,7 @@ import static org.junit.Assert.assertTrue; * on our tables is simple - take every row in the table, reverse the value of * a particular cell, and write it back to the table. */ -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestMultithreadedTableMapper { private static final Log LOG = LogFactory.getLog(TestMultithreadedTableMapper.class); private static final HBaseTestingUtility UTIL = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestRowCounter.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestRowCounter.java index 53813d9..59854ee 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestRowCounter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestRowCounter.java @@ -25,15 +25,18 @@ import static org.junit.Assert.fail; import java.io.ByteArrayOutputStream; import java.io.IOException; import java.io.PrintStream; +import java.sql.Time; import java.util.ArrayList; +import java.util.Arrays; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.mapreduce.RowCounter.RowCounterMapper; @@ -50,32 +53,23 @@ import org.junit.experimental.categories.Category; /** * Test the rowcounter map reduce job. 
*/ -@Category(MediumTests.class) +@Category({MapReduceTests.class, MediumTests.class}) public class TestRowCounter { final Log LOG = LogFactory.getLog(getClass()); - private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); - private final static String TABLE_NAME = "testRowCounter"; - private final static String COL_FAM = "col_fam"; - private final static String COL1 = "c1"; - private final static String COL2 = "c2"; - private final static String COMPOSITE_COLUMN = "C:A:A"; - private final static int TOTAL_ROWS = 10; - private final static int ROWS_WITH_ONE_COL = 2; /** * @throws java.lang.Exception */ @BeforeClass - public static void setUpBeforeClass() - throws Exception { + public static void setUpBeforeClass() throws Exception { TEST_UTIL.startMiniCluster(); TEST_UTIL.startMiniMapReduceCluster(); Table table = TEST_UTIL.createTable(TableName.valueOf(TABLE_NAME), Bytes.toBytes(COL_FAM)); @@ -87,8 +81,7 @@ public class TestRowCounter { * @throws java.lang.Exception */ @AfterClass - public static void tearDownAfterClass() - throws Exception { + public static void tearDownAfterClass() throws Exception { TEST_UTIL.shutdownMiniCluster(); TEST_UTIL.shutdownMiniMapReduceCluster(); } @@ -99,9 +92,10 @@ public class TestRowCounter { * @throws Exception */ @Test - public void testRowCounterNoColumn() - throws Exception { - String[] args = new String[] {TABLE_NAME}; + public void testRowCounterNoColumn() throws Exception { + String[] args = new String[] { + TABLE_NAME + }; runRowCount(args, 10); } @@ -112,9 +106,10 @@ public class TestRowCounter { * @throws Exception */ @Test - public void testRowCounterExclusiveColumn() - throws Exception { - String[] args = new String[] {TABLE_NAME, COL_FAM + ":" + COL1}; + public void testRowCounterExclusiveColumn() throws Exception { + String[] args = new String[] { + TABLE_NAME, COL_FAM + ":" + COL1 + }; runRowCount(args, 8); } @@ -125,9 +120,10 @@ public class TestRowCounter { * @throws Exception */ @Test - public void testRowCounterColumnWithColonInQualifier() - throws Exception { - String[] args = new String[] {TABLE_NAME, COL_FAM + ":" + COMPOSITE_COLUMN}; + public void testRowCounterColumnWithColonInQualifier() throws Exception { + String[] args = new String[] { + TABLE_NAME, COL_FAM + ":" + COMPOSITE_COLUMN + }; runRowCount(args, 8); } @@ -138,20 +134,20 @@ public class TestRowCounter { * @throws Exception */ @Test - public void testRowCounterHiddenColumn() - throws Exception { - String[] args = new String[] {TABLE_NAME, COL_FAM + ":" + COL2}; + public void testRowCounterHiddenColumn() throws Exception { + String[] args = new String[] { + TABLE_NAME, COL_FAM + ":" + COL2 + }; runRowCount(args, 10); } - /** + /** * Test a case when the timerange is specified with --starttime and --endtime options * * @throws Exception */ @Test - public void testRowCounterTimeRange() - throws Exception { + public void testRowCounterTimeRange() throws Exception { final byte[] family = Bytes.toBytes(COL_FAM); final byte[] col1 = Bytes.toBytes(COL1); Put put1 = new Put(Bytes.toBytes("row_timerange_" + 1)); @@ -161,7 +157,7 @@ public class TestRowCounter { long ts; // clean up content of TABLE_NAME - HTable table = TEST_UTIL.truncateTable(TableName.valueOf(TABLE_NAME)); + HTable table = TEST_UTIL.deleteTableData(TableName.valueOf(TABLE_NAME)); ts = System.currentTimeMillis(); put1.add(family, col1, ts, Bytes.toBytes("val1")); table.put(put1); @@ -174,20 +170,32 @@ public class TestRowCounter { table.put(put3); table.close(); - String[] args = new 
String[] {TABLE_NAME, COL_FAM + ":" + COL1, "--starttime=" + 0, - "--endtime=" + ts}; + String[] args = new String[] { + TABLE_NAME, COL_FAM + ":" + COL1, + "--starttime=" + 0, + "--endtime=" + ts + }; runRowCount(args, 1); - args = new String[] {TABLE_NAME, COL_FAM + ":" + COL1, "--starttime=" + 0, - "--endtime=" + (ts - 10)}; + args = new String[] { + TABLE_NAME, COL_FAM + ":" + COL1, + "--starttime=" + 0, + "--endtime=" + (ts - 10) + }; runRowCount(args, 1); - args = new String[] {TABLE_NAME, COL_FAM + ":" + COL1, "--starttime=" + ts, - "--endtime=" + (ts + 1000)}; + args = new String[] { + TABLE_NAME, COL_FAM + ":" + COL1, + "--starttime=" + ts, + "--endtime=" + (ts + 1000) + }; runRowCount(args, 2); - args = new String[] {TABLE_NAME, COL_FAM + ":" + COL1, "--starttime=" + (ts - 30 * 1000), - "--endtime=" + (ts + 30 * 1000),}; + args = new String[] { + TABLE_NAME, COL_FAM + ":" + COL1, + "--starttime=" + (ts - 30 * 1000), + "--endtime=" + (ts + 30 * 1000), + }; runRowCount(args, 3); } @@ -198,15 +206,16 @@ public class TestRowCounter { * @param expectedCount the expected row count (result of map reduce job). * @throws Exception */ - private void runRowCount(String[] args, int expectedCount) - throws Exception { - GenericOptionsParser opts = new GenericOptionsParser(TEST_UTIL.getConfiguration(), args); + private void runRowCount(String[] args, int expectedCount) throws Exception { + GenericOptionsParser opts = new GenericOptionsParser( + TEST_UTIL.getConfiguration(), args); Configuration conf = opts.getConfiguration(); args = opts.getRemainingArgs(); Job job = RowCounter.createSubmittableJob(conf, args); job.waitForCompletion(true); assertTrue(job.isSuccessful()); - Counter counter = job.getCounters().findCounter(RowCounterMapper.Counters.ROWS); + Counter counter = job.getCounters().findCounter( + RowCounterMapper.Counters.ROWS); assertEquals(expectedCount, counter.getValue()); } @@ -217,8 +226,7 @@ public class TestRowCounter { * @param table * @throws IOException */ - private static void writeRows(Table table) - throws IOException { + private static void writeRows(Table table) throws IOException { final byte[] family = Bytes.toBytes(COL_FAM); final byte[] value = Bytes.toBytes("abcd"); final byte[] col1 = Bytes.toBytes(COL1); @@ -250,11 +258,10 @@ public class TestRowCounter { * test main method. 
Import should print help and call System.exit */ @Test - public void testImportMain() - throws Exception { + public void testImportMain() throws Exception { PrintStream oldPrintStream = System.err; SecurityManager SECURITY_MANAGER = System.getSecurityManager(); - LauncherSecurityManager newSecurityManager = new LauncherSecurityManager(); + LauncherSecurityManager newSecurityManager= new LauncherSecurityManager(); System.setSecurityManager(newSecurityManager); ByteArrayOutputStream data = new ByteArrayOutputStream(); String[] args = {}; @@ -268,10 +275,11 @@ public class TestRowCounter { } catch (SecurityException e) { assertEquals(-1, newSecurityManager.getExitCode()); assertTrue(data.toString().contains("Wrong number of parameters:")); - assertTrue(data.toString().contains("Usage: RowCounter [options] " + - "[--starttime=[start] --endtime=[end] " + - "[--range=[startKey],[endKey]] " + - "[ ...]")); + assertTrue(data.toString().contains( + "Usage: RowCounter [options] " + + "[--starttime=[start] --endtime=[end] " + + "[--range=[startKey],[endKey]] " + + "[ ...]")); assertTrue(data.toString().contains("-Dhbase.client.scanner.caching=100")); assertTrue(data.toString().contains("-Dmapreduce.map.speculative=false")); } @@ -284,13 +292,14 @@ public class TestRowCounter { fail("should be SecurityException"); } catch (SecurityException e) { assertEquals(-1, newSecurityManager.getExitCode()); - assertTrue(data.toString().contains("Please specify range in such format as \"--range=a,b\" or, with only one boundary," + - - " \"--range=,b\" or \"--range=a,\"")); - assertTrue(data.toString().contains("Usage: RowCounter [options] " + - "[--starttime=[start] --endtime=[end] " + - "[--range=[startKey],[endKey]] " + - "[ ...]")); + assertTrue(data.toString().contains( + "Please specify range in such format as \"--range=a,b\" or, with only one boundary," + + " \"--range=,b\" or \"--range=a,\"")); + assertTrue(data.toString().contains( + "Usage: RowCounter [options] " + + "[--starttime=[start] --endtime=[end] " + + "[--range=[startKey],[endKey]] " + + "[ ...]")); } } finally { @@ -299,4 +308,5 @@ public class TestRowCounter { } } + } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFiles.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFiles.java index 3e5a1ba..e8aca29 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFiles.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFiles.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.mapreduce; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.security.UserProvider; import org.apache.hadoop.hbase.security.access.AccessControlLists; import org.apache.hadoop.hbase.security.access.SecureTestUtil; @@ -38,7 +39,7 @@ import org.junit.experimental.categories.Category; * invaluable as it verifies the other mechanisms that need to be * supported as part of a LoadIncrementalFiles call. 
*/ -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestSecureLoadIncrementalHFiles extends TestLoadIncrementalHFiles{ @BeforeClass diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFilesSplitRecovery.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFilesSplitRecovery.java index ea13845..0e877ad 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFilesSplitRecovery.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSecureLoadIncrementalHFilesSplitRecovery.java @@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.mapreduce; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.security.UserProvider; import org.apache.hadoop.hbase.security.access.AccessControlLists; import org.apache.hadoop.hbase.security.access.SecureTestUtil; @@ -40,7 +41,7 @@ import org.junit.experimental.categories.Category; * invaluable as it verifies the other mechanisms that need to be * supported as part of a LoadIncrementalFiles call. */ -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestSecureLoadIncrementalHFilesSplitRecovery extends TestLoadIncrementalHFilesSplitRecovery { //This "overrides" the parent static method diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSimpleTotalOrderPartitioner.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSimpleTotalOrderPartitioner.java index 443b94f..119df80 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSimpleTotalOrderPartitioner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestSimpleTotalOrderPartitioner.java @@ -23,6 +23,7 @@ import static org.junit.Assert.assertEquals; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.experimental.categories.Category; @@ -32,7 +33,7 @@ import org.junit.Test; /** * Test of simple partitioner. 
*/ -@Category(SmallTests.class) +@Category({MapReduceTests.class, SmallTests.class}) public class TestSimpleTotalOrderPartitioner { protected final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); Configuration conf = TEST_UTIL.getConfiguration(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatBase.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatBase.java index c8b6e08..c757a2d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatBase.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatBase.java @@ -25,11 +25,12 @@ import java.net.InetAddress; import java.net.UnknownHostException; import javax.naming.NamingException; + import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({SmallTests.class}) public class TestTableInputFormatBase { @Test public void testTableInputFormatBaseReverseDNSForIPv6() diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScan1.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScan1.java index 490e89a..0503f19 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScan1.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScan1.java @@ -21,6 +21,7 @@ package org.apache.hadoop.hbase.mapreduce; import java.io.IOException; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -28,7 +29,7 @@ import org.junit.experimental.categories.Category; * TestTableInputFormatScan part 1. * @see TestTableInputFormatScanBase */ -@Category(LargeTests.class) +@Category({VerySlowMapReduceTests.class, LargeTests.class}) public class TestTableInputFormatScan1 extends TestTableInputFormatScanBase { /** @@ -96,4 +97,93 @@ public class TestTableInputFormatScan1 extends TestTableInputFormatScanBase { testScan(null, "opp", "opo"); } + /** + * Tests a MR scan using a specific number of mappers. The test table has 25 regions, + * and all region sizes are set as 0 as default. The average region size is 1 (the smallest + * positive). When we set hbase.mapreduce.input.ratio as -1, all regions will be cut into two + * MapReduce input splits, the number of MR input splits should be 50; when we set hbase + * .mapreduce.input.ratio as 100, the sum of all region sizes is less than the average region + * size, all regions will be combined into 1 MapReduce input split. + * + * @throws IOException + * @throws ClassNotFoundException + * @throws InterruptedException + */ + @Test + public void testGetSplits() throws IOException, InterruptedException, ClassNotFoundException { + testNumOfSplits("-1", 50); + testNumOfSplits("100", 1); + } + + /** + * Tests the getSplitKey() method in TableInputFormatBase.java + * + * @throws IOException + * @throws ClassNotFoundException + * @throws InterruptedException + */ + @Test + public void testGetSplitsPoint() throws IOException, InterruptedException, + ClassNotFoundException { + // Test Case 1: "aaabcdef" and "aaaff", split point is "aaad". 
+ byte[] start1 = { 'a', 'a', 'a', 'b', 'c', 'd', 'e', 'f' }; + byte[] end1 = { 'a', 'a', 'a', 'f', 'f' }; + byte[] splitPoint1 = { 'a', 'a', 'a', 'd' }; + testGetSplitKey(start1, end1, splitPoint1, true); + + // Test Case 2: "111000" and "1125790", split point is "111b". + byte[] start2 = { '1', '1', '1', '0', '0', '0' }; + byte[] end2 = { '1', '1', '2', '5', '7', '9', '0' }; + byte[] splitPoint2 = { '1', '1', '1', 'b' }; + testGetSplitKey(start2, end2, splitPoint2, true); + + // Test Case 3: "aaaaaa" and "aab", split point is "aaap". + byte[] start3 = { 'a', 'a', 'a', 'a', 'a', 'a' }; + byte[] end3 = { 'a', 'a', 'b' }; + byte[] splitPoint3 = { 'a', 'a', 'a', 'p' }; + testGetSplitKey(start3, end3, splitPoint3, true); + + // Test Case 4: "aaa" and "aaaz", split point is "aaaM". + byte[] start4 = { 'a', 'a', 'a' }; + byte[] end4 = { 'a', 'a', 'a', 'z' }; + byte[] splitPoint4 = { 'a', 'a', 'a', 'M' }; + testGetSplitKey(start4, end4, splitPoint4, true); + + // Test Case 5: "aaa" and "aaba", split point is "aaap". + byte[] start5 = { 'a', 'a', 'a' }; + byte[] end5 = { 'a', 'a', 'b', 'a' }; + byte[] splitPoint5 = { 'a', 'a', 'a', 'p' }; + testGetSplitKey(start5, end5, splitPoint5, true); + + // Test Case 6: empty key and "hhhqqqwww", split point is "h" + byte[] start6 = {}; + byte[] end6 = { 'h', 'h', 'h', 'q', 'q', 'q', 'w', 'w' }; + byte[] splitPoint6 = { 'h' }; + testGetSplitKey(start6, end6, splitPoint6, true); + + // Test Case 7: "ffffaaa" and empty key, split point depends on the mode we choose(text key or + // binary key). + byte[] start7 = { 'f', 'f', 'f', 'f', 'a', 'a', 'a' }; + byte[] end7 = {}; + byte[] splitPointText7 = { 'f', '~', '~', '~', '~', '~', '~' }; + byte[] splitPointBinary7 = { 'f', 127, 127, 127, 127, 127, 127 }; + testGetSplitKey(start7, end7, splitPointText7, true); + testGetSplitKey(start7, end7, splitPointBinary7, false); + + // Test Case 8: both start key and end key are empty. Split point depends on the mode we + // choose (text key or binary key). + byte[] start8 = {}; + byte[] end8 = {}; + byte[] splitPointText8 = { 'O' }; + byte[] splitPointBinary8 = { 0 }; + testGetSplitKey(start8, end8, splitPointText8, true); + testGetSplitKey(start8, end8, splitPointBinary8, false); + + // Test Case 9: Binary Key example + byte[] start9 = { 13, -19, 126, 127 }; + byte[] end9 = { 13, -19, 127, 0 }; + byte[] splitPoint9 = { 13, -19, 127, -64 }; + testGetSplitKey(start9, end9, splitPoint9, false); + } + } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScan2.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScan2.java index e022d6b..02f893f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScan2.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScan2.java @@ -21,6 +21,7 @@ package org.apache.hadoop.hbase.mapreduce; import java.io.IOException; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -28,7 +29,7 @@ import org.junit.experimental.categories.Category; * TestTableInputFormatScan part 2. 
* @see TestTableInputFormatScanBase */ -@Category(LargeTests.class) +@Category({VerySlowMapReduceTests.class, LargeTests.class}) public class TestTableInputFormatScan2 extends TestTableInputFormatScanBase { /** diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java index 750ea39..eb42092 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormatScanBase.java @@ -22,6 +22,8 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; import java.io.IOException; +import java.util.Arrays; +import java.util.List; import java.util.Map; import java.util.NavigableMap; @@ -37,12 +39,15 @@ import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.io.NullWritable; +import org.apache.hadoop.mapreduce.InputSplit; import org.apache.hadoop.mapreduce.Job; import org.apache.hadoop.mapreduce.Reducer; import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; import org.junit.AfterClass; +import org.junit.Assert; import org.junit.BeforeClass; + /** *

    * Tests various scan start and stop row scenarios. This is set in a scan and @@ -240,5 +245,42 @@ public abstract class TestTableInputFormatScanBase { LOG.info("After map/reduce completion - job " + jobName); } + + /** + * Tests a MR scan using data skew auto-balance + * + * @throws IOException + * @throws ClassNotFoundException + * @throws InterruptedException + */ + public void testNumOfSplits(String ratio, int expectedNumOfSplits) throws IOException, + InterruptedException, + ClassNotFoundException { + String jobName = "TestJobForNumOfSplits"; + LOG.info("Before map/reduce startup - job " + jobName); + Configuration c = new Configuration(TEST_UTIL.getConfiguration()); + Scan scan = new Scan(); + scan.addFamily(INPUT_FAMILY); + c.set("hbase.mapreduce.input.autobalance", "true"); + c.set("hbase.mapreduce.input.autobalance.maxskewratio", ratio); + c.set(KEY_STARTROW, ""); + c.set(KEY_LASTROW, ""); + Job job = new Job(c, jobName); + TableMapReduceUtil.initTableMapperJob(Bytes.toString(TABLE_NAME), scan, ScanMapper.class, + ImmutableBytesWritable.class, ImmutableBytesWritable.class, job); + TableInputFormat tif = new TableInputFormat(); + tif.setConf(job.getConfiguration()); + Assert.assertEquals(new String(TABLE_NAME), new String(table.getTableName())); + List splits = tif.getSplits(job); + Assert.assertEquals(expectedNumOfSplits, splits.size()); + } + + /** + * Tests for the getSplitKey() method in TableInputFormatBase.java + */ + public void testGetSplitKey(byte[] startKey, byte[] endKey, byte[] splitKey, boolean isText) { + byte[] result = TableInputFormatBase.getSplitKey(startKey, endKey, isText); + Assert.assertArrayEquals(splitKey, result); + } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java index 6fb9460..11a35f0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduce.java @@ -18,9 +18,7 @@ */ package org.apache.hadoop.hbase.mapreduce; -import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; import java.io.File; import java.io.IOException; @@ -31,12 +29,13 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.fs.FileUtil; import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.mapreduce.Job; import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; @@ -47,7 +46,8 @@ import org.junit.experimental.categories.Category; * on our tables is simple - take every row in the table, reverse the value of * a particular cell, and write it back to the table. 
*/ -@Category(LargeTests.class) + +@Category({VerySlowMapReduceTests.class, LargeTests.class}) public class TestTableMapReduce extends TestTableMapReduceBase { private static final Log LOG = LogFactory.getLog(TestTableMapReduce.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceUtil.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceUtil.java index 7190145..303a144 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceUtil.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceUtil.java @@ -19,6 +19,7 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNull; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.util.Bytes; @@ -31,7 +32,7 @@ import org.junit.experimental.categories.Category; /** * Test different variants of initTableMapperJob method */ -@Category(SmallTests.class) +@Category({MapReduceTests.class, SmallTests.class}) public class TestTableMapReduceUtil { /* diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java index 62fae77..8d7e2d3 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java @@ -29,12 +29,13 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HDFSBlocksDistribution; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; import org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat.TableSnapshotRegionSplit; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.io.NullWritable; import org.apache.hadoop.mapreduce.InputSplit; @@ -50,7 +51,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.Lists; -@Category(LargeTests.class) +@Category({VerySlowMapReduceTests.class, LargeTests.class}) public class TestTableSnapshotInputFormat extends TableSnapshotInputFormatTestBase { private static final byte[] bbb = Bytes.toBytes("bbb"); @@ -76,37 +77,42 @@ public class TestTableSnapshotInputFormat extends TableSnapshotInputFormatTestBa Configuration conf = UTIL.getConfiguration(); HDFSBlocksDistribution blockDistribution = new HDFSBlocksDistribution(); - Assert.assertEquals(Lists.newArrayList(), tsif.getBestLocations(conf, blockDistribution)); + Assert.assertEquals(Lists.newArrayList(), + TableSnapshotInputFormatImpl.getBestLocations(conf, blockDistribution)); blockDistribution.addHostsAndBlockWeight(new String[] {"h1"}, 1); - Assert.assertEquals(Lists.newArrayList("h1"), tsif.getBestLocations(conf, blockDistribution)); + Assert.assertEquals(Lists.newArrayList("h1"), + TableSnapshotInputFormatImpl.getBestLocations(conf, 
blockDistribution)); blockDistribution.addHostsAndBlockWeight(new String[] {"h1"}, 1); - Assert.assertEquals(Lists.newArrayList("h1"), tsif.getBestLocations(conf, blockDistribution)); + Assert.assertEquals(Lists.newArrayList("h1"), + TableSnapshotInputFormatImpl.getBestLocations(conf, blockDistribution)); blockDistribution.addHostsAndBlockWeight(new String[] {"h2"}, 1); - Assert.assertEquals(Lists.newArrayList("h1"), tsif.getBestLocations(conf, blockDistribution)); + Assert.assertEquals(Lists.newArrayList("h1"), + TableSnapshotInputFormatImpl.getBestLocations(conf, blockDistribution)); blockDistribution = new HDFSBlocksDistribution(); blockDistribution.addHostsAndBlockWeight(new String[] {"h1"}, 10); blockDistribution.addHostsAndBlockWeight(new String[] {"h2"}, 7); blockDistribution.addHostsAndBlockWeight(new String[] {"h3"}, 5); blockDistribution.addHostsAndBlockWeight(new String[] {"h4"}, 1); - Assert.assertEquals(Lists.newArrayList("h1"), tsif.getBestLocations(conf, blockDistribution)); + Assert.assertEquals(Lists.newArrayList("h1"), + TableSnapshotInputFormatImpl.getBestLocations(conf, blockDistribution)); blockDistribution.addHostsAndBlockWeight(new String[] {"h2"}, 2); Assert.assertEquals(Lists.newArrayList("h1", "h2"), - tsif.getBestLocations(conf, blockDistribution)); + TableSnapshotInputFormatImpl.getBestLocations(conf, blockDistribution)); blockDistribution.addHostsAndBlockWeight(new String[] {"h2"}, 3); Assert.assertEquals(Lists.newArrayList("h2", "h1"), - tsif.getBestLocations(conf, blockDistribution)); + TableSnapshotInputFormatImpl.getBestLocations(conf, blockDistribution)); blockDistribution.addHostsAndBlockWeight(new String[] {"h3"}, 6); blockDistribution.addHostsAndBlockWeight(new String[] {"h4"}, 9); Assert.assertEquals(Lists.newArrayList("h2", "h3", "h4", "h1"), - tsif.getBestLocations(conf, blockDistribution)); + TableSnapshotInputFormatImpl.getBestLocations(conf, blockDistribution)); } public static enum TestTableSnapshotCounters { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSplit.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSplit.java index 2de2ac3..59f787f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSplit.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSplit.java @@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.mapreduce; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.util.ReflectionUtils; import org.junit.Assert; @@ -30,7 +31,7 @@ import java.util.HashSet; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; -@Category(SmallTests.class) +@Category({MapReduceTests.class, SmallTests.class}) public class TestTableSplit { @Test public void testHashCode() { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTimeRangeMapRed.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTimeRangeMapRed.java index 9efc77e..b701c35 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTimeRangeMapRed.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTimeRangeMapRed.java @@ -43,6 +43,7 @@ import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.io.ImmutableBytesWritable; 
import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.io.MapWritable; import org.apache.hadoop.io.Text; @@ -55,7 +56,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestTimeRangeMapRed { private final static Log log = LogFactory.getLog(TestTimeRangeMapRed.class); private static final HBaseTestingUtility UTIL = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALPlayer.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALPlayer.java index 14cafdf..68cf8ba 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALPlayer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALPlayer.java @@ -33,12 +33,11 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellUtil; +import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MiniHBaseCluster; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Put; @@ -49,6 +48,8 @@ import org.apache.hadoop.hbase.mapreduce.WALPlayer.WALKeyValueMapper; import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.wal.WALKey; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.LauncherSecurityManager; import org.apache.hadoop.mapreduce.Mapper; @@ -63,7 +64,7 @@ import org.mockito.stubbing.Answer; /** * Basic test for the WALPlayer M/R tool */ -@Category(LargeTests.class) +@Category({MapReduceTests.class, LargeTests.class}) public class TestWALPlayer { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private static MiniHBaseCluster cluster; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALRecordReader.java hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALRecordReader.java index 70ace2c..d9fe0d0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALRecordReader.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestWALRecordReader.java @@ -38,13 +38,14 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.mapreduce.WALInputFormat.WALKeyRecordReader; import org.apache.hadoop.hbase.mapreduce.WALInputFormat.WALRecordReader; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.wal.WALFactory; import org.apache.hadoop.hbase.wal.WALKey; +import org.apache.hadoop.hbase.testclassification.MapReduceTests; +import 
org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.mapreduce.InputSplit; import org.apache.hadoop.mapreduce.MapReduceTestUtil; @@ -57,7 +58,7 @@ import org.junit.experimental.categories.Category; /** * JUnit tests for the WALRecordReader */ -@Category(MediumTests.class) +@Category({MapReduceTests.class, MediumTests.class}) public class TestWALRecordReader { private final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java index d613852..82d224b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java @@ -90,6 +90,7 @@ import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutateResponse; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanRequest; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanResponse; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCode; +import org.apache.hadoop.hbase.quotas.RegionServerQuotaManager; import org.apache.hadoop.hbase.regionserver.CompactionRequestor; import org.apache.hadoop.hbase.regionserver.FlushRequester; import org.apache.hadoop.hbase.regionserver.HRegion; @@ -319,10 +320,15 @@ ClientProtos.ClientService.BlockingInterface, RegionServerServices { return null; } + @Override public TableLockManager getTableLockManager() { return new NullTableLockManager(); } + public RegionServerQuotaManager getRegionServerQuotaManager() { + return null; + } + @Override public void postOpenDeployTasks(HRegion r) throws KeeperException, IOException { @@ -521,6 +527,12 @@ ClientProtos.ClientService.BlockingInterface, RegionServerServices { } @Override + public Set getOnlineTables() { + // TODO Auto-generated method stub + return null; + } + + @Override public Leases getLeases() { // TODO Auto-generated method stub return null; @@ -602,4 +614,4 @@ ClientProtos.ClientService.BlockingInterface, RegionServerServices { throws ServiceException { return null; } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/Mocking.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/Mocking.java deleted file mode 100644 index 10127c8..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/Mocking.java +++ /dev/null @@ -1,110 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.apache.hadoop.hbase.master; - -import static org.junit.Assert.assertNotSame; - -import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.RegionTransition; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.master.RegionState.State; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.KeeperException; - -/** - * Package scoped mocking utility. - */ -public class Mocking { - - static void waitForRegionFailedToCloseAndSetToPendingClose( - AssignmentManager am, HRegionInfo hri) throws InterruptedException { - // Since region server is fake, sendRegionClose will fail, and closing - // region will fail. For testing purpose, moving it back to pending close - boolean wait = true; - while (wait) { - RegionState state = am.getRegionStates().getRegionState(hri); - if (state != null && state.isFailedClose()){ - am.getRegionStates().updateRegionState(hri, State.PENDING_CLOSE); - wait = false; - } else { - Thread.sleep(1); - } - } - } - - static void waitForRegionPendingOpenInRIT(AssignmentManager am, String encodedName) - throws InterruptedException { - // We used to do a check like this: - //!Mocking.verifyRegionState(this.watcher, REGIONINFO, EventType.M_ZK_REGION_OFFLINE)) { - // There is a race condition with this: because we may do the transition to - // RS_ZK_REGION_OPENING before the RIT is internally updated. We need to wait for the - // RIT to be as we need it to be instead. This cannot happen in a real cluster as we - // update the RIT before sending the openRegion request. - - boolean wait = true; - while (wait) { - RegionState state = am.getRegionStates() - .getRegionsInTransition().get(encodedName); - if (state != null && state.isPendingOpen()){ - wait = false; - } else { - Thread.sleep(1); - } - } - } - - /** - * Verifies that the specified region is in the specified state in ZooKeeper. - *

    - * Returns true if region is in transition and in the specified state in - * ZooKeeper. Returns false if the region does not exist in ZK or is in - * a different state. - *

    - * Method synchronizes() with ZK so will yield an up-to-date result but is - * a slow read. - * @param zkw - * @param region - * @param expectedState - * @return true if region exists and is in expected state - * @throws DeserializationException - */ - static boolean verifyRegionState(ZooKeeperWatcher zkw, HRegionInfo region, EventType expectedState) - throws KeeperException, DeserializationException { - String encoded = region.getEncodedName(); - - String node = ZKAssign.getNodeName(zkw, encoded); - zkw.sync(node); - - // Read existing data of the node - byte [] existingBytes = null; - try { - existingBytes = ZKUtil.getDataAndWatch(zkw, node); - } catch (KeeperException.NoNodeException nne) { - return false; - } catch (KeeperException e) { - throw e; - } - if (existingBytes == null) return false; - RegionTransition rt = RegionTransition.parseFrom(existingBytes); - return rt.getEventType().equals(expectedState); - } -} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestActiveMasterManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestActiveMasterManager.java index 9a7351b..34c890e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestActiveMasterManager.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestActiveMasterManager.java @@ -30,11 +30,12 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.client.ClusterConnection; import org.apache.hadoop.hbase.monitoring.MonitoredTask; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.zookeeper.ClusterStatusTracker; import org.apache.hadoop.hbase.zookeeper.MasterAddressTracker; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; @@ -51,7 +52,7 @@ import org.mockito.Mockito; /** * Test the {@link ActiveMasterManager}. 
*/ -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestActiveMasterManager { private final static Log LOG = LogFactory.getLog(TestActiveMasterManager.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @@ -320,4 +321,4 @@ public class TestActiveMasterManager { return activeMasterManager; } } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentListener.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentListener.java index 21f2343..fccff59 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentListener.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentListener.java @@ -28,6 +28,7 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.TableName; @@ -46,7 +47,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestAssignmentListener { private static final Log LOG = LogFactory.getLog(TestAssignmentListener.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java deleted file mode 100644 index 65dc2ab..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java +++ /dev/null @@ -1,1514 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.apache.hadoop.hbase.master; - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotSame; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.HashMap; -import java.util.List; -import java.util.Map; -import java.util.concurrent.atomic.AtomicBoolean; - -import org.apache.hadoop.hbase.CellScannable; -import org.apache.hadoop.hbase.CellUtil; -import org.apache.hadoop.hbase.CoordinatedStateException; -import org.apache.hadoop.hbase.CoordinatedStateManager; -import org.apache.hadoop.hbase.CoordinatedStateManagerFactory; -import org.apache.hadoop.hbase.DoNotRetryIOException; -import org.apache.hadoop.hbase.HBaseConfiguration; -import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.MetaMockingUtil; -import org.apache.hadoop.hbase.RegionException; -import org.apache.hadoop.hbase.RegionTransition; -import org.apache.hadoop.hbase.Server; -import org.apache.hadoop.hbase.ServerLoad; -import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.ZooKeeperConnectionException; -import org.apache.hadoop.hbase.client.ClusterConnection; -import org.apache.hadoop.hbase.client.HConnectionTestingUtility; -import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.coordination.BaseCoordinatedStateManager; -import org.apache.hadoop.hbase.coordination.OpenRegionCoordination; -import org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager; -import org.apache.hadoop.hbase.coordination.ZkOpenRegionCoordination; -import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.executor.ExecutorService; -import org.apache.hadoop.hbase.executor.ExecutorType; -import org.apache.hadoop.hbase.ipc.PayloadCarryingRpcController; -import org.apache.hadoop.hbase.master.RegionState.State; -import org.apache.hadoop.hbase.master.TableLockManager.NullTableLockManager; -import org.apache.hadoop.hbase.master.balancer.LoadBalancerFactory; -import org.apache.hadoop.hbase.master.balancer.SimpleLoadBalancer; -import org.apache.hadoop.hbase.master.handler.EnableTableHandler; -import org.apache.hadoop.hbase.master.handler.ServerShutdownHandler; -import org.apache.hadoop.hbase.protobuf.ProtobufUtil; -import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; -import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.GetRequest; -import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.GetResponse; -import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanRequest; -import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanResponse; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.SplitLogTask.RecoveryMode; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table; -import org.apache.hadoop.hbase.regionserver.RegionOpeningState; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; -import org.apache.hadoop.hbase.util.Threads; -import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; -import org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper; -import 
org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.KeeperException; -import org.apache.zookeeper.KeeperException.NodeExistsException; -import org.apache.zookeeper.Watcher; -import org.junit.After; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; -import org.junit.experimental.categories.Category; -import org.mockito.Mockito; -import org.mockito.internal.util.reflection.Whitebox; -import org.mockito.invocation.InvocationOnMock; -import org.mockito.stubbing.Answer; - -import com.google.protobuf.RpcController; -import com.google.protobuf.ServiceException; - - -/** - * Test {@link AssignmentManager} - */ -@Category(MediumTests.class) -public class TestAssignmentManager { - private static final HBaseTestingUtility HTU = new HBaseTestingUtility(); - private static final ServerName SERVERNAME_A = - ServerName.valueOf("example.org", 1234, 5678); - private static final ServerName SERVERNAME_B = - ServerName.valueOf("example.org", 0, 5678); - private static final HRegionInfo REGIONINFO = - new HRegionInfo(TableName.valueOf("t"), - HConstants.EMPTY_START_ROW, HConstants.EMPTY_START_ROW); - private static int assignmentCount; - private static boolean enabling = false; - - // Mocked objects or; get redone for each test. - private Server server; - private ServerManager serverManager; - private ZooKeeperWatcher watcher; - private CoordinatedStateManager cp; - private MetaTableLocator mtl; - private LoadBalancer balancer; - private HMaster master; - private ClusterConnection connection; - - @BeforeClass - public static void beforeClass() throws Exception { - HTU.getConfiguration().setBoolean("hbase.assignment.usezk", true); - HTU.startMiniZKCluster(); - } - - @AfterClass - public static void afterClass() throws IOException { - HTU.shutdownMiniZKCluster(); - } - - @Before - public void before() throws ZooKeeperConnectionException, IOException { - // TODO: Make generic versions of what we do below and put up in a mocking - // utility class or move up into HBaseTestingUtility. - - // Mock a Server. Have it return a legit Configuration and ZooKeeperWatcher. - // If abort is called, be sure to fail the test (don't just swallow it - // silently as is mockito default). - this.server = Mockito.mock(Server.class); - Mockito.when(server.getServerName()).thenReturn(ServerName.valueOf("master,1,1")); - Mockito.when(server.getConfiguration()).thenReturn(HTU.getConfiguration()); - this.watcher = - new ZooKeeperWatcher(HTU.getConfiguration(), "mockedServer", this.server, true); - Mockito.when(server.getZooKeeper()).thenReturn(this.watcher); - Mockito.doThrow(new RuntimeException("Aborted")). - when(server).abort(Mockito.anyString(), (Throwable)Mockito.anyObject()); - - cp = new ZkCoordinatedStateManager(); - cp.initialize(this.server); - cp.start(); - - mtl = Mockito.mock(MetaTableLocator.class); - - Mockito.when(server.getCoordinatedStateManager()).thenReturn(cp); - Mockito.when(server.getMetaTableLocator()).thenReturn(mtl); - - // Get a connection w/ mocked up common methods. - this.connection = - (ClusterConnection)HConnectionTestingUtility.getMockedConnection(HTU.getConfiguration()); - - // Make it so we can get a catalogtracker from servermanager.. .needed - // down in guts of server shutdown handler. - Mockito.when(server.getConnection()).thenReturn(connection); - - // Mock a ServerManager. 
Say server SERVERNAME_{A,B} are online. Also - // make it so if close or open, we return 'success'. - this.serverManager = Mockito.mock(ServerManager.class); - Mockito.when(this.serverManager.isServerOnline(SERVERNAME_A)).thenReturn(true); - Mockito.when(this.serverManager.isServerOnline(SERVERNAME_B)).thenReturn(true); - Mockito.when(this.serverManager.getDeadServers()).thenReturn(new DeadServer()); - final Map onlineServers = new HashMap(); - onlineServers.put(SERVERNAME_B, ServerLoad.EMPTY_SERVERLOAD); - onlineServers.put(SERVERNAME_A, ServerLoad.EMPTY_SERVERLOAD); - Mockito.when(this.serverManager.getOnlineServersList()).thenReturn( - new ArrayList(onlineServers.keySet())); - Mockito.when(this.serverManager.getOnlineServers()).thenReturn(onlineServers); - - List avServers = new ArrayList(); - avServers.addAll(onlineServers.keySet()); - Mockito.when(this.serverManager.createDestinationServersList()).thenReturn(avServers); - Mockito.when(this.serverManager.createDestinationServersList(null)).thenReturn(avServers); - - Mockito.when(this.serverManager.sendRegionClose(SERVERNAME_A, REGIONINFO, -1)). - thenReturn(true); - Mockito.when(this.serverManager.sendRegionClose(SERVERNAME_B, REGIONINFO, -1)). - thenReturn(true); - // Ditto on open. - Mockito.when(this.serverManager.sendRegionOpen(SERVERNAME_A, REGIONINFO, -1, null)). - thenReturn(RegionOpeningState.OPENED); - Mockito.when(this.serverManager.sendRegionOpen(SERVERNAME_B, REGIONINFO, -1, null)). - thenReturn(RegionOpeningState.OPENED); - this.master = Mockito.mock(HMaster.class); - - Mockito.when(this.master.getServerManager()).thenReturn(serverManager); - } - - @After public void after() throws KeeperException, IOException { - if (this.watcher != null) { - // Clean up all znodes - ZKAssign.deleteAllNodes(this.watcher); - this.watcher.close(); - this.cp.stop(); - } - if (this.connection != null) this.connection.close(); - } - - /** - * Test a balance going on at same time as a master failover - * - * @throws IOException - * @throws KeeperException - * @throws InterruptedException - * @throws DeserializationException - */ - @Test(timeout = 60000) - public void testBalanceOnMasterFailoverScenarioWithOpenedNode() - throws IOException, KeeperException, InterruptedException, ServiceException, - DeserializationException, CoordinatedStateException { - AssignmentManagerWithExtrasForTesting am = - setUpMockedAssignmentManager(this.server, this.serverManager); - try { - createRegionPlanAndBalance(am, SERVERNAME_A, SERVERNAME_B, REGIONINFO); - startFakeFailedOverMasterAssignmentManager(am, this.watcher); - while (!am.processRITInvoked) Thread.sleep(1); - // As part of the failover cleanup, the balancing region plan is removed. - // So a random server will be used to open the region. For testing purpose, - // let's assume it is going to open on server b: - am.addPlan(REGIONINFO.getEncodedName(), new RegionPlan(REGIONINFO, null, SERVERNAME_B)); - - Mocking.waitForRegionFailedToCloseAndSetToPendingClose(am, REGIONINFO); - - // Now fake the region closing successfully over on the regionserver; the - // regionserver will have set the region in CLOSED state. This will - // trigger callback into AM. The below zk close call is from the RS close - // region handler duplicated here because its down deep in a private - // method hard to expose. 
- int versionid = - ZKAssign.transitionNodeClosed(this.watcher, REGIONINFO, SERVERNAME_A, -1); - assertNotSame(versionid, -1); - Mocking.waitForRegionPendingOpenInRIT(am, REGIONINFO.getEncodedName()); - - // Get current versionid else will fail on transition from OFFLINE to - // OPENING below - versionid = ZKAssign.getVersion(this.watcher, REGIONINFO); - assertNotSame(-1, versionid); - // This uglyness below is what the openregionhandler on RS side does. - versionid = ZKAssign.transitionNode(server.getZooKeeper(), REGIONINFO, - SERVERNAME_B, EventType.M_ZK_REGION_OFFLINE, - EventType.RS_ZK_REGION_OPENING, versionid); - assertNotSame(-1, versionid); - // Move znode from OPENING to OPENED as RS does on successful open. - versionid = ZKAssign.transitionNodeOpened(this.watcher, REGIONINFO, - SERVERNAME_B, versionid); - assertNotSame(-1, versionid); - am.gate.set(false); - // Block here until our znode is cleared or until this test times out. - ZKAssign.blockUntilNoRIT(watcher); - } finally { - am.getExecutorService().shutdown(); - am.shutdown(); - } - } - - @Test(timeout = 60000) - public void testBalanceOnMasterFailoverScenarioWithClosedNode() - throws IOException, KeeperException, InterruptedException, ServiceException, - DeserializationException, CoordinatedStateException { - AssignmentManagerWithExtrasForTesting am = - setUpMockedAssignmentManager(this.server, this.serverManager); - try { - createRegionPlanAndBalance(am, SERVERNAME_A, SERVERNAME_B, REGIONINFO); - startFakeFailedOverMasterAssignmentManager(am, this.watcher); - while (!am.processRITInvoked) Thread.sleep(1); - // As part of the failover cleanup, the balancing region plan is removed. - // So a random server will be used to open the region. For testing purpose, - // let's assume it is going to open on server b: - am.addPlan(REGIONINFO.getEncodedName(), new RegionPlan(REGIONINFO, null, SERVERNAME_B)); - - Mocking.waitForRegionFailedToCloseAndSetToPendingClose(am, REGIONINFO); - - // Now fake the region closing successfully over on the regionserver; the - // regionserver will have set the region in CLOSED state. This will - // trigger callback into AM. The below zk close call is from the RS close - // region handler duplicated here because its down deep in a private - // method hard to expose. - int versionid = - ZKAssign.transitionNodeClosed(this.watcher, REGIONINFO, SERVERNAME_A, -1); - assertNotSame(versionid, -1); - am.gate.set(false); - Mocking.waitForRegionPendingOpenInRIT(am, REGIONINFO.getEncodedName()); - - // Get current versionid else will fail on transition from OFFLINE to - // OPENING below - versionid = ZKAssign.getVersion(this.watcher, REGIONINFO); - assertNotSame(-1, versionid); - // This uglyness below is what the openregionhandler on RS side does. - versionid = ZKAssign.transitionNode(server.getZooKeeper(), REGIONINFO, - SERVERNAME_B, EventType.M_ZK_REGION_OFFLINE, - EventType.RS_ZK_REGION_OPENING, versionid); - assertNotSame(-1, versionid); - // Move znode from OPENING to OPENED as RS does on successful open. - versionid = ZKAssign.transitionNodeOpened(this.watcher, REGIONINFO, - SERVERNAME_B, versionid); - assertNotSame(-1, versionid); - - // Block here until our znode is cleared or until this test timesout. 
- ZKAssign.blockUntilNoRIT(watcher); - } finally { - am.getExecutorService().shutdown(); - am.shutdown(); - } - } - - @Test(timeout = 60000) - public void testBalanceOnMasterFailoverScenarioWithOfflineNode() - throws IOException, KeeperException, InterruptedException, ServiceException, - DeserializationException, CoordinatedStateException { - AssignmentManagerWithExtrasForTesting am = - setUpMockedAssignmentManager(this.server, this.serverManager); - try { - createRegionPlanAndBalance(am, SERVERNAME_A, SERVERNAME_B, REGIONINFO); - startFakeFailedOverMasterAssignmentManager(am, this.watcher); - while (!am.processRITInvoked) Thread.sleep(1); - // As part of the failover cleanup, the balancing region plan is removed. - // So a random server will be used to open the region. For testing purpose, - // let's assume it is going to open on server b: - am.addPlan(REGIONINFO.getEncodedName(), new RegionPlan(REGIONINFO, null, SERVERNAME_B)); - - Mocking.waitForRegionFailedToCloseAndSetToPendingClose(am, REGIONINFO); - - // Now fake the region closing successfully over on the regionserver; the - // regionserver will have set the region in CLOSED state. This will - // trigger callback into AM. The below zk close call is from the RS close - // region handler duplicated here because its down deep in a private - // method hard to expose. - int versionid = - ZKAssign.transitionNodeClosed(this.watcher, REGIONINFO, SERVERNAME_A, -1); - assertNotSame(versionid, -1); - Mocking.waitForRegionPendingOpenInRIT(am, REGIONINFO.getEncodedName()); - - am.gate.set(false); - // Get current versionid else will fail on transition from OFFLINE to - // OPENING below - versionid = ZKAssign.getVersion(this.watcher, REGIONINFO); - assertNotSame(-1, versionid); - // This uglyness below is what the openregionhandler on RS side does. - versionid = ZKAssign.transitionNode(server.getZooKeeper(), REGIONINFO, - SERVERNAME_B, EventType.M_ZK_REGION_OFFLINE, - EventType.RS_ZK_REGION_OPENING, versionid); - assertNotSame(-1, versionid); - // Move znode from OPENING to OPENED as RS does on successful open. - versionid = ZKAssign.transitionNodeOpened(this.watcher, REGIONINFO, - SERVERNAME_B, versionid); - assertNotSame(-1, versionid); - // Block here until our znode is cleared or until this test timesout. - ZKAssign.blockUntilNoRIT(watcher); - } finally { - am.getExecutorService().shutdown(); - am.shutdown(); - } - } - - private void createRegionPlanAndBalance( - final AssignmentManager am, final ServerName from, - final ServerName to, final HRegionInfo hri) throws RegionException { - // Call the balance function but fake the region being online first at - // servername from. - am.regionOnline(hri, from); - // Balance region from 'from' to 'to'. It calls unassign setting CLOSING state - // up in zk. Create a plan and balance - am.balance(new RegionPlan(hri, from, to)); - } - - /** - * Tests AssignmentManager balance function. Runs a balance moving a region - * from one server to another mocking regionserver responding over zk. - * @throws IOException - * @throws KeeperException - * @throws DeserializationException - */ - @Test (timeout=180000) - public void testBalance() throws IOException, KeeperException, DeserializationException, - InterruptedException, CoordinatedStateException { - // Create and startup an executor. This is used by AssignmentManager - // handling zk callbacks. - ExecutorService executor = startupMasterExecutor("testBalanceExecutor"); - - // We need a mocked catalog tracker. 
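// [Editor's aside -- illustrative sketch, not part of the original patch.] Stripped of the ZK
// plumbing, the balance under test boils down to three calls; the region then sits in transition
// (it lands in FAILED_CLOSE first because the servers are mocks, and the test nudges it back to
// PENDING_CLOSE by hand):
am.regionOnline(REGIONINFO, SERVERNAME_A);                          // pretend it is already open on A
am.balance(new RegionPlan(REGIONINFO, SERVERNAME_A, SERVERNAME_B)); // ask the AM to move it to B
assertTrue(am.getRegionStates().isRegionInTransition(REGIONINFO)); // close/open now in flight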
- LoadBalancer balancer = LoadBalancerFactory.getLoadBalancer(server - .getConfiguration()); - // Create an AM. - AssignmentManager am = new AssignmentManager(this.server, - this.serverManager, balancer, executor, null, master.getTableLockManager()); - am.failoverCleanupDone.set(true); - try { - // Make sure our new AM gets callbacks; once registered, can't unregister. - // Thats ok because we make a new zk watcher for each test. - this.watcher.registerListenerFirst(am); - // Call the balance function but fake the region being online first at - // SERVERNAME_A. Create a balance plan. - am.regionOnline(REGIONINFO, SERVERNAME_A); - // Balance region from A to B. - RegionPlan plan = new RegionPlan(REGIONINFO, SERVERNAME_A, SERVERNAME_B); - am.balance(plan); - - RegionStates regionStates = am.getRegionStates(); - // Must be failed to close since the server is fake - assertTrue(regionStates.isRegionInTransition(REGIONINFO) - && regionStates.isRegionInState(REGIONINFO, State.FAILED_CLOSE)); - // Move it back to pending_close - regionStates.updateRegionState(REGIONINFO, State.PENDING_CLOSE); - - // Now fake the region closing successfully over on the regionserver; the - // regionserver will have set the region in CLOSED state. This will - // trigger callback into AM. The below zk close call is from the RS close - // region handler duplicated here because its down deep in a private - // method hard to expose. - int versionid = - ZKAssign.transitionNodeClosed(this.watcher, REGIONINFO, SERVERNAME_A, -1); - assertNotSame(versionid, -1); - // AM is going to notice above CLOSED and queue up a new assign. The - // assign will go to open the region in the new location set by the - // balancer. The zk node will be OFFLINE waiting for regionserver to - // transition it through OPENING, OPENED. Wait till we see the OFFLINE - // zk node before we proceed. - Mocking.waitForRegionPendingOpenInRIT(am, REGIONINFO.getEncodedName()); - - // Get current versionid else will fail on transition from OFFLINE to OPENING below - versionid = ZKAssign.getVersion(this.watcher, REGIONINFO); - assertNotSame(-1, versionid); - // This uglyness below is what the openregionhandler on RS side does. - versionid = ZKAssign.transitionNode(server.getZooKeeper(), REGIONINFO, - SERVERNAME_B, EventType.M_ZK_REGION_OFFLINE, - EventType.RS_ZK_REGION_OPENING, versionid); - assertNotSame(-1, versionid); - // Move znode from OPENING to OPENED as RS does on successful open. - versionid = - ZKAssign.transitionNodeOpened(this.watcher, REGIONINFO, SERVERNAME_B, versionid); - assertNotSame(-1, versionid); - // Wait on the handler removing the OPENED znode. - while(regionStates.isRegionInTransition(REGIONINFO)) Threads.sleep(1); - } finally { - executor.shutdown(); - am.shutdown(); - // Clean up all znodes - ZKAssign.deleteAllNodes(this.watcher); - } - } - - /** - * Run a simple server shutdown handler. - * @throws KeeperException - * @throws IOException - */ - @Test (timeout=180000) - public void testShutdownHandler() - throws KeeperException, IOException, CoordinatedStateException, ServiceException { - // Create and startup an executor. This is used by AssignmentManager - // handling zk callbacks. - ExecutorService executor = startupMasterExecutor("testShutdownHandler"); - - // Create an AM. 
- AssignmentManagerWithExtrasForTesting am = setUpMockedAssignmentManager( - this.server, this.serverManager); - try { - processServerShutdownHandler(am, false); - } finally { - executor.shutdown(); - am.shutdown(); - // Clean up all znodes - ZKAssign.deleteAllNodes(this.watcher); - } - } - - /** - * To test closed region handler to remove rit and delete corresponding znode - * if region in pending close or closing while processing shutdown of a region - * server.(HBASE-5927). - * - * @throws KeeperException - * @throws IOException - * @throws ServiceException - */ - @Test (timeout=180000) - public void testSSHWhenDisableTableInProgress() throws KeeperException, IOException, - CoordinatedStateException, ServiceException { - testCaseWithPartiallyDisabledState(Table.State.DISABLING); - testCaseWithPartiallyDisabledState(Table.State.DISABLED); - } - - - /** - * To test if the split region is removed from RIT if the region was in SPLITTING state but the RS - * has actually completed the splitting in hbase:meta but went down. See HBASE-6070 and also HBASE-5806 - * - * @throws KeeperException - * @throws IOException - */ - @Test (timeout=180000) - public void testSSHWhenSplitRegionInProgress() throws KeeperException, IOException, Exception { - // true indicates the region is split but still in RIT - testCaseWithSplitRegionPartial(true); - // false indicate the region is not split - testCaseWithSplitRegionPartial(false); - } - - private void testCaseWithSplitRegionPartial(boolean regionSplitDone) throws KeeperException, - IOException, InterruptedException, - CoordinatedStateException, ServiceException { - // Create and startup an executor. This is used by AssignmentManager - // handling zk callbacks. - ExecutorService executor = startupMasterExecutor("testSSHWhenSplitRegionInProgress"); - // We need a mocked catalog tracker. - ZKAssign.deleteAllNodes(this.watcher); - - // Create an AM. - AssignmentManagerWithExtrasForTesting am = setUpMockedAssignmentManager( - this.server, this.serverManager); - // adding region to regions and servers maps. - am.regionOnline(REGIONINFO, SERVERNAME_A); - // adding region in pending close. - am.getRegionStates().updateRegionState( - REGIONINFO, State.SPLITTING, SERVERNAME_A); - am.getTableStateManager().setTableState(REGIONINFO.getTable(), - Table.State.ENABLED); - RegionTransition data = RegionTransition.createRegionTransition(EventType.RS_ZK_REGION_SPLITTING, - REGIONINFO.getRegionName(), SERVERNAME_A); - String node = ZKAssign.getNodeName(this.watcher, REGIONINFO.getEncodedName()); - // create znode in M_ZK_REGION_CLOSING state. - ZKUtil.createAndWatch(this.watcher, node, data.toByteArray()); - - try { - processServerShutdownHandler(am, regionSplitDone); - // check znode deleted or not. - // In both cases the znode should be deleted. - - if (regionSplitDone) { - assertFalse("Region state of region in SPLITTING should be removed from rit.", - am.getRegionStates().isRegionsInTransition()); - } else { - while (!am.assignInvoked) { - Thread.sleep(1); - } - assertTrue("Assign should be invoked.", am.assignInvoked); - } - } finally { - REGIONINFO.setOffline(false); - REGIONINFO.setSplit(false); - executor.shutdown(); - am.shutdown(); - // Clean up all znodes - ZKAssign.deleteAllNodes(this.watcher); - } - } - - private void testCaseWithPartiallyDisabledState(Table.State state) throws KeeperException, - IOException, CoordinatedStateException, ServiceException { - // Create and startup an executor. This is used by AssignmentManager - // handling zk callbacks. 
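// [Editor's aside -- illustrative sketch, not part of the original patch.] Both SSH test helpers
// plant a fake region-in-transition znode by hand before running the shutdown handler; the recipe
// (here for the SPLITTING case, names reused from this test) is:
RegionTransition data = RegionTransition.createRegionTransition(
    EventType.RS_ZK_REGION_SPLITTING, REGIONINFO.getRegionName(), SERVERNAME_A);
String node = ZKAssign.getNodeName(watcher, REGIONINFO.getEncodedName());
ZKUtil.createAndWatch(watcher, node, data.toByteArray());      // now looks like a live RIT in ZK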
- ExecutorService executor = startupMasterExecutor("testSSHWhenDisableTableInProgress"); - LoadBalancer balancer = LoadBalancerFactory.getLoadBalancer(server.getConfiguration()); - ZKAssign.deleteAllNodes(this.watcher); - - // Create an AM. - AssignmentManager am = new AssignmentManager(this.server, - this.serverManager, balancer, executor, null, master.getTableLockManager()); - // adding region to regions and servers maps. - am.regionOnline(REGIONINFO, SERVERNAME_A); - // adding region in pending close. - am.getRegionStates().updateRegionState(REGIONINFO, State.PENDING_CLOSE); - if (state == Table.State.DISABLING) { - am.getTableStateManager().setTableState(REGIONINFO.getTable(), - Table.State.DISABLING); - } else { - am.getTableStateManager().setTableState(REGIONINFO.getTable(), - Table.State.DISABLED); - } - RegionTransition data = RegionTransition.createRegionTransition(EventType.M_ZK_REGION_CLOSING, - REGIONINFO.getRegionName(), SERVERNAME_A); - // RegionTransitionData data = new - // RegionTransitionData(EventType.M_ZK_REGION_CLOSING, - // REGIONINFO.getRegionName(), SERVERNAME_A); - String node = ZKAssign.getNodeName(this.watcher, REGIONINFO.getEncodedName()); - // create znode in M_ZK_REGION_CLOSING state. - ZKUtil.createAndWatch(this.watcher, node, data.toByteArray()); - - try { - processServerShutdownHandler(am, false); - // check znode deleted or not. - // In both cases the znode should be deleted. - assertTrue("The znode should be deleted.", ZKUtil.checkExists(this.watcher, node) == -1); - // check whether in rit or not. In the DISABLING case also the below - // assert will be true but the piece of code added for HBASE-5927 will not - // do that. - if (state == Table.State.DISABLED) { - assertFalse("Region state of region in pending close should be removed from rit.", - am.getRegionStates().isRegionsInTransition()); - } - } finally { - am.setEnabledTable(REGIONINFO.getTable()); - executor.shutdown(); - am.shutdown(); - // Clean up all znodes - ZKAssign.deleteAllNodes(this.watcher); - } - } - - private void processServerShutdownHandler(AssignmentManager am, boolean splitRegion) - throws IOException, ServiceException { - // Make sure our new AM gets callbacks; once registered, can't unregister. - // Thats ok because we make a new zk watcher for each test. - this.watcher.registerListenerFirst(am); - - // Need to set up a fake scan of meta for the servershutdown handler - // Make an RS Interface implementation. Make it so a scanner can go against it. - ClientProtos.ClientService.BlockingInterface implementation = - Mockito.mock(ClientProtos.ClientService.BlockingInterface.class); - // Get a meta row result that has region up on SERVERNAME_A - - Result r; - if (splitRegion) { - r = MetaMockingUtil.getMetaTableRowResultAsSplitRegion(REGIONINFO, SERVERNAME_A); - } else { - r = MetaMockingUtil.getMetaTableRowResult(REGIONINFO, SERVERNAME_A); - } - - final ScanResponse.Builder builder = ScanResponse.newBuilder(); - builder.setMoreResults(true); - builder.addCellsPerResult(r.size()); - final List cellScannables = new ArrayList(1); - cellScannables.add(r); - Mockito.when(implementation.scan( - (RpcController)Mockito.any(), (ScanRequest)Mockito.any())). 
- thenAnswer(new Answer() { - @Override - public ScanResponse answer(InvocationOnMock invocation) throws Throwable { - PayloadCarryingRpcController controller = (PayloadCarryingRpcController) invocation - .getArguments()[0]; - if (controller != null) { - controller.setCellScanner(CellUtil.createCellScanner(cellScannables)); - } - return builder.build(); - } - }); - - // Get a connection w/ mocked up common methods. - ClusterConnection connection = - HConnectionTestingUtility.getMockedConnectionAndDecorate(HTU.getConfiguration(), - null, implementation, SERVERNAME_B, REGIONINFO); - // These mocks were done up when all connections were managed. World is different now we - // moved to unmanaged connections. It messes up the intercepts done in these tests. - // Just mark connections as marked and then down in MetaTableAccessor, it will go the path - // that picks up the above mocked up 'implementation' so 'scans' of meta return the expected - // result. Redo in new realm of unmanaged connections. - Mockito.when(connection.isManaged()).thenReturn(true); - try { - // Make it so we can get a catalogtracker from servermanager.. .needed - // down in guts of server shutdown handler. - Mockito.when(this.server.getConnection()).thenReturn(connection); - - // Now make a server shutdown handler instance and invoke process. - // Have it that SERVERNAME_A died. - DeadServer deadServers = new DeadServer(); - deadServers.add(SERVERNAME_A); - // I need a services instance that will return the AM - MasterFileSystem fs = Mockito.mock(MasterFileSystem.class); - Mockito.doNothing().when(fs).setLogRecoveryMode(); - Mockito.when(fs.getLogRecoveryMode()).thenReturn(RecoveryMode.LOG_REPLAY); - MasterServices services = Mockito.mock(MasterServices.class); - Mockito.when(services.getAssignmentManager()).thenReturn(am); - Mockito.when(services.getServerManager()).thenReturn(this.serverManager); - Mockito.when(services.getZooKeeper()).thenReturn(this.watcher); - Mockito.when(services.getMasterFileSystem()).thenReturn(fs); - Mockito.when(services.getConnection()).thenReturn(connection); - ServerShutdownHandler handler = new ServerShutdownHandler(this.server, - services, deadServers, SERVERNAME_A, false); - am.failoverCleanupDone.set(true); - handler.process(); - // The region in r will have been assigned. It'll be up in zk as unassigned. - } finally { - if (connection != null) connection.close(); - } - } - - /** - * Create and startup executor pools. Start same set as master does (just - * run a few less). - * @param name Name to give our executor - * @return Created executor (be sure to call shutdown when done). - */ - private ExecutorService startupMasterExecutor(final String name) { - // TODO: Move up into HBaseTestingUtility? Generally useful. - ExecutorService executor = new ExecutorService(name); - executor.startExecutorService(ExecutorType.MASTER_OPEN_REGION, 3); - executor.startExecutorService(ExecutorType.MASTER_CLOSE_REGION, 3); - executor.startExecutorService(ExecutorType.MASTER_SERVER_OPERATIONS, 3); - executor.startExecutorService(ExecutorType.MASTER_META_SERVER_OPERATIONS, 3); - return executor; - } - - @Test (timeout=180000) - public void testUnassignWithSplitAtSameTime() throws KeeperException, - IOException, CoordinatedStateException { - // Region to use in test. - final HRegionInfo hri = HRegionInfo.FIRST_META_REGIONINFO; - // First amend the servermanager mock so that when we do send close of the - // first meta region on SERVERNAME_A, it will return true rather than - // default null. 
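// [Editor's aside -- illustrative sketch, not part of the original patch.] The core of
// processServerShutdownHandler() above: wrap the AM in a minimal mocked MasterServices (the ZK
// and filesystem getters are stubbed the same way) and run the handler synchronously for a dead
// SERVERNAME_A:
DeadServer deadServers = new DeadServer();
deadServers.add(SERVERNAME_A);
MasterServices services = Mockito.mock(MasterServices.class);
Mockito.when(services.getAssignmentManager()).thenReturn(am);
Mockito.when(services.getServerManager()).thenReturn(serverManager);
ServerShutdownHandler handler =
    new ServerShutdownHandler(server, services, deadServers, SERVERNAME_A, false);
am.failoverCleanupDone.set(true);  // pretend failover cleanup already ran so the handler proceeds
handler.process();                 // regions carried by the dead server get re-queued for assign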
- Mockito.when(this.serverManager.sendRegionClose(SERVERNAME_A, hri, -1)).thenReturn(true); - // Need a mocked catalog tracker. - LoadBalancer balancer = LoadBalancerFactory.getLoadBalancer(server - .getConfiguration()); - // Create an AM. - AssignmentManager am = new AssignmentManager(this.server, - this.serverManager, balancer, null, null, master.getTableLockManager()); - try { - // First make sure my mock up basically works. Unassign a region. - unassign(am, SERVERNAME_A, hri); - // This delete will fail if the previous unassign did wrong thing. - ZKAssign.deleteClosingNode(this.watcher, hri, SERVERNAME_A); - // Now put a SPLITTING region in the way. I don't have to assert it - // go put in place. This method puts it in place then asserts it still - // owns it by moving state from SPLITTING to SPLITTING. - int version = createNodeSplitting(this.watcher, hri, SERVERNAME_A); - // Now, retry the unassign with the SPLTTING in place. It should just - // complete without fail; a sort of 'silent' recognition that the - // region to unassign has been split and no longer exists: TOOD: what if - // the split fails and the parent region comes back to life? - unassign(am, SERVERNAME_A, hri); - // This transition should fail if the znode has been messed with. - ZKAssign.transitionNode(this.watcher, hri, SERVERNAME_A, - EventType.RS_ZK_REGION_SPLITTING, EventType.RS_ZK_REGION_SPLITTING, version); - assertFalse(am.getRegionStates().isRegionInTransition(hri)); - } finally { - am.shutdown(); - } - } - - /** - * Tests the processDeadServersAndRegionsInTransition should not fail with NPE - * when it failed to get the children. Let's abort the system in this - * situation - * @throws ServiceException - */ - @Test(timeout = 60000) - public void testProcessDeadServersAndRegionsInTransitionShouldNotFailWithNPE() - throws IOException, KeeperException, CoordinatedStateException, - InterruptedException, ServiceException { - final RecoverableZooKeeper recoverableZk = Mockito - .mock(RecoverableZooKeeper.class); - AssignmentManagerWithExtrasForTesting am = setUpMockedAssignmentManager( - this.server, this.serverManager); - Watcher zkw = new ZooKeeperWatcher(HBaseConfiguration.create(), "unittest", - null) { - @Override - public RecoverableZooKeeper getRecoverableZooKeeper() { - return recoverableZk; - } - }; - ((ZooKeeperWatcher) zkw).registerListener(am); - Mockito.doThrow(new InterruptedException()).when(recoverableZk) - .getChildren("/hbase/region-in-transition", null); - am.setWatcher((ZooKeeperWatcher) zkw); - try { - am.processDeadServersAndRegionsInTransition(null); - fail("Expected to abort"); - } catch (NullPointerException e) { - fail("Should not throw NPE"); - } catch (RuntimeException e) { - assertEquals("Aborted", e.getLocalizedMessage()); - } finally { - am.shutdown(); - } - } - /** - * TestCase verifies that the regionPlan is updated whenever a region fails to open - * and the master tries to process RS_ZK_FAILED_OPEN state.(HBASE-5546). - */ - @Test(timeout = 60000) - public void testRegionPlanIsUpdatedWhenRegionFailsToOpen() throws IOException, KeeperException, - ServiceException, InterruptedException, CoordinatedStateException { - this.server.getConfiguration().setClass( - HConstants.HBASE_MASTER_LOADBALANCER_CLASS, MockedLoadBalancer.class, - LoadBalancer.class); - AssignmentManagerWithExtrasForTesting am = setUpMockedAssignmentManager( - this.server, this.serverManager); - try { - // Boolean variable used for waiting until randomAssignment is called and - // new - // plan is generated. 
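// [Editor's aside -- illustrative sketch, not part of the original patch.] The other half of that
// "gate" contract lives in MockedLoadBalancer below: it delegates to SimpleLoadBalancer and flips
// the shared flag, so the test can spin on the flag until a fresh RegionPlan really exists
// (signature shown with the generics the flattened diff lost):
@Override
public ServerName randomAssignment(HRegionInfo regionInfo, List<ServerName> servers) {
  ServerName randomServerName = super.randomAssignment(regionInfo, servers); // normal random pick
  this.gate.set(true);                                                       // wake the waiting test
  return randomServerName;
}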
- AtomicBoolean gate = new AtomicBoolean(false); - if (balancer instanceof MockedLoadBalancer) { - ((MockedLoadBalancer) balancer).setGateVariable(gate); - } - ZKAssign.createNodeOffline(this.watcher, REGIONINFO, SERVERNAME_A); - int v = ZKAssign.getVersion(this.watcher, REGIONINFO); - ZKAssign.transitionNode(this.watcher, REGIONINFO, SERVERNAME_A, - EventType.M_ZK_REGION_OFFLINE, EventType.RS_ZK_REGION_FAILED_OPEN, v); - String path = ZKAssign.getNodeName(this.watcher, REGIONINFO - .getEncodedName()); - am.getRegionStates().updateRegionState( - REGIONINFO, State.OPENING, SERVERNAME_A); - // a dummy plan inserted into the regionPlans. This plan is cleared and - // new one is formed - am.regionPlans.put(REGIONINFO.getEncodedName(), new RegionPlan( - REGIONINFO, null, SERVERNAME_A)); - RegionPlan regionPlan = am.regionPlans.get(REGIONINFO.getEncodedName()); - List serverList = new ArrayList(2); - serverList.add(SERVERNAME_B); - Mockito.when( - this.serverManager.createDestinationServersList(SERVERNAME_A)) - .thenReturn(serverList); - am.nodeDataChanged(path); - // here we are waiting until the random assignment in the load balancer is - // called. - while (!gate.get()) { - Thread.sleep(10); - } - // new region plan may take some time to get updated after random - // assignment is called and - // gate is set to true. - RegionPlan newRegionPlan = am.regionPlans - .get(REGIONINFO.getEncodedName()); - while (newRegionPlan == null) { - Thread.sleep(10); - newRegionPlan = am.regionPlans.get(REGIONINFO.getEncodedName()); - } - // the new region plan created may contain the same RS as destination but - // it should - // be new plan. - assertNotSame("Same region plan should not come", regionPlan, - newRegionPlan); - assertTrue("Destination servers should be different.", !(regionPlan - .getDestination().equals(newRegionPlan.getDestination()))); - - Mocking.waitForRegionPendingOpenInRIT(am, REGIONINFO.getEncodedName()); - } finally { - this.server.getConfiguration().setClass( - HConstants.HBASE_MASTER_LOADBALANCER_CLASS, SimpleLoadBalancer.class, - LoadBalancer.class); - am.getExecutorService().shutdown(); - am.shutdown(); - } - } - - /** - * Mocked load balancer class used in the testcase to make sure that the testcase waits until - * random assignment is called and the gate variable is set to true. - */ - public static class MockedLoadBalancer extends SimpleLoadBalancer { - private AtomicBoolean gate; - - public void setGateVariable(AtomicBoolean gate) { - this.gate = gate; - } - - @Override - public ServerName randomAssignment(HRegionInfo regionInfo, List servers) { - ServerName randomServerName = super.randomAssignment(regionInfo, servers); - this.gate.set(true); - return randomServerName; - } - - @Override - public Map> retainAssignment( - Map regions, List servers) { - this.gate.set(true); - return super.retainAssignment(regions, servers); - } - } - - /** - * Test the scenario when the master is in failover and trying to process a - * region which is in Opening state on a dead RS. Master will force offline the - * region and put it in transition. AM relies on SSH to reassign it. 
- */ - @Test(timeout = 60000) - public void testRegionInOpeningStateOnDeadRSWhileMasterFailover() throws IOException, - KeeperException, ServiceException, CoordinatedStateException, InterruptedException { - AssignmentManagerWithExtrasForTesting am = setUpMockedAssignmentManager( - this.server, this.serverManager); - ZKAssign.createNodeOffline(this.watcher, REGIONINFO, SERVERNAME_A); - int version = ZKAssign.getVersion(this.watcher, REGIONINFO); - ZKAssign.transitionNode(this.watcher, REGIONINFO, SERVERNAME_A, EventType.M_ZK_REGION_OFFLINE, - EventType.RS_ZK_REGION_OPENING, version); - RegionTransition rt = RegionTransition.createRegionTransition(EventType.RS_ZK_REGION_OPENING, - REGIONINFO.getRegionName(), SERVERNAME_A, HConstants.EMPTY_BYTE_ARRAY); - version = ZKAssign.getVersion(this.watcher, REGIONINFO); - Mockito.when(this.serverManager.isServerOnline(SERVERNAME_A)).thenReturn(false); - am.getRegionStates().logSplit(SERVERNAME_A); // Assume log splitting is done - am.getRegionStates().createRegionState(REGIONINFO); - am.gate.set(false); - - BaseCoordinatedStateManager cp = new ZkCoordinatedStateManager(); - cp.initialize(server); - cp.start(); - - OpenRegionCoordination orc = cp.getOpenRegionCoordination(); - ZkOpenRegionCoordination.ZkOpenRegionDetails zkOrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkOrd.setServerName(server.getServerName()); - zkOrd.setVersion(version); - - assertFalse(am.processRegionsInTransition(rt, REGIONINFO, orc, zkOrd)); - am.getTableStateManager().setTableState(REGIONINFO.getTable(), Table.State.ENABLED); - processServerShutdownHandler(am, false); - // Waiting for the assignment to get completed. - while (!am.gate.get()) { - Thread.sleep(10); - } - assertTrue("The region should be assigned immediately.", null != am.regionPlans.get(REGIONINFO - .getEncodedName())); - am.shutdown(); - } - - /** - * Test verifies whether assignment is skipped for regions of tables in DISABLING state during - * clean cluster startup. See HBASE-6281. - * - * @throws KeeperException - * @throws IOException - * @throws Exception - */ - @Test(timeout = 60000) - public void testDisablingTableRegionsAssignmentDuringCleanClusterStartup() - throws KeeperException, IOException, Exception { - this.server.getConfiguration().setClass(HConstants.HBASE_MASTER_LOADBALANCER_CLASS, - MockedLoadBalancer.class, LoadBalancer.class); - Mockito.when(this.serverManager.getOnlineServers()).thenReturn( - new HashMap(0)); - List destServers = new ArrayList(1); - destServers.add(SERVERNAME_A); - Mockito.when(this.serverManager.createDestinationServersList()).thenReturn(destServers); - // To avoid cast exception in DisableTableHandler process. - HTU.getConfiguration().setInt(HConstants.MASTER_PORT, 0); - - CoordinatedStateManager csm = CoordinatedStateManagerFactory.getCoordinatedStateManager( - HTU.getConfiguration()); - Server server = new HMaster(HTU.getConfiguration(), csm); - AssignmentManagerWithExtrasForTesting am = setUpMockedAssignmentManager(server, - this.serverManager); - - Whitebox.setInternalState(server, "metaTableLocator", Mockito.mock(MetaTableLocator.class)); - - // Make it so we can get a catalogtracker from servermanager.. .needed - // down in guts of server shutdown handler. - Whitebox.setInternalState(server, "clusterConnection", am.getConnection()); - - AtomicBoolean gate = new AtomicBoolean(false); - if (balancer instanceof MockedLoadBalancer) { - ((MockedLoadBalancer) balancer).setGateVariable(gate); - } - try{ - // set table in disabling state. 
- am.getTableStateManager().setTableState(REGIONINFO.getTable(), - Table.State.DISABLING); - am.joinCluster(); - // should not call retainAssignment if we get empty regions in assignAllUserRegions. - assertFalse( - "Assign should not be invoked for disabling table regions during clean cluster startup.", - gate.get()); - // need to change table state from disabling to disabled. - assertTrue("Table should be disabled.", - am.getTableStateManager().isTableState(REGIONINFO.getTable(), - Table.State.DISABLED)); - } finally { - this.server.getConfiguration().setClass( - HConstants.HBASE_MASTER_LOADBALANCER_CLASS, SimpleLoadBalancer.class, - LoadBalancer.class); - am.getTableStateManager().setTableState(REGIONINFO.getTable(), - Table.State.ENABLED); - am.shutdown(); - } - } - - /** - * Test verifies whether all the enabling table regions assigned only once during master startup. - * - * @throws KeeperException - * @throws IOException - * @throws Exception - */ - @Test (timeout=180000) - public void testMasterRestartWhenTableInEnabling() throws KeeperException, IOException, Exception { - enabling = true; - List destServers = new ArrayList(1); - destServers.add(SERVERNAME_A); - Mockito.when(this.serverManager.createDestinationServersList()).thenReturn(destServers); - Mockito.when(this.serverManager.isServerOnline(SERVERNAME_A)).thenReturn(true); - HTU.getConfiguration().setInt(HConstants.MASTER_PORT, 0); - CoordinatedStateManager csm = CoordinatedStateManagerFactory.getCoordinatedStateManager( - HTU.getConfiguration()); - Server server = new HMaster(HTU.getConfiguration(), csm); - Whitebox.setInternalState(server, "serverManager", this.serverManager); - AssignmentManagerWithExtrasForTesting am = setUpMockedAssignmentManager(server, - this.serverManager); - - Whitebox.setInternalState(server, "metaTableLocator", Mockito.mock(MetaTableLocator.class)); - - // Make it so we can get a catalogtracker from servermanager.. .needed - // down in guts of server shutdown handler. - Whitebox.setInternalState(server, "clusterConnection", am.getConnection()); - - try { - // set table in enabling state. - am.getTableStateManager().setTableState(REGIONINFO.getTable(), - Table.State.ENABLING); - new EnableTableHandler(server, REGIONINFO.getTable(), - am, new NullTableLockManager(), true).prepare() - .process(); - assertEquals("Number of assignments should be 1.", 1, assignmentCount); - assertTrue("Table should be enabled.", - am.getTableStateManager().isTableState(REGIONINFO.getTable(), - Table.State.ENABLED)); - } finally { - enabling = false; - assignmentCount = 0; - am.getTableStateManager().setTableState(REGIONINFO.getTable(), - Table.State.ENABLED); - am.shutdown(); - ZKAssign.deleteAllNodes(this.watcher); - } - } - - /** - * Test verifies whether stale znodes of unknown tables as for the hbase:meta will be removed or - * not. 
- * @throws KeeperException - * @throws IOException - * @throws Exception - */ - @Test (timeout=180000) - public void testMasterRestartShouldRemoveStaleZnodesOfUnknownTableAsForMeta() - throws Exception { - List destServers = new ArrayList(1); - destServers.add(SERVERNAME_A); - Mockito.when(this.serverManager.createDestinationServersList()).thenReturn(destServers); - Mockito.when(this.serverManager.isServerOnline(SERVERNAME_A)).thenReturn(true); - HTU.getConfiguration().setInt(HConstants.MASTER_PORT, 0); - CoordinatedStateManager csm = CoordinatedStateManagerFactory.getCoordinatedStateManager( - HTU.getConfiguration()); - Server server = new HMaster(HTU.getConfiguration(), csm); - Whitebox.setInternalState(server, "serverManager", this.serverManager); - AssignmentManagerWithExtrasForTesting am = setUpMockedAssignmentManager(server, - this.serverManager); - - Whitebox.setInternalState(server, "metaTableLocator", Mockito.mock(MetaTableLocator.class)); - - // Make it so we can get a catalogtracker from servermanager.. .needed - // down in guts of server shutdown handler. - Whitebox.setInternalState(server, "clusterConnection", am.getConnection()); - - try { - TableName tableName = TableName.valueOf("dummyTable"); - // set table in enabling state. - am.getTableStateManager().setTableState(tableName, - Table.State.ENABLING); - am.joinCluster(); - assertFalse("Table should not be present in zookeeper.", - am.getTableStateManager().isTablePresent(tableName)); - } finally { - am.shutdown(); - } - } - /** - * When a region is in transition, if the region server opening the region goes down, - * the region assignment takes a long time normally (waiting for timeout monitor to trigger assign). - * This test is to make sure SSH reassigns it right away. - */ - @Test (timeout=180000) - public void testSSHTimesOutOpeningRegionTransition() - throws KeeperException, IOException, CoordinatedStateException, ServiceException { - // Create an AM. - AssignmentManagerWithExtrasForTesting am = - setUpMockedAssignmentManager(this.server, this.serverManager); - // adding region in pending open. - RegionState state = new RegionState(REGIONINFO, - State.OPENING, System.currentTimeMillis(), SERVERNAME_A); - am.getRegionStates().regionOnline(REGIONINFO, SERVERNAME_B); - am.getRegionStates().regionsInTransition.put(REGIONINFO.getEncodedName(), state); - // adding region plan - am.regionPlans.put(REGIONINFO.getEncodedName(), - new RegionPlan(REGIONINFO, SERVERNAME_B, SERVERNAME_A)); - am.getTableStateManager().setTableState(REGIONINFO.getTable(), - Table.State.ENABLED); - - try { - am.assignInvoked = false; - processServerShutdownHandler(am, false); - assertTrue(am.assignInvoked); - } finally { - am.getRegionStates().regionsInTransition.remove(REGIONINFO.getEncodedName()); - am.regionPlans.remove(REGIONINFO.getEncodedName()); - am.shutdown(); - } - } - - /** - * Scenario:
- * <ul>
- * <li> master starts a close, and creates a znode</li>
- * <li> it fails just at this moment, before contacting the RS</li>
- * <li> while the second master is coming up, the targeted RS dies. But it's before ZK timeout so
- * we don't know, and we have an exception.</li>
- * <li> the master must handle this nicely and reassign.</li>
- * </ul>
    - */ - @Test (timeout=180000) - public void testClosingFailureDuringRecovery() throws Exception { - - AssignmentManagerWithExtrasForTesting am = - setUpMockedAssignmentManager(this.server, this.serverManager); - ZKAssign.createNodeClosing(this.watcher, REGIONINFO, SERVERNAME_A); - try { - am.getRegionStates().createRegionState(REGIONINFO); - - assertFalse( am.getRegionStates().isRegionsInTransition() ); - - am.processRegionInTransition(REGIONINFO.getEncodedName(), REGIONINFO); - - assertTrue( am.getRegionStates().isRegionsInTransition() ); - } finally { - am.shutdown(); - } - } - - /** - * Creates a new ephemeral node in the SPLITTING state for the specified region. - * Create it ephemeral in case regionserver dies mid-split. - * - *
<p>
    Does not transition nodes from other states. If a node already exists - * for this region, a {@link NodeExistsException} will be thrown. - * - * @param zkw zk reference - * @param region region to be created as offline - * @param serverName server event originates from - * @return Version of znode created. - * @throws KeeperException - * @throws IOException - */ - // Copied from SplitTransaction rather than open the method over there in - // the regionserver package. - private static int createNodeSplitting(final ZooKeeperWatcher zkw, - final HRegionInfo region, final ServerName serverName) - throws KeeperException, IOException { - RegionTransition rt = - RegionTransition.createRegionTransition(EventType.RS_ZK_REGION_SPLITTING, - region.getRegionName(), serverName); - - String node = ZKAssign.getNodeName(zkw, region.getEncodedName()); - if (!ZKUtil.createEphemeralNodeAndWatch(zkw, node, rt.toByteArray())) { - throw new IOException("Failed create of ephemeral " + node); - } - // Transition node from SPLITTING to SPLITTING and pick up version so we - // can be sure this znode is ours; version is needed deleting. - return transitionNodeSplitting(zkw, region, serverName, -1); - } - - // Copied from SplitTransaction rather than open the method over there in - // the regionserver package. - private static int transitionNodeSplitting(final ZooKeeperWatcher zkw, - final HRegionInfo parent, - final ServerName serverName, final int version) - throws KeeperException, IOException { - return ZKAssign.transitionNode(zkw, parent, serverName, - EventType.RS_ZK_REGION_SPLITTING, EventType.RS_ZK_REGION_SPLITTING, version); - } - - private void unassign(final AssignmentManager am, final ServerName sn, - final HRegionInfo hri) throws RegionException { - // Before I can unassign a region, I need to set it online. - am.regionOnline(hri, sn); - // Unassign region. - am.unassign(hri); - } - - /** - * Create an {@link AssignmentManagerWithExtrasForTesting} that has mocked - * {@link CatalogTracker} etc. - * @param server - * @param manager - * @return An AssignmentManagerWithExtras with mock connections, etc. - * @throws IOException - * @throws KeeperException - */ - private AssignmentManagerWithExtrasForTesting setUpMockedAssignmentManager(final Server server, - final ServerManager manager) throws IOException, KeeperException, - ServiceException, CoordinatedStateException { - // Make an RS Interface implementation. Make it so a scanner can go against - // it and a get to return the single region, REGIONINFO, this test is - // messing with. Needed when "new master" joins cluster. 
AM will try and - // rebuild its list of user regions and it will also get the HRI that goes - // with an encoded name by doing a Get on hbase:meta - ClientProtos.ClientService.BlockingInterface ri = - Mockito.mock(ClientProtos.ClientService.BlockingInterface.class); - // Get a meta row result that has region up on SERVERNAME_A for REGIONINFO - Result r = MetaMockingUtil.getMetaTableRowResult(REGIONINFO, SERVERNAME_A); - final ScanResponse.Builder builder = ScanResponse.newBuilder(); - builder.setMoreResults(true); - builder.addCellsPerResult(r.size()); - final List rows = new ArrayList(1); - rows.add(r); - Answer ans = new Answer() { - @Override - public ScanResponse answer(InvocationOnMock invocation) throws Throwable { - PayloadCarryingRpcController controller = (PayloadCarryingRpcController) invocation - .getArguments()[0]; - if (controller != null) { - controller.setCellScanner(CellUtil.createCellScanner(rows)); - } - return builder.build(); - } - }; - if (enabling) { - Mockito.when(ri.scan((RpcController) Mockito.any(), (ScanRequest) Mockito.any())) - .thenAnswer(ans).thenAnswer(ans).thenAnswer(ans).thenAnswer(ans).thenAnswer(ans) - .thenReturn(ScanResponse.newBuilder().setMoreResults(false).build()); - } else { - Mockito.when(ri.scan((RpcController) Mockito.any(), (ScanRequest) Mockito.any())).thenAnswer( - ans); - } - // If a get, return the above result too for REGIONINFO - GetResponse.Builder getBuilder = GetResponse.newBuilder(); - getBuilder.setResult(ProtobufUtil.toResult(r)); - Mockito.when(ri.get((RpcController)Mockito.any(), (GetRequest) Mockito.any())). - thenReturn(getBuilder.build()); - // Get a connection w/ mocked up common methods. - ClusterConnection connection = (ClusterConnection)HConnectionTestingUtility. - getMockedConnectionAndDecorate(HTU.getConfiguration(), null, - ri, SERVERNAME_B, REGIONINFO); - // These mocks were done up when all connections were managed. World is different now we - // moved to unmanaged connections. It messes up the intercepts done in these tests. - // Just mark connections as marked and then down in MetaTableAccessor, it will go the path - // that picks up the above mocked up 'implementation' so 'scans' of meta return the expected - // result. Redo in new realm of unmanaged connections. - Mockito.when(connection.isManaged()).thenReturn(true); - // Make it so we can get the connection from our mocked catalogtracker - // Create and startup an executor. Used by AM handling zk callbacks. 
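// [Editor's aside -- illustrative sketch, not part of the original patch.] When the static
// 'enabling' flag is set, the scan stub above answers a handful of meta scans with the prepared
// row and then closes the scanner, so the enable-table path sees REGIONINFO a bounded number of
// times and the test can assert a single assignment:
Mockito.when(ri.scan((RpcController) Mockito.any(), (ScanRequest) Mockito.any()))
    .thenAnswer(ans).thenAnswer(ans).thenAnswer(ans).thenAnswer(ans).thenAnswer(ans)
    .thenReturn(ScanResponse.newBuilder().setMoreResults(false).build());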
- ExecutorService executor = startupMasterExecutor("mockedAMExecutor"); - this.balancer = LoadBalancerFactory.getLoadBalancer(server.getConfiguration()); - AssignmentManagerWithExtrasForTesting am = new AssignmentManagerWithExtrasForTesting( - server, connection, manager, this.balancer, executor, new NullTableLockManager()); - return am; - } - - /** - * An {@link AssignmentManager} with some extra facility used testing - */ - class AssignmentManagerWithExtrasForTesting extends AssignmentManager { - // Keep a reference so can give it out below in {@link #getExecutorService} - private final ExecutorService es; - boolean processRITInvoked = false; - boolean assignInvoked = false; - AtomicBoolean gate = new AtomicBoolean(true); - private ClusterConnection connection; - - public AssignmentManagerWithExtrasForTesting( - final Server master, ClusterConnection connection, final ServerManager serverManager, - final LoadBalancer balancer, - final ExecutorService service, final TableLockManager tableLockManager) - throws KeeperException, IOException, CoordinatedStateException { - super(master, serverManager, balancer, service, null, tableLockManager); - this.es = service; - this.connection = connection; - } - - @Override - boolean processRegionInTransition(String encodedRegionName, - HRegionInfo regionInfo) throws KeeperException, IOException { - this.processRITInvoked = true; - return super.processRegionInTransition(encodedRegionName, regionInfo); - } - - @Override - public void assign(HRegionInfo region, boolean setOfflineInZK, boolean forceNewPlan) { - if (enabling) { - assignmentCount++; - this.regionOnline(region, SERVERNAME_A); - } else { - super.assign(region, setOfflineInZK, forceNewPlan); - this.gate.set(true); - } - } - - @Override - boolean assign(ServerName destination, List regions) - throws InterruptedException { - if (enabling) { - for (HRegionInfo region : regions) { - assignmentCount++; - this.regionOnline(region, SERVERNAME_A); - } - return true; - } - return super.assign(destination, regions); - } - - @Override - public void assign(List regions) - throws IOException, InterruptedException { - assignInvoked = (regions != null && regions.size() > 0); - super.assign(regions); - this.gate.set(true); - } - - /** reset the watcher */ - void setWatcher(ZooKeeperWatcher watcher) { - this.watcher = watcher; - } - - /** - * @return ExecutorService used by this instance. - */ - ExecutorService getExecutorService() { - return this.es; - } - - /* - * Convenient method to retrieve mocked up connection - */ - ClusterConnection getConnection() { - return this.connection; - } - - @Override - public void shutdown() { - super.shutdown(); - if (this.connection != null) - try { - this.connection.close(); - } catch (IOException e) { - fail("Failed to close connection"); - } - } - } - - /** - * Call joinCluster on the passed AssignmentManager. Do it in a thread - * so it runs independent of what all else is going on. Try to simulate - * an AM running insided a failed over master by clearing all in-memory - * AM state first. - */ - private void startFakeFailedOverMasterAssignmentManager(final AssignmentManager am, - final ZooKeeperWatcher watcher) { - // Make sure our new AM gets callbacks; once registered, we can't unregister. - // Thats ok because we make a new zk watcher for each test. - watcher.registerListenerFirst(am); - Thread t = new Thread("RunAmJoinCluster") { - @Override - public void run() { - // Call the joinCluster function as though we were doing a master - // failover at this point. 
It will stall just before we go to add - // the RIT region to our RIT Map in AM at processRegionsInTransition. - // First clear any inmemory state from AM so it acts like a new master - // coming on line. - am.getRegionStates().regionsInTransition.clear(); - am.regionPlans.clear(); - try { - am.joinCluster(); - } catch (IOException e) { - throw new RuntimeException(e); - } catch (KeeperException e) { - throw new RuntimeException(e); - } catch (InterruptedException e) { - throw new RuntimeException(e); - } catch (CoordinatedStateException e) { - throw new RuntimeException(e); - } - } - }; - t.start(); - while (!t.isAlive()) Threads.sleep(1); - } - - @Test (timeout=180000) - public void testForceAssignMergingRegion() throws Exception { - // Region to use in test. - final HRegionInfo hri = HRegionInfo.FIRST_META_REGIONINFO; - // Need a mocked catalog tracker. - LoadBalancer balancer = LoadBalancerFactory.getLoadBalancer( - server.getConfiguration()); - // Create an AM. - AssignmentManager am = new AssignmentManager(this.server, - this.serverManager, balancer, null, null, master.getTableLockManager()); - RegionStates regionStates = am.getRegionStates(); - try { - // First set the state of the region to merging - regionStates.updateRegionState(hri, RegionState.State.MERGING); - // Now, try to assign it with force new plan - am.assign(hri, true, true); - assertEquals("The region should be still in merging state", - RegionState.State.MERGING, regionStates.getRegionState(hri).getState()); - } finally { - am.shutdown(); - } - } - - /** - * Test assignment related ZK events are ignored by AM if the region is not known - * by AM to be in transition. During normal operation, all assignments are started - * by AM (not considering split/merge), if an event is received but the region - * is not in transition, the event must be a very late one. So it can be ignored. - * During master failover, since AM watches assignment znodes after failover cleanup - * is completed, when an event comes in, AM should already have the region in transition - * if ZK is used during the assignment action (only hbck doesn't use ZK for region - * assignment). So during master failover, we can ignored such events too. - */ - @Test (timeout=180000) - public void testAssignmentEventIgnoredIfNotExpected() throws KeeperException, IOException, - CoordinatedStateException { - // Region to use in test. - final HRegionInfo hri = HRegionInfo.FIRST_META_REGIONINFO; - LoadBalancer balancer = LoadBalancerFactory.getLoadBalancer( - server.getConfiguration()); - final AtomicBoolean zkEventProcessed = new AtomicBoolean(false); - // Create an AM. 
- AssignmentManager am = new AssignmentManager(this.server, - this.serverManager, balancer, null, null, master.getTableLockManager()) { - - @Override - void handleRegion(final RegionTransition rt, OpenRegionCoordination coordination, - OpenRegionCoordination.OpenRegionDetails ord) { - super.handleRegion(rt, coordination, ord); - if (rt != null && Bytes.equals(hri.getRegionName(), - rt.getRegionName()) && rt.getEventType() == EventType.RS_ZK_REGION_OPENING) { - zkEventProcessed.set(true); - } - } - }; - try { - // First make sure the region is not in transition - am.getRegionStates().regionOffline(hri); - zkEventProcessed.set(false); // Reset it before faking zk transition - this.watcher.registerListenerFirst(am); - assertFalse("The region should not be in transition", - am.getRegionStates().isRegionInTransition(hri)); - ZKAssign.createNodeOffline(this.watcher, hri, SERVERNAME_A); - // Trigger a transition event - ZKAssign.transitionNodeOpening(this.watcher, hri, SERVERNAME_A); - long startTime = EnvironmentEdgeManager.currentTime(); - while (!zkEventProcessed.get()) { - assertTrue("Timed out in waiting for ZK event to be processed", - EnvironmentEdgeManager.currentTime() - startTime < 30000); - Threads.sleepWithoutInterrupt(100); - } - assertFalse(am.getRegionStates().isRegionInTransition(hri)); - } finally { - am.shutdown(); - } - } - - /** - * If a table is deleted, we should not be able to balance it anymore. - * Otherwise, the region will be brought back. - * @throws Exception - */ - @Test (timeout=180000) - public void testBalanceRegionOfDeletedTable() throws Exception { - AssignmentManager am = new AssignmentManager(this.server, this.serverManager, - balancer, null, null, master.getTableLockManager()); - RegionStates regionStates = am.getRegionStates(); - HRegionInfo hri = REGIONINFO; - regionStates.createRegionState(hri); - assertFalse(regionStates.isRegionInTransition(hri)); - RegionPlan plan = new RegionPlan(hri, SERVERNAME_A, SERVERNAME_B); - // Fake table is deleted - regionStates.tableDeleted(hri.getTable()); - am.balance(plan); - assertFalse("The region should not in transition", - regionStates.isRegionInTransition(hri)); - am.shutdown(); - } - - /** - * Tests an on-the-fly RPC that was scheduled for the earlier RS on the same port - * for openRegion. AM should assign this somewhere else. (HBASE-9721) - */ - @SuppressWarnings("unchecked") - @Test (timeout=180000) - public void testOpenCloseRegionRPCIntendedForPreviousServer() throws Exception { - Mockito.when(this.serverManager.sendRegionOpen(Mockito.eq(SERVERNAME_B), Mockito.eq(REGIONINFO), - Mockito.anyInt(), (List)Mockito.any())) - .thenThrow(new DoNotRetryIOException()); - this.server.getConfiguration().setInt("hbase.assignment.maximum.attempts", 100); - - HRegionInfo hri = REGIONINFO; - LoadBalancer balancer = LoadBalancerFactory.getLoadBalancer( - server.getConfiguration()); - // Create an AM. 
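// [Editor's aside -- illustrative sketch, not part of the original patch.] The HBASE-9721 test
// works by making the preferred destination reject the open outright, so assign() must retry on
// the other server (stub and call reused from this test):
Mockito.when(serverManager.sendRegionOpen(Mockito.eq(SERVERNAME_B), Mockito.eq(REGIONINFO),
    Mockito.anyInt(), (List) Mockito.any()))
    .thenThrow(new DoNotRetryIOException());
am.assign(hri, true, false);  // first attempt (SERVERNAME_B) fails, the retry lands on SERVERNAME_A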
- AssignmentManager am = new AssignmentManager(this.server, - this.serverManager, balancer, null, null, master.getTableLockManager()); - RegionStates regionStates = am.getRegionStates(); - try { - am.regionPlans.put(REGIONINFO.getEncodedName(), - new RegionPlan(REGIONINFO, null, SERVERNAME_B)); - - // Should fail once, but succeed on the second attempt for the SERVERNAME_A - am.assign(hri, true, false); - } finally { - assertEquals(SERVERNAME_A, regionStates.getRegionState(REGIONINFO).getServerName()); - am.shutdown(); - } - } -} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java index bf44147..e6d08b9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java @@ -42,7 +42,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.MiniHBaseCluster.MiniHBaseClusterRegionServer; @@ -57,45 +56,41 @@ import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.coprocessor.ObserverContext; import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment; import org.apache.hadoop.hbase.coprocessor.RegionObserver; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.master.RegionState.State; import org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCode; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.ConfigUtil; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.JVMClusterUtil; -import org.apache.hadoop.hbase.util.Threads; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.zookeeper.KeeperException; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; - /** * This tests AssignmentManager with a testing cluster. 
*/ -@Category(MediumTests.class) @SuppressWarnings("deprecation") +@Category({MasterTests.class, MediumTests.class}) public class TestAssignmentManagerOnCluster { private final static byte[] FAMILY = Bytes.toBytes("FAMILY"); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); final static Configuration conf = TEST_UTIL.getConfiguration(); private static HBaseAdmin admin; - static void setupOnce() throws Exception { + @BeforeClass + public static void setUpBeforeClass() throws Exception { // Using the our load balancer to control region plans conf.setClass(HConstants.HBASE_MASTER_LOADBALANCER_CLASS, MyLoadBalancer.class, LoadBalancer.class); @@ -103,20 +98,11 @@ public class TestAssignmentManagerOnCluster { MyRegionObserver.class, RegionObserver.class); // Reduce the maximum attempts to speed up the test conf.setInt("hbase.assignment.maximum.attempts", 3); - // Put meta on master to avoid meta server shutdown handling - conf.set("hbase.balancer.tablesOnMaster", "hbase:meta"); TEST_UTIL.startMiniCluster(1, 4, null, MyMaster.class, MyRegionServer.class); admin = TEST_UTIL.getHBaseAdmin(); } - @BeforeClass - public static void setUpBeforeClass() throws Exception { - // Use ZK for region assignment - conf.setBoolean("hbase.assignment.usezk", true); - setupOnce(); - } - @AfterClass public static void tearDownAfterClass() throws Exception { TEST_UTIL.shutdownMiniCluster(); @@ -134,33 +120,38 @@ public class TestAssignmentManagerOnCluster { RegionStates regionStates = master.getAssignmentManager().getRegionStates(); ServerName metaServerName = regionStates.getRegionServerOfRegion( HRegionInfo.FIRST_META_REGIONINFO); - if (master.getServerName().equals(metaServerName) || metaServerName == null - || !metaServerName.equals(cluster.getServerHoldingMeta())) { + if (master.getServerName().equals(metaServerName)) { // Move meta off master metaServerName = cluster.getLiveRegionServerThreads() .get(0).getRegionServer().getServerName(); master.move(HRegionInfo.FIRST_META_REGIONINFO.getEncodedNameAsBytes(), - Bytes.toBytes(metaServerName.getServerName())); - master.assignmentManager.waitUntilNoRegionsInTransition(60000); + Bytes.toBytes(metaServerName.getServerName())); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); } RegionState metaState = - MetaTableLocator.getMetaRegionState(master.getZooKeeper()); - assertEquals("Meta should be not in transition", - metaState.getState(), RegionState.State.OPEN); + MetaTableLocator.getMetaRegionState(master.getZooKeeper()); + assertEquals("Meta should be not in transition", + metaState.getState(), RegionState.State.OPEN); assertNotEquals("Meta should be moved off master", - metaServerName, master.getServerName()); + metaState.getServerName(), master.getServerName()); + assertEquals("Meta should be on the meta server", + metaState.getServerName(), metaServerName); cluster.killRegionServer(metaServerName); stoppedARegionServer = true; cluster.waitForRegionServerToStop(metaServerName, 60000); + // Wait for SSH to finish + final ServerName oldServerName = metaServerName; final ServerManager serverManager = master.getServerManager(); TEST_UTIL.waitFor(120000, 200, new Waiter.Predicate() { @Override public boolean evaluate() throws Exception { - return !serverManager.areDeadServersInProgress(); + return serverManager.isServerDead(oldServerName) + && !serverManager.areDeadServersInProgress(); } }); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); // Now, make sure meta is assigned assertTrue("Meta should be assigned", 
regionStates.isRegionOnline(HRegionInfo.FIRST_META_REGIONINFO)); @@ -202,73 +193,19 @@ public class TestAssignmentManagerOnCluster { RegionStates regionStates = am.getRegionStates(); ServerName serverName = regionStates.getRegionServerOfRegion(hri); - TEST_UTIL.assertRegionOnServer(hri, serverName, 6000); + TEST_UTIL.assertRegionOnServer(hri, serverName, 200); // Region is assigned now. Let's assign it again. // Master should not abort, and region should be assigned. - RegionState oldState = regionStates.getRegionState(hri); TEST_UTIL.getHBaseAdmin().assign(hri.getRegionName()); master.getAssignmentManager().waitForAssignment(hri); RegionState newState = regionStates.getRegionState(hri); - assertTrue(newState.isOpened() - && newState.getStamp() != oldState.getStamp()); + assertTrue(newState.isOpened()); } finally { TEST_UTIL.deleteTable(Bytes.toBytes(table)); } } - // Simulate a scenario where the AssignCallable and SSH are trying to assign a region - @Test (timeout=60000) - public void testAssignRegionBySSH() throws Exception { - if (!conf.getBoolean("hbase.assignment.usezk", true)) { - return; - } - String table = "testAssignRegionBySSH"; - MyMaster master = (MyMaster) TEST_UTIL.getHBaseCluster().getMaster(); - try { - HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(table)); - desc.addFamily(new HColumnDescriptor(FAMILY)); - admin.createTable(desc); - - HTable meta = new HTable(conf, TableName.META_TABLE_NAME); - HRegionInfo hri = new HRegionInfo( - desc.getTableName(), Bytes.toBytes("A"), Bytes.toBytes("Z")); - MetaTableAccessor.addRegionToMeta(meta, hri); - // Add some dummy server for the region entry - MetaTableAccessor.updateRegionLocation(TEST_UTIL.getHBaseCluster().getMaster().getConnection(), hri, - ServerName.valueOf("example.org", 1234, System.currentTimeMillis()), 0); - RegionStates regionStates = master.getAssignmentManager().getRegionStates(); - int i = TEST_UTIL.getHBaseCluster().getServerWithMeta(); - HRegionServer rs = TEST_UTIL.getHBaseCluster().getRegionServer(i == 0 ? 1 : 0); - // Choose a server other than meta to kill - ServerName controlledServer = rs.getServerName(); - master.enableSSH(false); - TEST_UTIL.getHBaseCluster().killRegionServer(controlledServer); - TEST_UTIL.getHBaseCluster().waitForRegionServerToStop(controlledServer, -1); - AssignmentManager am = master.getAssignmentManager(); - - // Simulate the AssignCallable trying to assign the region. 
Have the region in OFFLINE state, - // but not in transition and the server is the dead 'controlledServer' - regionStates.createRegionState(hri, State.OFFLINE, controlledServer, null); - am.assign(hri, true, true); - // Region should remain OFFLINE and go to transition - assertEquals(State.OFFLINE, regionStates.getRegionState(hri).getState()); - assertTrue (regionStates.isRegionInTransition(hri)); - - master.enableSSH(true); - am.waitForAssignment(hri); - assertTrue (regionStates.getRegionState(hri).isOpened()); - ServerName serverName = regionStates.getRegionServerOfRegion(hri); - TEST_UTIL.assertRegionOnlyOnServer(hri, serverName, 6000); - } finally { - if (master != null) { - master.enableSSH(true); - } - TEST_UTIL.deleteTable(Bytes.toBytes(table)); - TEST_UTIL.getHBaseCluster().startRegionServer(); - } - } - /** * This tests region assignment on a simulated restarted server */ @@ -277,7 +214,8 @@ public class TestAssignmentManagerOnCluster { String table = "testAssignRegionOnRestartedServer"; TEST_UTIL.getMiniHBaseCluster().getConf().setInt("hbase.assignment.maximum.attempts", 20); TEST_UTIL.getMiniHBaseCluster().stopMaster(0); - TEST_UTIL.getMiniHBaseCluster().startMaster(); //restart the master so that conf take into affect + //restart the master so that conf take into affect + TEST_UTIL.getMiniHBaseCluster().startMaster(); ServerName deadServer = null; HMaster master = null; @@ -298,7 +236,7 @@ public class TestAssignmentManagerOnCluster { // Use the first server as the destination server ServerName destServer = onlineServers.iterator().next(); - // Created faked dead server + // Created faked dead server that is still online in master deadServer = ServerName.valueOf(destServer.getHostname(), destServer.getPort(), destServer.getStartcode() - 100L); master.serverManager.recordNewServerWithLock(deadServer, ServerLoad.EMPTY_SERVERLOAD); @@ -308,11 +246,6 @@ public class TestAssignmentManagerOnCluster { am.addPlan(hri.getEncodedName(), plan); master.assignRegion(hri); - int version = ZKAssign.transitionNode(master.getZooKeeper(), hri, - destServer, EventType.M_ZK_REGION_OFFLINE, - EventType.RS_ZK_REGION_OPENING, 0); - assertEquals("TansitionNode should fail", -1, version); - TEST_UTIL.waitFor(60000, new Waiter.Predicate() { @Override public boolean evaluate() throws Exception { @@ -333,11 +266,6 @@ public class TestAssignmentManagerOnCluster { ServerName masterServerName = TEST_UTIL.getMiniHBaseCluster().getMaster().getServerName(); TEST_UTIL.getMiniHBaseCluster().stopMaster(masterServerName); TEST_UTIL.getMiniHBaseCluster().startMaster(); - // Wait till master is active and is initialized - while (TEST_UTIL.getMiniHBaseCluster().getMaster() == null || - !TEST_UTIL.getMiniHBaseCluster().getMaster().isInitialized()) { - Threads.sleep(1); - } } } @@ -354,7 +282,7 @@ public class TestAssignmentManagerOnCluster { RegionStates regionStates = TEST_UTIL.getHBaseCluster(). 
getMaster().getAssignmentManager().getRegionStates(); ServerName serverName = regionStates.getRegionServerOfRegion(hri); - TEST_UTIL.assertRegionOnServer(hri, serverName, 6000); + TEST_UTIL.assertRegionOnServer(hri, serverName, 200); admin.offline(hri.getRegionName()); long timeoutTime = System.currentTimeMillis() + 800; @@ -410,7 +338,7 @@ public class TestAssignmentManagerOnCluster { while (true) { ServerName sn = regionStates.getRegionServerOfRegion(hri); if (sn != null && sn.equals(destServerName)) { - TEST_UTIL.assertRegionOnServer(hri, sn, 6000); + TEST_UTIL.assertRegionOnServer(hri, sn, 200); break; } long now = System.currentTimeMillis(); @@ -498,14 +426,11 @@ public class TestAssignmentManagerOnCluster { } /** - * This test should not be flaky. If it is flaky, it means something - * wrong with AssignmentManager which should be reported and fixed - * - * This tests forcefully assign a region while it's closing and re-assigned. + * This tests assign a region while it's closing. */ @Test (timeout=60000) - public void testForceAssignWhileClosing() throws Exception { - String table = "testForceAssignWhileClosing"; + public void testAssignWhileClosing() throws Exception { + String table = "testAssignWhileClosing"; try { HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(table)); desc.addFamily(new HColumnDescriptor(FAMILY)); @@ -529,12 +454,12 @@ public class TestAssignmentManagerOnCluster { assertEquals(RegionState.State.FAILED_CLOSE, state.getState()); MyRegionObserver.preCloseEnabled.set(false); - am.unassign(hri, true); + am.unassign(hri); // region is closing now, will be re-assigned automatically. // now, let's forcefully assign it again. it should be // assigned properly and no double-assignment - am.assign(hri, true, true); + am.assign(hri, true); // let's check if it's assigned after it's out of transition am.waitOnRegionToClearRegionsInTransition(hri); @@ -578,7 +503,7 @@ public class TestAssignmentManagerOnCluster { assertEquals(RegionState.State.FAILED_CLOSE, state.getState()); MyRegionObserver.preCloseEnabled.set(false); - am.unassign(hri, true); + am.unassign(hri); // region may still be assigned now since it's closing, // let's check if it's assigned after it's out of transition @@ -588,7 +513,7 @@ public class TestAssignmentManagerOnCluster { assertTrue(am.waitForAssignment(hri)); ServerName serverName = master.getAssignmentManager(). getRegionStates().getRegionServerOfRegion(hri); - TEST_UTIL.assertRegionOnServer(hri, serverName, 6000); + TEST_UTIL.assertRegionOnServer(hri, serverName, 200); } finally { MyRegionObserver.preCloseEnabled.set(false); TEST_UTIL.deleteTable(Bytes.toBytes(table)); @@ -629,7 +554,7 @@ public class TestAssignmentManagerOnCluster { ServerName serverName = master.getAssignmentManager(). getRegionStates().getRegionServerOfRegion(hri); - TEST_UTIL.assertRegionOnServer(hri, serverName, 6000); + TEST_UTIL.assertRegionOnServer(hri, serverName, 200); } finally { MyLoadBalancer.controledRegion = null; TEST_UTIL.deleteTable(Bytes.toBytes(table)); @@ -679,7 +604,7 @@ public class TestAssignmentManagerOnCluster { ServerName serverName = master.getAssignmentManager(). 
getRegionStates().getRegionServerOfRegion(hri); - TEST_UTIL.assertRegionOnServer(hri, serverName, 6000); + TEST_UTIL.assertRegionOnServer(hri, serverName, 200); } finally { TEST_UTIL.deleteTable(table); } @@ -706,22 +631,9 @@ public class TestAssignmentManagerOnCluster { } } am.regionOffline(hri); - ZooKeeperWatcher zkw = TEST_UTIL.getHBaseCluster().getMaster().getZooKeeper(); - am.getRegionStates().updateRegionState(hri, State.PENDING_OPEN, destServerName); - if (ConfigUtil.useZKForAssignment(conf)) { - ZKAssign.createNodeOffline(zkw, hri, destServerName); - ZKAssign.transitionNodeOpening(zkw, hri, destServerName); - - // Wait till the event is processed and the region is in transition - long timeoutTime = System.currentTimeMillis() + 20000; - while (!am.getRegionStates().isRegionInTransition(hri)) { - assertTrue("Failed to process ZK opening event in time", - System.currentTimeMillis() < timeoutTime); - Thread.sleep(100); - } - } + am.getRegionStates().updateRegionState(hri, RegionState.State.PENDING_OPEN, destServerName); - am.getTableStateManager().setTableState(table, ZooKeeperProtos.Table.State.DISABLING); + am.getTableStateManager().setTableState(table, TableState.State.DISABLING); List toAssignRegions = am.processServerShutdown(destServerName); assertTrue("Regions to be assigned should be empty.", toAssignRegions.isEmpty()); assertTrue("Regions to be assigned should be empty.", am.getRegionStates() @@ -730,7 +642,7 @@ public class TestAssignmentManagerOnCluster { if (hri != null && serverName != null) { am.regionOnline(hri, serverName); } - am.getTableStateManager().setTableState(table, ZooKeeperProtos.Table.State.DISABLED); + am.getTableStateManager().setTableState(table, TableState.State.DISABLED); TEST_UTIL.deleteTable(table); } } @@ -760,15 +672,6 @@ public class TestAssignmentManagerOnCluster { MyRegionObserver.postCloseEnabled.set(true); am.unassign(hri); - // Now region should pending_close or closing - // Unassign it again forcefully so that we can trigger already - // in transition exception. This test is to make sure this scenario - // is handled properly. - am.server.getConfiguration().setLong( - AssignmentManager.ALREADY_IN_TRANSITION_WAITTIME, 1000); - am.unassign(hri, true); - RegionState state = am.getRegionStates().getRegionState(hri); - assertEquals(RegionState.State.FAILED_CLOSE, state.getState()); // Let region closing move ahead. The region should be closed // properly and re-assigned automatically @@ -782,7 +685,7 @@ public class TestAssignmentManagerOnCluster { assertTrue(am.waitForAssignment(hri)); ServerName serverName = master.getAssignmentManager(). getRegionStates().getRegionServerOfRegion(hri); - TEST_UTIL.assertRegionOnServer(hri, serverName, 6000); + TEST_UTIL.assertRegionOnServer(hri, serverName, 200); } finally { MyRegionObserver.postCloseEnabled.set(false); TEST_UTIL.deleteTable(Bytes.toBytes(table)); @@ -824,12 +727,13 @@ public class TestAssignmentManagerOnCluster { am.unassign(hri); RegionState state = am.getRegionStates().getRegionState(hri); ServerName oldServerName = state.getServerName(); - assertTrue(state.isPendingOpenOrOpening() && oldServerName != null); + assertTrue(state.isOpening() && oldServerName != null); // Now the region is stuck in opening // Let's forcefully re-assign it to trigger closing/opening // racing. This test is to make sure this scenario // is handled properly. 
+ MyRegionObserver.postOpenEnabled.set(false); ServerName destServerName = null; int numRS = TEST_UTIL.getHBaseCluster().getLiveRegionServerThreads().size(); for (int i = 0; i < numRS; i++) { @@ -845,9 +749,6 @@ public class TestAssignmentManagerOnCluster { List regions = new ArrayList(); regions.add(hri); am.assign(destServerName, regions); - - // let region open continue - MyRegionObserver.postOpenEnabled.set(false); // let's check if it's assigned after it's out of transition am.waitOnRegionToClearRegionsInTransition(hri); @@ -909,12 +810,13 @@ public class TestAssignmentManagerOnCluster { } // You can't assign a dead region before SSH - am.assign(hri, true, true); + am.assign(hri, true); RegionState state = regionStates.getRegionState(hri); assertTrue(state.isFailedClose()); // You can't unassign a dead region before SSH either - am.unassign(hri, true); + am.unassign(hri); + state = regionStates.getRegionState(hri); assertTrue(state.isFailedClose()); // Enable SSH so that log can be split @@ -938,6 +840,58 @@ public class TestAssignmentManagerOnCluster { } /** + * Test SSH waiting for extra region server for assignment + */ + @Test (timeout=300000) + public void testSSHWaitForServerToAssignRegion() throws Exception { + TableName table = TableName.valueOf("testSSHWaitForServerToAssignRegion"); + MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); + boolean startAServer = false; + try { + HTableDescriptor desc = new HTableDescriptor(table); + desc.addFamily(new HColumnDescriptor(FAMILY)); + admin.createTable(desc); + + HMaster master = cluster.getMaster(); + final ServerManager serverManager = master.getServerManager(); + MyLoadBalancer.countRegionServers = Integer.valueOf( + serverManager.countOfRegionServers()); + HRegionServer rs = TEST_UTIL.getRSForFirstRegionInTable(table); + assertNotNull("First region should be assigned", rs); + final ServerName serverName = rs.getServerName(); + // Wait till SSH tried to assign regions a several times + int counter = MyLoadBalancer.counter.get() + 5; + cluster.killRegionServer(serverName); + startAServer = true; + cluster.waitForRegionServerToStop(serverName, -1); + while (counter > MyLoadBalancer.counter.get()) { + Thread.sleep(1000); + } + cluster.startRegionServer(); + startAServer = false; + // Wait till the dead server is processed by SSH + TEST_UTIL.waitFor(120000, 1000, new Waiter.Predicate() { + @Override + public boolean evaluate() throws Exception { + return serverManager.isServerDead(serverName) + && !serverManager.areDeadServersInProgress(); + } + }); + TEST_UTIL.waitUntilNoRegionsInTransition(300000); + + rs = TEST_UTIL.getRSForFirstRegionInTable(table); + assertTrue("First region should be re-assigned to a different server", + rs != null && !serverName.equals(rs.getServerName())); + } finally { + MyLoadBalancer.countRegionServers = null; + TEST_UTIL.deleteTable(table); + if (startAServer) { + cluster.startRegionServer(); + } + } + } + + /** * Test force unassign/assign a region of a disabled table */ @Test (timeout=60000) @@ -967,11 +921,11 @@ public class TestAssignmentManagerOnCluster { assertTrue(regionStates.isRegionOffline(hri)); // You can't assign a disabled region - am.assign(hri, true, true); + am.assign(hri, true); assertTrue(regionStates.isRegionOffline(hri)); // You can't unassign a disabled region either - am.unassign(hri, true); + am.unassign(hri); assertTrue(regionStates.isRegionOffline(hri)); } finally { TEST_UTIL.deleteTable(table); @@ -1027,7 +981,7 @@ public class TestAssignmentManagerOnCluster { 
assertEquals(oldServerName, regionStates.getRegionServerOfRegion(hri)); // Try to unassign the dead region before SSH - am.unassign(hri, false); + am.unassign(hri); // The region should be moved to offline since the server is dead RegionState state = regionStates.getRegionState(hri); assertTrue(state.isOffline()); @@ -1058,58 +1012,6 @@ public class TestAssignmentManagerOnCluster { } /** - * Test SSH waiting for extra region server for assignment - */ - @Test (timeout=300000) - public void testSSHWaitForServerToAssignRegion() throws Exception { - TableName table = TableName.valueOf("testSSHWaitForServerToAssignRegion"); - MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); - boolean startAServer = false; - try { - HTableDescriptor desc = new HTableDescriptor(table); - desc.addFamily(new HColumnDescriptor(FAMILY)); - admin.createTable(desc); - - HMaster master = cluster.getMaster(); - final ServerManager serverManager = master.getServerManager(); - MyLoadBalancer.countRegionServers = Integer.valueOf( - serverManager.countOfRegionServers()); - HRegionServer rs = TEST_UTIL.getRSForFirstRegionInTable(table); - assertNotNull("First region should be assigned", rs); - final ServerName serverName = rs.getServerName(); - // Wait till SSH tried to assign regions a several times - int counter = MyLoadBalancer.counter.get() + 5; - cluster.killRegionServer(serverName); - startAServer = true; - cluster.waitForRegionServerToStop(serverName, -1); - while (counter > MyLoadBalancer.counter.get()) { - Thread.sleep(1000); - } - cluster.startRegionServer(); - startAServer = false; - // Wait till the dead server is processed by SSH - TEST_UTIL.waitFor(120000, 1000, new Waiter.Predicate() { - @Override - public boolean evaluate() throws Exception { - return serverManager.isServerDead(serverName) - && !serverManager.areDeadServersInProgress(); - } - }); - TEST_UTIL.waitUntilAllRegionsAssigned(table, 300000); - - rs = TEST_UTIL.getRSForFirstRegionInTable(table); - assertTrue("First region should be re-assigned to a different server", - rs != null && !serverName.equals(rs.getServerName())); - } finally { - MyLoadBalancer.countRegionServers = null; - TEST_UTIL.deleteTable(table); - if (startAServer) { - cluster.startRegionServer(); - } - } - } - - /** * Test disabled region is ignored by SSH */ @Test (timeout=60000) @@ -1158,7 +1060,7 @@ public class TestAssignmentManagerOnCluster { assertEquals(oldServerName, regionStates.getRegionServerOfRegion(hri)); // Try to unassign the dead region before SSH - am.unassign(hri, false); + am.unassign(hri); // The region should be moved to offline since the server is dead RegionState state = regionStates.getRegionState(hri); assertTrue(state.isOffline()); @@ -1177,7 +1079,7 @@ public class TestAssignmentManagerOnCluster { } // Wait till no more RIT, the region should be offline. 
- am.waitUntilNoRegionsInTransition(60000); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); assertTrue(regionStates.isRegionOffline(hri)); } finally { MyRegionServer.abortedServer = null; @@ -1185,7 +1087,7 @@ public class TestAssignmentManagerOnCluster { cluster.startRegionServer(); } } - + /** * Test that region state transition call is idempotent */ @@ -1208,7 +1110,7 @@ public class TestAssignmentManagerOnCluster { RegionStates regionStates = am.getRegionStates(); ServerName serverName = regionStates.getRegionServerOfRegion(hri); // Assert the the region is actually open on the server - TEST_UTIL.assertRegionOnServer(hri, serverName, 6000); + TEST_UTIL.assertRegionOnServer(hri, serverName, 200); // Closing region should just work fine admin.disableTable(TableName.valueOf(table)); assertTrue(regionStates.isRegionOffline(hri)); @@ -1226,10 +1128,6 @@ public class TestAssignmentManagerOnCluster { */ @Test(timeout = 30000) public void testUpdatesRemoteMeta() throws Exception { - // Not for zk less assignment - if (conf.getBoolean("hbase.assignment.usezk", true)) { - return; - } conf.setInt("hbase.regionstatestore.meta.connection", 3); final RegionStateStore rss = new RegionStateStore(new MyRegionServer(conf, new ZkCoordinatedStateManager())); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java index 912c600..cc501ed 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestCatalogJanitor.java @@ -49,12 +49,14 @@ import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.TableDescriptors; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.ClusterConnection; +import org.apache.hadoop.hbase.client.HConnectionManager; import org.apache.hadoop.hbase.client.HConnectionTestingUtility; import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.coordination.BaseCoordinatedStateManager; import org.apache.hadoop.hbase.coordination.SplitLogManagerCoordination; import org.apache.hadoop.hbase.coordination.SplitLogManagerCoordination.SplitLogManagerDetails; @@ -71,7 +73,10 @@ import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutateResponse; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionAction; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionActionResult; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ResultOrException; +import org.apache.hadoop.hbase.quotas.MasterQuotaManager; import org.apache.hadoop.hbase.regionserver.HStore; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.HFileArchiveUtil; @@ -88,7 +93,7 @@ import com.google.protobuf.RpcController; import com.google.protobuf.Service; import com.google.protobuf.ServiceException; -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class 
TestCatalogJanitor { private static final Log LOG = LogFactory.getLog(TestCatalogJanitor.class); @@ -241,6 +246,11 @@ public class TestCatalogJanitor { } @Override + public MasterQuotaManager getMasterQuotaManager() { + return null; + } + + @Override public ServerManager getServerManager() { return null; } @@ -302,13 +312,18 @@ public class TestCatalogJanitor { return new TableDescriptors() { @Override public HTableDescriptor remove(TableName tablename) throws IOException { - // TODO Auto-generated method stub + // noop return null; } @Override public Map getAll() throws IOException { - // TODO Auto-generated method stub + // noop + return null; + } + + @Override public Map getAllDescriptors() throws IOException { + // noop return null; } @@ -319,15 +334,26 @@ public class TestCatalogJanitor { } @Override + public TableDescriptor getDescriptor(TableName tablename) + throws IOException { + return createTableDescriptor(); + } + + @Override public Map getByNamespace(String name) throws IOException { return null; } @Override public void add(HTableDescriptor htd) throws IOException { - // TODO Auto-generated method stub + // noop + } + @Override + public void add(TableDescriptor htd) throws IOException { + // noop } + @Override public void setCacheOn() throws IOException { } @@ -418,6 +444,11 @@ public class TestCatalogJanitor { } @Override + public TableStateManager getTableStateManager() { + return null; + } + + @Override public void dispatchMergingRegions(HRegionInfo region_a, HRegionInfo region_b, boolean forcible) throws IOException { } @@ -859,7 +890,7 @@ public class TestCatalogJanitor { MasterServices services = new MockMasterServices(server); // create the janitor - + CatalogJanitor janitor = new CatalogJanitor(server, services); // Create regions. 
@@ -989,6 +1020,11 @@ public class TestCatalogJanitor { return htd; } + private TableDescriptor createTableDescriptor() { + TableDescriptor htd = new TableDescriptor(createHTableDescriptor(), TableState.State.ENABLED); + return htd; + } + private MultiResponse buildMultiResponse(MultiRequest req) { MultiResponse.Builder builder = MultiResponse.newBuilder(); RegionActionResult.Builder regionActionResultBuilder = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java index 56e86dc..dd53993 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClockSkewDetection.java @@ -31,13 +31,14 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.client.ClusterConnection; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestClockSkewDetection { private static final Log LOG = LogFactory.getLog(TestClockSkewDetection.class); @@ -78,7 +79,7 @@ public class TestClockSkewDetection { @Override public void abort(String why, Throwable e) {} - + @Override public boolean isAborted() { return false; @@ -113,7 +114,7 @@ public class TestClockSkewDetection { //we want an exception LOG.info("Recieved expected exception: "+e); } - + try { // Master Time < Region Server Time LOG.debug("Test: Master Time < Region Server Time"); @@ -125,17 +126,17 @@ public class TestClockSkewDetection { // we want an exception LOG.info("Recieved expected exception: " + e); } - + // make sure values above warning threshold but below max threshold don't kill LOG.debug("regionServerStartup 4"); InetAddress ia4 = InetAddress.getLocalHost(); sm.regionServerStartup(ia4, 1237, -1, System.currentTimeMillis() - warningSkew * 2); - + // make sure values above warning threshold but below max threshold don't kill LOG.debug("regionServerStartup 5"); InetAddress ia5 = InetAddress.getLocalHost(); sm.regionServerStartup(ia5, 1238, -1, System.currentTimeMillis() + warningSkew * 2); - + } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterStatusPublisher.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterStatusPublisher.java index 3774503..5d47ede 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterStatusPublisher.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestClusterStatusPublisher.java @@ -20,8 +20,9 @@ package org.apache.hadoop.hbase.master; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.ManualEnvironmentEdge; import org.apache.hadoop.hbase.util.Pair; @@ -33,7 +34,7 @@ import org.junit.experimental.categories.Category; import java.util.ArrayList; import java.util.List; 
-@Category(MediumTests.class) // Plays with the ManualEnvironmentEdge +@Category({MasterTests.class, MediumTests.class}) // Plays with the ManualEnvironmentEdge public class TestClusterStatusPublisher { private ManualEnvironmentEdge mee = new ManualEnvironmentEdge(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDeadServer.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDeadServer.java index 5452de1..40d26f4 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDeadServer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDeadServer.java @@ -17,8 +17,9 @@ */ package org.apache.hadoop.hbase.master; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.ManualEnvironmentEdge; import org.apache.hadoop.hbase.util.Pair; @@ -32,7 +33,7 @@ import java.util.Set; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestDeadServer { final ServerName hostname123 = ServerName.valueOf("127.0.0.1", 123, 3L); final ServerName hostname123_2 = ServerName.valueOf("127.0.0.1", 123, 4L); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java index 8daafe4..793d299 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java @@ -61,7 +61,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.ServerName; @@ -84,6 +83,7 @@ import org.apache.hadoop.hbase.coordination.BaseCoordinatedStateManager; import org.apache.hadoop.hbase.coordination.ZKSplitLogManagerCoordination; import org.apache.hadoop.hbase.exceptions.OperationConflictException; import org.apache.hadoop.hbase.exceptions.RegionInRecoveryException; +import org.apache.hadoop.hbase.ipc.ServerNotRunningYetException; import org.apache.hadoop.hbase.master.SplitLogManager.TaskBatch; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetRegionInfoResponse.CompactionState; @@ -94,7 +94,10 @@ import org.apache.hadoop.hbase.regionserver.wal.WALEdit; import org.apache.hadoop.hbase.wal.DefaultWALProvider; import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.wal.WALFactory; +import org.apache.hadoop.hbase.wal.WALKey; import org.apache.hadoop.hbase.wal.WALSplitter; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.FSUtils; @@ -102,11 +105,9 @@ import org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread; 
import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread; import org.apache.hadoop.hbase.util.Threads; import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; import org.apache.hadoop.hbase.zookeeper.ZKUtil; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.hadoop.hdfs.MiniDFSCluster; -import org.apache.zookeeper.KeeperException; import org.junit.After; import org.junit.AfterClass; import org.junit.Assert; @@ -115,7 +116,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({MasterTests.class, LargeTests.class}) @SuppressWarnings("deprecation") public class TestDistributedLogSplitting { private static final Log LOG = LogFactory.getLog(TestSplitLogManager.class); @@ -176,7 +177,7 @@ public class TestDistributedLogSplitting { cluster.waitForActiveAndReadyMaster(); master = cluster.getMaster(); while (cluster.getLiveRegionServerThreads().size() < num_rs) { - Threads.sleep(1); + Threads.sleep(10); } } @@ -897,7 +898,7 @@ public class TestDistributedLogSplitting { List rsts = cluster.getLiveRegionServerThreads(); final ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "table-creation", null); - Table ht = installTable(zkw, "table", "family", NUM_REGIONS_TO_CREATE); + HTable ht = installTable(zkw, "table", "family", NUM_REGIONS_TO_CREATE); final SplitLogManager slm = master.getMasterFileSystem().splitLogManager; Set regionSet = new HashSet(); @@ -909,7 +910,6 @@ public class TestDistributedLogSplitting { List regions = ProtobufUtil.getOnlineRegions(hrs.getRSRpcServices()); if (regions.isEmpty()) continue; region = regions.get(0); - if (region.isMetaRegion()) continue; regionSet.add(region); dstRS = rsts.get((i+1) % NUM_RS).getRegionServer(); break; @@ -1415,14 +1415,14 @@ public class TestDistributedLogSplitting { }); // only one seqid file should exist assertEquals(1, files.length); - + // verify all seqId files aren't treated as recovered.edits files NavigableSet recoveredEdits = WALSplitter.getSplitEditFilesSorted(fs, regionDirs.get(0)); assertEquals(0, recoveredEdits.size()); - + ht.close(); - } - + } + HTable installTable(ZooKeeperWatcher zkw, String tname, String fname, int nrs) throws Exception { return installTable(zkw, tname, fname, nrs, 0); } @@ -1481,6 +1481,27 @@ public class TestDistributedLogSplitting { putData(region, hri.getStartKey(), nrows, Bytes.toBytes("q"), family); } } + + for (MasterThread mt : cluster.getLiveMasterThreads()) { + HRegionServer hrs = mt.getMaster(); + List hris; + try { + hris = ProtobufUtil.getOnlineRegions(hrs.getRSRpcServices()); + } catch (ServerNotRunningYetException e) { + // It's ok: this master may be a backup. Ignored. 
+ continue; + } + for (HRegionInfo hri : hris) { + if (hri.getTable().isSystemTable()) { + continue; + } + LOG.debug("adding data to rs = " + mt.getName() + + " region = "+ hri.getRegionNameAsString()); + HRegion region = hrs.getOnlineRegion(hri.getRegionName()); + assertTrue(region != null); + putData(region, hri.getStartKey(), nrows, Bytes.toBytes("q"), family); + } + } } public void makeWAL(HRegionServer hrs, List regions, String tname, String fname, @@ -1586,10 +1607,8 @@ public class TestDistributedLogSplitting { return count; } - private void blockUntilNoRIT(ZooKeeperWatcher zkw, HMaster master) - throws KeeperException, InterruptedException { - ZKAssign.blockUntilNoRIT(zkw); - master.assignmentManager.waitUntilNoRegionsInTransition(60000); + private void blockUntilNoRIT(ZooKeeperWatcher zkw, HMaster master) throws Exception { + TEST_UTIL.waitUntilNoRegionsInTransition(60000); } private void putData(HRegion region, byte[] startRow, int numRows, byte [] qf, @@ -1672,14 +1691,19 @@ public class TestDistributedLogSplitting { */ private HRegionServer findRSToKill(boolean hasMetaRegion, String tableName) throws Exception { List rsts = cluster.getLiveRegionServerThreads(); - int numOfRSs = rsts.size(); List regions = null; HRegionServer hrs = null; - for (int i = 0; i < numOfRSs; i++) { + for (RegionServerThread rst: rsts) { + hrs = rst.getRegionServer(); + while (rst.isAlive() && !hrs.isOnline()) { + Thread.sleep(100); + } + if (!rst.isAlive()) { + continue; + } boolean isCarryingMeta = false; boolean foundTableRegion = false; - hrs = rsts.get(i).getRegionServer(); regions = ProtobufUtil.getOnlineRegions(hrs.getRSRpcServices()); for (HRegionInfo region : regions) { if (region.isMetaRegion()) { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestGetLastFlushedSequenceId.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestGetLastFlushedSequenceId.java new file mode 100644 index 0000000..0f7c281 --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestGetLastFlushedSequenceId.java @@ -0,0 +1,99 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.master; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertTrue; + +import java.io.IOException; +import java.util.List; + +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.MiniHBaseCluster; +import org.apache.hadoop.hbase.NamespaceDescriptor; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.JVMClusterUtil; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +/** + * Trivial test to confirm that we can get last flushed sequence id by encodedRegionName. See + * HBASE-12715. + */ +@Category(MediumTests.class) +public class TestGetLastFlushedSequenceId { + + private final HBaseTestingUtility testUtil = new HBaseTestingUtility(); + + private final TableName tableName = TableName.valueOf(getClass().getSimpleName(), "test"); + + private final byte[] family = Bytes.toBytes("f1"); + + private final byte[][] families = new byte[][] { family }; + + @Before + public void setUp() throws Exception { + testUtil.getConfiguration().setInt("hbase.regionserver.msginterval", 1000); + testUtil.startMiniCluster(1, 1); + } + + @After + public void tearDown() throws Exception { + testUtil.shutdownMiniCluster(); + } + + @Test + public void test() throws IOException, InterruptedException { + testUtil.getHBaseAdmin().createNamespace( + NamespaceDescriptor.create(tableName.getNamespaceAsString()).build()); + HTable table = testUtil.createTable(tableName, families); + table.put(new Put(Bytes.toBytes("k")).add(family, Bytes.toBytes("q"), Bytes.toBytes("v"))); + table.flushCommits(); + MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster(); + List rsts = cluster.getRegionServerThreads(); + HRegion region = null; + for (int i = 0; i < cluster.getRegionServerThreads().size(); i++) { + HRegionServer hrs = rsts.get(i).getRegionServer(); + for (HRegion r : hrs.getOnlineRegions(tableName)) { + region = r; + break; + } + } + assertNotNull(region); + Thread.sleep(2000); + assertEquals( + HConstants.NO_SEQNUM, + testUtil.getHBaseCluster().getMaster() + .getLastSequenceId(region.getRegionInfo().getEncodedNameAsBytes())); + testUtil.getHBaseAdmin().flush(tableName); + Thread.sleep(2000); + assertTrue(testUtil.getHBaseCluster().getMaster() + .getLastSequenceId(region.getRegionInfo().getEncodedNameAsBytes()) > 0); + table.close(); + } +} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestHMasterCommandLine.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestHMasterCommandLine.java index b7f1e16..2cb42f7 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestHMasterCommandLine.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestHMasterCommandLine.java @@ -21,11 +21,12 @@ import static org.junit.Assert.*; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.junit.Test; import 
org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestHMasterCommandLine { private static final HBaseTestingUtility TESTING_UTIL = new HBaseTestingUtility(); @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestHMasterRPCException.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestHMasterRPCException.java index 9945647..2419918 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestHMasterRPCException.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestHMasterRPCException.java @@ -34,6 +34,7 @@ import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos; import org.apache.hadoop.hbase.protobuf.generated.MasterProtos.IsMasterRunningRequest; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -41,7 +42,7 @@ import org.junit.experimental.categories.Category; import com.google.protobuf.BlockingRpcChannel; import com.google.protobuf.ServiceException; -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestHMasterRPCException { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMaster.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMaster.java index c1c148e..d2f1eab 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMaster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMaster.java @@ -33,7 +33,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.PleaseHoldException; @@ -42,7 +41,9 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.UnknownRegionException; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HTable; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; +import org.apache.hadoop.hbase.client.TableState; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.util.StringUtils; @@ -53,7 +54,7 @@ import org.junit.experimental.categories.Category; import com.google.common.base.Joiner; -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestMaster { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private static final Log LOG = LogFactory.getLog(TestMaster.class); @@ -69,7 +70,6 @@ public class TestMaster { // Start a cluster of two regionservers. 
TEST_UTIL.startMiniCluster(2); admin = TEST_UTIL.getHBaseAdmin(); - TEST_UTIL.getHBaseCluster().getMaster().assignmentManager.initializeHandlerTrackers(); } @AfterClass @@ -78,18 +78,18 @@ public class TestMaster { } @Test + @SuppressWarnings("deprecation") public void testMasterOpsWhileSplitting() throws Exception { MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); HMaster m = cluster.getMaster(); HTable ht = TEST_UTIL.createTable(TABLENAME, FAMILYNAME); assertTrue(m.assignmentManager.getTableStateManager().isTableState(TABLENAME, - ZooKeeperProtos.Table.State.ENABLED)); + TableState.State.ENABLED)); TEST_UTIL.loadTable(ht, FAMILYNAME, false); ht.close(); List> tableRegions = MetaTableAccessor.getTableRegionsAndLocations( - m.getZooKeeper(), m.getConnection(), TABLENAME); LOG.info("Regions after load: " + Joiner.on(',').join(tableRegions)); assertEquals(1, tableRegions.size()); @@ -107,8 +107,7 @@ public class TestMaster { Thread.sleep(100); } LOG.info("Making sure we can call getTableRegions while opening"); - tableRegions = MetaTableAccessor.getTableRegionsAndLocations(m.getZooKeeper(), - m.getConnection(), + tableRegions = MetaTableAccessor.getTableRegionsAndLocations(m.getConnection(), TABLENAME, false); LOG.info("Regions: " + Joiner.on(',').join(tableRegions)); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java index 26e46c6..f211754 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java @@ -19,23 +19,17 @@ package org.apache.hadoop.hbase.master; import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertTrue; import java.io.IOException; -import java.util.ArrayList; -import java.util.Iterator; import java.util.List; -import java.util.Set; -import java.util.TreeSet; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.Abortable; import org.apache.hadoop.hbase.ClusterStatus; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; @@ -43,883 +37,30 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.MiniHBaseCluster; -import org.apache.hadoop.hbase.RegionTransition; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.TableStateManager; import org.apache.hadoop.hbase.client.RegionLocator; import org.apache.hadoop.hbase.client.Table; -import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.master.RegionState.State; -import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.RequestConverter; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.HRegionServer; -import 
org.apache.hadoop.hbase.regionserver.RegionMergeTransaction; -import org.apache.hadoop.hbase.regionserver.RegionServerStoppedException; +import org.apache.hadoop.hbase.testclassification.FlakeyTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSTableDescriptors; import org.apache.hadoop.hbase.util.FSUtils; -import org.apache.hadoop.hbase.util.JVMClusterUtil; import org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread; -import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread; -import org.apache.hadoop.hbase.util.Threads; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZKTableStateManager; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.data.Stat; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({FlakeyTests.class, LargeTests.class}) public class TestMasterFailover { private static final Log LOG = LogFactory.getLog(TestMasterFailover.class); - /** - * Complex test of master failover that tests as many permutations of the - * different possible states that regions in transition could be in within ZK. - *

-   * This tests the proper handling of these states by the failed-over master
-   * and includes a thorough testing of the timeout code as well.
-   * <p>
-   * Starts with a single master and three regionservers.
-   * <p>
-   * Creates two tables, enabledTable and disabledTable, each containing 5
-   * regions.  The disabledTable is then disabled.
-   * <p>
-   * After reaching steady-state, the master is killed.  We then mock several
-   * states in ZK.
-   * <p>
-   * After mocking them, we will startup a new master which should become the
-   * active master and also detect that it is a failover.  The primary test
-   * passing condition will be that all regions of the enabled table are
-   * assigned and all the regions of the disabled table are not assigned.
-   * <p>
-   * The different scenarios to be tested are below:
-   * <p>
-   * ZK State: OFFLINE
-   * <p>A node can get into OFFLINE state if</p>
-   * <ul>
-   * <li>An RS fails to open a region, so it reverts the state back to OFFLINE</li>
-   * <li>The Master is assigning the region to a RS before it sends RPC</li>
-   * </ul>
-   * <p>We will mock the scenarios</p>
-   * <ul>
-   * <li>Master has assigned an enabled region but RS failed so a region is
-   *     not assigned anywhere and is sitting in ZK as OFFLINE</li>
-   * <li>This seems to cover both cases?</li>
-   * </ul>
-   * <p>
-   * ZK State: CLOSING
-   * <p>A node can get into CLOSING state if</p>
-   * <ul>
-   * <li>An RS has begun to close a region</li>
-   * </ul>
-   * <p>We will mock the scenarios</p>
-   * <ul>
-   * <li>Region of enabled table was being closed but did not complete</li>
-   * <li>Region of disabled table was being closed but did not complete</li>
-   * </ul>
-   * <p>
-   * ZK State: CLOSED
-   * <p>A node can get into CLOSED state if</p>
-   * <ul>
-   * <li>An RS has completed closing a region but not acknowledged by master yet</li>
-   * </ul>
-   * <p>We will mock the scenarios</p>
-   * <ul>
-   * <li>Region of a table that should be enabled was closed on an RS</li>
-   * <li>Region of a table that should be disabled was closed on an RS</li>
-   * </ul>
-   * <p>
-   * ZK State: OPENING
-   * <p>A node can get into OPENING state if</p>
-   * <ul>
-   * <li>An RS has begun to open a region</li>
-   * </ul>
-   * <p>We will mock the scenarios</p>
-   * <ul>
-   * <li>RS was opening a region of enabled table but never finishes</li>
-   * </ul>
-   * <p>
-   * ZK State: OPENED
-   * <p>A node can get into OPENED state if</p>
-   * <ul>
-   * <li>An RS has finished opening a region but not acknowledged by master yet</li>
-   * </ul>
-   * <p>We will mock the scenarios</p>
-   * <ul>
-   * <li>Region of a table that should be enabled was opened on an RS</li>
    - * @throws Exception - */ - @Test (timeout=240000) - public void testMasterFailoverWithMockedRIT() throws Exception { - - final int NUM_MASTERS = 1; - final int NUM_RS = 3; - - // Create config to use for this cluster - Configuration conf = HBaseConfiguration.create(); - conf.setBoolean("hbase.assignment.usezk", true); - - // Start the cluster - HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(conf); - TEST_UTIL.startMiniCluster(NUM_MASTERS, NUM_RS); - MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); - log("Cluster started"); - - // Create a ZKW to use in the test - ZooKeeperWatcher zkw = HBaseTestingUtility.getZooKeeperWatcher(TEST_UTIL); - - // get all the master threads - List masterThreads = cluster.getMasterThreads(); - assertEquals(1, masterThreads.size()); - - // only one master thread, let's wait for it to be initialized - assertTrue(cluster.waitForActiveAndReadyMaster()); - HMaster master = masterThreads.get(0).getMaster(); - assertTrue(master.isActiveMaster()); - assertTrue(master.isInitialized()); - - // disable load balancing on this master - master.balanceSwitch(false); - - // create two tables in META, each with 10 regions - byte [] FAMILY = Bytes.toBytes("family"); - byte [][] SPLIT_KEYS = new byte [][] { - new byte[0], Bytes.toBytes("aaa"), Bytes.toBytes("bbb"), - Bytes.toBytes("ccc"), Bytes.toBytes("ddd"), Bytes.toBytes("eee"), - Bytes.toBytes("fff"), Bytes.toBytes("ggg"), Bytes.toBytes("hhh"), - Bytes.toBytes("iii"), Bytes.toBytes("jjj") - }; - - byte [] enabledTable = Bytes.toBytes("enabledTable"); - HTableDescriptor htdEnabled = new HTableDescriptor(TableName.valueOf(enabledTable)); - htdEnabled.addFamily(new HColumnDescriptor(FAMILY)); - - FileSystem filesystem = FileSystem.get(conf); - Path rootdir = FSUtils.getRootDir(conf); - FSTableDescriptors fstd = new FSTableDescriptors(conf, filesystem, rootdir); - // Write the .tableinfo - fstd.createTableDescriptor(htdEnabled); - - HRegionInfo hriEnabled = new HRegionInfo(htdEnabled.getTableName(), null, null); - createRegion(hriEnabled, rootdir, conf, htdEnabled); - - List enabledRegions = TEST_UTIL.createMultiRegionsInMeta( - TEST_UTIL.getConfiguration(), htdEnabled, SPLIT_KEYS); - - TableName disabledTable = TableName.valueOf("disabledTable"); - HTableDescriptor htdDisabled = new HTableDescriptor(disabledTable); - htdDisabled.addFamily(new HColumnDescriptor(FAMILY)); - // Write the .tableinfo - fstd.createTableDescriptor(htdDisabled); - HRegionInfo hriDisabled = new HRegionInfo(htdDisabled.getTableName(), null, null); - createRegion(hriDisabled, rootdir, conf, htdDisabled); - List disabledRegions = TEST_UTIL.createMultiRegionsInMeta( - TEST_UTIL.getConfiguration(), htdDisabled, SPLIT_KEYS); - - TableName tableWithMergingRegions = TableName.valueOf("tableWithMergingRegions"); - TEST_UTIL.createTable(tableWithMergingRegions, FAMILY, new byte [][] {Bytes.toBytes("m")}); - - log("Regions in hbase:meta and namespace have been created"); - - // at this point we only expect 4 regions to be assigned out - // (catalogs and namespace, + 2 merging regions) - assertEquals(4, cluster.countServedRegions()); - - // Move merging regions to the same region server - AssignmentManager am = master.getAssignmentManager(); - RegionStates regionStates = am.getRegionStates(); - List mergingRegions = regionStates.getRegionsOfTable(tableWithMergingRegions); - assertEquals(2, mergingRegions.size()); - HRegionInfo a = mergingRegions.get(0); - HRegionInfo b = mergingRegions.get(1); - HRegionInfo newRegion = 
RegionMergeTransaction.getMergedRegionInfo(a, b); - ServerName mergingServer = regionStates.getRegionServerOfRegion(a); - ServerName serverB = regionStates.getRegionServerOfRegion(b); - if (!serverB.equals(mergingServer)) { - RegionPlan plan = new RegionPlan(b, serverB, mergingServer); - am.balance(plan); - assertTrue(am.waitForAssignment(b)); - } - - // Let's just assign everything to first RS - HRegionServer hrs = cluster.getRegionServer(0); - ServerName serverName = hrs.getServerName(); - HRegionInfo closingRegion = enabledRegions.remove(0); - // we'll need some regions to already be assigned out properly on live RS - List enabledAndAssignedRegions = new ArrayList(); - enabledAndAssignedRegions.add(enabledRegions.remove(0)); - enabledAndAssignedRegions.add(enabledRegions.remove(0)); - enabledAndAssignedRegions.add(closingRegion); - - List disabledAndAssignedRegions = new ArrayList(); - disabledAndAssignedRegions.add(disabledRegions.remove(0)); - disabledAndAssignedRegions.add(disabledRegions.remove(0)); - - // now actually assign them - for (HRegionInfo hri : enabledAndAssignedRegions) { - master.assignmentManager.addPlan(hri.getEncodedName(), - new RegionPlan(hri, null, serverName)); - master.assignRegion(hri); - } - - for (HRegionInfo hri : disabledAndAssignedRegions) { - master.assignmentManager.addPlan(hri.getEncodedName(), - new RegionPlan(hri, null, serverName)); - master.assignRegion(hri); - } - - // wait for no more RIT - log("Waiting for assignment to finish"); - ZKAssign.blockUntilNoRIT(zkw); - log("Assignment completed"); - - // Stop the master - log("Aborting master"); - cluster.abortMaster(0); - cluster.waitOnMaster(0); - log("Master has aborted"); - - /* - * Now, let's start mocking up some weird states as described in the method - * javadoc. - */ - - List regionsThatShouldBeOnline = new ArrayList(); - List regionsThatShouldBeOffline = new ArrayList(); - - log("Beginning to mock scenarios"); - - // Disable the disabledTable in ZK - TableStateManager zktable = new ZKTableStateManager(zkw); - zktable.setTableState(disabledTable, ZooKeeperProtos.Table.State.DISABLED); - - /* - * ZK = OFFLINE - */ - - // Region that should be assigned but is not and is in ZK as OFFLINE - // Cause: This can happen if the master crashed after creating the znode but before sending the - // request to the region server - HRegionInfo region = enabledRegions.remove(0); - regionsThatShouldBeOnline.add(region); - ZKAssign.createNodeOffline(zkw, region, serverName); - - /* - * ZK = CLOSING - */ - // Cause: Same as offline. - regionsThatShouldBeOnline.add(closingRegion); - ZKAssign.createNodeClosing(zkw, closingRegion, serverName); - - /* - * ZK = CLOSED - */ - - // Region of enabled table closed but not ack - //Cause: Master was down while the region server updated the ZK status. 
- region = enabledRegions.remove(0); - regionsThatShouldBeOnline.add(region); - int version = ZKAssign.createNodeClosing(zkw, region, serverName); - ZKAssign.transitionNodeClosed(zkw, region, serverName, version); - - // Region of disabled table closed but not ack - region = disabledRegions.remove(0); - regionsThatShouldBeOffline.add(region); - version = ZKAssign.createNodeClosing(zkw, region, serverName); - ZKAssign.transitionNodeClosed(zkw, region, serverName, version); - - /* - * ZK = OPENED - */ - - // Region of enabled table was opened on RS - // Cause: as offline - region = enabledRegions.remove(0); - regionsThatShouldBeOnline.add(region); - ZKAssign.createNodeOffline(zkw, region, serverName); - ProtobufUtil.openRegion(hrs.getRSRpcServices(), hrs.getServerName(), region); - while (true) { - byte [] bytes = ZKAssign.getData(zkw, region.getEncodedName()); - RegionTransition rt = RegionTransition.parseFrom(bytes); - if (rt != null && rt.getEventType().equals(EventType.RS_ZK_REGION_OPENED)) { - break; - } - Thread.sleep(100); - } - - // Region of disable table was opened on RS - // Cause: Master failed while updating the status for this region server. - region = disabledRegions.remove(0); - regionsThatShouldBeOffline.add(region); - ZKAssign.createNodeOffline(zkw, region, serverName); - ProtobufUtil.openRegion(hrs.getRSRpcServices(), hrs.getServerName(), region); - while (true) { - byte [] bytes = ZKAssign.getData(zkw, region.getEncodedName()); - RegionTransition rt = RegionTransition.parseFrom(bytes); - if (rt != null && rt.getEventType().equals(EventType.RS_ZK_REGION_OPENED)) { - break; - } - Thread.sleep(100); - } - - /* - * ZK = MERGING - */ - - // Regions of table of merging regions - // Cause: Master was down while merging was going on - hrs.getCoordinatedStateManager(). - getRegionMergeCoordination().startRegionMergeTransaction(newRegion, mergingServer, a, b); - - /* - * ZK = NONE - */ - - /* - * DONE MOCKING - */ - - log("Done mocking data up in ZK"); - - // Start up a new master - log("Starting up a new master"); - master = cluster.startMaster().getMaster(); - log("Waiting for master to be ready"); - cluster.waitForActiveAndReadyMaster(); - log("Master is ready"); - - // Get new region states since master restarted - regionStates = master.getAssignmentManager().getRegionStates(); - // Merging region should remain merging - assertTrue(regionStates.isRegionInState(a, State.MERGING)); - assertTrue(regionStates.isRegionInState(b, State.MERGING)); - assertTrue(regionStates.isRegionInState(newRegion, State.MERGING_NEW)); - // Now remove the faked merging znode, merging regions should be - // offlined automatically, otherwise it is a bug in AM. 
- ZKAssign.deleteNodeFailSilent(zkw, newRegion); - - // Failover should be completed, now wait for no RIT - log("Waiting for no more RIT"); - ZKAssign.blockUntilNoRIT(zkw); - log("No more RIT in ZK, now doing final test verification"); - - // Grab all the regions that are online across RSs - Set onlineRegions = new TreeSet(); - for (JVMClusterUtil.RegionServerThread rst : - cluster.getRegionServerThreads()) { - onlineRegions.addAll(ProtobufUtil.getOnlineRegions( - rst.getRegionServer().getRSRpcServices())); - } - - // Now, everything that should be online should be online - for (HRegionInfo hri : regionsThatShouldBeOnline) { - assertTrue(onlineRegions.contains(hri)); - } - - // Everything that should be offline should not be online - for (HRegionInfo hri : regionsThatShouldBeOffline) { - if (onlineRegions.contains(hri)) { - LOG.debug(hri); - } - assertFalse(onlineRegions.contains(hri)); - } - - log("Done with verification, all passed, shutting down cluster"); - - // Done, shutdown the cluster - TEST_UTIL.shutdownMiniCluster(); - } - - /** - * Complex test of master failover that tests as many permutations of the - * different possible states that regions in transition could be in within ZK - * pointing to an RS that has died while no master is around to process it. - *

- * This tests the proper handling of these states by the failed-over master
- * and includes a thorough testing of the timeout code as well.
- *
- * Starts with a single master and two regionservers.
- *
- * Creates two tables, enabledTable and disabledTable, each containing 5
- * regions.  The disabledTable is then disabled.
- *
- * After reaching steady-state, the master is killed.  We then mock several
- * states in ZK.  And one of the RS will be killed.
- *
- * After mocking them and killing an RS, we will startup a new master which
- * should become the active master and also detect that it is a failover.  The
- * primary test passing condition will be that all regions of the enabled
- * table are assigned and all the regions of the disabled table are not
- * assigned.
- *
- * The different scenarios to be tested are below:
- *
- * ZK State:  CLOSING
- *
- * A node can get into CLOSING state if
- *   • An RS has begun to close a region
- *
- * We will mock the scenarios
- *   • Region was being closed but the RS died before finishing the close
- *
- * ZK State:  OPENED
- *
- * A node can get into OPENED state if
- *   • An RS has finished opening a region but not acknowledged by master yet
- *
- * We will mock the scenarios
- *   • Region of a table that should be enabled was opened by a now-dead RS
- *   • Region of a table that should be disabled was opened by a now-dead RS
- *
- * ZK State:  NONE
- *
- * A region could not have a transition node if
- *   • The server hosting the region died and no master processed it
- *
- * We will mock the scenarios
- *   • Region of enabled table was on a dead RS that was not yet processed
    - * @throws Exception - */ - @Test (timeout=180000) - public void testMasterFailoverWithMockedRITOnDeadRS() throws Exception { - - final int NUM_MASTERS = 1; - final int NUM_RS = 2; - - // Create and start the cluster - HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); - Configuration conf = TEST_UTIL.getConfiguration(); - conf.setBoolean("hbase.assignment.usezk", true); - - conf.setInt(ServerManager.WAIT_ON_REGIONSERVERS_MINTOSTART, 1); - conf.setInt(ServerManager.WAIT_ON_REGIONSERVERS_MAXTOSTART, 2); - TEST_UTIL.startMiniCluster(NUM_MASTERS, NUM_RS); - MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); - log("Cluster started"); - - // Create a ZKW to use in the test - ZooKeeperWatcher zkw = new ZooKeeperWatcher(TEST_UTIL.getConfiguration(), - "unittest", new Abortable() { - - @Override - public void abort(String why, Throwable e) { - LOG.error("Fatal ZK Error: " + why, e); - org.junit.Assert.assertFalse("Fatal ZK error", true); - } - - @Override - public boolean isAborted() { - return false; - } - - }); - - // get all the master threads - List masterThreads = cluster.getMasterThreads(); - assertEquals(1, masterThreads.size()); - - // only one master thread, let's wait for it to be initialized - assertTrue(cluster.waitForActiveAndReadyMaster()); - HMaster master = masterThreads.get(0).getMaster(); - assertTrue(master.isActiveMaster()); - assertTrue(master.isInitialized()); - - // disable load balancing on this master - master.balanceSwitch(false); - - // create two tables in META, each with 30 regions - byte [] FAMILY = Bytes.toBytes("family"); - byte[][] SPLIT_KEYS = - TEST_UTIL.getRegionSplitStartKeys(Bytes.toBytes("aaa"), Bytes.toBytes("zzz"), 30); - - byte [] enabledTable = Bytes.toBytes("enabledTable"); - HTableDescriptor htdEnabled = new HTableDescriptor(TableName.valueOf(enabledTable)); - htdEnabled.addFamily(new HColumnDescriptor(FAMILY)); - FileSystem filesystem = FileSystem.get(conf); - Path rootdir = FSUtils.getRootDir(conf); - FSTableDescriptors fstd = new FSTableDescriptors(conf, filesystem, rootdir); - // Write the .tableinfo - fstd.createTableDescriptor(htdEnabled); - HRegionInfo hriEnabled = new HRegionInfo(htdEnabled.getTableName(), - null, null); - createRegion(hriEnabled, rootdir, conf, htdEnabled); - - List enabledRegions = TEST_UTIL.createMultiRegionsInMeta( - TEST_UTIL.getConfiguration(), htdEnabled, SPLIT_KEYS); - - TableName disabledTable = - TableName.valueOf("disabledTable"); - HTableDescriptor htdDisabled = new HTableDescriptor(disabledTable); - htdDisabled.addFamily(new HColumnDescriptor(FAMILY)); - // Write the .tableinfo - fstd.createTableDescriptor(htdDisabled); - HRegionInfo hriDisabled = new HRegionInfo(htdDisabled.getTableName(), null, null); - createRegion(hriDisabled, rootdir, conf, htdDisabled); - - List disabledRegions = TEST_UTIL.createMultiRegionsInMeta( - TEST_UTIL.getConfiguration(), htdDisabled, SPLIT_KEYS); - - log("Regions in hbase:meta and Namespace have been created"); - - // at this point we only expect 2 regions to be assigned out (catalogs and namespace ) - assertEquals(2, cluster.countServedRegions()); - - // The first RS will stay online - List regionservers = - cluster.getRegionServerThreads(); - HRegionServer hrs = regionservers.get(0).getRegionServer(); - - // The second RS is going to be hard-killed - RegionServerThread hrsDeadThread = regionservers.get(1); - HRegionServer hrsDead = hrsDeadThread.getRegionServer(); - ServerName deadServerName = hrsDead.getServerName(); - - // we'll need some regions to already be 
assigned out properly on live RS - List enabledAndAssignedRegions = new ArrayList(); - enabledAndAssignedRegions.addAll(enabledRegions.subList(0, 6)); - enabledRegions.removeAll(enabledAndAssignedRegions); - List disabledAndAssignedRegions = new ArrayList(); - disabledAndAssignedRegions.addAll(disabledRegions.subList(0, 6)); - disabledRegions.removeAll(disabledAndAssignedRegions); - - // now actually assign them - for (HRegionInfo hri : enabledAndAssignedRegions) { - master.assignmentManager.addPlan(hri.getEncodedName(), - new RegionPlan(hri, null, hrs.getServerName())); - master.assignRegion(hri); - } - for (HRegionInfo hri : disabledAndAssignedRegions) { - master.assignmentManager.addPlan(hri.getEncodedName(), - new RegionPlan(hri, null, hrs.getServerName())); - master.assignRegion(hri); - } - - log("Waiting for assignment to finish"); - ZKAssign.blockUntilNoRIT(zkw); - master.assignmentManager.waitUntilNoRegionsInTransition(60000); - log("Assignment completed"); - - assertTrue(" Table must be enabled.", master.getAssignmentManager() - .getTableStateManager().isTableState(TableName.valueOf("enabledTable"), - ZooKeeperProtos.Table.State.ENABLED)); - // we also need regions assigned out on the dead server - List enabledAndOnDeadRegions = new ArrayList(); - enabledAndOnDeadRegions.addAll(enabledRegions.subList(0, 6)); - enabledRegions.removeAll(enabledAndOnDeadRegions); - List disabledAndOnDeadRegions = new ArrayList(); - disabledAndOnDeadRegions.addAll(disabledRegions.subList(0, 6)); - disabledRegions.removeAll(disabledAndOnDeadRegions); - - // set region plan to server to be killed and trigger assign - for (HRegionInfo hri : enabledAndOnDeadRegions) { - master.assignmentManager.addPlan(hri.getEncodedName(), - new RegionPlan(hri, null, deadServerName)); - master.assignRegion(hri); - } - for (HRegionInfo hri : disabledAndOnDeadRegions) { - master.assignmentManager.addPlan(hri.getEncodedName(), - new RegionPlan(hri, null, deadServerName)); - master.assignRegion(hri); - } - - // wait for no more RIT - log("Waiting for assignment to finish"); - ZKAssign.blockUntilNoRIT(zkw); - master.assignmentManager.waitUntilNoRegionsInTransition(60000); - log("Assignment completed"); - - // Due to master.assignRegion(hri) could fail to assign a region to a specified RS - // therefore, we need make sure that regions are in the expected RS - verifyRegionLocation(hrs, enabledAndAssignedRegions); - verifyRegionLocation(hrs, disabledAndAssignedRegions); - verifyRegionLocation(hrsDead, enabledAndOnDeadRegions); - verifyRegionLocation(hrsDead, disabledAndOnDeadRegions); - - assertTrue(" Didn't get enough regions of enabledTalbe on live rs.", - enabledAndAssignedRegions.size() >= 2); - assertTrue(" Didn't get enough regions of disalbedTable on live rs.", - disabledAndAssignedRegions.size() >= 2); - assertTrue(" Didn't get enough regions of enabledTalbe on dead rs.", - enabledAndOnDeadRegions.size() >= 2); - assertTrue(" Didn't get enough regions of disalbedTable on dead rs.", - disabledAndOnDeadRegions.size() >= 2); - - // Stop the master - log("Aborting master"); - cluster.abortMaster(0); - cluster.waitOnMaster(0); - log("Master has aborted"); - - /* - * Now, let's start mocking up some weird states as described in the method - * javadoc. 
- */ - - List regionsThatShouldBeOnline = new ArrayList(); - List regionsThatShouldBeOffline = new ArrayList(); - - log("Beginning to mock scenarios"); - - // Disable the disabledTable in ZK - TableStateManager zktable = new ZKTableStateManager(zkw); - zktable.setTableState(disabledTable, ZooKeeperProtos.Table.State.DISABLED); - - assertTrue(" The enabled table should be identified on master fail over.", - zktable.isTableState(TableName.valueOf("enabledTable"), - ZooKeeperProtos.Table.State.ENABLED)); - - /* - * ZK = CLOSING - */ - - // Region of enabled table being closed on dead RS but not finished - HRegionInfo region = enabledAndOnDeadRegions.remove(0); - regionsThatShouldBeOnline.add(region); - ZKAssign.createNodeClosing(zkw, region, deadServerName); - LOG.debug("\n\nRegion of enabled table was CLOSING on dead RS\n" + - region + "\n\n"); - - // Region of disabled table being closed on dead RS but not finished - region = disabledAndOnDeadRegions.remove(0); - regionsThatShouldBeOffline.add(region); - ZKAssign.createNodeClosing(zkw, region, deadServerName); - LOG.debug("\n\nRegion of disabled table was CLOSING on dead RS\n" + - region + "\n\n"); - - /* - * ZK = CLOSED - */ - - // Region of enabled on dead server gets closed but not ack'd by master - region = enabledAndOnDeadRegions.remove(0); - regionsThatShouldBeOnline.add(region); - int version = ZKAssign.createNodeClosing(zkw, region, deadServerName); - ZKAssign.transitionNodeClosed(zkw, region, deadServerName, version); - LOG.debug("\n\nRegion of enabled table was CLOSED on dead RS\n" + - region + "\n\n"); - - // Region of disabled on dead server gets closed but not ack'd by master - region = disabledAndOnDeadRegions.remove(0); - regionsThatShouldBeOffline.add(region); - version = ZKAssign.createNodeClosing(zkw, region, deadServerName); - ZKAssign.transitionNodeClosed(zkw, region, deadServerName, version); - LOG.debug("\n\nRegion of disabled table was CLOSED on dead RS\n" + - region + "\n\n"); - - /* - * ZK = OPENING - */ - - // RS was opening a region of enabled table then died - region = enabledRegions.remove(0); - regionsThatShouldBeOnline.add(region); - ZKAssign.createNodeOffline(zkw, region, deadServerName); - ZKAssign.transitionNodeOpening(zkw, region, deadServerName); - LOG.debug("\n\nRegion of enabled table was OPENING on dead RS\n" + - region + "\n\n"); - - // RS was opening a region of disabled table then died - region = disabledRegions.remove(0); - regionsThatShouldBeOffline.add(region); - ZKAssign.createNodeOffline(zkw, region, deadServerName); - ZKAssign.transitionNodeOpening(zkw, region, deadServerName); - LOG.debug("\n\nRegion of disabled table was OPENING on dead RS\n" + - region + "\n\n"); - - /* - * ZK = OPENED - */ - - // Region of enabled table was opened on dead RS - region = enabledRegions.remove(0); - regionsThatShouldBeOnline.add(region); - ZKAssign.createNodeOffline(zkw, region, deadServerName); - ProtobufUtil.openRegion(hrsDead.getRSRpcServices(), - hrsDead.getServerName(), region); - while (true) { - byte [] bytes = ZKAssign.getData(zkw, region.getEncodedName()); - RegionTransition rt = RegionTransition.parseFrom(bytes); - if (rt != null && rt.getEventType().equals(EventType.RS_ZK_REGION_OPENED)) { - break; - } - Thread.sleep(100); - } - LOG.debug("\n\nRegion of enabled table was OPENED on dead RS\n" + - region + "\n\n"); - - // Region of disabled table was opened on dead RS - region = disabledRegions.remove(0); - regionsThatShouldBeOffline.add(region); - ZKAssign.createNodeOffline(zkw, region, 
deadServerName); - ProtobufUtil.openRegion(hrsDead.getRSRpcServices(), - hrsDead.getServerName(), region); - while (true) { - byte [] bytes = ZKAssign.getData(zkw, region.getEncodedName()); - RegionTransition rt = RegionTransition.parseFrom(bytes); - if (rt != null && rt.getEventType().equals(EventType.RS_ZK_REGION_OPENED)) { - break; - } - Thread.sleep(100); - } - LOG.debug("\n\nRegion of disabled table was OPENED on dead RS\n" + - region + "\n\n"); - - /* - * ZK = NONE - */ - - // Region of enabled table was open at steady-state on dead RS - region = enabledRegions.remove(0); - regionsThatShouldBeOnline.add(region); - ZKAssign.createNodeOffline(zkw, region, deadServerName); - ProtobufUtil.openRegion(hrsDead.getRSRpcServices(), - hrsDead.getServerName(), region); - while (true) { - byte [] bytes = ZKAssign.getData(zkw, region.getEncodedName()); - RegionTransition rt = RegionTransition.parseFrom(bytes); - if (rt != null && rt.getEventType().equals(EventType.RS_ZK_REGION_OPENED)) { - ZKAssign.deleteOpenedNode(zkw, region.getEncodedName(), rt.getServerName()); - LOG.debug("DELETED " + rt); - break; - } - Thread.sleep(100); - } - LOG.debug("\n\nRegion of enabled table was open at steady-state on dead RS" - + "\n" + region + "\n\n"); - - // Region of disabled table was open at steady-state on dead RS - region = disabledRegions.remove(0); - regionsThatShouldBeOffline.add(region); - ZKAssign.createNodeOffline(zkw, region, deadServerName); - ProtobufUtil.openRegion(hrsDead.getRSRpcServices(), - hrsDead.getServerName(), region); - while (true) { - byte [] bytes = ZKAssign.getData(zkw, region.getEncodedName()); - RegionTransition rt = RegionTransition.parseFrom(bytes); - if (rt != null && rt.getEventType().equals(EventType.RS_ZK_REGION_OPENED)) { - ZKAssign.deleteOpenedNode(zkw, region.getEncodedName(), rt.getServerName()); - break; - } - Thread.sleep(100); - } - LOG.debug("\n\nRegion of disabled table was open at steady-state on dead RS" - + "\n" + region + "\n\n"); - - /* - * DONE MOCKING - */ - - log("Done mocking data up in ZK"); - - // Kill the RS that had a hard death - log("Killing RS " + deadServerName); - hrsDead.abort("Killing for unit test"); - log("RS " + deadServerName + " killed"); - - // Start up a new master. Wait until regionserver is completely down - // before starting new master because of hbase-4511. - while (hrsDeadThread.isAlive()) { - Threads.sleep(10); - } - log("Starting up a new master"); - master = cluster.startMaster().getMaster(); - log("Waiting for master to be ready"); - assertTrue(cluster.waitForActiveAndReadyMaster()); - log("Master is ready"); - - // Wait until SSH processing completed for dead server. 
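// A minimal sketch, assuming only the ServerManager#areDeadServersInProgress()
// call used in the wait loop that follows: the same wait, bounded by an explicit
// deadline so a stuck ServerShutdownHandler fails fast instead of running into
// the JUnit timeout. The helper name and timeout value are illustrative.
private static void waitForDeadServerProcessing(HMaster master, long timeoutMs)
    throws InterruptedException {
  long deadline = System.currentTimeMillis() + timeoutMs;
  while (master.getServerManager().areDeadServersInProgress()) {
    if (System.currentTimeMillis() > deadline) {
      throw new AssertionError("Dead-server processing did not finish within "
          + timeoutMs + "ms");
    }
    Thread.sleep(10);
  }
}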
- while (master.getServerManager().areDeadServersInProgress()) { - Thread.sleep(10); - } - - // Failover should be completed, now wait for no RIT - log("Waiting for no more RIT"); - ZKAssign.blockUntilNoRIT(zkw); - log("No more RIT in ZK"); - long now = System.currentTimeMillis(); - long maxTime = 120000; - boolean done = master.assignmentManager.waitUntilNoRegionsInTransition(maxTime); - if (!done) { - RegionStates regionStates = master.getAssignmentManager().getRegionStates(); - LOG.info("rit=" + regionStates.getRegionsInTransition()); - } - long elapsed = System.currentTimeMillis() - now; - assertTrue("Elapsed=" + elapsed + ", maxTime=" + maxTime + ", done=" + done, - elapsed < maxTime); - log("No more RIT in RIT map, doing final test verification"); - - // Grab all the regions that are online across RSs - Set onlineRegions = new TreeSet(); - now = System.currentTimeMillis(); - maxTime = 30000; - for (JVMClusterUtil.RegionServerThread rst : - cluster.getRegionServerThreads()) { - try { - HRegionServer rs = rst.getRegionServer(); - while (!rs.getRegionsInTransitionInRS().isEmpty()) { - elapsed = System.currentTimeMillis() - now; - assertTrue("Test timed out in getting online regions", elapsed < maxTime); - if (rs.isAborted() || rs.isStopped()) { - // This region server is stopped, skip it. - break; - } - Thread.sleep(100); - } - onlineRegions.addAll(ProtobufUtil.getOnlineRegions(rs.getRSRpcServices())); - } catch (RegionServerStoppedException e) { - LOG.info("Got RegionServerStoppedException", e); - } - } - - // Now, everything that should be online should be online - for (HRegionInfo hri : regionsThatShouldBeOnline) { - assertTrue("region=" + hri.getRegionNameAsString() + ", " + onlineRegions.toString(), - onlineRegions.contains(hri)); - } - - // Everything that should be offline should not be online - for (HRegionInfo hri : regionsThatShouldBeOffline) { - assertFalse(onlineRegions.contains(hri)); - } - - log("Done with verification, all passed, shutting down cluster"); - - // Done, shutdown the cluster - TEST_UTIL.shutdownMiniCluster(); - } - - /** - * Verify regions are on the expected region server - */ - private void verifyRegionLocation(HRegionServer hrs, List regions) - throws IOException { - List tmpOnlineRegions = - ProtobufUtil.getOnlineRegions(hrs.getRSRpcServices()); - Iterator itr = regions.iterator(); - while (itr.hasNext()) { - HRegionInfo tmp = itr.next(); - if (!tmpOnlineRegions.contains(tmp)) { - itr.remove(); - } - } - } - HRegion createRegion(final HRegionInfo hri, final Path rootdir, final Configuration c, final HTableDescriptor htd) throws IOException { @@ -940,131 +81,6 @@ public class TestMasterFailover { LOG.info("\n\n" + string + " \n\n"); } - @Test (timeout=180000) - public void testShouldCheckMasterFailOverWhenMETAIsInOpenedState() - throws Exception { - LOG.info("Starting testShouldCheckMasterFailOverWhenMETAIsInOpenedState"); - final int NUM_MASTERS = 1; - final int NUM_RS = 2; - - // Start the cluster - HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); - Configuration conf = TEST_UTIL.getConfiguration(); - conf.setInt("hbase.master.info.port", -1); - conf.setBoolean("hbase.assignment.usezk", true); - - TEST_UTIL.startMiniCluster(NUM_MASTERS, NUM_RS); - MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); - - // Find regionserver carrying meta. 
- List regionServerThreads = - cluster.getRegionServerThreads(); - HRegion metaRegion = null; - HRegionServer metaRegionServer = null; - for (RegionServerThread regionServerThread : regionServerThreads) { - HRegionServer regionServer = regionServerThread.getRegionServer(); - metaRegion = regionServer.getOnlineRegion(HRegionInfo.FIRST_META_REGIONINFO.getRegionName()); - regionServer.abort(""); - if (null != metaRegion) { - metaRegionServer = regionServer; - break; - } - } - - TEST_UTIL.shutdownMiniHBaseCluster(); - - // Create a ZKW to use in the test - ZooKeeperWatcher zkw = - HBaseTestingUtility.createAndForceNodeToOpenedState(TEST_UTIL, - metaRegion, metaRegionServer.getServerName()); - - LOG.info("Staring cluster for second time"); - TEST_UTIL.startMiniHBaseCluster(NUM_MASTERS, NUM_RS); - - HMaster master = TEST_UTIL.getHBaseCluster().getMaster(); - while (!master.isInitialized()) { - Thread.sleep(100); - } - // Failover should be completed, now wait for no RIT - log("Waiting for no more RIT"); - ZKAssign.blockUntilNoRIT(zkw); - - zkw.close(); - // Stop the cluster - TEST_UTIL.shutdownMiniCluster(); - } - - /** - * This tests a RIT in offline state will get re-assigned after a master restart - */ - @Test(timeout=240000) - public void testOfflineRegionReAssginedAfterMasterRestart() throws Exception { - final TableName table = TableName.valueOf("testOfflineRegionReAssginedAfterMasterRestart"); - final int NUM_MASTERS = 1; - final int NUM_RS = 2; - - // Create config to use for this cluster - Configuration conf = HBaseConfiguration.create(); - conf.setBoolean("hbase.assignment.usezk", true); - - // Start the cluster - final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(conf); - TEST_UTIL.startMiniCluster(NUM_MASTERS, NUM_RS); - log("Cluster started"); - - TEST_UTIL.createTable(table, Bytes.toBytes("family")); - HMaster master = TEST_UTIL.getHBaseCluster().getMaster(); - RegionStates regionStates = master.getAssignmentManager().getRegionStates(); - HRegionInfo hri = regionStates.getRegionsOfTable(table).get(0); - ServerName serverName = regionStates.getRegionServerOfRegion(hri); - TEST_UTIL.assertRegionOnServer(hri, serverName, 200); - - ServerName dstName = null; - for (ServerName tmpServer : master.serverManager.getOnlineServers().keySet()) { - if (!tmpServer.equals(serverName)) { - dstName = tmpServer; - break; - } - } - // find a different server - assertTrue(dstName != null); - // shutdown HBase cluster - TEST_UTIL.shutdownMiniHBaseCluster(); - // create a RIT node in offline state - ZooKeeperWatcher zkw = TEST_UTIL.getZooKeeperWatcher(); - ZKAssign.createNodeOffline(zkw, hri, dstName); - Stat stat = new Stat(); - byte[] data = - ZKAssign.getDataNoWatch(zkw, hri.getEncodedName(), stat); - assertTrue(data != null); - RegionTransition rt = RegionTransition.parseFrom(data); - assertTrue(rt.getEventType() == EventType.M_ZK_REGION_OFFLINE); - - LOG.info(hri.getEncodedName() + " region is in offline state with source server=" + serverName - + " and dst server=" + dstName); - - // start HBase cluster - TEST_UTIL.startMiniHBaseCluster(NUM_MASTERS, NUM_RS); - - while (true) { - master = TEST_UTIL.getHBaseCluster().getMaster(); - if (master != null && master.isInitialized()) { - ServerManager serverManager = master.getServerManager(); - if (!serverManager.areDeadServersInProgress()) { - break; - } - } - Thread.sleep(200); - } - - // verify the region is assigned - master = TEST_UTIL.getHBaseCluster().getMaster(); - master.getAssignmentManager().waitForAssignment(hri); - regionStates = 
master.getAssignmentManager().getRegionStates(); - RegionState newState = regionStates.getRegionState(hri); - assertTrue(newState.isOpened()); - } - /** * Simple test of master failover. *
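// A condensed sketch of the simple-failover flow the javadoc above refers to,
// using only calls that already appear in this file; the master/regionserver
// counts and the final assertions are illustrative, not the test's actual values.
HBaseTestingUtility util = new HBaseTestingUtility();
util.startMiniCluster(2, 3);                        // two masters, three regionservers
MiniHBaseCluster cluster = util.getHBaseCluster();
assertTrue(cluster.waitForActiveAndReadyMaster());  // one master wins the election
int activeIndex =
    cluster.getMasterThreads().get(0).getMaster().isActiveMaster() ? 0 : 1;
cluster.abortMaster(activeIndex);                   // kill the active master
cluster.waitOnMaster(activeIndex);
assertTrue(cluster.waitForActiveAndReadyMaster());  // a backup master takes over
util.shutdownMiniCluster();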

    @@ -1136,7 +152,7 @@ public class TestMasterFailover { assertEquals(2, masterThreads.size()); int rsCount = masterThreads.get(activeIndex).getMaster().getClusterStatus().getServersSize(); LOG.info("Active master " + active.getServerName() + " managing " + rsCount + " regions servers"); - assertEquals(3, rsCount); + assertEquals(4, rsCount); // Check that ClusterStatus reports the correct active and backup masters assertNotNull(active); @@ -1169,7 +185,7 @@ public class TestMasterFailover { int rss = status.getServersSize(); LOG.info("Active master " + mastername.getServerName() + " managing " + rss + " region servers"); - assertEquals(3, rss); + assertEquals(4, rss); // Stop the cluster TEST_UTIL.shutdownMiniCluster(); @@ -1179,14 +195,12 @@ public class TestMasterFailover { * Test region in pending_open/close when master failover */ @Test (timeout=180000) - @SuppressWarnings("deprecation") public void testPendingOpenOrCloseWhenMasterFailover() throws Exception { final int NUM_MASTERS = 1; final int NUM_RS = 1; // Create config to use for this cluster Configuration conf = HBaseConfiguration.create(); - conf.setBoolean("hbase.assignment.usezk", false); // Start the cluster HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(conf); @@ -1261,7 +275,7 @@ public class TestMasterFailover { log("Master is ready"); // Wait till no region in transition any more - master.getAssignmentManager().waitUntilNoRegionsInTransition(60000); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); // Get new region states since master restarted regionStates = master.getAssignmentManager().getRegionStates(); @@ -1285,9 +299,7 @@ public class TestMasterFailover { final int NUM_RS = 1; // Start the cluster - Configuration conf = HBaseConfiguration.create(); - conf.setBoolean("hbase.assignment.usezk", false); - HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(conf); + HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); TEST_UTIL.startMiniCluster(NUM_MASTERS, NUM_RS); MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); log("Cluster started"); @@ -1378,32 +390,7 @@ public class TestMasterFailover { log("Master has aborted"); rs.getRSRpcServices().closeRegion(null, RequestConverter.buildCloseRegionRequest( - rs.getServerName(), HRegionInfo.FIRST_META_REGIONINFO.getEncodedName(), false)); - - // Start up a new master - log("Starting up a new master"); - activeMaster = cluster.startMaster().getMaster(); - log("Waiting for master to be ready"); - cluster.waitForActiveAndReadyMaster(); - log("Master is ready"); - - TEST_UTIL.waitUntilNoRegionsInTransition(60000); - log("Meta was assigned"); - - rs.getRSRpcServices().closeRegion( - null, - RequestConverter.buildCloseRegionRequest(rs.getServerName(), - HRegionInfo.FIRST_META_REGIONINFO.getEncodedName(), false)); - - // Set a dummy server to check if master reassigns meta on restart - MetaTableLocator.setMetaLocation(activeMaster.getZooKeeper(), - ServerName.valueOf("dummyserver.example.org", 1234, -1L), State.OPEN); - - log("Aborting master"); - activeMaster.stop("test-kill"); - - cluster.waitForMasterToStop(activeMaster.getServerName(), 30000); - log("Master has aborted"); + rs.getServerName(), HRegionInfo.FIRST_META_REGIONINFO.getEncodedName())); // Start up a new master log("Starting up a new master"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailoverBalancerPersistence.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailoverBalancerPersistence.java index edecfdd..395fc31 100644 --- 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailoverBalancerPersistence.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailoverBalancerPersistence.java @@ -20,9 +20,10 @@ package org.apache.hadoop.hbase.master; import org.apache.hadoop.hbase.ClusterStatus; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MasterNotRunningException; import org.apache.hadoop.hbase.MiniHBaseCluster; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.util.JVMClusterUtil; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -33,7 +34,7 @@ import java.util.List; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; -@Category(LargeTests.class) +@Category({MasterTests.class, LargeTests.class}) public class TestMasterFailoverBalancerPersistence { /** diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFileSystem.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFileSystem.java index dae361d..0534643 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFileSystem.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFileSystem.java @@ -30,9 +30,10 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.SplitLogTask; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.zookeeper.ZKSplitLog; import org.apache.hadoop.hbase.zookeeper.ZKUtil; @@ -47,7 +48,7 @@ import org.junit.experimental.categories.Category; /** * Test the master filesystem in a local cluster */ -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestMasterFileSystem { private static final Log LOG = LogFactory.getLog(TestMasterFileSystem.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java index b7e77fa..8a55ce3 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetrics.java @@ -25,20 +25,21 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.CompatibilityFactory; import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos; import org.apache.hadoop.hbase.test.MetricsAssertHelper; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.zookeeper.KeeperException; 
import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestMasterMetrics { private static final Log LOG = LogFactory.getLog(TestMasterMetrics.class); @@ -116,7 +117,7 @@ public class TestMasterMetrics { @Test public void testDefaultMasterMetrics() throws Exception { MetricsMasterSource masterSource = master.getMasterMetrics().getMetricsSource(); - metricsHelper.assertGauge( "numRegionServers", 1, masterSource); + metricsHelper.assertGauge( "numRegionServers", 2, masterSource); metricsHelper.assertGauge( "averageLoad", 2, masterSource); metricsHelper.assertGauge( "numDeadRegionServers", 0, masterSource); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetricsWrapper.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetricsWrapper.java index 1232a40..efaa111 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetricsWrapper.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterMetricsWrapper.java @@ -22,6 +22,7 @@ import static org.junit.Assert.*; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Threads; import org.junit.AfterClass; @@ -29,7 +30,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestMasterMetricsWrapper { private static final Log LOG = LogFactory.getLog(TestMasterMetricsWrapper.class); @@ -55,7 +56,7 @@ public class TestMasterMetricsWrapper { assertEquals(master.getMasterStartTime(), info.getStartTime()); assertEquals(master.getMasterCoprocessors().length, info.getCoprocessors().length); assertEquals(master.getServerManager().getOnlineServersList().size(), info.getNumRegionServers()); - assertEquals(4, info.getNumRegionServers()); + assertEquals(5, info.getNumRegionServers()); String zkServers = info.getZookeeperQuorum(); assertEquals(zkServers.split(",").length, TEST_UTIL.getZkCluster().getZooKeeperServerNum()); @@ -67,10 +68,10 @@ public class TestMasterMetricsWrapper { // We stopped the regionserver but could take a while for the master to notice it so hang here // until it does... then move forward to see if metrics wrapper notices. 
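// A minimal sketch, assuming the same accessors used in the wait loop below, of
// bounding that wait with an explicit deadline; the 30 second limit is illustrative.
long deadline = System.currentTimeMillis() + 30000;
while (TEST_UTIL.getHBaseCluster().getMaster().getServerManager()
    .getOnlineServers().size() != 4) {
  assertTrue("Master never noticed the stopped regionserver",
      System.currentTimeMillis() < deadline);
  Threads.sleep(10);
}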
while (TEST_UTIL.getHBaseCluster().getMaster().getServerManager().getOnlineServers().size() != - 3) { + 4) { Threads.sleep(10); } - assertEquals(3, info.getNumRegionServers()); + assertEquals(4, info.getNumRegionServers()); assertEquals(1, info.getNumDeadRegionServers()); } } \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterNoCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterNoCluster.java index d899cc2..fc7f136 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterNoCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterNoCluster.java @@ -38,7 +38,6 @@ import org.apache.hadoop.hbase.CoordinatedStateManagerFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MetaMockingUtil; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerLoad; @@ -51,10 +50,11 @@ import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.monitoring.MonitoredTask; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionServerReportRequest; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.Threads; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; import org.apache.hadoop.hbase.zookeeper.ZKUtil; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem; @@ -76,7 +76,7 @@ import com.google.protobuf.ServiceException; * TODO: Speed up the zk connection by Master. It pauses 5 seconds establishing * session. */ -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestMasterNoCluster { private static final Log LOG = LogFactory.getLog(TestMasterNoCluster.class); private static final HBaseTestingUtility TESTUTIL = new HBaseTestingUtility(); @@ -201,7 +201,7 @@ public class TestMasterNoCluster { // Fake a successful close. Mockito.doReturn(true).when(spy). sendRegionClose((ServerName)Mockito.any(), (HRegionInfo)Mockito.any(), - Mockito.anyInt(), (ServerName)Mockito.any(), Mockito.anyBoolean()); + (ServerName)Mockito.any()); return spy; } @@ -236,13 +236,8 @@ public class TestMasterNoCluster { request.setLoad(ServerLoad.EMPTY_SERVERLOAD.obtainServerLoadPB()); master.getMasterRpcServices().regionServerReport(null, request.build()); } - ZooKeeperWatcher zkw = master.getZooKeeper(); - // Master should now come up. + // Master should now come up. 
while (!master.isInitialized()) { - // Fake meta is closed on rs0, try several times in case the event is lost - // due to race with HMaster#assignMeta - ZKAssign.transitionNodeClosed(zkw, - HRegionInfo.FIRST_META_REGIONINFO, sn0, -1); Threads.sleep(100); } assertTrue(master.isInitialized()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterOperationsForRegionReplicas.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterOperationsForRegionReplicas.java index 6f880ef..846f8e6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterOperationsForRegionReplicas.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterOperationsForRegionReplicas.java @@ -38,7 +38,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; @@ -48,17 +47,20 @@ import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.RegionReplicaUtil; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestMasterOperationsForRegionReplicas { final static Log LOG = LogFactory.getLog(TestRegionPlacement.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @@ -98,7 +100,7 @@ public class TestMasterOperationsForRegionReplicas { ADMIN.createTable(desc, Bytes.toBytes("A"), Bytes.toBytes("Z"), numRegions); validateNumberOfRowsInMeta(table, numRegions, ADMIN.getConnection()); - List hris = MetaTableAccessor.getTableRegions(TEST_UTIL.getZooKeeperWatcher(), + List hris = MetaTableAccessor.getTableRegions( ADMIN.getConnection(), table); assert(hris.size() == numRegions * numReplica); } finally { @@ -120,8 +122,7 @@ public class TestMasterOperationsForRegionReplicas { TEST_UTIL.waitTableEnabled(table); validateNumberOfRowsInMeta(table, numRegions, ADMIN.getConnection()); - List hris = MetaTableAccessor.getTableRegions( - TEST_UTIL.getZooKeeperWatcher(), ADMIN.getConnection(), table); + List hris = MetaTableAccessor.getTableRegions(ADMIN.getConnection(), table); assert(hris.size() == numRegions * numReplica); // check that the master created expected number of RegionState objects for (int i = 0; i < numRegions; i++) { @@ -156,7 +157,7 @@ public class TestMasterOperationsForRegionReplicas { ServerName master = TEST_UTIL.getHBaseClusterInterface().getClusterStatus().getMaster(); TEST_UTIL.getHBaseClusterInterface().stopMaster(master); TEST_UTIL.getHBaseClusterInterface().waitForMasterToStop(master, 30000); - TEST_UTIL.getHBaseClusterInterface().startMaster(master.getHostname()); + 
TEST_UTIL.getHBaseClusterInterface().startMaster(master.getHostname(), master.getPort()); TEST_UTIL.getHBaseClusterInterface().waitForActiveAndReadyMaster(); for (int i = 0; i < numRegions; i++) { for (int j = 0; j < numReplica; j++) { @@ -210,8 +211,7 @@ public class TestMasterOperationsForRegionReplicas { .getAssignmentManager().getRegionStates().getRegionsOfTable(table); assert(regions.size() == numRegions * numReplica); //also make sure the meta table has the replica locations removed - hris = MetaTableAccessor.getTableRegions(TEST_UTIL.getZooKeeperWatcher(), - ADMIN.getConnection(), table); + hris = MetaTableAccessor.getTableRegions(ADMIN.getConnection(), table); assert(hris.size() == numRegions * numReplica); //just check that the number of default replica regions in the meta table are the same //as the number of regions the table was created with, and the count of the @@ -246,8 +246,7 @@ public class TestMasterOperationsForRegionReplicas { ADMIN.createTable(desc, Bytes.toBytes("A"), Bytes.toBytes("Z"), numRegions); TEST_UTIL.waitTableEnabled(table); Set tableRows = new HashSet(); - List hris = MetaTableAccessor.getTableRegions(TEST_UTIL.getZooKeeperWatcher(), - ADMIN.getConnection(), table); + List hris = MetaTableAccessor.getTableRegions(ADMIN.getConnection(), table); for (HRegionInfo hri : hris) { tableRows.add(hri.getRegionName()); } @@ -337,12 +336,12 @@ public class TestMasterOperationsForRegionReplicas { Map regionToServerMap = snapshot.getRegionToRegionServerMap(); assertEquals(regionToServerMap.size(), numRegions * numReplica + 1); //'1' for the namespace Map> serverToRegionMap = snapshot.getRegionServerToRegionMap(); - assertEquals(serverToRegionMap.keySet().size(), 1); // master by default not used + assertEquals(serverToRegionMap.keySet().size(), 2); // 1 rs + 1 master for (Map.Entry> entry : serverToRegionMap.entrySet()) { if (entry.getKey().equals(TEST_UTIL.getHBaseCluster().getMaster().getServerName())) { continue; } - assertEquals(entry.getValue().size(), numRegions * numReplica + 1); //'1' for the namespace + assertEquals(entry.getValue().size(), numRegions * numReplica); } } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterRestartAfterDisablingTable.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterRestartAfterDisablingTable.java index bd5fc29..56961d5 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterRestartAfterDisablingTable.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterRestartAfterDisablingTable.java @@ -29,22 +29,20 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.HTable; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; +import org.apache.hadoop.hbase.client.TableState; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.KeeperException; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({MasterTests.class, LargeTests.class}) public class TestMasterRestartAfterDisablingTable { private static final Log LOG = LogFactory.getLog(TestMasterRestartAfterDisablingTable.class); @@ -64,8 +62,6 @@ public class TestMasterRestartAfterDisablingTable { MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); log("Waiting for active/ready master"); cluster.waitForActiveAndReadyMaster(); - ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testmasterRestart", null); - HMaster master = cluster.getMaster(); // Create a table with regions TableName table = TableName.valueOf("tableRestart"); @@ -76,7 +72,7 @@ public class TestMasterRestartAfterDisablingTable { NUM_REGIONS_TO_CREATE); numRegions += 1; // catalogs log("Waiting for no more RIT\n"); - blockUntilNoRIT(zkw, master); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); log("Disabling table\n"); TEST_UTIL.getHBaseAdmin().disableTable(table); @@ -99,15 +95,15 @@ public class TestMasterRestartAfterDisablingTable { assertTrue("The table should not be in enabled state", cluster.getMaster() .getAssignmentManager().getTableStateManager().isTableState( - TableName.valueOf("tableRestart"), ZooKeeperProtos.Table.State.DISABLED, - ZooKeeperProtos.Table.State.DISABLING)); + TableName.valueOf("tableRestart"), TableState.State.DISABLED, + TableState.State.DISABLING)); log("Enabling table\n"); // Need a new Admin, the previous one is on the old master Admin admin = new HBaseAdmin(TEST_UTIL.getConfiguration()); admin.enableTable(table); admin.close(); log("Waiting for no more RIT\n"); - blockUntilNoRIT(zkw, master); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); log("Verifying there are " + numRegions + " assigned on cluster\n"); regions = HBaseTestingUtility.getAllOnlineRegions(cluster); assertEquals("The assigned regions were not onlined after master" @@ -115,7 +111,7 @@ public class TestMasterRestartAfterDisablingTable { 6, regions.size()); assertTrue("The table should be in enabled state", cluster.getMaster() .getAssignmentManager().getTableStateManager() - .isTableState(TableName.valueOf("tableRestart"), ZooKeeperProtos.Table.State.ENABLED)); + .isTableState(TableName.valueOf("tableRestart"), TableState.State.ENABLED)); ht.close(); TEST_UTIL.shutdownMiniCluster(); } @@ -123,11 +119,5 @@ public class TestMasterRestartAfterDisablingTable { private void log(String msg) { LOG.debug("\n\nTRR: " + msg + "\n"); } - - private void blockUntilNoRIT(ZooKeeperWatcher zkw, HMaster master) - throws KeeperException, InterruptedException { - ZKAssign.blockUntilNoRIT(zkw); - master.assignmentManager.waitUntilNoRegionsInTransition(60000); - } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java index 22626b5..39ad442 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterShutdown.java @@ -31,17 +31,18 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.ClusterStatus; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.LocalHBaseCluster; import 
org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({MasterTests.class, LargeTests.class}) public class TestMasterShutdown { public static final Log LOG = LogFactory.getLog(TestMasterShutdown.class); @@ -150,4 +151,4 @@ public class TestMasterShutdown { util.shutdownMiniDFSCluster(); util.cleanupTestDir(); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterStatusServlet.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterStatusServlet.java index 29bb9cb..b23ca78 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterStatusServlet.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterStatusServlet.java @@ -32,6 +32,7 @@ import java.util.regex.Pattern; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.client.HBaseAdmin; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.MasterAddressTracker; @@ -51,7 +52,7 @@ import com.google.common.collect.Maps; /** * Tests for the master status page and its template. */ -@Category(MediumTests.class) +@Category({MasterTests.class,MediumTests.class}) public class TestMasterStatusServlet { private HMaster master; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterTransitions.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterTransitions.java index 882f57d..374366e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterTransitions.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterTransitions.java @@ -26,7 +26,6 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; @@ -34,6 +33,8 @@ import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.Assert; @@ -47,7 +48,7 @@ import org.junit.experimental.categories.Category; * Test transitions of state across the master. Sets up the cluster once and * then runs a couple of tests. 
*/ -@Category(LargeTests.class) +@Category({MasterTests.class, LargeTests.class}) public class TestMasterTransitions { private static final Log LOG = LogFactory.getLog(TestMasterTransitions.class); private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestOpenedRegionHandler.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestOpenedRegionHandler.java deleted file mode 100644 index 4ae6b24..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestOpenedRegionHandler.java +++ /dev/null @@ -1,228 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.master; - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotNull; -import static org.mockito.Mockito.when; - -import java.io.IOException; -import java.util.Collection; -import java.util.Iterator; -import java.util.List; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.*; -import org.apache.hadoop.hbase.coordination.BaseCoordinatedStateManager; -import org.apache.hadoop.hbase.coordination.OpenRegionCoordination; -import org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager; -import org.apache.hadoop.hbase.coordination.ZkOpenRegionCoordination; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.master.handler.OpenedRegionHandler; -import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.regionserver.HRegionServer; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.MockServer; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZKTableStateManager; -import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.KeeperException; -import org.apache.zookeeper.data.Stat; -import org.junit.After; -import org.junit.Before; -import org.junit.Test; -import org.junit.experimental.categories.Category; -import org.mockito.Mockito; - -@Category(MediumTests.class) -public class TestOpenedRegionHandler { - - private static final Log LOG = LogFactory - .getLog(TestOpenedRegionHandler.class); - - private HBaseTestingUtility TEST_UTIL; - private final int NUM_MASTERS = 1; - private final int NUM_RS = 1; - private Configuration conf; - private Configuration resetConf; - private ZooKeeperWatcher zkw; - - @Before - public void setUp() throws Exception { - conf = 
HBaseConfiguration.create(); - conf.setBoolean("hbase.assignment.usezk", true); - TEST_UTIL = HBaseTestingUtility.createLocalHTU(conf); - } - - @After - public void tearDown() throws Exception { - // Stop the cluster - TEST_UTIL.shutdownMiniCluster(); - TEST_UTIL = new HBaseTestingUtility(resetConf); - } - - @Test - public void testOpenedRegionHandlerOnMasterRestart() throws Exception { - // Start the cluster - log("Starting cluster"); - conf = HBaseConfiguration.create(); - conf.setBoolean("hbase.assignment.usezk", true); - resetConf = conf; - TEST_UTIL = new HBaseTestingUtility(conf); - TEST_UTIL.startMiniCluster(NUM_MASTERS, NUM_RS); - String tableName = "testOpenedRegionHandlerOnMasterRestart"; - MiniHBaseCluster cluster = createRegions(tableName); - abortMaster(cluster); - - HRegionServer regionServer = cluster.getRegionServer(0); - HRegion region = getRegionBeingServed(cluster, regionServer); - - // forcefully move a region to OPENED state in zk - // Create a ZKW to use in the test - zkw = HBaseTestingUtility.createAndForceNodeToOpenedState(TEST_UTIL, - region, regionServer.getServerName()); - - // Start up a new master - log("Starting up a new master"); - cluster.startMaster().getMaster(); - log("Waiting for master to be ready"); - cluster.waitForActiveAndReadyMaster(); - log("Master is ready"); - - // Failover should be completed, now wait for no RIT - log("Waiting for no more RIT"); - ZKAssign.blockUntilNoRIT(zkw); - } - @Test - public void testShouldNotCompeleteOpenedRegionSuccessfullyIfVersionMismatches() - throws Exception { - HRegion region = null; - try { - int testIndex = 0; - TEST_UTIL.startMiniZKCluster(); - final Server server = new MockServer(TEST_UTIL); - HTableDescriptor htd = new HTableDescriptor( - TableName.valueOf("testShouldNotCompeleteOpenedRegionSuccessfullyIfVersionMismatches")); - HRegionInfo hri = new HRegionInfo(htd.getTableName(), - Bytes.toBytes(testIndex), Bytes.toBytes(testIndex + 1)); - region = HRegion.createHRegion(hri, TEST_UTIL.getDataTestDir(), TEST_UTIL.getConfiguration(), htd); - assertNotNull(region); - AssignmentManager am = Mockito.mock(AssignmentManager.class); - RegionStates rsm = Mockito.mock(RegionStates.class); - Mockito.doReturn(rsm).when(am).getRegionStates(); - when(rsm.isRegionInTransition(hri)).thenReturn(false); - when(rsm.getRegionState(hri)).thenReturn( - new RegionState(region.getRegionInfo(), RegionState.State.OPEN, - System.currentTimeMillis(), server.getServerName())); - // create a node with OPENED state - zkw = HBaseTestingUtility.createAndForceNodeToOpenedState(TEST_UTIL, - region, server.getServerName()); - when(am.getTableStateManager()).thenReturn(new ZKTableStateManager(zkw)); - Stat stat = new Stat(); - String nodeName = ZKAssign.getNodeName(zkw, region.getRegionInfo() - .getEncodedName()); - ZKUtil.getDataAndWatch(zkw, nodeName, stat); - - // use the version for the OpenedRegionHandler - BaseCoordinatedStateManager csm = new ZkCoordinatedStateManager(); - csm.initialize(server); - csm.start(); - - OpenRegionCoordination orc = csm.getOpenRegionCoordination(); - ZkOpenRegionCoordination.ZkOpenRegionDetails zkOrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkOrd.setServerName(server.getServerName()); - zkOrd.setVersion(stat.getVersion()); - OpenedRegionHandler handler = new OpenedRegionHandler(server, am, region - .getRegionInfo(), orc, zkOrd); - // Once again overwrite the same znode so that the version changes. 
- ZKAssign.transitionNode(zkw, region.getRegionInfo(), server - .getServerName(), EventType.RS_ZK_REGION_OPENED, - EventType.RS_ZK_REGION_OPENED, stat.getVersion()); - - // Should not invoke assignmentmanager.regionOnline. If it is - // invoked as per current mocking it will throw null pointer exception. - boolean expectedException = false; - try { - handler.process(); - } catch (Exception e) { - expectedException = true; - } - assertFalse("The process method should not throw any exception.", - expectedException); - List znodes = ZKUtil.listChildrenAndWatchForNewChildren(zkw, - zkw.assignmentZNode); - String regionName = znodes.get(0); - assertEquals("The region should not be opened successfully.", regionName, - region.getRegionInfo().getEncodedName()); - } finally { - HRegion.closeHRegion(region); - TEST_UTIL.shutdownMiniZKCluster(); - } - } - private MiniHBaseCluster createRegions(String tableName) - throws InterruptedException, ZooKeeperConnectionException, IOException, - KeeperException { - MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); - log("Waiting for active/ready master"); - cluster.waitForActiveAndReadyMaster(); - zkw = new ZooKeeperWatcher(conf, "testOpenedRegionHandler", null); - - // Create a table with regions - byte[] table = Bytes.toBytes(tableName); - byte[] family = Bytes.toBytes("family"); - TEST_UTIL.createTable(table, family); - - //wait till the regions are online - log("Waiting for no more RIT"); - ZKAssign.blockUntilNoRIT(zkw); - - return cluster; - } - private void abortMaster(MiniHBaseCluster cluster) { - // Stop the master - log("Aborting master"); - cluster.abortMaster(0); - cluster.waitOnMaster(0); - log("Master has aborted"); - } - private HRegion getRegionBeingServed(MiniHBaseCluster cluster, - HRegionServer regionServer) { - Collection onlineRegionsLocalContext = regionServer - .getOnlineRegionsLocalContext(); - Iterator iterator = onlineRegionsLocalContext.iterator(); - HRegion region = null; - while (iterator.hasNext()) { - region = iterator.next(); - if (!region.getRegionInfo().isMetaTable()) { - break; - } - } - return region; - } - private void log(String msg) { - LOG.debug("\n\nTRR: " + msg + "\n"); - } - -} - diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement.java index ccb809d..25dd13e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement.java @@ -42,12 +42,13 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.MetaScanner; @@ -59,6 +60,8 @@ import org.apache.hadoop.hbase.master.balancer.FavoredNodesPlan; import org.apache.hadoop.hbase.master.balancer.FavoredNodesPlan.Position; import org.apache.hadoop.hbase.regionserver.HRegion; import 
org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; import org.apache.zookeeper.KeeperException; @@ -67,12 +70,12 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; - -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestRegionPlacement { final static Log LOG = LogFactory.getLog(TestRegionPlacement.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private final static int SLAVES = 10; + private static Connection CONNECTION; private static Admin admin; private static RegionPlacementMaintainer rp; private static Position[] positions = Position.values(); @@ -89,7 +92,8 @@ public class TestRegionPlacement { FavoredNodeLoadBalancer.class, LoadBalancer.class); conf.setBoolean("hbase.tests.use.shortcircuit.reads", false); TEST_UTIL.startMiniCluster(SLAVES); - admin = new HBaseAdmin(conf); + CONNECTION = TEST_UTIL.getConnection(); + admin = CONNECTION.getAdmin(); rp = new RegionPlacementMaintainer(conf); } @@ -522,7 +526,7 @@ public class TestRegionPlacement { @Override public void close() throws IOException {} }; - MetaScanner.metaScan(TEST_UTIL.getConfiguration(), visitor); + MetaScanner.metaScan(CONNECTION, visitor); LOG.info("There are " + regionOnPrimaryNum.intValue() + " out of " + totalRegionNum.intValue() + " regions running on the primary" + " region servers" ); @@ -549,8 +553,7 @@ public class TestRegionPlacement { desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY)); admin.createTable(desc, splitKeys); - @SuppressWarnings("deprecation") - HTable ht = new HTable(TEST_UTIL.getConfiguration(), tableName); + HTable ht = (HTable) CONNECTION.getTable(tableName); @SuppressWarnings("deprecation") Map regions = ht.getRegionLocations(); assertEquals("Tried to create " + expectedRegions + " regions " diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement2.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement2.java index 86549dd..3f34bc4 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement2.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlacement2.java @@ -37,13 +37,14 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.master.balancer.FavoredNodeLoadBalancer; import org.apache.hadoop.hbase.master.balancer.LoadBalancerFactory; import org.apache.hadoop.hbase.master.balancer.FavoredNodesPlan.Position; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category({MediumTests.class}) +@Category({MasterTests.class, MediumTests.class}) public class TestRegionPlacement2 { final static Log LOG = LogFactory.getLog(TestRegionPlacement2.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlan.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlan.java index e5b1ca5..388924b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlan.java 
+++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionPlan.java @@ -22,12 +22,13 @@ import static org.junit.Assert.assertNotEquals; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestRegionPlan { @Test public void test() { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionState.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionState.java index a09e4ed..d9845e1 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionState.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRegionState.java @@ -18,16 +18,16 @@ package org.apache.hadoop.hbase.master; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.protobuf.generated.ClusterStatusProtos; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotEquals; -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestRegionState { @Test public void test() { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRestartCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRestartCluster.java index 9565e53..ad22fe9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRestartCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRestartCluster.java @@ -28,27 +28,30 @@ import java.util.Map; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.*; +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.MiniHBaseCluster; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.TableExistsException; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.MetaScanner; -import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.JVMClusterUtil; import org.apache.hadoop.hbase.util.Threads; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.After; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({MasterTests.class, LargeTests.class}) public class TestRestartCluster { private static final Log LOG = 
LogFactory.getLog(TestRestartCluster.class); private HBaseTestingUtility UTIL = new HBaseTestingUtility(); - private static final byte[] TABLENAME = Bytes.toBytes("master_transitions"); - private static final byte [][] FAMILIES = {Bytes.toBytes("a")}; private static final TableName[] TABLES = { TableName.valueOf("restartTableOne"), TableName.valueOf("restartTableTwo"), @@ -60,38 +63,11 @@ public class TestRestartCluster { UTIL.shutdownMiniCluster(); } - @Test (timeout=300000) public void testRestartClusterAfterKill() - throws Exception { - UTIL.getConfiguration().setBoolean("hbase.assignment.usezk", true); - UTIL.startMiniZKCluster(); - ZooKeeperWatcher zooKeeper = - new ZooKeeperWatcher(UTIL.getConfiguration(), "cluster1", null, true); - - // create the unassigned region, throw up a region opened state for META - String unassignedZNode = zooKeeper.assignmentZNode; - ZKUtil.createAndFailSilent(zooKeeper, unassignedZNode); - - ServerName sn = ServerName.valueOf(HMaster.MASTER, 1, System.currentTimeMillis()); - - ZKAssign.createNodeOffline(zooKeeper, HRegionInfo.FIRST_META_REGIONINFO, sn); - - LOG.debug("Created UNASSIGNED zNode for ROOT and hbase:meta regions in state " + - EventType.M_ZK_REGION_OFFLINE); - - // start the HB cluster - LOG.info("Starting HBase cluster..."); - UTIL.startMiniCluster(2); - - UTIL.createTable(TABLENAME, FAMILIES); - LOG.info("Created a table, waiting for table to be available..."); - UTIL.waitTableAvailable(TABLENAME, 60*1000); - - LOG.info("Master deleted unassigned region and started up successfully."); - } - @Test (timeout=300000) public void testClusterRestart() throws Exception { UTIL.startMiniCluster(3); + Connection connection = UTIL.getConnection(); + while (!UTIL.getMiniHBaseCluster().getMaster().isInitialized()) { Threads.sleep(1); } @@ -104,7 +80,7 @@ public class TestRestartCluster { } List allRegions = - MetaScanner.listAllRegions(UTIL.getConfiguration(), true); + MetaScanner.listAllRegions(UTIL.getConfiguration(), connection, true); assertEquals(4, allRegions.size()); LOG.info("\n\nShutting down cluster"); @@ -119,7 +95,8 @@ public class TestRestartCluster { // Need to use a new 'Configuration' so we make a new HConnection. // Otherwise we're reusing an HConnection that has gone stale because // the shutdown of the cluster also called shut of the connection. - allRegions = MetaScanner.listAllRegions(new Configuration(UTIL.getConfiguration()), true); + allRegions = + MetaScanner.listAllRegions(new Configuration(UTIL.getConfiguration()), connection, true); assertEquals(4, allRegions.size()); LOG.info("\n\nWaiting for tables to be available"); for(TableName TABLE: TABLES) { @@ -154,8 +131,7 @@ public class TestRestartCluster { } HMaster master = UTIL.getMiniHBaseCluster().getMaster(); - AssignmentManager am = master.getAssignmentManager(); - am.waitUntilNoRegionsInTransition(120000); + UTIL.waitUntilNoRegionsInTransition(120000); // We don't have to use SnapshotOfRegionAssignmentFromMeta. 
// We use it here because AM used to use it to load all user region placements @@ -168,13 +144,14 @@ public class TestRestartCluster { MiniHBaseCluster cluster = UTIL.getHBaseCluster(); List threads = cluster.getLiveRegionServerThreads(); assertEquals(2, threads.size()); - int[] rsPorts = new int[2]; + int[] rsPorts = new int[3]; for (int i = 0; i < 2; i++) { rsPorts[i] = threads.get(i).getRegionServer().getServerName().getPort(); } + rsPorts[2] = cluster.getMaster().getServerName().getPort(); for (ServerName serverName: regionToRegionServerMap.values()) { boolean found = false; // Test only, no need to optimize - for (int k = 0; k < 2 && !found; k++) { + for (int k = 0; k < 3 && !found; k++) { found = serverName.getPort() == rsPorts[k]; } assertTrue(found); @@ -190,9 +167,9 @@ public class TestRestartCluster { LOG.info("\n\nStarting cluster the second time with the same ports"); try { cluster.getConf().setInt( - ServerManager.WAIT_ON_REGIONSERVERS_MINTOSTART, 2); + ServerManager.WAIT_ON_REGIONSERVERS_MINTOSTART, 4); master = cluster.startMaster().getMaster(); - for (int i = 0; i < 2; i++) { + for (int i = 0; i < 3; i++) { cluster.getConf().setInt(HConstants.REGIONSERVER_PORT, rsPorts[i]); cluster.startRegionServer(); } @@ -200,13 +177,13 @@ public class TestRestartCluster { // Reset region server port so as not to conflict with other tests cluster.getConf().setInt(HConstants.REGIONSERVER_PORT, 0); cluster.getConf().setInt( - ServerManager.WAIT_ON_REGIONSERVERS_MINTOSTART, 1); + ServerManager.WAIT_ON_REGIONSERVERS_MINTOSTART, 2); } // Make sure live regionservers are on the same host/port List localServers = master.getServerManager().getOnlineServersList(); - assertEquals(2, localServers.size()); - for (int i = 0; i < 2; i++) { + assertEquals(4, localServers.size()); + for (int i = 0; i < 3; i++) { boolean found = false; for (ServerName serverName: localServers) { if (serverName.getPort() == rsPorts[i]) { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRollingRestart.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRollingRestart.java index ee4d611..d58b689 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRollingRestart.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestRollingRestart.java @@ -32,25 +32,23 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread; import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.KeeperException; import org.junit.Test; import org.junit.experimental.categories.Category; /** * Tests the restarting of everything as done during rolling restarts. 
*/ -@Category(LargeTests.class) +@Category({MasterTests.class, LargeTests.class}) public class TestRollingRestart { private static final Log LOG = LogFactory.getLog(TestRollingRestart.class); @@ -72,9 +70,6 @@ public class TestRollingRestart { MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); log("Waiting for active/ready master"); cluster.waitForActiveAndReadyMaster(); - ZooKeeperWatcher zkw = new ZooKeeperWatcher(conf, "testRollingRestart", - null); - HMaster master = cluster.getMaster(); // Create a table with regions TableName table = TableName.valueOf("tableRestart"); @@ -85,11 +80,11 @@ public class TestRollingRestart { NUM_REGIONS_TO_CREATE); numRegions += 1; // catalogs log("Waiting for no more RIT\n"); - blockUntilNoRIT(zkw, master); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); log("Disabling table\n"); TEST_UTIL.getHBaseAdmin().disableTable(table); log("Waiting for no more RIT\n"); - blockUntilNoRIT(zkw, master); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); NavigableSet regions = HBaseTestingUtility.getAllOnlineRegions(cluster); log("Verifying only catalog and namespace regions are assigned\n"); if (regions.size() != 2) { @@ -99,7 +94,7 @@ public class TestRollingRestart { log("Enabling table\n"); TEST_UTIL.getHBaseAdmin().enableTable(table); log("Waiting for no more RIT\n"); - blockUntilNoRIT(zkw, master); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); log("Verifying there are " + numRegions + " assigned on cluster\n"); regions = HBaseTestingUtility.getAllOnlineRegions(cluster); assertRegionsAssigned(cluster, regions); @@ -112,7 +107,7 @@ public class TestRollingRestart { restarted.waitForServerOnline(); log("Additional RS is online"); log("Waiting for no more RIT"); - blockUntilNoRIT(zkw, master); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); log("Verifying there are " + numRegions + " assigned on cluster"); assertRegionsAssigned(cluster, regions); assertEquals(expectedNumRS, cluster.getRegionServerThreads().size()); @@ -144,7 +139,6 @@ public class TestRollingRestart { log("Restarting primary master\n\n"); activeMaster = cluster.startMaster(); cluster.waitForActiveAndReadyMaster(); - master = activeMaster.getMaster(); // Start backup master log("Restarting backup master\n\n"); @@ -168,7 +162,7 @@ public class TestRollingRestart { log("Waiting for RS shutdown to be handled by master"); waitForRSShutdownToStartAndFinish(activeMaster, serverName); log("RS shutdown done, waiting for no more RIT"); - blockUntilNoRIT(zkw, master); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); log("Verifying there are " + numRegions + " assigned on cluster"); assertRegionsAssigned(cluster, regions); expectedNumRS--; @@ -179,7 +173,7 @@ public class TestRollingRestart { expectedNumRS++; log("Region server " + num + " is back online"); log("Waiting for no more RIT"); - blockUntilNoRIT(zkw, master); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); log("Verifying there are " + numRegions + " assigned on cluster"); assertRegionsAssigned(cluster, regions); assertEquals(expectedNumRS, cluster.getRegionServerThreads().size()); @@ -195,12 +189,6 @@ public class TestRollingRestart { TEST_UTIL.shutdownMiniCluster(); } - private void blockUntilNoRIT(ZooKeeperWatcher zkw, HMaster master) - throws KeeperException, InterruptedException { - ZKAssign.blockUntilNoRIT(zkw); - master.assignmentManager.waitUntilNoRegionsInTransition(60000); - } - private void waitForRSShutdownToStartAndFinish(MasterThread activeMaster, ServerName serverName) throws InterruptedException { ServerManager sm 
= activeMaster.getMaster().getServerManager(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java index f6a7953..71f3ed3 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSplitLogManager.java @@ -52,7 +52,6 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.SplitLogCounters; @@ -65,6 +64,8 @@ import org.apache.hadoop.hbase.master.SplitLogManager.Task; import org.apache.hadoop.hbase.master.SplitLogManager.TaskBatch; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.SplitLogTask.RecoveryMode; import org.apache.hadoop.hbase.regionserver.TestMasterAddressTracker.NodeCreationListener; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; import org.apache.hadoop.hbase.zookeeper.ZKSplitLog; import org.apache.hadoop.hbase.zookeeper.ZKUtil; @@ -81,7 +82,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestSplitLogManager { private static final Log LOG = LogFactory.getLog(TestSplitLogManager.class); private final ServerName DUMMY_MASTER = ServerName.valueOf("dummy-master,1,1"); @@ -206,7 +207,7 @@ public class TestSplitLogManager { conf.setInt("hbase.splitlog.manager.unassigned.timeout", 2 * to); conf.setInt("hbase.splitlog.manager.timeoutmonitor.period", 100); - to = to + 4 * 100; + to = to + 16 * 100; this.mode = (conf.getBoolean(HConstants.DISTRIBUTED_LOG_REPLAY_KEY, false) ? RecoveryMode.LOG_REPLAY @@ -456,7 +457,7 @@ public class TestSplitLogManager { SplitLogTask slt = new SplitLogTask.Resigned(worker1, this.mode); assertEquals(tot_mgr_resubmit.get(), 0); ZKUtil.setData(zkw, tasknode, slt.toByteArray()); - int version = ZKUtil.checkExists(zkw, tasknode); + ZKUtil.checkExists(zkw, tasknode); // Could be small race here. 
if (tot_mgr_resubmit.get() == 0) { waitForCounter(tot_mgr_resubmit, 0, 1, to/2); @@ -654,4 +655,4 @@ public class TestSplitLogManager { LOG.info("Mode3=" + slm.getRecoveryMode()); assertTrue("Mode4=" + slm.getRecoveryMode(), slm.isLogReplaying()); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestTableLockManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestTableLockManager.java index 2aba875..54f0691 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestTableLockManager.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestTableLockManager.java @@ -43,17 +43,19 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.NotServingRegionException; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableNotDisabledException; import org.apache.hadoop.hbase.Waiter; import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.TableState; import org.apache.hadoop.hbase.coprocessor.BaseMasterObserver; import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment; import org.apache.hadoop.hbase.coprocessor.ObserverContext; import org.apache.hadoop.hbase.exceptions.LockTimeoutException; import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.LoadTestTool; import org.apache.hadoop.hbase.util.StoppableImplementation; @@ -67,7 +69,7 @@ import org.junit.experimental.categories.Category; /** * Tests the default table lock manager */ -@Category(LargeTests.class) +@Category({MasterTests.class, LargeTests.class}) public class TestTableLockManager { private static final Log LOG = @@ -398,7 +400,8 @@ public class TestTableLockManager { LOG.info(String.format("Table #regions: %d regions: %s:", regions.size(), regions)); assertEquals(admin.getTableDescriptor(tableName), desc); for (HRegion region : TEST_UTIL.getMiniHBaseCluster().getRegions(tableName)) { - assertEquals(desc, region.getTableDesc()); + HTableDescriptor regionTableDesc = region.getTableDesc(); + assertEquals(desc, regionTableDesc); } if (regions.size() >= 5) { break; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestZKBasedOpenCloseRegion.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestZKBasedOpenCloseRegion.java deleted file mode 100644 index aaef080..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestZKBasedOpenCloseRegion.java +++ /dev/null @@ -1,302 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.master; - - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.fail; - -import java.io.IOException; -import java.util.Collection; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.MiniHBaseCluster; -import org.apache.hadoop.hbase.TableDescriptors; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.client.Durability; -import org.apache.hadoop.hbase.client.HTable; -import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.client.ResultScanner; -import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.protobuf.ProtobufUtil; -import org.apache.hadoop.hbase.regionserver.HRegionServer; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.Threads; -import org.junit.AfterClass; -import org.junit.Assert; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; -import org.junit.experimental.categories.Category; -import org.mockito.Mockito; -import org.mockito.internal.util.reflection.Whitebox; - -/** - * Test open and close of regions using zk. - */ -@Category(MediumTests.class) -public class TestZKBasedOpenCloseRegion { - private static final Log LOG = LogFactory.getLog(TestZKBasedOpenCloseRegion.class); - private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); - private static final TableName TABLENAME = - TableName.valueOf("TestZKBasedOpenCloseRegion"); - private static final byte [][] FAMILIES = new byte [][] {Bytes.toBytes("a"), - Bytes.toBytes("b"), Bytes.toBytes("c")}; - private static int countOfRegions; - - @BeforeClass public static void beforeAllTests() throws Exception { - Configuration c = TEST_UTIL.getConfiguration(); - c.setBoolean("hbase.assignment.usezk", true); - c.setBoolean("dfs.support.append", true); - c.setInt("hbase.regionserver.info.port", 0); - TEST_UTIL.startMiniCluster(2); - TEST_UTIL.createTable(TABLENAME, FAMILIES); - HTable t = new HTable(TEST_UTIL.getConfiguration(), TABLENAME); - countOfRegions = TEST_UTIL.createMultiRegions(t, getTestFamily()); - waitUntilAllRegionsAssigned(); - addToEachStartKey(countOfRegions); - t.close(); - TEST_UTIL.getHBaseCluster().getMaster().assignmentManager.initializeHandlerTrackers(); - } - - @AfterClass public static void afterAllTests() throws Exception { - TEST_UTIL.shutdownMiniCluster(); - } - - @Before public void setup() throws IOException { - if (TEST_UTIL.getHBaseCluster().getLiveRegionServerThreads().size() < 2) { - // Need at least two servers. 
- LOG.info("Started new server=" + - TEST_UTIL.getHBaseCluster().startRegionServer()); - - } - waitUntilAllRegionsAssigned(); - waitOnRIT(); - } - - /** - * Test we reopen a region once closed. - * @throws Exception - */ - @Test (timeout=300000) public void testReOpenRegion() - throws Exception { - MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); - LOG.info("Number of region servers = " + - cluster.getLiveRegionServerThreads().size()); - - int rsIdx = 0; - HRegionServer regionServer = - TEST_UTIL.getHBaseCluster().getRegionServer(rsIdx); - HRegionInfo hri = getNonMetaRegion( - ProtobufUtil.getOnlineRegions(regionServer.getRSRpcServices())); - LOG.debug("Asking RS to close region " + hri.getRegionNameAsString()); - - LOG.info("Unassign " + hri.getRegionNameAsString()); - cluster.getMaster().assignmentManager.unassign(hri); - - while (!cluster.getMaster().assignmentManager.wasClosedHandlerCalled(hri)) { - Threads.sleep(100); - } - - while (!cluster.getMaster().assignmentManager.wasOpenedHandlerCalled(hri)) { - Threads.sleep(100); - } - - LOG.info("Done with testReOpenRegion"); - } - - private HRegionInfo getNonMetaRegion(final Collection regions) { - HRegionInfo hri = null; - for (HRegionInfo i: regions) { - LOG.info(i.getRegionNameAsString()); - if (!i.isMetaRegion()) { - hri = i; - break; - } - } - return hri; - } - - /** - * This test shows how a region won't be able to be assigned to a RS - * if it's already "processing" it. - * @throws Exception - */ - @Test - public void testRSAlreadyProcessingRegion() throws Exception { - LOG.info("starting testRSAlreadyProcessingRegion"); - MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); - - HRegionServer hr0 = - cluster.getLiveRegionServerThreads().get(0).getRegionServer(); - HRegionServer hr1 = - cluster.getLiveRegionServerThreads().get(1).getRegionServer(); - HRegionInfo hri = getNonMetaRegion(ProtobufUtil.getOnlineRegions(hr0.getRSRpcServices())); - - // fake that hr1 is processing the region - hr1.getRegionsInTransitionInRS().putIfAbsent(hri.getEncodedNameAsBytes(), true); - - // now ask the master to move the region to hr1, will fail - TEST_UTIL.getHBaseAdmin().move(hri.getEncodedNameAsBytes(), - Bytes.toBytes(hr1.getServerName().toString())); - - // make sure the region came back - assertEquals(hr1.getOnlineRegion(hri.getEncodedNameAsBytes()), null); - - // remove the block and reset the boolean - hr1.getRegionsInTransitionInRS().remove(hri.getEncodedNameAsBytes()); - - // now try moving a region when there is no region in transition. - hri = getNonMetaRegion(ProtobufUtil.getOnlineRegions(hr1.getRSRpcServices())); - - TEST_UTIL.getHBaseAdmin().move(hri.getEncodedNameAsBytes(), - Bytes.toBytes(hr0.getServerName().toString())); - - while (!cluster.getMaster().assignmentManager.wasOpenedHandlerCalled(hri)) { - Threads.sleep(100); - } - - // make sure the region has moved from the original RS - assertTrue(hr1.getOnlineRegion(hri.getEncodedNameAsBytes()) == null); - - } - - private void waitOnRIT() { - // Close worked but we are going to open the region elsewhere. Before going on, make sure - // this completes. - while (TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager(). - getRegionStates().isRegionsInTransition()) { - LOG.info("Waiting on regions in transition: " + - TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager(). 
- getRegionStates().getRegionsInTransition()); - Threads.sleep(10); - } - } - - /** - * If region open fails with IOException in openRegion() while doing tableDescriptors.get() - * the region should not add into regionsInTransitionInRS map - * @throws Exception - */ - @Test - public void testRegionOpenFailsDueToIOException() throws Exception { - HRegionInfo REGIONINFO = new HRegionInfo(TableName.valueOf("t"), - HConstants.EMPTY_START_ROW, HConstants.EMPTY_START_ROW); - HRegionServer regionServer = TEST_UTIL.getHBaseCluster().getRegionServer(0); - TableDescriptors htd = Mockito.mock(TableDescriptors.class); - Object orizinalState = Whitebox.getInternalState(regionServer,"tableDescriptors"); - Whitebox.setInternalState(regionServer, "tableDescriptors", htd); - Mockito.doThrow(new IOException()).when(htd).get((TableName) Mockito.any()); - try { - ProtobufUtil.openRegion(regionServer.getRSRpcServices(), - regionServer.getServerName(), REGIONINFO); - fail("It should throw IOException "); - } catch (IOException e) { - } - Whitebox.setInternalState(regionServer, "tableDescriptors", orizinalState); - assertFalse("Region should not be in RIT", - regionServer.getRegionsInTransitionInRS().containsKey(REGIONINFO.getEncodedNameAsBytes())); - } - - private static void waitUntilAllRegionsAssigned() - throws IOException { - HTable meta = new HTable(TEST_UTIL.getConfiguration(), TableName.META_TABLE_NAME); - while (true) { - int rows = 0; - Scan scan = new Scan(); - scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER); - ResultScanner s = meta.getScanner(scan); - for (Result r = null; (r = s.next()) != null;) { - byte [] b = - r.getValue(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER); - if (b == null || b.length <= 0) { - break; - } - rows++; - } - s.close(); - // If I get to here and all rows have a Server, then all have been assigned. - if (rows >= countOfRegions) { - break; - } - LOG.info("Found=" + rows); - Threads.sleep(1000); - } - meta.close(); - } - - /* - * Add to each of the regions in hbase:meta a value. Key is the startrow of the - * region (except its 'aaa' for first region). Actual value is the row name. - * @param expected - * @return - * @throws IOException - */ - private static int addToEachStartKey(final int expected) throws IOException { - HTable t = new HTable(TEST_UTIL.getConfiguration(), TABLENAME); - HTable meta = new HTable(TEST_UTIL.getConfiguration(), - TableName.META_TABLE_NAME); - int rows = 0; - Scan scan = new Scan(); - scan.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER); - ResultScanner s = meta.getScanner(scan); - for (Result r = null; (r = s.next()) != null;) { - HRegionInfo hri = HRegionInfo.getHRegionInfo(r); - if (hri == null) break; - if(!hri.getTable().equals(TABLENAME)) { - continue; - } - // If start key, add 'aaa'. - byte [] row = getStartKey(hri); - Put p = new Put(row); - p.setDurability(Durability.SKIP_WAL); - p.add(getTestFamily(), getTestQualifier(), row); - t.put(p); - rows++; - } - s.close(); - Assert.assertEquals(expected, rows); - t.close(); - meta.close(); - return rows; - } - - private static byte [] getStartKey(final HRegionInfo hri) { - return Bytes.equals(HConstants.EMPTY_START_ROW, hri.getStartKey())? 
- Bytes.toBytes("aaa"): hri.getStartKey(); - } - - private static byte [] getTestFamily() { - return FAMILIES[0]; - } - - private static byte [] getTestQualifier() { - return getTestFamily(); - } -} - diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestZKLessAMOnCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestZKLessAMOnCluster.java deleted file mode 100644 index 3d13d54..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestZKLessAMOnCluster.java +++ /dev/null @@ -1,42 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.master; - -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.experimental.categories.Category; - -/** - * This tests AssignmentManager with a testing cluster. - */ -@Category(MediumTests.class) -public class TestZKLessAMOnCluster extends TestAssignmentManagerOnCluster { - - @BeforeClass - public static void setUpBeforeClass() throws Exception { - // Don't use ZK for region assignment - conf.setBoolean("hbase.assignment.usezk", false); - setupOnce(); - } - - @AfterClass - public static void tearDownAfterClass() throws Exception { - TestAssignmentManagerOnCluster.tearDownAfterClass(); - } -} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBaseLoadBalancer.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBaseLoadBalancer.java index 3bdae33..cf79368 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBaseLoadBalancer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestBaseLoadBalancer.java @@ -18,6 +18,8 @@ package org.apache.hadoop.hbase.master.balancer; import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotEquals; +import static org.junit.Assert.assertNull; import static org.junit.Assert.assertTrue; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; @@ -37,7 +39,6 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.RegionReplicaUtil; @@ -47,6 +48,8 @@ import org.apache.hadoop.hbase.master.RackManager; import org.apache.hadoop.hbase.master.RegionPlan; import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.Cluster; import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.Cluster.MoveRegionAction; +import 
org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.net.DNSToSwitchMapping; import org.junit.BeforeClass; import org.junit.Test; @@ -55,7 +58,7 @@ import org.mockito.Mockito; import com.google.common.collect.Lists; -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestBaseLoadBalancer extends BalancerTestBase { private static LoadBalancer loadBalancer; @@ -116,6 +119,17 @@ public class TestBaseLoadBalancer extends BalancerTestBase { */ @Test (timeout=30000) public void testImmediateAssignment() throws Exception { + List tmp = getListOfServerNames(randomServers(1, 0)); + tmp.add(master); + ServerName sn = loadBalancer.randomAssignment(HRegionInfo.FIRST_META_REGIONINFO, tmp); + assertEquals(master, sn); + HRegionInfo hri = randomRegions(1, -1).get(0); + sn = loadBalancer.randomAssignment(hri, tmp); + assertNotEquals(master, sn); + tmp = new ArrayList(); + tmp.add(master); + sn = loadBalancer.randomAssignment(hri, tmp); + assertNull("Should not assign user regions on master", sn); for (int[] mock : regionsAndServersMocks) { LOG.debug("testImmediateAssignment with " + mock[0] + " regions and " + mock[1] + " servers"); List regions = randomRegions(mock[0]); @@ -151,6 +165,18 @@ public class TestBaseLoadBalancer extends BalancerTestBase { */ @Test (timeout=180000) public void testBulkAssignment() throws Exception { + List tmp = getListOfServerNames(randomServers(5, 0)); + List hris = randomRegions(20); + hris.add(HRegionInfo.FIRST_META_REGIONINFO); + tmp.add(master); + Map> plans = loadBalancer.roundRobinAssignment(hris, tmp); + assertTrue(plans.get(master).contains(HRegionInfo.FIRST_META_REGIONINFO)); + assertEquals(1, plans.get(master).size()); + int totalRegion = 0; + for (List regions: plans.values()) { + totalRegion += regions.size(); + } + assertEquals(hris.size(), totalRegion); for (int[] mock : regionsAndServersMocks) { LOG.debug("testBulkAssignment with " + mock[0] + " regions and " + mock[1] + " servers"); List regions = randomRegions(mock[0]); @@ -497,4 +523,4 @@ public class TestBaseLoadBalancer extends BalancerTestBase { assertEquals(1, cluster.regionLocations[r43].length); assertEquals(-1, cluster.regionLocations[r43][0]); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestDefaultLoadBalancer.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestDefaultLoadBalancer.java index 355fc9a..c1e8692 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestDefaultLoadBalancer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestDefaultLoadBalancer.java @@ -25,10 +25,11 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.master.LoadBalancer; import org.apache.hadoop.hbase.master.RegionPlan; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.net.DNSToSwitchMapping; import org.junit.BeforeClass; import org.junit.Test; @@ -37,7 +38,7 @@ import org.junit.experimental.categories.Category; /** * Test the load balancer that is created by default. 
*/ -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestDefaultLoadBalancer extends BalancerTestBase { private static final Log LOG = LogFactory.getLog(TestDefaultLoadBalancer.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestFavoredNodeAssignmentHelper.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestFavoredNodeAssignmentHelper.java index a218399..4dc7d32 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestFavoredNodeAssignmentHelper.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestFavoredNodeAssignmentHelper.java @@ -31,9 +31,10 @@ import java.util.TreeMap; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.master.RackManager; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Triple; import org.junit.BeforeClass; @@ -41,7 +42,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestFavoredNodeAssignmentHelper { private static List servers = new ArrayList(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestServerAndLoad.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestServerAndLoad.java index 0b48ade..2cfaf4e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestServerAndLoad.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestServerAndLoad.java @@ -22,10 +22,11 @@ import static org.junit.Assert.assertNotEquals; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestServerAndLoad { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java index ae40583..000e331 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java @@ -40,7 +40,6 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.ClusterStatus; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.RegionLoad; import org.apache.hadoop.hbase.ServerLoad; import org.apache.hadoop.hbase.ServerName; @@ -48,13 +47,15 @@ import org.apache.hadoop.hbase.client.RegionReplicaUtil; import org.apache.hadoop.hbase.master.RackManager; import org.apache.hadoop.hbase.master.RegionPlan; import org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer.Cluster; +import 
org.apache.hadoop.hbase.testclassification.FlakeyTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.net.DNSToSwitchMapping; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({FlakeyTests.class, MediumTests.class}) public class TestStochasticLoadBalancer extends BalancerTestBase { public static final String REGION_KEY = "testRegion"; private static StochasticLoadBalancer loadBalancer; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestCleanerChore.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestCleanerChore.java index 0bd0da5..92c7bb6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestCleanerChore.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestCleanerChore.java @@ -29,8 +29,9 @@ import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Stoppable; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.StoppableImplementation; import org.junit.After; @@ -40,7 +41,7 @@ import org.mockito.Mockito; import org.mockito.invocation.InvocationOnMock; import org.mockito.stubbing.Answer; -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestCleanerChore { private static final Log LOG = LogFactory.getLog(TestCleanerChore.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestHFileCleaner.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestHFileCleaner.java index d1d26ed..b045c72 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestHFileCleaner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestHFileCleaner.java @@ -32,10 +32,11 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.client.ClusterConnection; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.EnvironmentEdge; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; @@ -45,7 +46,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestHFileCleaner { private static final Log LOG = LogFactory.getLog(TestHFileCleaner.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestHFileLinkCleaner.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestHFileLinkCleaner.java index 72ce7b1..a004134 100644 --- 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestHFileLinkCleaner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestHFileLinkCleaner.java @@ -31,11 +31,13 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.client.ClusterConnection; +import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.io.HFileLink; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.HFileArchiveUtil; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; @@ -47,7 +49,7 @@ import org.junit.experimental.categories.Category; * Test the HFileLink Cleaner. * HFiles with links cannot be deleted until a link is present. */ -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestHFileLinkCleaner { private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @@ -182,4 +184,4 @@ public class TestHFileLinkCleaner { return false; } } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestLogsCleaner.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestLogsCleaner.java index d7f29b9..4e8ec09 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestLogsCleaner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestLogsCleaner.java @@ -29,7 +29,6 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.Waiter; @@ -37,6 +36,8 @@ import org.apache.hadoop.hbase.client.ClusterConnection; import org.apache.hadoop.hbase.replication.ReplicationFactory; import org.apache.hadoop.hbase.replication.ReplicationQueues; import org.apache.hadoop.hbase.replication.regionserver.Replication; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.AfterClass; @@ -44,7 +45,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestLogsCleaner { private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @@ -193,4 +194,4 @@ public class TestLogsCleaner { return false; } } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestSnapshotFromMaster.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestSnapshotFromMaster.java index ed43774..9a72e77 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestSnapshotFromMaster.java +++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/cleaner/TestSnapshotFromMaster.java @@ -36,7 +36,6 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HTable; @@ -56,6 +55,8 @@ import org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils; import org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil; import org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils; import org.apache.hadoop.hbase.snapshot.UnknownSnapshotException; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.FSUtils; @@ -73,7 +74,7 @@ import com.google.protobuf.ServiceException; /** * Test the master-related aspects of a snapshot */ -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestSnapshotFromMaster { private static final Log LOG = LogFactory.getLog(TestSnapshotFromMaster.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/handler/TestCreateTableHandler.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/handler/TestCreateTableHandler.java index f25e45c..70da886 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/handler/TestCreateTableHandler.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/handler/TestCreateTableHandler.java @@ -32,7 +32,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.TableName; @@ -41,6 +40,8 @@ import org.apache.hadoop.hbase.master.MasterFileSystem; import org.apache.hadoop.hbase.master.MasterServices; import org.apache.hadoop.hbase.master.RegionState.State; import org.apache.hadoop.hbase.master.RegionStates; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.junit.After; @@ -48,7 +49,7 @@ import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestCreateTableHandler { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private static final Log LOG = LogFactory.getLog(TestCreateTableHandler.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/handler/TestTableDeleteFamilyHandler.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/handler/TestTableDeleteFamilyHandler.java index 463fc54..ce6abda 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/handler/TestTableDeleteFamilyHandler.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/handler/TestTableDeleteFamilyHandler.java @@ -34,9 +34,10 @@ import 
org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.wal.WALSplitter; @@ -46,7 +47,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({MasterTests.class, LargeTests.class}) public class TestTableDeleteFamilyHandler { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/handler/TestTableDescriptorModification.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/handler/TestTableDescriptorModification.java index d061101..0d51875 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/handler/TestTableDescriptorModification.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/handler/TestTableDescriptorModification.java @@ -29,10 +29,12 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.master.MasterFileSystem; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSTableDescriptors; import org.apache.hadoop.hbase.util.FSUtils; @@ -48,7 +50,7 @@ import org.junit.rules.TestName; * Verify that the HTableDescriptor is updated after * addColumn(), deleteColumn() and modifyTable() operations. 
*/ -@Category(LargeTests.class) +@Category({MasterTests.class, LargeTests.class}) public class TestTableDescriptorModification { @Rule public TestName name = new TestName(); @@ -153,8 +155,9 @@ public class TestTableDescriptorModification { // Verify descriptor from HDFS MasterFileSystem mfs = TEST_UTIL.getMiniHBaseCluster().getMaster().getMasterFileSystem(); Path tableDir = FSUtils.getTableDir(mfs.getRootDir(), tableName); - htd = FSTableDescriptors.getTableDescriptorFromFs(mfs.getFileSystem(), tableDir); - verifyTableDescriptor(htd, tableName, families); + TableDescriptor td = + FSTableDescriptors.getTableDescriptorFromFs(mfs.getFileSystem(), tableDir); + verifyTableDescriptor(td.getHTableDescriptor(), tableName, families); } private void verifyTableDescriptor(final HTableDescriptor htd, diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotFileCache.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotFileCache.java index d9ba1f6..efaef9d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotFileCache.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotFileCache.java @@ -32,6 +32,7 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; import org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils; import org.apache.hadoop.hbase.snapshot.SnapshotReferenceUtil; @@ -46,7 +47,7 @@ import org.junit.experimental.categories.Category; /** * Test that we correctly reload the cache, filter directories, etc. 
*/ -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestSnapshotFileCache { private static final Log LOG = LogFactory.getLog(TestSnapshotFileCache.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotHFileCleaner.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotHFileCleaner.java index 65a057d..5e5b004 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotHFileCleaner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotHFileCleaner.java @@ -29,6 +29,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; @@ -39,7 +40,7 @@ import org.junit.experimental.categories.Category; /** * Test that the snapshot hfile cleaner finds hfiles referenced in a snapshot */ -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestSnapshotHFileCleaner { private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotLogCleaner.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotLogCleaner.java index 734ce8c..9a7d469 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotLogCleaner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotLogCleaner.java @@ -27,6 +27,7 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; @@ -37,7 +38,7 @@ import org.junit.experimental.categories.Category; /** * Test that the snapshot log cleaner finds logs referenced in a snapshot */ -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestSnapshotLogCleaner { private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotManager.java index ffb0bb2..7dd6377 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotManager.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/master/snapshot/TestSnapshotManager.java @@ -30,6 +30,7 @@ import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.executor.ExecutorService; import org.apache.hadoop.hbase.master.MasterFileSystem; import org.apache.hadoop.hbase.master.MasterServices; @@ -46,7 +47,7 @@ import org.mockito.Mockito; /** * Test 
basic snapshot manager functionality */ -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestSnapshotManager { private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/migration/TestNamespaceUpgrade.java hbase-server/src/test/java/org/apache/hadoop/hbase/migration/TestNamespaceUpgrade.java deleted file mode 100644 index 983b1ba..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/migration/TestNamespaceUpgrade.java +++ /dev/null @@ -1,348 +0,0 @@ -/** - * Copyright The Apache Software Foundation - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.migration; - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; -import static org.junit.Assert.assertFalse; - -import java.io.File; -import java.io.IOException; -import java.util.ArrayList; -import java.util.List; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.fs.FileUtil; -import org.apache.hadoop.fs.FsShell; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.HColumnDescriptor; -import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.NamespaceDescriptor; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.Waiter; -import org.apache.hadoop.hbase.client.Get; -import org.apache.hadoop.hbase.client.HTable; -import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.client.Result; -import org.apache.hadoop.hbase.client.ResultScanner; -import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.protobuf.generated.AdminProtos; -import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.regionserver.HRegionFileSystem; -import org.apache.hadoop.hbase.security.access.AccessControlLists; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.FSTableDescriptors; -import org.apache.hadoop.hbase.util.FSUtils; -import org.apache.hadoop.util.ToolRunner; -import org.junit.AfterClass; -import org.junit.Assert; -import org.junit.BeforeClass; -import org.junit.Test; -import org.junit.experimental.categories.Category; - -/** - * Test upgrade from no namespace in 0.94 to namespace directory structure. - * Mainly tests that tables are migrated and consistent. 
Also verifies - * that snapshots have been migrated correctly. - * - * Uses a tarball which is an image of an 0.94 hbase.rootdir. - * - * Contains tables with currentKeys as the stored keys: - * foo, ns1.foo, ns2.foo - * - *
    Contains snapshots with snapshot{num}Keys as the contents: - * snapshot1Keys, snapshot2Keys - * - * Image also contains _acl_ table with one region and two storefiles. - * This is needed to test the acl table migration. - * - */ -@Category(MediumTests.class) -public class TestNamespaceUpgrade { - static final Log LOG = LogFactory.getLog(TestNamespaceUpgrade.class); - private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); - private final static String snapshot1Keys[] = - {"1","10","2","3","4","5","6","7","8","9"}; - private final static String snapshot2Keys[] = - {"1","2","3","4","5","6","7","8","9"}; - private final static String currentKeys[] = - {"1","2","3","4","5","6","7","8","9","A"}; - private final static TableName tables[] = - {TableName.valueOf("data"), TableName.valueOf("foo"), - TableName.valueOf("ns1.foo"), TableName.valueOf("ns.two.foo")}; - - @BeforeClass - public static void setUpBeforeClass() throws Exception { - // Start up our mini cluster on top of an 0.94 root.dir that has data from - // a 0.94 hbase run and see if we can migrate to 0.96 - TEST_UTIL.startMiniZKCluster(); - TEST_UTIL.startMiniDFSCluster(1); - Path testdir = TEST_UTIL.getDataTestDir("TestNamespaceUpgrade"); - // Untar our test dir. - File untar = untar(new File(testdir.toString())); - // Now copy the untar up into hdfs so when we start hbase, we'll run from it. - Configuration conf = TEST_UTIL.getConfiguration(); - FsShell shell = new FsShell(conf); - FileSystem fs = FileSystem.get(conf); - // find where hbase will root itself, so we can copy filesystem there - Path hbaseRootDir = TEST_UTIL.getDefaultRootDirPath(); - if (!fs.isDirectory(hbaseRootDir.getParent())) { - // mkdir at first - fs.mkdirs(hbaseRootDir.getParent()); - } - if(org.apache.hadoop.util.VersionInfo.getVersion().startsWith("2.")) { - LOG.info("Hadoop version is 2.x, pre-migrating snapshot dir"); - FileSystem localFS = FileSystem.getLocal(conf); - if(!localFS.rename(new Path(untar.toString(), HConstants.OLD_SNAPSHOT_DIR_NAME), - new Path(untar.toString(), HConstants.SNAPSHOT_DIR_NAME))) { - throw new IllegalStateException("Failed to move snapshot dir to 2.x expectation"); - } - } - doFsCommand(shell, - new String [] {"-put", untar.toURI().toString(), hbaseRootDir.toString()}); - doFsCommand(shell, new String [] {"-lsr", "/"}); - // See whats in minihdfs. 
- Configuration toolConf = TEST_UTIL.getConfiguration(); - conf.set(HConstants.HBASE_DIR, TEST_UTIL.getDefaultRootDirPath().toString()); - ToolRunner.run(toolConf, new NamespaceUpgrade(), new String[]{"--upgrade"}); - assertTrue(FSUtils.getVersion(fs, hbaseRootDir).equals(HConstants.FILE_SYSTEM_VERSION)); - doFsCommand(shell, new String [] {"-lsr", "/"}); - TEST_UTIL.startMiniHBaseCluster(1, 1); - - for(TableName table: tables) { - int count = 0; - for(Result res: new HTable(TEST_UTIL.getConfiguration(), table).getScanner(new Scan())) { - assertEquals(currentKeys[count++], Bytes.toString(res.getRow())); - } - Assert.assertEquals(currentKeys.length, count); - } - assertEquals(2, TEST_UTIL.getHBaseAdmin().listNamespaceDescriptors().length); - - //verify ACL table is migrated - HTable secureTable = new HTable(conf, AccessControlLists.ACL_TABLE_NAME); - ResultScanner scanner = secureTable.getScanner(new Scan()); - int count = 0; - for(Result r : scanner) { - count++; - } - assertEquals(3, count); - assertFalse(TEST_UTIL.getHBaseAdmin().tableExists(TableName.valueOf("_acl_"))); - - //verify ACL table was compacted - List regions = TEST_UTIL.getMiniHBaseCluster().getRegions(secureTable.getName()); - for(HRegion region : regions) { - assertEquals(1, region.getStores().size()); - } - } - - static File untar(final File testdir) throws IOException { - // Find the src data under src/test/data - final String datafile = "TestNamespaceUpgrade"; - File srcTarFile = new File( - System.getProperty("project.build.testSourceDirectory", "src/test") + - File.separator + "data" + File.separator + datafile + ".tgz"); - File homedir = new File(testdir.toString()); - File tgtUntarDir = new File(homedir, "hbase"); - if (tgtUntarDir.exists()) { - if (!FileUtil.fullyDelete(tgtUntarDir)) { - throw new IOException("Failed delete of " + tgtUntarDir.toString()); - } - } - if (!srcTarFile.exists()) { - throw new IOException(srcTarFile+" does not exist"); - } - LOG.info("Untarring " + srcTarFile + " into " + homedir.toString()); - FileUtil.unTar(srcTarFile, homedir); - Assert.assertTrue(tgtUntarDir.exists()); - return tgtUntarDir; - } - - private static void doFsCommand(final FsShell shell, final String [] args) - throws Exception { - // Run the 'put' command. 
- int errcode = shell.run(args); - if (errcode != 0) throw new IOException("Failed put; errcode=" + errcode); - } - - @AfterClass - public static void tearDownAfterClass() throws Exception { - TEST_UTIL.shutdownMiniCluster(); - } - - @Test (timeout=300000) - public void testSnapshots() throws IOException, InterruptedException { - String snapshots[][] = {snapshot1Keys, snapshot2Keys}; - for(int i = 1; i <= snapshots.length; i++) { - for(TableName table: tables) { - TEST_UTIL.getHBaseAdmin().cloneSnapshot(table+"_snapshot"+i, TableName.valueOf(table+"_clone"+i)); - FSUtils.logFileSystemState(FileSystem.get(TEST_UTIL.getConfiguration()), - FSUtils.getRootDir(TEST_UTIL.getConfiguration()), - LOG); - int count = 0; - for(Result res: new HTable(TEST_UTIL.getConfiguration(), table+"_clone"+i).getScanner(new - Scan())) { - assertEquals(snapshots[i-1][count++], Bytes.toString(res.getRow())); - } - Assert.assertEquals(table+"_snapshot"+i, snapshots[i-1].length, count); - } - } - } - - @Test (timeout=300000) - public void testRenameUsingSnapshots() throws Exception { - String newNS = "newNS"; - TEST_UTIL.getHBaseAdmin().createNamespace(NamespaceDescriptor.create(newNS).build()); - for(TableName table: tables) { - int count = 0; - for(Result res: new HTable(TEST_UTIL.getConfiguration(), table).getScanner(new - Scan())) { - assertEquals(currentKeys[count++], Bytes.toString(res.getRow())); - } - TEST_UTIL.getHBaseAdmin().snapshot(table + "_snapshot3", table); - final TableName newTableName = - TableName.valueOf(newNS + TableName.NAMESPACE_DELIM + table + "_clone3"); - TEST_UTIL.getHBaseAdmin().cloneSnapshot(table + "_snapshot3", newTableName); - Thread.sleep(1000); - count = 0; - for(Result res: new HTable(TEST_UTIL.getConfiguration(), newTableName).getScanner(new - Scan())) { - assertEquals(currentKeys[count++], Bytes.toString(res.getRow())); - } - FSUtils.logFileSystemState(TEST_UTIL.getTestFileSystem(), TEST_UTIL.getDefaultRootDirPath() - , LOG); - Assert.assertEquals(newTableName + "", currentKeys.length, count); - TEST_UTIL.getHBaseAdmin().flush(newTableName); - TEST_UTIL.getHBaseAdmin().majorCompact(newTableName); - TEST_UTIL.waitFor(30000, new Waiter.Predicate() { - @Override - public boolean evaluate() throws IOException { - return TEST_UTIL.getHBaseAdmin().getCompactionState(newTableName) == - AdminProtos.GetRegionInfoResponse.CompactionState.NONE; - } - }); - } - - String nextNS = "nextNS"; - TEST_UTIL.getHBaseAdmin().createNamespace(NamespaceDescriptor.create(nextNS).build()); - for(TableName table: tables) { - TableName srcTable = TableName.valueOf(newNS + TableName.NAMESPACE_DELIM + table + "_clone3"); - TEST_UTIL.getHBaseAdmin().snapshot(table + "_snapshot4", srcTable); - TableName newTableName = - TableName.valueOf(nextNS + TableName.NAMESPACE_DELIM + table + "_clone4"); - TEST_UTIL.getHBaseAdmin().cloneSnapshot(table+"_snapshot4", newTableName); - FSUtils.logFileSystemState(TEST_UTIL.getTestFileSystem(), TEST_UTIL.getDefaultRootDirPath(), - LOG); - int count = 0; - for(Result res: new HTable(TEST_UTIL.getConfiguration(), newTableName).getScanner(new - Scan())) { - assertEquals(currentKeys[count++], Bytes.toString(res.getRow())); - } - Assert.assertEquals(newTableName + "", currentKeys.length, count); - } - } - - @Test (timeout=300000) - public void testOldDirsAreGonePostMigration() throws IOException { - FileSystem fs = FileSystem.get(TEST_UTIL.getConfiguration()); - Path hbaseRootDir = TEST_UTIL.getDefaultRootDirPath(); - List dirs = new ArrayList(NamespaceUpgrade.NON_USER_TABLE_DIRS); 
- // Remove those that are not renamed - dirs.remove(HConstants.HBCK_SIDELINEDIR_NAME); - dirs.remove(HConstants.SNAPSHOT_DIR_NAME); - dirs.remove(HConstants.HBASE_TEMP_DIRECTORY); - for (String dir: dirs) { - assertFalse(fs.exists(new Path(hbaseRootDir, dir))); - } - } - - @Test (timeout=300000) - public void testNewDirsArePresentPostMigration() throws IOException { - FileSystem fs = FileSystem.get(TEST_UTIL.getConfiguration()); - // Below list does not include 'corrupt' because there is no 'corrupt' in the tgz - String [] newdirs = new String [] {HConstants.BASE_NAMESPACE_DIR, - HConstants.HREGION_LOGDIR_NAME}; - Path hbaseRootDir = TEST_UTIL.getDefaultRootDirPath(); - for (String dir: newdirs) { - assertTrue(dir, fs.exists(new Path(hbaseRootDir, dir))); - } - } - - @Test (timeout = 300000) - public void testACLTableMigration() throws IOException { - Path rootDir = TEST_UTIL.getDataTestDirOnTestFS("testACLTable"); - FileSystem fs = TEST_UTIL.getTestFileSystem(); - Configuration conf = TEST_UTIL.getConfiguration(); - byte[] FAMILY = Bytes.toBytes("l"); - byte[] QUALIFIER = Bytes.toBytes("testUser"); - byte[] VALUE = Bytes.toBytes("RWCA"); - - // Create a Region - HTableDescriptor aclTable = new HTableDescriptor(TableName.valueOf("testACLTable")); - aclTable.addFamily(new HColumnDescriptor(FAMILY)); - FSTableDescriptors fstd = new FSTableDescriptors(conf, fs, rootDir); - fstd.createTableDescriptor(aclTable); - HRegionInfo hriAcl = new HRegionInfo(aclTable.getTableName(), null, null); - HRegion region = HRegion.createHRegion(hriAcl, rootDir, conf, aclTable); - try { - // Create rows - Put p = new Put(Bytes.toBytes("-ROOT-")); - p.addImmutable(FAMILY, QUALIFIER, VALUE); - region.put(p); - p = new Put(Bytes.toBytes(".META.")); - p.addImmutable(FAMILY, QUALIFIER, VALUE); - region.put(p); - p = new Put(Bytes.toBytes("_acl_")); - p.addImmutable(FAMILY, QUALIFIER, VALUE); - region.put(p); - - NamespaceUpgrade upgrade = new NamespaceUpgrade(); - upgrade.updateAcls(region); - - // verify rows -ROOT- is removed - Get g = new Get(Bytes.toBytes("-ROOT-")); - Result r = region.get(g); - assertTrue(r == null || r.size() == 0); - - // verify rows _acl_ is renamed to hbase:acl - g = new Get(AccessControlLists.ACL_TABLE_NAME.toBytes()); - r = region.get(g); - assertTrue(r != null && r.size() == 1); - assertTrue(Bytes.compareTo(VALUE, r.getValue(FAMILY, QUALIFIER)) == 0); - - // verify rows .META. is renamed to hbase:meta - g = new Get(TableName.META_TABLE_NAME.toBytes()); - r = region.get(g); - assertTrue(r != null && r.size() == 1); - assertTrue(Bytes.compareTo(VALUE, r.getValue(FAMILY, QUALIFIER)) == 0); - } finally { - region.close(); - // Delete the region - HRegionFileSystem.deleteRegionFromFileSystem(conf, fs, - FSUtils.getTableDir(rootDir, hriAcl.getTable()), hriAcl); - } - } -} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/migration/TestUpgradeTo96.java hbase-server/src/test/java/org/apache/hadoop/hbase/migration/TestUpgradeTo96.java deleted file mode 100644 index d3e93d2..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/migration/TestUpgradeTo96.java +++ /dev/null @@ -1,270 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.migration; - -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; - -import java.io.File; -import java.io.IOException; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.fs.FileStatus; -import org.apache.hadoop.fs.FileSystem; -import org.apache.hadoop.fs.FsShell; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.io.FileLink; -import org.apache.hadoop.hbase.io.HFileLink; -import org.apache.hadoop.hbase.protobuf.ProtobufUtil; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.ReplicationPeer; -import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table.State; -import org.apache.hadoop.hbase.regionserver.HRegionFileSystem; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.FSUtils; -import org.apache.hadoop.hbase.util.HFileV1Detector; -import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.hadoop.util.ToolRunner; -import org.apache.zookeeper.KeeperException; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Test; -import org.junit.experimental.categories.Category; - -import com.google.protobuf.InvalidProtocolBufferException; - -/** - * Upgrade to 0.96 involves detecting HFileV1 in existing cluster, updating namespace and - * updating znodes. This class tests for HFileV1 detection and upgrading znodes. - * Uprading namespace is tested in {@link TestNamespaceUpgrade}. 
- */ -@Category(MediumTests.class) -public class TestUpgradeTo96 { - - static final Log LOG = LogFactory.getLog(TestUpgradeTo96.class); - private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); - - /** - * underlying file system instance - */ - private static FileSystem fs; - /** - * hbase root dir - */ - private static Path hbaseRootDir; - private static ZooKeeperWatcher zkw; - /** - * replication peer znode (/hbase/replication/peers) - */ - private static String replicationPeerZnode; - /** - * znode of a table - */ - private static String tableAZnode; - private static ReplicationPeer peer1; - /** - * znode for replication peer1 (/hbase/replication/peers/1) - */ - private static String peer1Znode; - - @BeforeClass - public static void setUpBeforeClass() throws Exception { - // Start up the mini cluster on top of an 0.94 root.dir that has data from - // a 0.94 hbase run and see if we can migrate to 0.96 - TEST_UTIL.startMiniZKCluster(); - TEST_UTIL.startMiniDFSCluster(1); - - hbaseRootDir = TEST_UTIL.getDefaultRootDirPath(); - fs = FileSystem.get(TEST_UTIL.getConfiguration()); - FSUtils.setRootDir(TEST_UTIL.getConfiguration(), hbaseRootDir); - zkw = TEST_UTIL.getZooKeeperWatcher(); - - Path testdir = TEST_UTIL.getDataTestDir("TestUpgradeTo96"); - // get the untar 0.94 file structure - - set94FSLayout(testdir); - setUp94Znodes(); - } - - /** - * Lays out 0.94 file system layout using {@link TestNamespaceUpgrade} apis. - * @param testdir - * @throws IOException - * @throws Exception - */ - private static void set94FSLayout(Path testdir) throws IOException, Exception { - File untar = TestNamespaceUpgrade.untar(new File(testdir.toString())); - if (!fs.exists(hbaseRootDir.getParent())) { - // mkdir at first - fs.mkdirs(hbaseRootDir.getParent()); - } - FsShell shell = new FsShell(TEST_UTIL.getConfiguration()); - shell.run(new String[] { "-put", untar.toURI().toString(), hbaseRootDir.toString() }); - // See whats in minihdfs. - shell.run(new String[] { "-lsr", "/" }); - } - - /** - * Sets znodes used in 0.94 version. Only table and replication znodes will be upgraded to PB, - * others would be deleted. - * @throws KeeperException - */ - private static void setUp94Znodes() throws IOException, KeeperException { - // add some old znodes, which would be deleted after upgrade. - String rootRegionServerZnode = ZKUtil.joinZNode(zkw.baseZNode, "root-region-server"); - ZKUtil.createWithParents(zkw, rootRegionServerZnode); - ZKUtil.createWithParents(zkw, zkw.backupMasterAddressesZNode); - // add table znode, data of its children would be protobuffized - tableAZnode = ZKUtil.joinZNode(zkw.tableZNode, "a"); - ZKUtil.createWithParents(zkw, tableAZnode, - Bytes.toBytes(ZooKeeperProtos.Table.State.ENABLED.toString())); - // add replication znodes, data of its children would be protobuffized - String replicationZnode = ZKUtil.joinZNode(zkw.baseZNode, "replication"); - replicationPeerZnode = ZKUtil.joinZNode(replicationZnode, "peers"); - peer1Znode = ZKUtil.joinZNode(replicationPeerZnode, "1"); - peer1 = ReplicationPeer.newBuilder().setClusterkey("abc:123:/hbase").build(); - ZKUtil.createWithParents(zkw, peer1Znode, Bytes.toBytes(peer1.getClusterkey())); - } - - /** - * Tests a 0.94 filesystem for any HFileV1. 
- * @throws Exception - */ - @Test - public void testHFileV1Detector() throws Exception { - assertEquals(0, ToolRunner.run(TEST_UTIL.getConfiguration(), new HFileV1Detector(), null)); - } - - /** - * Creates a corrupt file, and run HFileV1 detector tool - * @throws Exception - */ - @Test - public void testHFileV1DetectorWithCorruptFiles() throws Exception { - // add a corrupt file. - Path tablePath = new Path(hbaseRootDir, "foo"); - FileStatus[] regionsDir = fs.listStatus(tablePath); - if (regionsDir == null) throw new IOException("No Regions found for table " + "foo"); - Path columnFamilyDir = null; - Path targetRegion = null; - for (FileStatus s : regionsDir) { - if (fs.exists(new Path(s.getPath(), HRegionFileSystem.REGION_INFO_FILE))) { - targetRegion = s.getPath(); - break; - } - } - FileStatus[] cfs = fs.listStatus(targetRegion); - for (FileStatus f : cfs) { - if (f.isDirectory()) { - columnFamilyDir = f.getPath(); - break; - } - } - LOG.debug("target columnFamilyDir: " + columnFamilyDir); - // now insert a corrupt file in the columnfamily. - Path corruptFile = new Path(columnFamilyDir, "corrupt_file"); - if (!fs.createNewFile(corruptFile)) throw new IOException("Couldn't create corrupt file: " - + corruptFile); - assertEquals(1, ToolRunner.run(TEST_UTIL.getConfiguration(), new HFileV1Detector(), null)); - // remove the corrupt file - FileSystem.get(TEST_UTIL.getConfiguration()).delete(corruptFile, false); - } - - @Test - public void testHFileLink() throws Exception { - // pass a link, and verify that correct paths are returned. - Path rootDir = FSUtils.getRootDir(TEST_UTIL.getConfiguration()); - Path aFileLink = new Path(rootDir, "table/2086db948c48/cf/table=21212abcdc33-0906db948c48"); - Path preNamespaceTablePath = new Path(rootDir, "table/21212abcdc33/cf/0906db948c48"); - Path preNamespaceArchivePath = - new Path(rootDir, ".archive/table/21212abcdc33/cf/0906db948c48"); - Path preNamespaceTempPath = new Path(rootDir, ".tmp/table/21212abcdc33/cf/0906db948c48"); - boolean preNSTablePathExists = false; - boolean preNSArchivePathExists = false; - boolean preNSTempPathExists = false; - assertTrue(HFileLink.isHFileLink(aFileLink)); - HFileLink hFileLink = new HFileLink(TEST_UTIL.getConfiguration(), aFileLink); - assertTrue(hFileLink.getArchivePath().toString().startsWith(rootDir.toString())); - - HFileV1Detector t = new HFileV1Detector(); - t.setConf(TEST_UTIL.getConfiguration()); - FileLink fileLink = t.getFileLinkWithPreNSPath(aFileLink); - //assert it has 6 paths (2 NS, 2 Pre NS, and 2 .tmp) to look. 
- assertTrue(fileLink.getLocations().length == 6); - for (Path p : fileLink.getLocations()) { - if (p.equals(preNamespaceArchivePath)) preNSArchivePathExists = true; - if (p.equals(preNamespaceTablePath)) preNSTablePathExists = true; - if (p.equals(preNamespaceTempPath)) preNSTempPathExists = true; - } - assertTrue(preNSArchivePathExists & preNSTablePathExists & preNSTempPathExists); - } - - @Test - public void testADirForHFileV1() throws Exception { - Path tablePath = new Path(hbaseRootDir, "foo"); - System.out.println("testADirForHFileV1: " + tablePath.makeQualified(fs)); - System.out.println("Passed: " + hbaseRootDir + "/foo"); - assertEquals(0, - ToolRunner.run(TEST_UTIL.getConfiguration(), new HFileV1Detector(), new String[] { "-p" - + "foo" })); - } - - @Test - public void testZnodeMigration() throws Exception { - String rootRSZnode = ZKUtil.joinZNode(zkw.baseZNode, "root-region-server"); - assertTrue(ZKUtil.checkExists(zkw, rootRSZnode) > -1); - ToolRunner.run(TEST_UTIL.getConfiguration(), new UpgradeTo96(), new String[] { "-execute" }); - assertEquals(-1, ZKUtil.checkExists(zkw, rootRSZnode)); - byte[] data = ZKUtil.getData(zkw, tableAZnode); - assertTrue(ProtobufUtil.isPBMagicPrefix(data)); - checkTableState(data, ZooKeeperProtos.Table.State.ENABLED); - // ensure replication znodes are there, and protobuffed. - data = ZKUtil.getData(zkw, peer1Znode); - assertTrue(ProtobufUtil.isPBMagicPrefix(data)); - checkReplicationPeerData(data, peer1); - } - - private void checkTableState(byte[] data, State expectedState) - throws InvalidProtocolBufferException { - ZooKeeperProtos.Table.Builder builder = ZooKeeperProtos.Table.newBuilder(); - int magicLen = ProtobufUtil.lengthOfPBMagic(); - ZooKeeperProtos.Table t = builder.mergeFrom(data, magicLen, data.length - magicLen).build(); - assertTrue(t.getState() == expectedState); - } - - private void checkReplicationPeerData(byte[] data, ReplicationPeer peer) - throws InvalidProtocolBufferException { - int magicLen = ProtobufUtil.lengthOfPBMagic(); - ZooKeeperProtos.ReplicationPeer.Builder builder = ZooKeeperProtos.ReplicationPeer.newBuilder(); - assertEquals(builder.mergeFrom(data, magicLen, data.length - magicLen).build().getClusterkey(), - peer.getClusterkey()); - - } - - @AfterClass - public static void tearDownAfterClass() throws Exception { - TEST_UTIL.shutdownMiniHBaseCluster(); - TEST_UTIL.shutdownMiniDFSCluster(); - TEST_UTIL.shutdownMiniZKCluster(); - } - -} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/monitoring/TestMemoryBoundedLogMessageBuffer.java hbase-server/src/test/java/org/apache/hadoop/hbase/monitoring/TestMemoryBoundedLogMessageBuffer.java index c55be75..f64b297 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/monitoring/TestMemoryBoundedLogMessageBuffer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/monitoring/TestMemoryBoundedLogMessageBuffer.java @@ -24,6 +24,7 @@ import static org.junit.Assert.assertTrue; import java.io.PrintWriter; import java.io.StringWriter; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -33,7 +34,7 @@ import org.junit.experimental.categories.Category; * Ensures that it uses no more memory than it's supposed to, * and that it properly deals with multibyte encodings. 
*/ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestMemoryBoundedLogMessageBuffer { private static final long TEN_KB = 10 * 1024; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/monitoring/TestTaskMonitor.java hbase-server/src/test/java/org/apache/hadoop/hbase/monitoring/TestTaskMonitor.java index ff9bd57..e54d0f6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/monitoring/TestTaskMonitor.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/monitoring/TestTaskMonitor.java @@ -22,11 +22,12 @@ import static org.junit.Assert.*; import java.util.concurrent.atomic.AtomicBoolean; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestTaskMonitor { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedure.java hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedure.java index efacd97..c424b6d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedure.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedure.java @@ -29,6 +29,7 @@ import java.util.ArrayList; import java.util.List; import java.util.concurrent.CountDownLatch; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.errorhandling.ForeignException; import org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher; @@ -39,7 +40,7 @@ import org.junit.experimental.categories.Category; /** * Demonstrate how Procedure handles single members, multiple members, and errors semantics */ -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestProcedure { ProcedureCoordinator coord; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureCoordinator.java hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureCoordinator.java index a149e09..710e631 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureCoordinator.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureCoordinator.java @@ -41,6 +41,7 @@ import java.util.List; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.errorhandling.ForeignException; import org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher; @@ -59,7 +60,7 @@ import com.google.common.collect.Lists; * This only works correctly when we do class level parallelization of tests. If we do method * level serialization this class will likely throw all kinds of errors. 
*/ -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestProcedureCoordinator { // general test constants private static final long WAKE_FREQUENCY = 1000; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureManager.java index 2e46f3f..a2c86a1 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureManager.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureManager.java @@ -26,6 +26,7 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Admin; import org.junit.AfterClass; @@ -33,7 +34,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestProcedureManager { static final Log LOG = LogFactory.getLog(TestProcedureManager.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureMember.java hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureMember.java index 454b55f..2d7a68f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureMember.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureMember.java @@ -34,6 +34,7 @@ import static org.mockito.Mockito.when; import java.io.IOException; import java.util.concurrent.ThreadPoolExecutor; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.errorhandling.ForeignException; import org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher; @@ -50,7 +51,7 @@ import org.mockito.stubbing.Answer; /** * Test the procedure member, and it's error handling mechanisms. 
*/ -@Category(SmallTests.class) +@Category({MasterTests.class, SmallTests.class}) public class TestProcedureMember { private static final long WAKE_FREQUENCY = 100; private static final long TIMEOUT = 100000; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedure.java hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedure.java index 3798ab7..211e9e6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedure.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedure.java @@ -38,6 +38,7 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.Abortable; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.errorhandling.ForeignException; import org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher; @@ -60,7 +61,7 @@ import com.google.common.collect.Lists; /** * Cluster-wide testing of a distributed three-phase commit using a 'real' zookeeper cluster */ -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestZKProcedure { private static final Log LOG = LogFactory.getLog(TestZKProcedure.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedureControllers.java hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedureControllers.java index 3cb6a5f..52d4552 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedureControllers.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedureControllers.java @@ -31,6 +31,7 @@ import java.util.concurrent.CountDownLatch; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.MasterTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.errorhandling.ForeignException; import org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher; @@ -52,7 +53,7 @@ import com.google.common.collect.Lists; /** * Test zookeeper-based, procedure controllers */ -@Category(MediumTests.class) +@Category({MasterTests.class, MediumTests.class}) public class TestZKProcedureControllers { static final Log LOG = LogFactory.getLog(TestZKProcedureControllers.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestProtobufUtil.java hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestProtobufUtil.java index ddbbb74..b2d8b38 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestProtobufUtil.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestProtobufUtil.java @@ -22,9 +22,10 @@ import static org.junit.Assert.assertEquals; import java.io.IOException; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Append; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Get; @@ -47,7 +48,7 @@ import com.google.protobuf.ByteString; /** * Class to test 
ProtobufUtil. */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestProtobufUtil { @Test public void testException() throws IOException { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestReplicationProtobuf.java hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestReplicationProtobuf.java index 987941a..057a35d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestReplicationProtobuf.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/protobuf/TestReplicationProtobuf.java @@ -26,13 +26,14 @@ import java.util.List; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestReplicationProtobuf { /** * Little test to check we can basically convert list of a list of KVs into a CellScanner diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaAdmin.java hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaAdmin.java new file mode 100644 index 0000000..18dd5ae --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaAdmin.java @@ -0,0 +1,218 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.hbase.quotas; + +import java.util.concurrent.TimeUnit; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; +import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManagerTestHelper; +import org.apache.hadoop.hbase.util.IncrementingEnvironmentEdge; +import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread; + +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.fail; + +/** + * minicluster tests that validate that quota entries are properly set in the quota table + */ +@Category({ClientTests.class, MediumTests.class}) +public class TestQuotaAdmin { + final Log LOG = LogFactory.getLog(getClass()); + + private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); + + @BeforeClass + public static void setUpBeforeClass() throws Exception { + TEST_UTIL.getConfiguration().setBoolean(QuotaUtil.QUOTA_CONF_KEY, true); + TEST_UTIL.getConfiguration().setInt(QuotaCache.REFRESH_CONF_KEY, 2000); + TEST_UTIL.getConfiguration().setInt("hbase.hstore.compactionThreshold", 10); + TEST_UTIL.getConfiguration().setInt("hbase.regionserver.msginterval", 100); + TEST_UTIL.getConfiguration().setInt("hbase.client.pause", 250); + TEST_UTIL.getConfiguration().setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 6); + TEST_UTIL.getConfiguration().setBoolean("hbase.master.enabletable.roundrobin", true); + TEST_UTIL.startMiniCluster(1); + TEST_UTIL.waitTableAvailable(QuotaTableUtil.QUOTA_TABLE_NAME); + } + + @AfterClass + public static void tearDownAfterClass() throws Exception { + TEST_UTIL.shutdownMiniCluster(); + } + + @Test + public void testSimpleScan() throws Exception { + Admin admin = TEST_UTIL.getHBaseAdmin(); + String userName = User.getCurrent().getShortName(); + + admin.setQuota(QuotaSettingsFactory + .throttleUser(userName, ThrottleType.REQUEST_NUMBER, 6, TimeUnit.MINUTES)); + admin.setQuota(QuotaSettingsFactory.bypassGlobals(userName, true)); + + QuotaRetriever scanner = QuotaRetriever.open(TEST_UTIL.getConfiguration()); + try { + int countThrottle = 0; + int countGlobalBypass = 0; + for (QuotaSettings settings: scanner) { + LOG.debug(settings); + switch (settings.getQuotaType()) { + case THROTTLE: + ThrottleSettings throttle = (ThrottleSettings)settings; + assertEquals(userName, throttle.getUserName()); + assertEquals(null, throttle.getTableName()); + assertEquals(null, throttle.getNamespace()); + assertEquals(6, throttle.getSoftLimit()); + assertEquals(TimeUnit.MINUTES, throttle.getTimeUnit()); + countThrottle++; + break; + case GLOBAL_BYPASS: + countGlobalBypass++; + break; + default: + fail("unexpected settings type: " + 
settings.getQuotaType()); + } + } + assertEquals(1, countThrottle); + assertEquals(1, countGlobalBypass); + } finally { + scanner.close(); + } + + admin.setQuota(QuotaSettingsFactory.unthrottleUser(userName)); + assertNumResults(1, null); + admin.setQuota(QuotaSettingsFactory.bypassGlobals(userName, false)); + assertNumResults(0, null); + } + + @Test + public void testQuotaRetrieverFilter() throws Exception { + Admin admin = TEST_UTIL.getHBaseAdmin(); + TableName[] tables = new TableName[] { + TableName.valueOf("T0"), TableName.valueOf("T01"), TableName.valueOf("NS0:T2"), + }; + String[] namespaces = new String[] { "NS0", "NS01", "NS2" }; + String[] users = new String[] { "User0", "User01", "User2" }; + + for (String user: users) { + admin.setQuota(QuotaSettingsFactory + .throttleUser(user, ThrottleType.REQUEST_NUMBER, 1, TimeUnit.MINUTES)); + + for (TableName table: tables) { + admin.setQuota(QuotaSettingsFactory + .throttleUser(user, table, ThrottleType.REQUEST_NUMBER, 2, TimeUnit.MINUTES)); + } + + for (String ns: namespaces) { + admin.setQuota(QuotaSettingsFactory + .throttleUser(user, ns, ThrottleType.REQUEST_NUMBER, 3, TimeUnit.MINUTES)); + } + } + assertNumResults(21, null); + + for (TableName table: tables) { + admin.setQuota(QuotaSettingsFactory + .throttleTable(table, ThrottleType.REQUEST_NUMBER, 4, TimeUnit.MINUTES)); + } + assertNumResults(24, null); + + for (String ns: namespaces) { + admin.setQuota(QuotaSettingsFactory + .throttleNamespace(ns, ThrottleType.REQUEST_NUMBER, 5, TimeUnit.MINUTES)); + } + assertNumResults(27, null); + + assertNumResults(7, new QuotaFilter().setUserFilter("User0")); + assertNumResults(0, new QuotaFilter().setUserFilter("User")); + assertNumResults(21, new QuotaFilter().setUserFilter("User.*")); + assertNumResults(3, new QuotaFilter().setUserFilter("User.*").setTableFilter("T0")); + assertNumResults(3, new QuotaFilter().setUserFilter("User.*").setTableFilter("NS.*")); + assertNumResults(0, new QuotaFilter().setUserFilter("User.*").setTableFilter("T")); + assertNumResults(6, new QuotaFilter().setUserFilter("User.*").setTableFilter("T.*")); + assertNumResults(3, new QuotaFilter().setUserFilter("User.*").setNamespaceFilter("NS0")); + assertNumResults(0, new QuotaFilter().setUserFilter("User.*").setNamespaceFilter("NS")); + assertNumResults(9, new QuotaFilter().setUserFilter("User.*").setNamespaceFilter("NS.*")); + assertNumResults(6, new QuotaFilter().setUserFilter("User.*") + .setTableFilter("T0").setNamespaceFilter("NS0")); + assertNumResults(1, new QuotaFilter().setTableFilter("T0")); + assertNumResults(0, new QuotaFilter().setTableFilter("T")); + assertNumResults(2, new QuotaFilter().setTableFilter("T.*")); + assertNumResults(3, new QuotaFilter().setTableFilter(".*T.*")); + assertNumResults(1, new QuotaFilter().setNamespaceFilter("NS0")); + assertNumResults(0, new QuotaFilter().setNamespaceFilter("NS")); + assertNumResults(3, new QuotaFilter().setNamespaceFilter("NS.*")); + + for (String user: users) { + admin.setQuota(QuotaSettingsFactory.unthrottleUser(user)); + for (TableName table: tables) { + admin.setQuota(QuotaSettingsFactory.unthrottleUser(user, table)); + } + for (String ns: namespaces) { + admin.setQuota(QuotaSettingsFactory.unthrottleUser(user, ns)); + } + } + assertNumResults(6, null); + + for (TableName table: tables) { + admin.setQuota(QuotaSettingsFactory.unthrottleTable(table)); + } + assertNumResults(3, null); + + for (String ns: namespaces) { + admin.setQuota(QuotaSettingsFactory.unthrottleNamespace(ns)); + } + assertNumResults(0, 
null); + } + + private void assertNumResults(int expected, final QuotaFilter filter) throws Exception { + assertEquals(expected, countResults(filter)); + } + + private int countResults(final QuotaFilter filter) throws Exception { + QuotaRetriever scanner = QuotaRetriever.open(TEST_UTIL.getConfiguration(), filter); + try { + int count = 0; + for (QuotaSettings settings: scanner) { + LOG.debug(settings); + count++; + } + return count; + } finally { + scanner.close(); + } + } +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaState.java hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaState.java new file mode 100644 index 0000000..c1c842b --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaState.java @@ -0,0 +1,236 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import java.io.IOException; +import java.util.concurrent.TimeUnit; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.HBaseAdmin; +import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle; +import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; + +import org.junit.Assert; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.fail; + +@Category({RegionServerTests.class, SmallTests.class}) +public class TestQuotaState { + private static final TableName UNKNOWN_TABLE_NAME = TableName.valueOf("unknownTable"); + + @Test(timeout=60000) + public void testQuotaStateBypass() { + QuotaState quotaInfo = new QuotaState(); + assertTrue(quotaInfo.isBypass()); + assertNoopLimiter(quotaInfo.getGlobalLimiter()); + + UserQuotaState userQuotaState = new UserQuotaState(); + assertTrue(userQuotaState.isBypass()); + assertNoopLimiter(userQuotaState.getTableLimiter(UNKNOWN_TABLE_NAME)); + } + + @Test(timeout=60000) + public void testSimpleQuotaStateOperation() { + final TableName table = TableName.valueOf("testSimpleQuotaStateOperationTable"); + final int NUM_GLOBAL_THROTTLE = 3; + final int NUM_TABLE_THROTTLE = 2; + + UserQuotaState quotaInfo = new UserQuotaState(); + assertTrue(quotaInfo.isBypass()); + + // Set global quota + 
quotaInfo.setQuotas(buildReqNumThrottle(NUM_GLOBAL_THROTTLE)); + assertFalse(quotaInfo.isBypass()); + + // Set table quota + quotaInfo.setQuotas(table, buildReqNumThrottle(NUM_TABLE_THROTTLE)); + assertFalse(quotaInfo.isBypass()); + assertTrue(quotaInfo.getGlobalLimiter() == quotaInfo.getTableLimiter(UNKNOWN_TABLE_NAME)); + assertThrottleException(quotaInfo.getTableLimiter(UNKNOWN_TABLE_NAME), NUM_GLOBAL_THROTTLE); + assertThrottleException(quotaInfo.getTableLimiter(table), NUM_TABLE_THROTTLE); + } + + @Test(timeout=60000) + public void testQuotaStateUpdateBypassThrottle() { + final long LAST_UPDATE = 10; + + UserQuotaState quotaInfo = new UserQuotaState(); + assertEquals(0, quotaInfo.getLastUpdate()); + assertTrue(quotaInfo.isBypass()); + + UserQuotaState otherQuotaState = new UserQuotaState(LAST_UPDATE); + assertEquals(LAST_UPDATE, otherQuotaState.getLastUpdate()); + assertTrue(otherQuotaState.isBypass()); + + quotaInfo.update(otherQuotaState); + assertEquals(LAST_UPDATE, quotaInfo.getLastUpdate()); + assertTrue(quotaInfo.isBypass()); + assertTrue(quotaInfo.getGlobalLimiter() == quotaInfo.getTableLimiter(UNKNOWN_TABLE_NAME)); + assertNoopLimiter(quotaInfo.getTableLimiter(UNKNOWN_TABLE_NAME)); + } + + @Test(timeout=60000) + public void testQuotaStateUpdateGlobalThrottle() { + final int NUM_GLOBAL_THROTTLE_1 = 3; + final int NUM_GLOBAL_THROTTLE_2 = 11; + final long LAST_UPDATE_1 = 10; + final long LAST_UPDATE_2 = 20; + final long LAST_UPDATE_3 = 30; + + QuotaState quotaInfo = new QuotaState(); + assertEquals(0, quotaInfo.getLastUpdate()); + assertTrue(quotaInfo.isBypass()); + + // Add global throttle + QuotaState otherQuotaState = new QuotaState(LAST_UPDATE_1); + otherQuotaState.setQuotas(buildReqNumThrottle(NUM_GLOBAL_THROTTLE_1)); + assertEquals(LAST_UPDATE_1, otherQuotaState.getLastUpdate()); + assertFalse(otherQuotaState.isBypass()); + + quotaInfo.update(otherQuotaState); + assertEquals(LAST_UPDATE_1, quotaInfo.getLastUpdate()); + assertFalse(quotaInfo.isBypass()); + assertThrottleException(quotaInfo.getGlobalLimiter(), NUM_GLOBAL_THROTTLE_1); + + // Update global Throttle + otherQuotaState = new QuotaState(LAST_UPDATE_2); + otherQuotaState.setQuotas(buildReqNumThrottle(NUM_GLOBAL_THROTTLE_2)); + assertEquals(LAST_UPDATE_2, otherQuotaState.getLastUpdate()); + assertFalse(otherQuotaState.isBypass()); + + quotaInfo.update(otherQuotaState); + assertEquals(LAST_UPDATE_2, quotaInfo.getLastUpdate()); + assertFalse(quotaInfo.isBypass()); + assertThrottleException(quotaInfo.getGlobalLimiter(), + NUM_GLOBAL_THROTTLE_2 - NUM_GLOBAL_THROTTLE_1); + + // Remove global throttle + otherQuotaState = new QuotaState(LAST_UPDATE_3); + assertEquals(LAST_UPDATE_3, otherQuotaState.getLastUpdate()); + assertTrue(otherQuotaState.isBypass()); + + quotaInfo.update(otherQuotaState); + assertEquals(LAST_UPDATE_3, quotaInfo.getLastUpdate()); + assertTrue(quotaInfo.isBypass()); + assertNoopLimiter(quotaInfo.getGlobalLimiter()); + } + + @Test(timeout=60000) + public void testQuotaStateUpdateTableThrottle() { + final TableName TABLE_A = TableName.valueOf("TableA"); + final TableName TABLE_B = TableName.valueOf("TableB"); + final TableName TABLE_C = TableName.valueOf("TableC"); + final int TABLE_A_THROTTLE_1 = 3; + final int TABLE_A_THROTTLE_2 = 11; + final int TABLE_B_THROTTLE = 4; + final int TABLE_C_THROTTLE = 5; + final long LAST_UPDATE_1 = 10; + final long LAST_UPDATE_2 = 20; + final long LAST_UPDATE_3 = 30; + + UserQuotaState quotaInfo = new UserQuotaState(); + assertEquals(0, quotaInfo.getLastUpdate()); + 
assertTrue(quotaInfo.isBypass()); + + // Add A B table limiters + UserQuotaState otherQuotaState = new UserQuotaState(LAST_UPDATE_1); + otherQuotaState.setQuotas(TABLE_A, buildReqNumThrottle(TABLE_A_THROTTLE_1)); + otherQuotaState.setQuotas(TABLE_B, buildReqNumThrottle(TABLE_B_THROTTLE)); + assertEquals(LAST_UPDATE_1, otherQuotaState.getLastUpdate()); + assertFalse(otherQuotaState.isBypass()); + + quotaInfo.update(otherQuotaState); + assertEquals(LAST_UPDATE_1, quotaInfo.getLastUpdate()); + assertFalse(quotaInfo.isBypass()); + assertThrottleException(quotaInfo.getTableLimiter(TABLE_A), TABLE_A_THROTTLE_1); + assertThrottleException(quotaInfo.getTableLimiter(TABLE_B), TABLE_B_THROTTLE); + assertNoopLimiter(quotaInfo.getTableLimiter(TABLE_C)); + + // Add C, Remove B, Update A table limiters + otherQuotaState = new UserQuotaState(LAST_UPDATE_2); + otherQuotaState.setQuotas(TABLE_A, buildReqNumThrottle(TABLE_A_THROTTLE_2)); + otherQuotaState.setQuotas(TABLE_C, buildReqNumThrottle(TABLE_C_THROTTLE)); + assertEquals(LAST_UPDATE_2, otherQuotaState.getLastUpdate()); + assertFalse(otherQuotaState.isBypass()); + + quotaInfo.update(otherQuotaState); + assertEquals(LAST_UPDATE_2, quotaInfo.getLastUpdate()); + assertFalse(quotaInfo.isBypass()); + assertThrottleException(quotaInfo.getTableLimiter(TABLE_A), + TABLE_A_THROTTLE_2 - TABLE_A_THROTTLE_1); + assertThrottleException(quotaInfo.getTableLimiter(TABLE_C), TABLE_C_THROTTLE); + assertNoopLimiter(quotaInfo.getTableLimiter(TABLE_B)); + + // Remove table limiters + otherQuotaState = new UserQuotaState(LAST_UPDATE_3); + assertEquals(LAST_UPDATE_3, otherQuotaState.getLastUpdate()); + assertTrue(otherQuotaState.isBypass()); + + quotaInfo.update(otherQuotaState); + assertEquals(LAST_UPDATE_3, quotaInfo.getLastUpdate()); + assertTrue(quotaInfo.isBypass()); + assertNoopLimiter(quotaInfo.getTableLimiter(UNKNOWN_TABLE_NAME)); + } + + private Quotas buildReqNumThrottle(final long limit) { + return Quotas.newBuilder() + .setThrottle(Throttle.newBuilder() + .setReqNum(ProtobufUtil.toTimedQuota(limit, TimeUnit.MINUTES, QuotaScope.MACHINE)) + .build()) + .build(); + } + + private void assertThrottleException(final QuotaLimiter limiter, final int availReqs) { + assertNoThrottleException(limiter, availReqs); + try { + limiter.checkQuota(1, 1); + fail("Should have thrown ThrottlingException"); + } catch (ThrottlingException e) { + // expected + } + } + + private void assertNoThrottleException(final QuotaLimiter limiter, final int availReqs) { + for (int i = 0; i < availReqs; ++i) { + try { + limiter.checkQuota(1, 1); + } catch (ThrottlingException e) { + fail("Unexpected ThrottlingException after " + i + " requests. limit=" + availReqs); + } + limiter.grabQuota(1, 1); + } + } + + private void assertNoopLimiter(final QuotaLimiter limiter) { + assertTrue(limiter == NoopQuotaLimiter.get()); + assertNoThrottleException(limiter, 100); + } +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaTableUtil.java hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaTableUtil.java new file mode 100644 index 0000000..34239c0 --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaTableUtil.java @@ -0,0 +1,185 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import static org.junit.Assert.assertEquals; + +import java.io.IOException; +import java.util.concurrent.TimeUnit; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; +import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Quotas; +import org.apache.hadoop.hbase.protobuf.generated.QuotaProtos.Throttle; +import org.apache.hadoop.hbase.testclassification.MasterTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.junit.After; +import org.junit.AfterClass; +import org.junit.Before; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +/** + * Test the quota table helpers (e.g. CRUD operations) + */ +@Category({MasterTests.class, MediumTests.class}) +public class TestQuotaTableUtil { + final Log LOG = LogFactory.getLog(getClass()); + + private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); + private Connection connection; + + @BeforeClass + public static void setUpBeforeClass() throws Exception { + TEST_UTIL.getConfiguration().setBoolean(QuotaUtil.QUOTA_CONF_KEY, true); + TEST_UTIL.getConfiguration().setInt(QuotaCache.REFRESH_CONF_KEY, 2000); + TEST_UTIL.getConfiguration().setInt("hbase.hstore.compactionThreshold", 10); + TEST_UTIL.getConfiguration().setInt("hbase.regionserver.msginterval", 100); + TEST_UTIL.getConfiguration().setInt("hbase.client.pause", 250); + TEST_UTIL.getConfiguration().setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 6); + TEST_UTIL.getConfiguration().setBoolean("hbase.master.enabletable.roundrobin", true); + TEST_UTIL.startMiniCluster(1); + TEST_UTIL.waitTableAvailable(QuotaTableUtil.QUOTA_TABLE_NAME); + } + + @AfterClass + public static void tearDownAfterClass() throws Exception { + TEST_UTIL.shutdownMiniCluster(); + } + + @Before + public void before() throws IOException { + this.connection = ConnectionFactory.createConnection(TEST_UTIL.getConfiguration()); + } + + @After + public void after() throws IOException { + this.connection.close(); + } + + @Test + public void testTableQuotaUtil() throws Exception { + final TableName table = TableName.valueOf("testTableQuotaUtilTable"); + + Quotas quota = Quotas.newBuilder() + .setThrottle(Throttle.newBuilder() + .setReqNum(ProtobufUtil.toTimedQuota(1000, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .setWriteNum(ProtobufUtil.toTimedQuota(600, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .setReadSize(ProtobufUtil.toTimedQuota(8192, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .build()) + .build(); + + // Add table quota and verify it + 
QuotaUtil.addTableQuota(this.connection, table, quota); + Quotas resQuota = QuotaUtil.getTableQuota(this.connection, table); + assertEquals(quota, resQuota); + + // Remove table quota and verify it + QuotaUtil.deleteTableQuota(this.connection, table); + resQuota = QuotaUtil.getTableQuota(this.connection, table); + assertEquals(null, resQuota); + } + + @Test + public void testNamespaceQuotaUtil() throws Exception { + final String namespace = "testNamespaceQuotaUtilNS"; + + Quotas quota = Quotas.newBuilder() + .setThrottle(Throttle.newBuilder() + .setReqNum(ProtobufUtil.toTimedQuota(1000, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .setWriteNum(ProtobufUtil.toTimedQuota(600, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .setReadSize(ProtobufUtil.toTimedQuota(8192, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .build()) + .build(); + + // Add namespace quota and verify it + QuotaUtil.addNamespaceQuota(this.connection, namespace, quota); + Quotas resQuota = QuotaUtil.getNamespaceQuota(this.connection, namespace); + assertEquals(quota, resQuota); + + // Remove namespace quota and verify it + QuotaUtil.deleteNamespaceQuota(this.connection, namespace); + resQuota = QuotaUtil.getNamespaceQuota(this.connection, namespace); + assertEquals(null, resQuota); + } + + @Test + public void testUserQuotaUtil() throws Exception { + final TableName table = TableName.valueOf("testUserQuotaUtilTable"); + final String namespace = "testNS"; + final String user = "testUser"; + + Quotas quotaNamespace = Quotas.newBuilder() + .setThrottle(Throttle.newBuilder() + .setReqNum(ProtobufUtil.toTimedQuota(50000, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .build()) + .build(); + Quotas quotaTable = Quotas.newBuilder() + .setThrottle(Throttle.newBuilder() + .setReqNum(ProtobufUtil.toTimedQuota(1000, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .setWriteNum(ProtobufUtil.toTimedQuota(600, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .setReadSize(ProtobufUtil.toTimedQuota(10000, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .build()) + .build(); + Quotas quota = Quotas.newBuilder() + .setThrottle(Throttle.newBuilder() + .setReqSize(ProtobufUtil.toTimedQuota(8192, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .setWriteSize(ProtobufUtil.toTimedQuota(4096, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .setReadNum(ProtobufUtil.toTimedQuota(1000, TimeUnit.SECONDS, QuotaScope.MACHINE)) + .build()) + .build(); + + // Add user global quota + QuotaUtil.addUserQuota(this.connection, user, quota); + Quotas resQuota = QuotaUtil.getUserQuota(this.connection, user); + assertEquals(quota, resQuota); + + // Add user quota for table + QuotaUtil.addUserQuota(this.connection, user, table, quotaTable); + Quotas resQuotaTable = QuotaUtil.getUserQuota(this.connection, user, table); + assertEquals(quotaTable, resQuotaTable); + + // Add user quota for namespace + QuotaUtil.addUserQuota(this.connection, user, namespace, quotaNamespace); + Quotas resQuotaNS = QuotaUtil.getUserQuota(this.connection, user, namespace); + assertEquals(quotaNamespace, resQuotaNS); + + // Delete user global quota + QuotaUtil.deleteUserQuota(this.connection, user); + resQuota = QuotaUtil.getUserQuota(this.connection, user); + assertEquals(null, resQuota); + + // Delete user quota for table + QuotaUtil.deleteUserQuota(this.connection, user, table); + resQuotaTable = QuotaUtil.getUserQuota(this.connection, user, table); + assertEquals(null, resQuotaTable); + + // Delete user quota for namespace + QuotaUtil.deleteUserQuota(this.connection, user, namespace); + resQuotaNS = QuotaUtil.getUserQuota(this.connection, user,
namespace); + assertEquals(null, resQuotaNS); + } +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaThrottle.java hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaThrottle.java new file mode 100644 index 0000000..0901d2f --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestQuotaThrottle.java @@ -0,0 +1,423 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import java.util.concurrent.TimeUnit; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException; +import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManagerTestHelper; +import org.apache.hadoop.hbase.util.IncrementingEnvironmentEdge; +import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread; +import org.apache.hadoop.security.UserGroupInformation; + +import org.junit.After; +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +import static org.junit.Assert.assertEquals; + +@Category({RegionServerTests.class, MediumTests.class}) +public class TestQuotaThrottle { + final Log LOG = LogFactory.getLog(getClass()); + + private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); + + private final static byte[] FAMILY = Bytes.toBytes("cf"); + private final static byte[] QUALIFIER = Bytes.toBytes("q"); + + private final static TableName[] TABLE_NAMES = new TableName[] { + TableName.valueOf("TestQuotaAdmin0"), + TableName.valueOf("TestQuotaAdmin1"), + TableName.valueOf("TestQuotaAdmin2") + }; + + private static HTable[] tables; + + @BeforeClass + public static void setUpBeforeClass() throws Exception { + TEST_UTIL.getConfiguration().setBoolean(QuotaUtil.QUOTA_CONF_KEY, true); + TEST_UTIL.getConfiguration().setInt("hbase.hstore.compactionThreshold", 10); + TEST_UTIL.getConfiguration().setInt("hbase.regionserver.msginterval", 100); + 
TEST_UTIL.getConfiguration().setInt("hbase.client.pause", 250); + TEST_UTIL.getConfiguration().setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 6); + TEST_UTIL.getConfiguration().setBoolean("hbase.master.enabletable.roundrobin", true); + TEST_UTIL.startMiniCluster(1); + TEST_UTIL.waitTableAvailable(QuotaTableUtil.QUOTA_TABLE_NAME); + QuotaCache.TEST_FORCE_REFRESH = true; + + tables = new HTable[TABLE_NAMES.length]; + for (int i = 0; i < TABLE_NAMES.length; ++i) { + tables[i] = TEST_UTIL.createTable(TABLE_NAMES[i], FAMILY); + } + } + + @AfterClass + public static void tearDownAfterClass() throws Exception { + for (int i = 0; i < tables.length; ++i) { + if (tables[i] != null) { + tables[i].close(); + TEST_UTIL.deleteTable(TABLE_NAMES[i]); + } + } + + TEST_UTIL.shutdownMiniCluster(); + } + + @After + public void tearDown() throws Exception { + for (RegionServerThread rst: TEST_UTIL.getMiniHBaseCluster().getRegionServerThreads()) { + RegionServerQuotaManager quotaManager = rst.getRegionServer().getRegionServerQuotaManager(); + QuotaCache quotaCache = quotaManager.getQuotaCache(); + quotaCache.getNamespaceQuotaCache().clear(); + quotaCache.getTableQuotaCache().clear(); + quotaCache.getUserQuotaCache().clear(); + } + } + + @Test(timeout=60000) + public void testUserGlobalThrottle() throws Exception { + final Admin admin = TEST_UTIL.getHBaseAdmin(); + final String userName = User.getCurrent().getShortName(); + + // Add 6req/min limit + admin.setQuota(QuotaSettingsFactory + .throttleUser(userName, ThrottleType.REQUEST_NUMBER, 6, TimeUnit.MINUTES)); + triggerUserCacheRefresh(false, TABLE_NAMES); + + // should execute at max 6 requests + assertEquals(6, doPuts(100, tables)); + + // wait a minute and you should get other 6 requests executed + waitMinuteQuota(); + assertEquals(6, doPuts(100, tables)); + + // Remove all the limits + admin.setQuota(QuotaSettingsFactory.unthrottleUser(userName)); + triggerUserCacheRefresh(true, TABLE_NAMES); + assertEquals(60, doPuts(60, tables)); + assertEquals(60, doGets(60, tables)); + } + + @Test(timeout=60000) + public void testUserTableThrottle() throws Exception { + final Admin admin = TEST_UTIL.getHBaseAdmin(); + final String userName = User.getCurrent().getShortName(); + + // Add 6req/min limit + admin.setQuota(QuotaSettingsFactory + .throttleUser(userName, TABLE_NAMES[0], ThrottleType.REQUEST_NUMBER, 6, TimeUnit.MINUTES)); + triggerUserCacheRefresh(false, TABLE_NAMES[0]); + + // should execute at max 6 requests on tables[0] and have no limit on tables[1] + assertEquals(6, doPuts(100, tables[0])); + assertEquals(30, doPuts(30, tables[1])); + + // wait a minute and you should get other 6 requests executed + waitMinuteQuota(); + assertEquals(6, doPuts(100, tables[0])); + + // Remove all the limits + admin.setQuota(QuotaSettingsFactory.unthrottleUser(userName, TABLE_NAMES[0])); + triggerUserCacheRefresh(true, TABLE_NAMES); + assertEquals(60, doPuts(60, tables)); + assertEquals(60, doGets(60, tables)); + } + + @Test(timeout=60000) + public void testUserNamespaceThrottle() throws Exception { + final Admin admin = TEST_UTIL.getHBaseAdmin(); + final String userName = User.getCurrent().getShortName(); + final String NAMESPACE = "default"; + + // Add 6req/min limit + admin.setQuota(QuotaSettingsFactory + .throttleUser(userName, NAMESPACE, ThrottleType.REQUEST_NUMBER, 6, TimeUnit.MINUTES)); + triggerUserCacheRefresh(false, TABLE_NAMES[0]); + + // should execute at max 6 requests; the limit is shared by all tables in the namespace + assertEquals(6, doPuts(100, tables[0])); + + // 
wait a minute and you should get other 6 requests executed + waitMinuteQuota(); + assertEquals(6, doPuts(100, tables[1])); + + // Remove all the limits + admin.setQuota(QuotaSettingsFactory.unthrottleUser(userName, NAMESPACE)); + triggerUserCacheRefresh(true, TABLE_NAMES); + assertEquals(60, doPuts(60, tables)); + assertEquals(60, doGets(60, tables)); + } + + @Test(timeout=60000) + public void testTableGlobalThrottle() throws Exception { + final Admin admin = TEST_UTIL.getHBaseAdmin(); + + // Add 6req/min limit + admin.setQuota(QuotaSettingsFactory + .throttleTable(TABLE_NAMES[0], ThrottleType.REQUEST_NUMBER, 6, TimeUnit.MINUTES)); + triggerTableCacheRefresh(false, TABLE_NAMES[0]); + + // should execute at max 6 requests + assertEquals(6, doPuts(100, tables[0])); + // should have no limits + assertEquals(30, doPuts(30, tables[1])); + + // wait a minute and you should get other 6 requests executed + waitMinuteQuota(); + assertEquals(6, doPuts(100, tables[0])); + + // Remove all the limits + admin.setQuota(QuotaSettingsFactory.unthrottleTable(TABLE_NAMES[0])); + triggerTableCacheRefresh(true, TABLE_NAMES[0]); + assertEquals(80, doGets(80, tables[0], tables[1])); + } + + @Test(timeout=60000) + public void testNamespaceGlobalThrottle() throws Exception { + final Admin admin = TEST_UTIL.getHBaseAdmin(); + final String NAMESPACE = "default"; + + // Add 6req/min limit + admin.setQuota(QuotaSettingsFactory + .throttleNamespace(NAMESPACE, ThrottleType.REQUEST_NUMBER, 6, TimeUnit.MINUTES)); + triggerNamespaceCacheRefresh(false, TABLE_NAMES[0]); + + // should execute at max 6 requests + assertEquals(6, doPuts(100, tables[0])); + + // wait a minute and you should get other 6 requests executed + waitMinuteQuota(); + assertEquals(6, doPuts(100, tables[1])); + + admin.setQuota(QuotaSettingsFactory.unthrottleNamespace(NAMESPACE)); + triggerNamespaceCacheRefresh(true, TABLE_NAMES[0]); + assertEquals(40, doPuts(40, tables[0])); + } + + @Test(timeout=60000) + public void testUserAndTableThrottle() throws Exception { + final Admin admin = TEST_UTIL.getHBaseAdmin(); + final String userName = User.getCurrent().getShortName(); + + // Add 6req/min limit for the user on tables[0] + admin.setQuota(QuotaSettingsFactory + .throttleUser(userName, TABLE_NAMES[0], ThrottleType.REQUEST_NUMBER, 6, TimeUnit.MINUTES)); + triggerUserCacheRefresh(false, TABLE_NAMES[0]); + // Add 12req/min limit for the user + admin.setQuota(QuotaSettingsFactory + .throttleUser(userName, ThrottleType.REQUEST_NUMBER, 12, TimeUnit.MINUTES)); + triggerUserCacheRefresh(false, TABLE_NAMES[1], TABLE_NAMES[2]); + // Add 8req/min limit for the tables[1] + admin.setQuota(QuotaSettingsFactory + .throttleTable(TABLE_NAMES[1], ThrottleType.REQUEST_NUMBER, 8, TimeUnit.MINUTES)); + triggerTableCacheRefresh(false, TABLE_NAMES[1]); + // Add a lower table level throttle on tables[0] + admin.setQuota(QuotaSettingsFactory + .throttleTable(TABLE_NAMES[0], ThrottleType.REQUEST_NUMBER, 3, TimeUnit.MINUTES)); + triggerTableCacheRefresh(false, TABLE_NAMES[0]); + + // should execute at max 12 requests + assertEquals(12, doGets(100, tables[2])); + + // should execute at max 8 requests + waitMinuteQuota(); + assertEquals(8, doGets(100, tables[1])); + + // should execute at max 3 requests + waitMinuteQuota(); + assertEquals(3, doPuts(100, tables[0])); + + // Remove all the throttling rules + admin.setQuota(QuotaSettingsFactory.unthrottleUser(userName, TABLE_NAMES[0])); + admin.setQuota(QuotaSettingsFactory.unthrottleUser(userName)); + triggerUserCacheRefresh(true, 
TABLE_NAMES[0], TABLE_NAMES[1]); + + admin.setQuota(QuotaSettingsFactory.unthrottleTable(TABLE_NAMES[1])); + triggerTableCacheRefresh(true, TABLE_NAMES[1]); + waitMinuteQuota(); + assertEquals(40, doGets(40, tables[1])); + + admin.setQuota(QuotaSettingsFactory.unthrottleTable(TABLE_NAMES[0])); + triggerTableCacheRefresh(true, TABLE_NAMES[0]); + waitMinuteQuota(); + assertEquals(40, doGets(40, tables[0])); + } + + @Test(timeout=60000) + public void testUserGlobalBypassThrottle() throws Exception { + final Admin admin = TEST_UTIL.getHBaseAdmin(); + final String userName = User.getCurrent().getShortName(); + final String NAMESPACE = "default"; + + // Add 6req/min limit for tables[0] + admin.setQuota(QuotaSettingsFactory + .throttleTable(TABLE_NAMES[0], ThrottleType.REQUEST_NUMBER, 6, TimeUnit.MINUTES)); + triggerTableCacheRefresh(false, TABLE_NAMES[0]); + // Add 13req/min limit for the default namespace + admin.setQuota(QuotaSettingsFactory + .throttleNamespace(NAMESPACE, ThrottleType.REQUEST_NUMBER, 13, TimeUnit.MINUTES)); + triggerNamespaceCacheRefresh(false, TABLE_NAMES[1]); + + // should execute at max 6 requests on table[0] and (13 - 6) on table[1] + assertEquals(6, doPuts(100, tables[0])); + assertEquals(7, doGets(100, tables[1])); + waitMinuteQuota(); + + // Set the global bypass for the user + admin.setQuota(QuotaSettingsFactory.bypassGlobals(userName, true)); + admin.setQuota(QuotaSettingsFactory + .throttleUser(userName, TABLE_NAMES[2], ThrottleType.REQUEST_NUMBER, 6, TimeUnit.MINUTES)); + triggerUserCacheRefresh(false, TABLE_NAMES[2]); + assertEquals(30, doGets(30, tables[0])); + assertEquals(30, doGets(30, tables[1])); + waitMinuteQuota(); + + // Remove the global bypass + // should execute at max 6 requests on table[0] and (13 - 6) on table[1] + admin.setQuota(QuotaSettingsFactory.bypassGlobals(userName, false)); + admin.setQuota(QuotaSettingsFactory.unthrottleUser(userName, TABLE_NAMES[2])); + triggerUserCacheRefresh(true, TABLE_NAMES[2]); + assertEquals(6, doPuts(100, tables[0])); + assertEquals(7, doGets(100, tables[1])); + + // unset throttle + admin.setQuota(QuotaSettingsFactory.unthrottleTable(TABLE_NAMES[0])); + admin.setQuota(QuotaSettingsFactory.unthrottleNamespace(NAMESPACE)); + waitMinuteQuota(); + triggerTableCacheRefresh(true, TABLE_NAMES[0]); + triggerNamespaceCacheRefresh(true, TABLE_NAMES[1]); + assertEquals(30, doGets(30, tables[0])); + assertEquals(30, doGets(30, tables[1])); + } + + private int doPuts(int maxOps, final HTable... tables) throws Exception { + int count = 0; + try { + while (count < maxOps) { + Put put = new Put(Bytes.toBytes("row-" + count)); + put.add(FAMILY, QUALIFIER, Bytes.toBytes("data-" + count)); + for (final HTable table: tables) { + table.put(put); + } + count += tables.length; + } + } catch (RetriesExhaustedWithDetailsException e) { + for (Throwable t: e.getCauses()) { + if (!(t instanceof ThrottlingException)) { + throw e; + } + } + LOG.error("put failed after nRetries=" + count, e); + } + return count; + } + + private long doGets(int maxOps, final HTable... tables) throws Exception { + int count = 0; + try { + while (count < maxOps) { + Get get = new Get(Bytes.toBytes("row-" + count)); + for (final HTable table: tables) { + table.get(get); + } + count += tables.length; + } + } catch (ThrottlingException e) { + LOG.error("get failed after nRetries=" + count, e); + } + return count; + } + + private void triggerUserCacheRefresh(boolean bypass, TableName...
tables) throws Exception { + triggerCacheRefresh(bypass, true, false, false, tables); + } + + private void triggerTableCacheRefresh(boolean bypass, TableName... tables) throws Exception { + triggerCacheRefresh(bypass, false, true, false, tables); + } + + private void triggerNamespaceCacheRefresh(boolean bypass, TableName... tables) throws Exception { + triggerCacheRefresh(bypass, false, false, true, tables); + } + + private void triggerCacheRefresh(boolean bypass, boolean userLimiter, boolean tableLimiter, + boolean nsLimiter, final TableName... tables) throws Exception { + for (RegionServerThread rst: TEST_UTIL.getMiniHBaseCluster().getRegionServerThreads()) { + RegionServerQuotaManager quotaManager = rst.getRegionServer().getRegionServerQuotaManager(); + QuotaCache quotaCache = quotaManager.getQuotaCache(); + + quotaCache.triggerCacheRefresh(); + Thread.sleep(250); + + for (TableName table: tables) { + quotaCache.getTableLimiter(table); + } + + boolean isUpdated = false; + while (!isUpdated) { + isUpdated = true; + for (TableName table: tables) { + boolean isBypass = true; + if (userLimiter) { + isBypass &= quotaCache.getUserLimiter(User.getCurrent().getUGI(), table).isBypass(); + } + if (tableLimiter) { + isBypass &= quotaCache.getTableLimiter(table).isBypass(); + } + if (nsLimiter) { + isBypass &= quotaCache.getNamespaceLimiter(table.getNamespaceAsString()).isBypass(); + } + if (isBypass != bypass) { + isUpdated = false; + Thread.sleep(250); + break; + } + } + } + + LOG.debug("QuotaCache"); + LOG.debug(quotaCache.getNamespaceQuotaCache()); + LOG.debug(quotaCache.getTableQuotaCache()); + LOG.debug(quotaCache.getUserQuotaCache()); + } + } + + private void waitMinuteQuota() { + EnvironmentEdgeManagerTestHelper.injectEdge( + new IncrementingEnvironmentEdge( + EnvironmentEdgeManager.currentTime() + 70000)); + } +} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestRateLimiter.java hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestRateLimiter.java new file mode 100644 index 0000000..50897a2 --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestRateLimiter.java @@ -0,0 +1,115 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.quotas; + +import java.util.concurrent.TimeUnit; + +import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; + +import org.junit.Assert; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertFalse; +import static org.junit.Assert.assertTrue; + +/** + * Verify the behaviour of the Rate Limiter. 
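+ * A limit of N requests per TimeUnit gives an expected wait interval of TimeUnit/N milliseconds once the + * available quota is consumed (e.g. 100ms for 10req/sec, 6000ms for 10req/min), as exercised by the testWaitInterval* cases.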
+ */ +@Category({RegionServerTests.class, SmallTests.class}) +public class TestRateLimiter { + @Test + public void testWaitIntervalTimeUnitSeconds() { + testWaitInterval(TimeUnit.SECONDS, 10, 100); + } + + @Test + public void testWaitIntervalTimeUnitMinutes() { + testWaitInterval(TimeUnit.MINUTES, 10, 6000); + } + + @Test + public void testWaitIntervalTimeUnitHours() { + testWaitInterval(TimeUnit.HOURS, 10, 360000); + } + + @Test + public void testWaitIntervalTimeUnitDays() { + testWaitInterval(TimeUnit.DAYS, 10, 8640000); + } + + private void testWaitInterval(final TimeUnit timeUnit, final long limit, + final long expectedWaitInterval) { + RateLimiter limiter = new RateLimiter(); + limiter.set(limit, timeUnit); + + long nowTs = 0; + long lastTs = 0; + + // consume all the available resources, one request at the time. + // the wait interval should be 0 + for (int i = 0; i < (limit - 1); ++i) { + assertTrue(limiter.canExecute(nowTs, lastTs)); + limiter.consume(); + long waitInterval = limiter.waitInterval(); + assertEquals(0, waitInterval); + } + + for (int i = 0; i < (limit * 4); ++i) { + // There is one resource available, so we should be able to + // consume it without waiting. + assertTrue(limiter.canExecute(nowTs, lastTs)); + assertEquals(0, limiter.waitInterval()); + limiter.consume(); + lastTs = nowTs; + + // No more resources are available, we should wait for at least an interval. + long waitInterval = limiter.waitInterval(); + assertEquals(expectedWaitInterval, waitInterval); + + // set the nowTs to be the exact time when resources should be available again. + nowTs += waitInterval; + + // artificially go into the past to prove that when too early we should fail. + assertFalse(limiter.canExecute(nowTs - 500, lastTs)); + } + } + + @Test + public void testOverconsumption() { + RateLimiter limiter = new RateLimiter(); + limiter.set(10, TimeUnit.SECONDS); + + // 10 resources are available, but we need to consume 20 resources + // Verify that we have to wait at least 1.1sec to have 1 resource available + assertTrue(limiter.canExecute(0, 0)); + limiter.consume(20); + assertEquals(1100, limiter.waitInterval()); + + // Verify that after 1sec we need to wait for another 0.1sec to get a resource available + assertFalse(limiter.canExecute(1000, 0)); + assertEquals(100, limiter.waitInterval()); + + // Verify that after 1.1sec the resource is available + assertTrue(limiter.canExecute(1100, 0)); + assertEquals(0, limiter.waitInterval()); + } +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperStub.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperStub.java index cd22d86..e936789 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperStub.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperStub.java @@ -242,6 +242,16 @@ public class MetricsRegionServerWrapperStub implements MetricsRegionServerWrappe } @Override + public long getHedgedReadOps() { + return 100; + } + + @Override + public long getHedgedReadWins() { + return 10; + } + + @Override public long getBlockedRequestsCount() { return 0; } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestAtomicOperation.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestAtomicOperation.java index fd3f7ea..883e530 100644 --- 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestAtomicOperation.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestAtomicOperation.java @@ -43,7 +43,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MultithreadedTestUtil; import org.apache.hadoop.hbase.MultithreadedTestUtil.TestContext; import org.apache.hadoop.hbase.MultithreadedTestUtil.TestThread; @@ -62,6 +61,8 @@ import org.apache.hadoop.hbase.filter.BinaryComparator; import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; import org.apache.hadoop.hbase.io.HeapSize; import org.apache.hadoop.hbase.wal.WAL; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; import org.junit.Before; @@ -74,7 +75,7 @@ import org.junit.rules.TestName; * Testing of HRegion.incrementColumnValue, HRegion.increment, * and HRegion.append */ -@Category(MediumTests.class) // Starts 100 threads +@Category({VerySlowRegionServerTests.class, MediumTests.class}) // Starts 100 threads public class TestAtomicOperation { static final Log LOG = LogFactory.getLog(TestAtomicOperation.class); @Rule public TestName name = new TestName(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestBlocksRead.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestBlocksRead.java index b22f3ac..2bb8076 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestBlocksRead.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestBlocksRead.java @@ -36,6 +36,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Durability; @@ -47,10 +48,12 @@ import org.apache.hadoop.hbase.io.hfile.CacheConfig; import org.apache.hadoop.hbase.io.hfile.HFile; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManagerTestHelper; +import org.junit.After; +import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestBlocksRead extends HBaseTestCase { static final Log LOG = LogFactory.getLog(TestBlocksRead.class); static final BloomType[] BLOOM_TYPE = new BloomType[] { BloomType.ROWCOL, @@ -74,13 +77,13 @@ public class TestBlocksRead extends HBaseTestCase { * @see org.apache.hadoop.hbase.HBaseTestCase#setUp() */ @SuppressWarnings("deprecation") - @Override + @Before protected void setUp() throws Exception { super.setUp(); } @SuppressWarnings("deprecation") - @Override + @After protected void tearDown() throws Exception { super.tearDown(); EnvironmentEdgeManagerTestHelper.reset(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestBlocksScanned.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestBlocksScanned.java 
index 24dc1e7..25330a8 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestBlocksScanned.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestBlocksScanned.java @@ -25,6 +25,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Scan; @@ -34,11 +35,12 @@ import org.apache.hadoop.hbase.io.hfile.CacheConfig; import org.apache.hadoop.hbase.io.hfile.CacheStats; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Assert; +import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; @SuppressWarnings("deprecation") -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestBlocksScanned extends HBaseTestCase { private static byte [] FAMILY = Bytes.toBytes("family"); private static byte [] COL = Bytes.toBytes("col"); @@ -48,7 +50,7 @@ public class TestBlocksScanned extends HBaseTestCase { private static HBaseTestingUtility TEST_UTIL = null; - @Override + @Before public void setUp() throws Exception { super.setUp(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java index 9a227ab..dc142d6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java @@ -38,6 +38,7 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.fs.HFileSystem; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; @@ -69,7 +70,7 @@ import org.junit.runners.Parameterized.Parameters; * index blocks, and Bloom filter blocks, as specified by the column family. 
*/ @RunWith(Parameterized.class) -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestCacheOnWriteInSchema { private static final Log LOG = LogFactory.getLog(TestCacheOnWriteInSchema.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCellSkipListSet.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCellSkipListSet.java index e487c03..9b9db5a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCellSkipListSet.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCellSkipListSet.java @@ -25,11 +25,12 @@ import junit.framework.TestCase; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestCellSkipListSet extends TestCase { private final CellSkipListSet csls = new CellSkipListSet(KeyValue.COMPARATOR); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestClusterId.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestClusterId.java index 7b87db8..baea563 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestClusterId.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestClusterId.java @@ -32,6 +32,7 @@ import org.apache.hadoop.hbase.CoordinatedStateManagerFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.JVMClusterUtil; @@ -45,7 +46,7 @@ import org.junit.experimental.categories.Category; /** * Test metrics incremented on region server operations. 
*/ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestClusterId { private final HBaseTestingUtility TEST_UTIL = @@ -81,7 +82,7 @@ public class TestClusterId { //Make sure RS is in blocking state Thread.sleep(10000); - TEST_UTIL.startMiniHBaseCluster(1, 1); + TEST_UTIL.startMiniHBaseCluster(1, 0); rst.waitForServerOnline(); @@ -110,7 +111,7 @@ public class TestClusterId { } TEST_UTIL.startMiniHBaseCluster(1, 1); HMaster master = TEST_UTIL.getHBaseCluster().getMaster(); - assertEquals(1, master.getServerManager().getOnlineServersList().size()); + assertEquals(2, master.getServerManager().getOnlineServersList().size()); } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestColumnSeeking.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestColumnSeeking.java index 56c8796..81ff370 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestColumnSeeking.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestColumnSeeking.java @@ -35,6 +35,7 @@ import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Durability; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Rule; @@ -42,7 +43,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.rules.TestName; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestColumnSeeking { @Rule public TestName name = new TestName(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java index 9f521e2..7cfa475 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java @@ -51,6 +51,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Put; @@ -77,7 +78,7 @@ import org.mockito.stubbing.Answer; /** * Test compaction framework and common functions */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestCompaction { @Rule public TestName name = new TestName(); static final Log LOG = LogFactory.getLog(TestCompaction.class.getName()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionState.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionState.java index 6d49864..4311b29 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionState.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionState.java @@ -30,11 +30,12 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; -import 
org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetRegionInfoResponse.CompactionState; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -42,7 +43,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; /** Unit tests to test retrieving table/region compaction state*/ -@Category(LargeTests.class) +@Category({VerySlowRegionServerTests.class, LargeTests.class}) public class TestCompactionState { final static Log LOG = LogFactory.getLog(TestCompactionState.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionWithCoprocessor.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionWithCoprocessor.java index 0c523bf..4ad92a3 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionWithCoprocessor.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionWithCoprocessor.java @@ -18,6 +18,7 @@ package org.apache.hadoop.hbase.regionserver; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.junit.experimental.categories.Category; @@ -25,7 +26,7 @@ import org.junit.experimental.categories.Category; * Make sure compaction tests still pass with the preFlush and preCompact * overridden to implement the default behavior */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestCompactionWithCoprocessor extends TestCompaction { /** constructor */ public TestCompactionWithCoprocessor() throws Exception { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java index 9628623..d7b4a04 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompoundBloomFilter.java @@ -39,6 +39,7 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.io.hfile.BlockCache; import org.apache.hadoop.hbase.io.hfile.CacheConfig; @@ -60,7 +61,7 @@ import org.junit.experimental.categories.Category; * Tests writing Bloom filter blocks in the same part of the file as data * blocks. 
*/ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestCompoundBloomFilter { private static final HBaseTestingUtility TEST_UTIL = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultCompactSelection.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultCompactSelection.java index 9779e47..43bc9f1 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultCompactSelection.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultCompactSelection.java @@ -34,6 +34,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest; @@ -47,7 +48,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.Lists; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestDefaultCompactSelection extends TestCase { private final static Log LOG = LogFactory.getLog(TestDefaultCompactSelection.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @@ -77,6 +78,8 @@ public class TestDefaultCompactSelection extends TestCase { this.conf.setLong(HConstants.HREGION_MEMSTORE_FLUSH_SIZE, minSize); this.conf.setLong("hbase.hstore.compaction.max.size", maxSize); this.conf.setFloat("hbase.hstore.compaction.ratio", 1.0F); + // Test depends on this not being set to pass. Default breaks test. TODO: Revisit. 
+ this.conf.unset("hbase.hstore.compaction.min.size"); //Setting up a Store final String id = TestDefaultCompactSelection.class.getName(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java index 1f79b4c..d6d82df 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java @@ -44,6 +44,7 @@ import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueTestUtil; import org.apache.hadoop.hbase.KeyValueUtil; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdge; @@ -55,7 +56,7 @@ import com.google.common.collect.Iterables; import com.google.common.collect.Lists; /** memstore test case */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestDefaultMemStore extends TestCase { private final Log LOG = LogFactory.getLog(this.getClass()); private DefaultMemStore memstore; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultStoreEngine.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultStoreEngine.java index 9e5d0bd..c185075 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultStoreEngine.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultStoreEngine.java @@ -22,6 +22,7 @@ package org.apache.hadoop.hbase.regionserver; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.KeyValue.KVComparator; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy; import org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor; @@ -30,7 +31,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestDefaultStoreEngine { public static class DummyStoreFlusher extends DefaultStoreFlusher { public DummyStoreFlusher(Configuration conf, Store store) { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionKeyRotation.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionKeyRotation.java index 44daaed..941f6d2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionKeyRotation.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionKeyRotation.java @@ -35,6 +35,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.Waiter.Predicate; import org.apache.hadoop.hbase.client.HTable; @@ -54,7 +55,7 @@ import org.junit.BeforeClass; import org.junit.Test; 
import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestEncryptionKeyRotation { private static final Log LOG = LogFactory.getLog(TestEncryptionKeyRotation.class); private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionRandomKeying.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionRandomKeying.java index 46d05a8..efae472 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionRandomKeying.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEncryptionRandomKeying.java @@ -30,6 +30,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; @@ -45,7 +46,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestEncryptionRandomKeying { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private static Configuration conf = TEST_UTIL.getConfiguration(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEndToEndSplitTransaction.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEndToEndSplitTransaction.java index b970ca0..3f70a9b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEndToEndSplitTransaction.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestEndToEndSplitTransaction.java @@ -34,16 +34,16 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Chore; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.NotServingRegionException; -import org.apache.hadoop.hbase.client.Admin; -import org.apache.hadoop.hbase.ipc.PayloadCarryingRpcController; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.Stoppable; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HConnection; import org.apache.hadoop.hbase.client.HConnectionManager; @@ -52,15 +52,16 @@ import org.apache.hadoop.hbase.client.MetaScanner; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.coordination.BaseCoordinatedStateManager; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.ipc.PayloadCarryingRpcController; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import 
org.apache.hadoop.hbase.protobuf.RequestConverter; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ScanRequest; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos; +import org.apache.hadoop.hbase.testclassification.FlakeyTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.ConfigUtil; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.util.PairOfSameType; import org.apache.hadoop.hbase.util.StoppableImplementation; @@ -74,7 +75,8 @@ import com.google.common.collect.Iterators; import com.google.common.collect.Sets; import com.google.protobuf.ServiceException; -@Category(LargeTests.class) +@Category({FlakeyTests.class, LargeTests.class}) +@SuppressWarnings("deprecation") public class TestEndToEndSplitTransaction { private static final Log LOG = LogFactory.getLog(TestEndToEndSplitTransaction.class); private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @@ -110,7 +112,6 @@ public class TestEndToEndSplitTransaction { .getRegionName(); HRegion region = server.getRegion(regionName); SplitTransaction split = new SplitTransaction(region, splitRow); - split.useZKForAssignment = ConfigUtil.useZKForAssignment(conf); split.prepare(); // 1. phase I @@ -127,14 +128,10 @@ public class TestEndToEndSplitTransaction { // 3. finish phase II // note that this replicates some code from SplitTransaction // 2nd daughter first - if (split.useZKForAssignment) { - server.postOpenDeployTasks(regions.getSecond()); - } else { server.reportRegionStateTransition( RegionServerStatusProtos.RegionStateTransition.TransitionCode.SPLIT, region.getRegionInfo(), regions.getFirst().getRegionInfo(), regions.getSecond().getRegionInfo()); - } // Add to online regions server.addToOnlineRegions(regions.getSecond()); @@ -144,21 +141,11 @@ public class TestEndToEndSplitTransaction { // past splitkey is ok. assertTrue(test(con, tableName, lastRow, server)); - // first daughter second - if (split.useZKForAssignment) { - server.postOpenDeployTasks(regions.getFirst()); - } // Add to online regions server.addToOnlineRegions(regions.getFirst()); assertTrue(test(con, tableName, firstRow, server)); assertTrue(test(con, tableName, lastRow, server)); - if (split.useZKForAssignment) { - // 4. 
phase III - ((BaseCoordinatedStateManager) server.getCoordinatedStateManager()) - .getSplitTransactionCoordination().completeSplitTransaction(server, regions.getFirst(), - regions.getSecond(), split.std, region); - } assertTrue(test(con, tableName, firstRow, server)); assertTrue(test(con, tableName, lastRow, server)); } @@ -231,6 +218,7 @@ public class TestEndToEndSplitTransaction { } static class RegionSplitter extends Thread { + final Connection connection; Throwable ex; Table table; TableName tableName; @@ -244,6 +232,7 @@ public class TestEndToEndSplitTransaction { this.family = table.getTableDescriptor().getFamiliesKeys().iterator().next(); admin = TEST_UTIL.getHBaseAdmin(); rs = TEST_UTIL.getMiniHBaseCluster().getRegionServer(0); + connection = TEST_UTIL.getConnection(); } @Override @@ -251,8 +240,8 @@ public class TestEndToEndSplitTransaction { try { Random random = new Random(); for (int i= 0; i< 5; i++) { - NavigableMap regions = MetaScanner.allTableRegions(conf, null, - tableName); + NavigableMap regions = + MetaScanner.allTableRegions(connection, tableName); if (regions.size() == 0) { continue; } @@ -309,27 +298,30 @@ public class TestEndToEndSplitTransaction { * Checks regions using MetaScanner, MetaTableAccessor and HTable methods */ static class RegionChecker extends Chore { + Connection connection; Configuration conf; TableName tableName; Throwable ex; - RegionChecker(Configuration conf, Stoppable stopper, TableName tableName) { + RegionChecker(Configuration conf, Stoppable stopper, TableName tableName) throws IOException { super("RegionChecker", 10, stopper); this.conf = conf; this.tableName = tableName; this.setDaemon(true); + + this.connection = ConnectionFactory.createConnection(conf); } /** verify region boundaries obtained from MetaScanner */ void verifyRegionsUsingMetaScanner() throws Exception { //MetaScanner.allTableRegions() - NavigableMap regions = MetaScanner.allTableRegions(conf, null, + NavigableMap regions = MetaScanner.allTableRegions(connection, tableName); verifyTableRegions(regions.keySet()); //MetaScanner.listAllRegions() - List regionList = MetaScanner.listAllRegions(conf, false); + List regionList = MetaScanner.listAllRegions(conf, connection, false); verifyTableRegions(Sets.newTreeSet(regionList)); } @@ -343,7 +335,7 @@ public class TestEndToEndSplitTransaction { verifyStartEndKeys(keys); //HTable.getRegionsInfo() - Map regions = table.getRegionLocations(); + Map regions = table.getRegionLocations(); verifyTableRegions(regions.keySet()); } finally { IOUtils.closeQuietly(table); @@ -400,7 +392,7 @@ public class TestEndToEndSplitTransaction { verify(); } catch (Throwable ex) { this.ex = ex; - stopper.stop("caught exception"); + getStopper().stop("caught exception"); } } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java index 401583d..72d7aa9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java @@ -29,13 +29,14 @@ import java.util.Arrays; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.MatchCode; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import 
org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestExplicitColumnTracker { private final byte[] col1 = Bytes.toBytes("col1"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java index c508e70..9a2cc82 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java @@ -43,6 +43,7 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HBaseAdmin; @@ -61,7 +62,7 @@ import org.junit.experimental.categories.Category; * Test cases that ensure that file system level errors are bubbled up * appropriately to clients, rather than swallowed. */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestFSErrorsExposed { private static final Log LOG = LogFactory.getLog(TestFSErrorsExposed.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFlushRegionEntry.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFlushRegionEntry.java index 00bf09b..676885b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFlushRegionEntry.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFlushRegionEntry.java @@ -12,6 +12,7 @@ package org.apache.hadoop.hbase.regionserver; import static org.junit.Assert.*; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.regionserver.MemStoreFlusher.FlushRegionEntry; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; @@ -22,7 +23,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestFlushRegionEntry { @Before public void setUp() throws Exception { @@ -33,8 +34,8 @@ public class TestFlushRegionEntry { @Test public void test() { - FlushRegionEntry entry = new FlushRegionEntry(Mockito.mock(HRegion.class)); - FlushRegionEntry other = new FlushRegionEntry(Mockito.mock(HRegion.class)); + FlushRegionEntry entry = new FlushRegionEntry(Mockito.mock(HRegion.class), true); + FlushRegionEntry other = new FlushRegionEntry(Mockito.mock(HRegion.class), true); assertEquals(entry.hashCode(), other.hashCode()); assertEquals(entry, other); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestGetClosestAtOrBefore.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestGetClosestAtOrBefore.java index 386bc9b..92351f4 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestGetClosestAtOrBefore.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestGetClosestAtOrBefore.java @@ -34,6 +34,7 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import 
org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.client.Delete; @@ -42,13 +43,14 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.util.Bytes; +import org.junit.Test; import org.junit.experimental.categories.Category; /** * TestGet is a medley of tests of get all done up as a single test. * This class */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestGetClosestAtOrBefore extends HBaseTestCase { private static final Log LOG = LogFactory.getLog(TestGetClosestAtOrBefore.class); @@ -64,11 +66,13 @@ public class TestGetClosestAtOrBefore extends HBaseTestCase { + @Test public void testUsingMetaAndBinary() throws IOException { FileSystem filesystem = FileSystem.get(conf); Path rootdir = testDir; // Up flush size else we bind up when we use default catalog flush of 16k. fsTableDescriptors.get(TableName.META_TABLE_NAME).setMemStoreFlushSize(64 * 1024 * 1024); + HRegion mr = HRegion.createHRegion(HRegionInfo.FIRST_META_REGIONINFO, rootdir, this.conf, fsTableDescriptors.get(TableName.META_TABLE_NAME)); try { @@ -186,6 +190,7 @@ public class TestGetClosestAtOrBefore extends HBaseTestCase { * Test file of multiple deletes and with deletes as final key. * @see HBASE-751 */ + @Test public void testGetClosestRowBefore3() throws IOException{ HRegion region = null; byte [] c0 = COLUMNS[0]; @@ -294,6 +299,7 @@ public class TestGetClosestAtOrBefore extends HBaseTestCase { } /** For HBASE-694 */ + @Test public void testGetClosestRowBefore2() throws IOException{ HRegion region = null; byte [] c0 = COLUMNS[0]; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java index 5a7e002..992a978 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java @@ -84,7 +84,6 @@ import org.apache.hadoop.hbase.HDFSBlocksDistribution; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.MultithreadedTestUtil; import org.apache.hadoop.hbase.MultithreadedTestUtil.RepeatingTestThread; @@ -141,6 +140,8 @@ import org.apache.hadoop.hbase.wal.WALProvider; import org.apache.hadoop.hbase.wal.WALSplitter; import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.test.MetricsAssertHelper; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.EnvironmentEdgeManagerTestHelper; @@ -169,7 +170,7 @@ import com.google.protobuf.ByteString; * A lot of the meta information for an HRegion now lives inside other HRegions * or in the HBaseMaster, so only basic testing is possible. 
*/ -@Category(MediumTests.class) +@Category({VerySlowRegionServerTests.class, MediumTests.class}) @SuppressWarnings("deprecation") public class TestHRegion { // Do not spin up clusters in here. If you need to spin up a cluster, do it @@ -4307,7 +4308,7 @@ public class TestHRegion { try { region.increment(inc); } catch (IOException e) { - e.printStackTrace(); + LOG.info("Count=" + count + ", " + e); break; } } @@ -4394,7 +4395,7 @@ public class TestHRegion { try { region.append(app); } catch (IOException e) { - e.printStackTrace(); + LOG.info("Count=" + count + ", max=" + appendCounter + ", " + e); break; } } @@ -4424,7 +4425,7 @@ public class TestHRegion { } }; - // after all append finished, the value will append to threadNum * + // After all append finished, the value will append to threadNum * // appendCounter Appender.CHAR int threadNum = 20; int appendCounter = 100; @@ -5008,6 +5009,7 @@ public class TestHRegion { Bytes.toString(CellUtil.cloneValue(kv))); } + @Test (timeout=60000) public void testReverseScanner_FromMemStore_SingleCF_Normal() throws IOException { byte[] rowC = Bytes.toBytes("rowC"); @@ -5063,6 +5065,7 @@ public class TestHRegion { } } + @Test (timeout=60000) public void testReverseScanner_FromMemStore_SingleCF_LargerKey() throws IOException { byte[] rowC = Bytes.toBytes("rowC"); @@ -5119,6 +5122,7 @@ public class TestHRegion { } } + @Test (timeout=60000) public void testReverseScanner_FromMemStore_SingleCF_FullScan() throws IOException { byte[] rowC = Bytes.toBytes("rowC"); @@ -5172,6 +5176,7 @@ public class TestHRegion { } } + @Test (timeout=60000) public void testReverseScanner_moreRowsMayExistAfter() throws IOException { // case for "INCLUDE_AND_SEEK_NEXT_ROW & SEEK_NEXT_ROW" endless loop byte[] rowA = Bytes.toBytes("rowA"); @@ -5249,6 +5254,7 @@ public class TestHRegion { } } + @Test (timeout=60000) public void testReverseScanner_smaller_blocksize() throws IOException { // case to ensure no conflict with HFile index optimization byte[] rowA = Bytes.toBytes("rowA"); @@ -5328,6 +5334,7 @@ public class TestHRegion { } } + @Test (timeout=60000) public void testReverseScanner_FromMemStoreAndHFiles_MultiCFs1() throws IOException { byte[] row0 = Bytes.toBytes("row0"); // 1 kv @@ -5488,6 +5495,7 @@ public class TestHRegion { } } + @Test (timeout=60000) public void testReverseScanner_FromMemStoreAndHFiles_MultiCFs2() throws IOException { byte[] row1 = Bytes.toBytes("row1"); @@ -5561,6 +5569,101 @@ public class TestHRegion { } } + @Test (timeout=60000) + public void testSplitRegionWithReverseScan() throws IOException { + byte [] tableName = Bytes.toBytes("testSplitRegionWithReverseScan"); + byte [] qualifier = Bytes.toBytes("qualifier"); + Configuration hc = initSplit(); + int numRows = 3; + byte [][] families = {fam1}; + + //Setting up region + String method = this.getName(); + this.region = initHRegion(tableName, method, hc, families); + + //Put data in region + int startRow = 100; + putData(startRow, numRows, qualifier, families); + int splitRow = startRow + numRows; + putData(splitRow, numRows, qualifier, families); + int endRow = splitRow + numRows; + region.flushcache(); + + HRegion [] regions = null; + try { + regions = splitRegion(region, Bytes.toBytes("" + splitRow)); + //Opening the regions returned. 
+ for (int i = 0; i < regions.length; i++) { + regions[i] = HRegion.openHRegion(regions[i], null); + } + //Verifying that the region has been split + assertEquals(2, regions.length); + + //Verifying that all data is still there and that data is in the right + //place + verifyData(regions[0], startRow, numRows, qualifier, families); + verifyData(regions[1], splitRow, numRows, qualifier, families); + + //fire the reverse scan1: top range, and larger than the last row + Scan scan = new Scan(Bytes.toBytes(String.valueOf(startRow + 10 * numRows))); + scan.setReversed(true); + InternalScanner scanner = regions[1].getScanner(scan); + List currRow = new ArrayList(); + boolean more = false; + int verify = startRow + 2 * numRows - 1; + do { + more = scanner.next(currRow); + assertEquals(Bytes.toString(currRow.get(0).getRow()), verify + ""); + verify--; + currRow.clear(); + } while(more); + assertEquals(verify, startRow + numRows - 1); + scanner.close(); + //fire the reverse scan2: top range, and equals to the last row + scan = new Scan(Bytes.toBytes(String.valueOf(startRow + 2 * numRows - 1))); + scan.setReversed(true); + scanner = regions[1].getScanner(scan); + verify = startRow + 2 * numRows - 1; + do { + more = scanner.next(currRow); + assertEquals(Bytes.toString(currRow.get(0).getRow()), verify + ""); + verify--; + currRow.clear(); + } while(more); + assertEquals(verify, startRow + numRows - 1); + scanner.close(); + //fire the reverse scan3: bottom range, and larger than the last row + scan = new Scan(Bytes.toBytes(String.valueOf(startRow + numRows))); + scan.setReversed(true); + scanner = regions[0].getScanner(scan); + verify = startRow + numRows - 1; + do { + more = scanner.next(currRow); + assertEquals(Bytes.toString(currRow.get(0).getRow()), verify + ""); + verify--; + currRow.clear(); + } while(more); + assertEquals(verify, 99); + scanner.close(); + //fire the reverse scan4: bottom range, and equals to the last row + scan = new Scan(Bytes.toBytes(String.valueOf(startRow + numRows - 1))); + scan.setReversed(true); + scanner = regions[0].getScanner(scan); + verify = startRow + numRows - 1; + do { + more = scanner.next(currRow); + assertEquals(Bytes.toString(currRow.get(0).getRow()), verify + ""); + verify--; + currRow.clear(); + } while(more); + assertEquals(verify, startRow - 1); + scanner.close(); + } finally { + HRegion.closeHRegion(this.region); + this.region = null; + } + } + @Test public void testWriteRequestsCounter() throws IOException { byte[] fam = Bytes.toBytes("info"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionFileSystem.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionFileSystem.java index dfb20da..5f792fa 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionFileSystem.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionFileSystem.java @@ -39,6 +39,7 @@ import org.apache.hadoop.fs.permission.FsPermission; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.util.Progressable; @@ -46,7 +47,7 @@ import org.apache.hadoop.util.Progressable; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) 
+@Category({RegionServerTests.class, SmallTests.class}) public class TestHRegionFileSystem { private static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private static final Log LOG = LogFactory.getLog(TestHRegionFileSystem.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java index e29bef8..c7142fd 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionInfo.java @@ -26,25 +26,29 @@ import static org.junit.Assert.fail; import java.io.IOException; +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.exceptions.DeserializationException; +import org.apache.hadoop.hbase.master.RegionState; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.RegionInfo; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSTableDescriptors; import org.apache.hadoop.hbase.util.MD5Hash; +import org.junit.Assert; import org.junit.Test; import org.junit.experimental.categories.Category; import com.google.protobuf.ByteString; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestHRegionInfo { @Test public void testPb() throws DeserializationException { @@ -59,8 +63,8 @@ public class TestHRegionInfo { HBaseTestingUtility htu = new HBaseTestingUtility(); HRegionInfo hri = HRegionInfo.FIRST_META_REGIONINFO; Path basedir = htu.getDataTestDir(); - FSTableDescriptors fsTableDescriptors = new FSTableDescriptors(htu.getConfiguration()); // Create a region. That'll write the .regioninfo file. + FSTableDescriptors fsTableDescriptors = new FSTableDescriptors(htu.getConfiguration()); HRegion r = HRegion.createHRegion(hri, basedir, htu.getConfiguration(), fsTableDescriptors.get(TableName.META_TABLE_NAME)); // Get modtime on the file. @@ -68,7 +72,7 @@ public class TestHRegionInfo { HRegion.closeHRegion(r); Thread.sleep(1001); r = HRegion.openHRegion(basedir, hri, fsTableDescriptors.get(TableName.META_TABLE_NAME), - null, htu.getConfiguration()); + null, htu.getConfiguration()); // Ensure the file is not written for a second time. 
long modtime2 = getModTime(r); assertEquals(modtime, modtime2); @@ -256,6 +260,71 @@ public class TestHRegionInfo { assertEquals(expectedHri, convertedHri); } + @Test + public void testRegionDetailsForDisplay() throws IOException { + byte[] startKey = new byte[] {0x01, 0x01, 0x02, 0x03}; + byte[] endKey = new byte[] {0x01, 0x01, 0x02, 0x04}; + Configuration conf = new Configuration(); + conf.setBoolean("hbase.display.keys", false); + HRegionInfo h = new HRegionInfo(TableName.valueOf("foo"), startKey, endKey); + checkEquality(h, conf); + // check HRIs with non-default replicaId + h = new HRegionInfo(TableName.valueOf("foo"), startKey, endKey, false, + System.currentTimeMillis(), 1); + checkEquality(h, conf); + Assert.assertArrayEquals(HRegionInfo.HIDDEN_END_KEY, + HRegionInfo.getEndKeyForDisplay(h, conf)); + Assert.assertArrayEquals(HRegionInfo.HIDDEN_START_KEY, + HRegionInfo.getStartKeyForDisplay(h, conf)); + + RegionState state = new RegionState(h, RegionState.State.OPEN); + String descriptiveNameForDisplay = + HRegionInfo.getDescriptiveNameFromRegionStateForDisplay(state, conf); + checkDescriptiveNameEquality(descriptiveNameForDisplay,state.toDescriptiveString(), startKey); + + conf.setBoolean("hbase.display.keys", true); + Assert.assertArrayEquals(endKey, HRegionInfo.getEndKeyForDisplay(h, conf)); + Assert.assertArrayEquals(startKey, HRegionInfo.getStartKeyForDisplay(h, conf)); + Assert.assertEquals(state.toDescriptiveString(), + HRegionInfo.getDescriptiveNameFromRegionStateForDisplay(state, conf)); + } + private void checkDescriptiveNameEquality(String descriptiveNameForDisplay, String origDesc, + byte[] startKey) { + // except for the "hidden-start-key" substring everything else should exactly match + String firstPart = descriptiveNameForDisplay.substring(0, + descriptiveNameForDisplay.indexOf(new String(HRegionInfo.HIDDEN_START_KEY))); + String secondPart = descriptiveNameForDisplay.substring( + descriptiveNameForDisplay.indexOf(new String(HRegionInfo.HIDDEN_START_KEY)) + + HRegionInfo.HIDDEN_START_KEY.length); + String firstPartOrig = origDesc.substring(0, + origDesc.indexOf(Bytes.toStringBinary(startKey))); + String secondPartOrig = origDesc.substring( + origDesc.indexOf(Bytes.toStringBinary(startKey)) + + Bytes.toStringBinary(startKey).length()); + assert(firstPart.equals(firstPartOrig)); + assert(secondPart.equals(secondPartOrig)); + } + + private void checkEquality(HRegionInfo h, Configuration conf) throws IOException { + byte[] modifiedRegionName = HRegionInfo.getRegionNameForDisplay(h, conf); + byte[][] modifiedRegionNameParts = HRegionInfo.parseRegionName(modifiedRegionName); + byte[][] regionNameParts = HRegionInfo.parseRegionName(h.getRegionName()); + + //same number of parts + assert(modifiedRegionNameParts.length == regionNameParts.length); + + for (int i = 0; i < regionNameParts.length; i++) { + // all parts should match except for [1] where in the modified one, + // we should have "hidden_start_key" + if (i != 1) { + Assert.assertArrayEquals(regionNameParts[i], modifiedRegionNameParts[i]); + } else { + Assert.assertNotEquals(regionNameParts[i][0], modifiedRegionNameParts[i][0]); + Assert.assertArrayEquals(modifiedRegionNameParts[1], + HRegionInfo.getStartKeyForDisplay(h, conf)); + } + } + } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionOnCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionOnCluster.java index 3676f93..ce2869b 100644 --- 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionOnCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionOnCluster.java @@ -37,8 +37,9 @@ import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.master.HMaster; -import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; +import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; import org.mortbay.log.Log; @@ -48,7 +49,7 @@ import org.mortbay.log.Log; * {@link TestHRegion} if you don't need a cluster, if you can test w/ a * standalone {@link HRegion}. */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestHRegionOnCluster { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java index d6f4a67..0e94e68 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java @@ -32,7 +32,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MultithreadedTestUtil.RepeatingTestThread; import org.apache.hadoop.hbase.MultithreadedTestUtil.TestContext; import org.apache.hadoop.hbase.TableExistsException; @@ -55,6 +54,8 @@ import org.apache.hadoop.hbase.protobuf.RequestConverter; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.CompactRegionRequest; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.BulkLoadHFileRequest; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Pair; import org.junit.Test; @@ -66,7 +67,7 @@ import com.google.common.collect.Lists; * Tests bulk loading of HFiles and shows the atomicity or lack of atomicity of * the region server's bullkLoad functionality. 
*/ -@Category(LargeTests.class) +@Category({RegionServerTests.class, LargeTests.class}) public class TestHRegionServerBulkLoad { final static Log LOG = LogFactory.getLog(TestHRegionServerBulkLoad.class); private static HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHeapMemoryManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHeapMemoryManager.java index 2d63775..c1eeea0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHeapMemoryManager.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHeapMemoryManager.java @@ -32,7 +32,6 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.ClusterConnection; import org.apache.hadoop.hbase.io.hfile.BlockCache; import org.apache.hadoop.hbase.io.hfile.BlockCacheKey; @@ -43,12 +42,14 @@ import org.apache.hadoop.hbase.io.hfile.ResizableBlockCache; import org.apache.hadoop.hbase.io.util.HeapMemorySizeUtil; import org.apache.hadoop.hbase.regionserver.HeapMemoryManager.TunerContext; import org.apache.hadoop.hbase.regionserver.HeapMemoryManager.TunerResult; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestHeapMemoryManager { private long maxHeapSize = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getMax(); @@ -112,11 +113,11 @@ public class TestHeapMemoryManager { long oldBlockCacheSize = blockCache.maxSize; heapMemoryManager.start(); memStoreFlusher.flushType = FlushType.ABOVE_HIGHER_MARK; - memStoreFlusher.requestFlush(null); - memStoreFlusher.requestFlush(null); - memStoreFlusher.requestFlush(null); + memStoreFlusher.requestFlush(null, false); + memStoreFlusher.requestFlush(null, false); + memStoreFlusher.requestFlush(null, false); memStoreFlusher.flushType = FlushType.ABOVE_LOWER_MARK; - memStoreFlusher.requestFlush(null); + memStoreFlusher.requestFlush(null, false); Thread.sleep(1500); // Allow the tuner to run once and do necessary memory up assertHeapSpaceDelta(DefaultHeapMemoryTuner.DEFAULT_STEP_VALUE, oldMemstoreHeapSize, memStoreFlusher.memstoreSize); @@ -126,8 +127,8 @@ public class TestHeapMemoryManager { oldBlockCacheSize = blockCache.maxSize; // Do some more flushes before the next run of HeapMemoryTuner memStoreFlusher.flushType = FlushType.ABOVE_HIGHER_MARK; - memStoreFlusher.requestFlush(null); - memStoreFlusher.requestFlush(null); + memStoreFlusher.requestFlush(null, false); + memStoreFlusher.requestFlush(null, false); Thread.sleep(1500); assertHeapSpaceDelta(DefaultHeapMemoryTuner.DEFAULT_STEP_VALUE, oldMemstoreHeapSize, memStoreFlusher.memstoreSize); @@ -314,7 +315,7 @@ public class TestHeapMemoryManager { private static class BlockCacheStub implements ResizableBlockCache { CacheStats stats = new CacheStats("test"); long maxSize = 0; - + public BlockCacheStub(long size){ this.maxSize = size; } @@ -407,12 +408,12 @@ public class TestHeapMemoryManager { } @Override - 
public void requestFlush(HRegion region) { + public void requestFlush(HRegion region, boolean forceFlushAllStores) { this.listener.flushRequested(flushType, region); } @Override - public void requestDelayedFlush(HRegion region, long delay) { + public void requestDelayedFlush(HRegion region, long delay, boolean forceFlushAllStores) { } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestJoinedScanners.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestJoinedScanners.java index 08e11f4..b8e6382 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestJoinedScanners.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestJoinedScanners.java @@ -38,7 +38,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; @@ -50,16 +49,17 @@ import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.filter.CompareFilter; import org.apache.hadoop.hbase.filter.SingleColumnValueFilter; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; - /** * Test performance improvement of joined scanners optimization: * https://issues.apache.org/jira/browse/HBASE-5416 */ -@Category(LargeTests.class) +@Category({RegionServerTests.class, LargeTests.class}) public class TestJoinedScanners { static final Log LOG = LogFactory.getLog(TestJoinedScanners.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeepDeletes.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeepDeletes.java index 689f6ec..341b02c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeepDeletes.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeepDeletes.java @@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeepDeletedCells; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Get; @@ -50,7 +51,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.rules.TestName; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestKeepDeletes { HBaseTestingUtility hbu = HBaseTestingUtility.createLocalHTU(); private final byte[] T0 = Bytes.toBytes("0"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueHeap.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueHeap.java index 852a24a..86a15ff 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueHeap.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueHeap.java @@ -27,12 +27,15 @@ import 
java.util.List; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HBaseTestCase; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.CollectionBackedScanner; +import org.junit.Before; +import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestKeyValueHeap extends HBaseTestCase { private static final boolean PRINT = false; @@ -51,6 +54,7 @@ public class TestKeyValueHeap extends HBaseTestCase { private byte[] col4; private byte[] col5; + @Before public void setUp() throws Exception { super.setUp(); data = Bytes.toBytes("data"); @@ -65,6 +69,7 @@ public class TestKeyValueHeap extends HBaseTestCase { col5 = Bytes.toBytes("col5"); } + @Test public void testSorted() throws IOException{ //Cases that need to be checked are: //1. The "smallest" KeyValue is in the same scanners as current @@ -127,6 +132,7 @@ public class TestKeyValueHeap extends HBaseTestCase { } + @Test public void testSeek() throws IOException { //Cases: //1. Seek KeyValue that is not in scanner @@ -175,6 +181,7 @@ public class TestKeyValueHeap extends HBaseTestCase { } + @Test public void testScannerLeak() throws IOException { // Test for unclosed scanners (HBASE-1927) diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueScanFixture.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueScanFixture.java index 4f27ca0..7cc1644 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueScanFixture.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestKeyValueScanFixture.java @@ -27,11 +27,12 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueTestUtil; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestKeyValueScanFixture extends TestCase { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java index 5729321..df43bd0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMajorCompaction.java @@ -45,6 +45,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Result; @@ -69,7 +70,7 @@ import org.junit.rules.TestName; /** * Test major compactions */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestMajorCompaction { @Rule public TestName name = new TestName(); static 
final Log LOG = LogFactory.getLog(TestMajorCompaction.class.getName()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMasterAddressTracker.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMasterAddressTracker.java index f205eec..4c4c940 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMasterAddressTracker.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMasterAddressTracker.java @@ -28,6 +28,7 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.zookeeper.MasterAddressTracker; import org.apache.hadoop.hbase.zookeeper.ZKUtil; import org.apache.hadoop.hbase.zookeeper.ZooKeeperListener; @@ -37,7 +38,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestMasterAddressTracker { private static final Log LOG = LogFactory.getLog(TestMasterAddressTracker.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java index 4f8287c..80333e8 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java @@ -26,6 +26,7 @@ import java.util.Random; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.ByteRange; import org.apache.hadoop.hbase.util.Bytes; @@ -38,7 +39,7 @@ import org.junit.experimental.categories.Category; /** * Test the {@link MemStoreChunkPool} class */ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestMemStoreChunkPool { private final static Configuration conf = new Configuration(); private static MemStoreChunkPool chunkPool; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreLAB.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreLAB.java index 41be1ae..170bdd4 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreLAB.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreLAB.java @@ -28,6 +28,7 @@ import java.util.concurrent.atomic.AtomicInteger; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.MultithreadedTestUtil; import org.apache.hadoop.hbase.MultithreadedTestUtil.TestThread; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.ByteRange; import org.junit.Test; @@ -38,7 +39,7 @@ import com.google.common.collect.Maps; import com.google.common.primitives.Ints; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestMemStoreLAB { /** diff --git 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsRegion.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsRegion.java index 0a9e427..ddaee3d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsRegion.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsRegion.java @@ -19,12 +19,13 @@ package org.apache.hadoop.hbase.regionserver; import org.apache.hadoop.hbase.CompatibilityFactory; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.test.MetricsAssertHelper; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestMetricsRegion { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsRegionServer.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsRegionServer.java index d5beed8..e777c1d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsRegionServer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMetricsRegionServer.java @@ -18,6 +18,7 @@ package org.apache.hadoop.hbase.regionserver; import org.apache.hadoop.hbase.CompatibilityFactory; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.test.MetricsAssertHelper; import org.junit.Before; @@ -30,7 +31,7 @@ import static org.junit.Assert.assertNotNull; /** * Unit test version of rs metrics tests. */ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestMetricsRegionServer { public static MetricsAssertHelper HELPER = CompatibilityFactory.getInstance(MetricsAssertHelper.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinVersions.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinVersions.java index 7f96cd5..d022acb 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinVersions.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinVersions.java @@ -30,6 +30,7 @@ import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeepDeletedCells; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Get; @@ -46,7 +47,7 @@ import org.junit.rules.TestName; /** * Test Minimum Versions feature (HBASE-4071). 
*/ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestMinVersions { HBaseTestingUtility hbu = HBaseTestingUtility.createLocalHTU(); private final byte[] T0 = Bytes.toBytes("0"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMiniBatchOperationInProgress.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMiniBatchOperationInProgress.java index 50bb16a..15931c6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMiniBatchOperationInProgress.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMiniBatchOperationInProgress.java @@ -21,6 +21,7 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Mutation; import org.apache.hadoop.hbase.client.Put; @@ -30,7 +31,7 @@ import org.apache.hadoop.hbase.util.Pair; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestMiniBatchOperationInProgress { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java index be13f49..7ac6eef 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMinorCompaction.java @@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Result; @@ -49,7 +50,7 @@ import org.junit.rules.TestName; /** * Test minor compactions */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestMinorCompaction { @Rule public TestName name = new TestName(); static final Log LOG = LogFactory.getLog(TestMinorCompaction.class.getName()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiColumnScanner.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiColumnScanner.java index a1df615..51cc9d5 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiColumnScanner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiColumnScanner.java @@ -45,6 +45,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueTestUtil; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Scan; @@ -61,7 +62,7 @@ import org.junit.runners.Parameterized.Parameters; * Tests optimized scanning of multiple columns. 
*/ @RunWith(Parameterized.class) -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestMultiColumnScanner { private static final Log LOG = LogFactory.getLog(TestMultiColumnScanner.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiVersionConsistencyControl.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiVersionConsistencyControl.java index e876a94..09b2226 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiVersionConsistencyControl.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMultiVersionConsistencyControl.java @@ -18,6 +18,7 @@ package org.apache.hadoop.hbase.regionserver; import junit.framework.TestCase; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.experimental.categories.Category; @@ -29,7 +30,7 @@ import java.util.concurrent.atomic.AtomicLong; * This is a hammer test that verifies MultiVersionConsistencyControl in a * multiple writer single reader scenario. */ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestMultiVersionConsistencyControl extends TestCase { static class Writer implements Runnable { final AtomicBoolean finished; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestParallelPut.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestParallelPut.java index 3b57132..ea668b0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestParallelPut.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestParallelPut.java @@ -35,6 +35,7 @@ import org.apache.hadoop.hbase.HConstants.OperationStatusCode; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Put; @@ -54,7 +55,7 @@ import org.junit.rules.TestName; * Testing of multiPut in parallel. * */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestParallelPut { static final Log LOG = LogFactory.getLog(TestParallelPut.class); @Rule public TestName name = new TestName(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPerColumnFamilyFlush.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPerColumnFamilyFlush.java new file mode 100644 index 0000000..ae8f64f --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPerColumnFamilyFlush.java @@ -0,0 +1,658 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.regionserver; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertTrue; + +import java.io.IOException; +import java.util.Arrays; +import java.util.List; +import java.util.Random; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.HColumnDescriptor; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.MiniHBaseCluster; +import org.apache.hadoop.hbase.NamespaceDescriptor; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; +import org.apache.hadoop.hbase.client.Get; +import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.regionserver.wal.FSHLog; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.JVMClusterUtil; +import org.apache.hadoop.hbase.util.Pair; +import org.apache.hadoop.hbase.util.Threads; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +import com.google.common.hash.Hashing; + +/** + * This test verifies the correctness of the Per Column Family flushing strategy + */ +@Category(LargeTests.class) +public class TestPerColumnFamilyFlush { + private static final Log LOG = LogFactory.getLog(TestPerColumnFamilyFlush.class); + + HRegion region = null; + + private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); + + private static final Path DIR = TEST_UTIL.getDataTestDir("TestHRegion"); + + public static final TableName TABLENAME = TableName.valueOf("TestPerColumnFamilyFlush", "t1"); + + public static final byte[][] families = { Bytes.toBytes("f1"), Bytes.toBytes("f2"), + Bytes.toBytes("f3"), Bytes.toBytes("f4"), Bytes.toBytes("f5") }; + + public static final byte[] FAMILY1 = families[0]; + + public static final byte[] FAMILY2 = families[1]; + + public static final byte[] FAMILY3 = families[2]; + + private void initHRegion(String callingMethod, Configuration conf) throws IOException { + HTableDescriptor htd = new HTableDescriptor(TABLENAME); + for (byte[] family : families) { + htd.addFamily(new HColumnDescriptor(family)); + } + HRegionInfo info = new HRegionInfo(TABLENAME, null, null, false); + Path path = new Path(DIR, callingMethod); + region = HRegion.createHRegion(info, path, conf, htd); + } + + // A helper function to create puts. 
+ private Put createPut(int familyNum, int putNum) { + byte[] qf = Bytes.toBytes("q" + familyNum); + byte[] row = Bytes.toBytes("row" + familyNum + "-" + putNum); + byte[] val = Bytes.toBytes("val" + familyNum + "-" + putNum); + Put p = new Put(row); + p.add(families[familyNum - 1], qf, val); + return p; + } + + // A helper function to create puts. + private Get createGet(int familyNum, int putNum) { + byte[] row = Bytes.toBytes("row" + familyNum + "-" + putNum); + return new Get(row); + } + + // A helper function to verify edits. + void verifyEdit(int familyNum, int putNum, HTable table) throws IOException { + Result r = table.get(createGet(familyNum, putNum)); + byte[] family = families[familyNum - 1]; + byte[] qf = Bytes.toBytes("q" + familyNum); + byte[] val = Bytes.toBytes("val" + familyNum + "-" + putNum); + assertNotNull(("Missing Put#" + putNum + " for CF# " + familyNum), r.getFamilyMap(family)); + assertNotNull(("Missing Put#" + putNum + " for CF# " + familyNum), + r.getFamilyMap(family).get(qf)); + assertTrue(("Incorrect value for Put#" + putNum + " for CF# " + familyNum), + Arrays.equals(r.getFamilyMap(family).get(qf), val)); + } + + @Test (timeout=180000) + public void testSelectiveFlushWhenEnabled() throws IOException { + // Set up the configuration + Configuration conf = HBaseConfiguration.create(); + conf.setLong(HConstants.HREGION_MEMSTORE_FLUSH_SIZE, 200 * 1024); + conf.set(FlushPolicyFactory.HBASE_FLUSH_POLICY_KEY, FlushLargeStoresPolicy.class.getName()); + conf.setLong(FlushLargeStoresPolicy.HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND, 100 * 1024); + // Intialize the HRegion + initHRegion("testSelectiveFlushWhenEnabled", conf); + // Add 1200 entries for CF1, 100 for CF2 and 50 for CF3 + for (int i = 1; i <= 1200; i++) { + region.put(createPut(1, i)); + + if (i <= 100) { + region.put(createPut(2, i)); + if (i <= 50) { + region.put(createPut(3, i)); + } + } + } + + long totalMemstoreSize = region.getMemstoreSize().get(); + + // Find the smallest LSNs for edits wrt to each CF. + long smallestSeqCF1 = region.getOldestSeqIdOfStore(FAMILY1); + long smallestSeqCF2 = region.getOldestSeqIdOfStore(FAMILY2); + long smallestSeqCF3 = region.getOldestSeqIdOfStore(FAMILY3); + + // Find the sizes of the memstores of each CF. + long cf1MemstoreSize = region.getStore(FAMILY1).getMemStoreSize(); + long cf2MemstoreSize = region.getStore(FAMILY2).getMemStoreSize(); + long cf3MemstoreSize = region.getStore(FAMILY3).getMemStoreSize(); + + // Get the overall smallest LSN in the region's memstores. + long smallestSeqInRegionCurrentMemstore = + region.getWAL().getEarliestMemstoreSeqNum(region.getRegionInfo().getEncodedNameAsBytes()); + + // The overall smallest LSN in the region's memstores should be the same as + // the LSN of the smallest edit in CF1 + assertEquals(smallestSeqCF1, smallestSeqInRegionCurrentMemstore); + + // Some other sanity checks. + assertTrue(smallestSeqCF1 < smallestSeqCF2); + assertTrue(smallestSeqCF2 < smallestSeqCF3); + assertTrue(cf1MemstoreSize > 0); + assertTrue(cf2MemstoreSize > 0); + assertTrue(cf3MemstoreSize > 0); + + // The total memstore size should be the same as the sum of the sizes of + // memstores of CF1, CF2 and CF3. + assertEquals(totalMemstoreSize + 3 * DefaultMemStore.DEEP_OVERHEAD, cf1MemstoreSize + + cf2MemstoreSize + cf3MemstoreSize); + + // Flush! + region.flushcache(false); + + // Will use these to check if anything changed. 
+ long oldCF2MemstoreSize = cf2MemstoreSize; + long oldCF3MemstoreSize = cf3MemstoreSize; + + // Recalculate everything + cf1MemstoreSize = region.getStore(FAMILY1).getMemStoreSize(); + cf2MemstoreSize = region.getStore(FAMILY2).getMemStoreSize(); + cf3MemstoreSize = region.getStore(FAMILY3).getMemStoreSize(); + totalMemstoreSize = region.getMemstoreSize().get(); + smallestSeqInRegionCurrentMemstore = + region.getWAL().getEarliestMemstoreSeqNum(region.getRegionInfo().getEncodedNameAsBytes()); + + // We should have cleared out only CF1, since we chose the flush thresholds + // and number of puts accordingly. + assertEquals(DefaultMemStore.DEEP_OVERHEAD, cf1MemstoreSize); + // Nothing should have happened to CF2, ... + assertEquals(cf2MemstoreSize, oldCF2MemstoreSize); + // ... or CF3 + assertEquals(cf3MemstoreSize, oldCF3MemstoreSize); + // Now the smallest LSN in the region should be the same as the smallest + // LSN in the memstore of CF2. + assertEquals(smallestSeqInRegionCurrentMemstore, smallestSeqCF2); + // Of course, this should hold too. + assertEquals(totalMemstoreSize + 2 * DefaultMemStore.DEEP_OVERHEAD, cf2MemstoreSize + + cf3MemstoreSize); + + // Now add more puts (mostly for CF2), so that we only flush CF2 this time. + for (int i = 1200; i < 2400; i++) { + region.put(createPut(2, i)); + + // Add only 100 puts for CF3 + if (i - 1200 < 100) { + region.put(createPut(3, i)); + } + } + + // How much does the CF3 memstore occupy? Will be used later. + oldCF3MemstoreSize = region.getStore(FAMILY3).getMemStoreSize(); + + // Flush again + region.flushcache(false); + + // Recalculate everything + cf1MemstoreSize = region.getStore(FAMILY1).getMemStoreSize(); + cf2MemstoreSize = region.getStore(FAMILY2).getMemStoreSize(); + cf3MemstoreSize = region.getStore(FAMILY3).getMemStoreSize(); + totalMemstoreSize = region.getMemstoreSize().get(); + smallestSeqInRegionCurrentMemstore = + region.getWAL().getEarliestMemstoreSeqNum(region.getRegionInfo().getEncodedNameAsBytes()); + + // CF1 and CF2, both should be absent. + assertEquals(DefaultMemStore.DEEP_OVERHEAD, cf1MemstoreSize); + assertEquals(DefaultMemStore.DEEP_OVERHEAD, cf2MemstoreSize); + // CF3 shouldn't have been touched. + assertEquals(cf3MemstoreSize, oldCF3MemstoreSize); + assertEquals(totalMemstoreSize + DefaultMemStore.DEEP_OVERHEAD, cf3MemstoreSize); + assertEquals(smallestSeqInRegionCurrentMemstore, smallestSeqCF3); + + // What happens when we hit the memstore limit, but we are not able to find + // any Column Family above the threshold? + // In that case, we should flush all the CFs. + + // Clearing the existing memstores. + region.flushcache(true); + + // The memstore limit is 200*1024 and the column family flush threshold is + // around 50*1024. We try to just hit the memstore limit with each CF's + // memstore being below the CF flush threshold. + for (int i = 1; i <= 300; i++) { + region.put(createPut(1, i)); + region.put(createPut(2, i)); + region.put(createPut(3, i)); + region.put(createPut(4, i)); + region.put(createPut(5, i)); + } + + region.flushcache(false); + // Since we won't find any CF above the threshold, and hence no specific + // store to flush, we should flush all the memstores. 
+    assertEquals(0, region.getMemstoreSize().get());
+  }
+
+  @Test (timeout=180000)
+  public void testSelectiveFlushWhenNotEnabled() throws IOException {
+    // Set up the configuration
+    Configuration conf = HBaseConfiguration.create();
+    conf.setLong(HConstants.HREGION_MEMSTORE_FLUSH_SIZE, 200 * 1024);
+    conf.set(FlushPolicyFactory.HBASE_FLUSH_POLICY_KEY, FlushAllStoresPolicy.class.getName());
+
+    // Initialize the HRegion
+    initHRegion("testSelectiveFlushWhenNotEnabled", conf);
+    // Add 1200 entries for CF1, 100 for CF2 and 50 for CF3
+    for (int i = 1; i <= 1200; i++) {
+      region.put(createPut(1, i));
+      if (i <= 100) {
+        region.put(createPut(2, i));
+        if (i <= 50) {
+          region.put(createPut(3, i));
+        }
+      }
+    }
+
+    long totalMemstoreSize = region.getMemstoreSize().get();
+
+    // Find the sizes of the memstores of each CF.
+    long cf1MemstoreSize = region.getStore(FAMILY1).getMemStoreSize();
+    long cf2MemstoreSize = region.getStore(FAMILY2).getMemStoreSize();
+    long cf3MemstoreSize = region.getStore(FAMILY3).getMemStoreSize();
+
+    // Some other sanity checks.
+    assertTrue(cf1MemstoreSize > 0);
+    assertTrue(cf2MemstoreSize > 0);
+    assertTrue(cf3MemstoreSize > 0);
+
+    // The total memstore size should be the same as the sum of the sizes of
+    // memstores of CF1, CF2 and CF3.
+    assertEquals(totalMemstoreSize + 3 * DefaultMemStore.DEEP_OVERHEAD, cf1MemstoreSize
+        + cf2MemstoreSize + cf3MemstoreSize);
+
+    // Flush!
+    region.flushcache(false);
+
+    cf1MemstoreSize = region.getStore(FAMILY1).getMemStoreSize();
+    cf2MemstoreSize = region.getStore(FAMILY2).getMemStoreSize();
+    cf3MemstoreSize = region.getStore(FAMILY3).getMemStoreSize();
+    totalMemstoreSize = region.getMemstoreSize().get();
+    long smallestSeqInRegionCurrentMemstore =
+        region.getWAL().getEarliestMemstoreSeqNum(region.getRegionInfo().getEncodedNameAsBytes());
+
+    // Everything should have been cleared.
+    assertEquals(DefaultMemStore.DEEP_OVERHEAD, cf1MemstoreSize);
+    assertEquals(DefaultMemStore.DEEP_OVERHEAD, cf2MemstoreSize);
+    assertEquals(DefaultMemStore.DEEP_OVERHEAD, cf3MemstoreSize);
+    assertEquals(0, totalMemstoreSize);
+    assertEquals(HConstants.NO_SEQNUM, smallestSeqInRegionCurrentMemstore);
+  }
+
+  // Find the (first) region which has the specified name.
+  private static Pair<HRegion, HRegionServer> getRegionWithName(TableName tableName) {
+    MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster();
+    List<JVMClusterUtil.RegionServerThread> rsts = cluster.getRegionServerThreads();
+    for (int i = 0; i < cluster.getRegionServerThreads().size(); i++) {
+      HRegionServer hrs = rsts.get(i).getRegionServer();
+      for (HRegion region : hrs.getOnlineRegions(tableName)) {
+        return Pair.newPair(region, hrs);
+      }
+    }
+    return null;
+  }
+
+  @Test (timeout=180000)
+  public void testLogReplay() throws Exception {
+    Configuration conf = TEST_UTIL.getConfiguration();
+    conf.setLong(HConstants.HREGION_MEMSTORE_FLUSH_SIZE, 20000);
+    // Carefully chosen limits so that the memstore just flushes when we're done.
+    conf.set(FlushPolicyFactory.HBASE_FLUSH_POLICY_KEY, FlushLargeStoresPolicy.class.getName());
+    conf.setLong(FlushLargeStoresPolicy.HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND, 10000);
+    final int numRegionServers = 4;
+    try {
+      TEST_UTIL.startMiniCluster(numRegionServers);
+      TEST_UTIL.getHBaseAdmin().createNamespace(
+          NamespaceDescriptor.create(TABLENAME.getNamespaceAsString()).build());
+      HTable table = TEST_UTIL.createTable(TABLENAME, families);
+      HTableDescriptor htd = table.getTableDescriptor();
+
+      for (byte[] family : families) {
+        if (!htd.hasFamily(family)) {
+          htd.addFamily(new HColumnDescriptor(family));
+        }
+      }
+
+      // Add 80 edits for CF1, 10 for CF2 and 10 for CF3.
+      // These will all be interleaved in the log.
+      for (int i = 1; i <= 80; i++) {
+        table.put(createPut(1, i));
+        if (i <= 10) {
+          table.put(createPut(2, i));
+          table.put(createPut(3, i));
+        }
+      }
+      table.flushCommits();
+      Thread.sleep(1000);
+
+      Pair<HRegion, HRegionServer> desiredRegionAndServer = getRegionWithName(TABLENAME);
+      HRegion desiredRegion = desiredRegionAndServer.getFirst();
+      assertTrue("Could not find a region which hosts the new region.", desiredRegion != null);
+
+      // Flush the region selectively.
+      desiredRegion.flushcache(false);
+
+      long totalMemstoreSize;
+      long cf1MemstoreSize, cf2MemstoreSize, cf3MemstoreSize;
+      totalMemstoreSize = desiredRegion.getMemstoreSize().get();
+
+      // Find the sizes of the memstores of each CF.
+      cf1MemstoreSize = desiredRegion.getStore(FAMILY1).getMemStoreSize();
+      cf2MemstoreSize = desiredRegion.getStore(FAMILY2).getMemStoreSize();
+      cf3MemstoreSize = desiredRegion.getStore(FAMILY3).getMemStoreSize();
+
+      // CF1 should have been flushed.
+      assertEquals(DefaultMemStore.DEEP_OVERHEAD, cf1MemstoreSize);
+      // CF2 and CF3 shouldn't have been flushed.
+      assertTrue(cf2MemstoreSize > 0);
+      assertTrue(cf3MemstoreSize > 0);
+      assertEquals(totalMemstoreSize + 2 * DefaultMemStore.DEEP_OVERHEAD, cf2MemstoreSize
+          + cf3MemstoreSize);
+
+      // Wait for the RS report to go across to the master, so that the master
+      // is aware of which sequence ids have been flushed, before we kill the RS.
+      // In production, if the RS dies before the report goes across, we will
+      // safely replay all the edits.
+      Thread.sleep(2000);
+
+      // Abort the region server where we have the region hosted.
+      HRegionServer rs = desiredRegionAndServer.getSecond();
+      rs.abort("testing");
+
+      // The aborted region server's regions will eventually be assigned to some
+      // other region server, and the get RPC call (inside verifyEdit()) will
+      // retry for some time till the regions come back up.
+
+      // Verify that all the edits are safe. (An illustrative memstore-accounting
+      // sketch follows; the verification loop continues right after it.)
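/*
 * Editor's illustrative aside, not part of the patch: the size assertions above rely on the
 * convention, visible throughout this test, that an empty store still reports
 * DefaultMemStore.DEEP_OVERHEAD, so the region-level total equals the sum of the per-store
 * memstore sizes minus one DEEP_OVERHEAD per store. A minimal helper expressing that identity,
 * using only accessors already present in this file (HRegion#getMemstoreSize,
 * Store#getMemStoreSize) and a hypothetical helper name, might look like this:
 */
static void assertMemstoreAccounting(HRegion r, byte[]... columnFamilies) {
  long sumOfStoreSizes = 0;
  for (byte[] family : columnFamilies) {
    // Each store includes DEEP_OVERHEAD in what it reports, even when it holds no edits.
    sumOfStoreSizes += r.getStore(family).getMemStoreSize();
  }
  // The region-wide counter excludes the fixed per-store overhead.
  assertEquals(r.getMemstoreSize().get() + columnFamilies.length * DefaultMemStore.DEEP_OVERHEAD,
      sumOfStoreSizes);
}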
+ for (int i = 1; i <= 80; i++) { + verifyEdit(1, i, table); + if (i <= 10) { + verifyEdit(2, i, table); + verifyEdit(3, i, table); + } + } + } finally { + TEST_UTIL.shutdownMiniCluster(); + } + } + + // Test Log Replay with Distributed Replay on. + // In distributed log replay, the log splitters ask the master for the + // last flushed sequence id for a region. This test would ensure that we + // are doing the book-keeping correctly. + @Test (timeout=180000) + public void testLogReplayWithDistributedReplay() throws Exception { + TEST_UTIL.getConfiguration().setBoolean(HConstants.DISTRIBUTED_LOG_REPLAY_KEY, true); + testLogReplay(); + } + + /** + * When a log roll is about to happen, we do a flush of the regions who will be affected by the + * log roll. These flushes cannot be a selective flushes, otherwise we cannot roll the logs. This + * test ensures that we do a full-flush in that scenario. + * @throws IOException + */ + @Test (timeout=180000) + public void testFlushingWhenLogRolling() throws Exception { + TableName tableName = TableName.valueOf("testFlushingWhenLogRolling"); + Configuration conf = TEST_UTIL.getConfiguration(); + conf.setLong(HConstants.HREGION_MEMSTORE_FLUSH_SIZE, 300000); + conf.set(FlushPolicyFactory.HBASE_FLUSH_POLICY_KEY, FlushLargeStoresPolicy.class.getName()); + conf.setLong(FlushLargeStoresPolicy.HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND, 100000); + + // Also, let us try real hard to get a log roll to happen. + // Keeping the log roll period to 2s. + conf.setLong("hbase.regionserver.logroll.period", 2000); + // Keep the block size small so that we fill up the log files very fast. + conf.setLong("hbase.regionserver.hlog.blocksize", 6144); + // Make it 10 as max logs before a flush comes on. + final int walcount = 10; + conf.setInt("hbase.regionserver.maxlogs", walcount); + int maxLogs = conf.getInt("hbase.regionserver.maxlogs", walcount); + + final int numRegionServers = 4; + try { + TEST_UTIL.startMiniCluster(numRegionServers); + HTable table = null; + table = TEST_UTIL.createTable(tableName, families); + // Force flush the namespace table so edits to it are not hanging around as oldest + // edits. Otherwise, below, when we make maximum number of WAL files, then it will be + // the namespace region that is flushed and not the below 'desiredRegion'. + try (Admin admin = TEST_UTIL.getConnection().getAdmin()) { + admin.flush(TableName.NAMESPACE_TABLE_NAME); + } + HRegion desiredRegion = getRegionWithName(tableName).getFirst(); + assertTrue("Could not find a region which hosts the new region.", desiredRegion != null); + LOG.info("Writing to region=" + desiredRegion); + + // Add some edits. Most will be for CF1, some for CF2 and CF3. + for (int i = 1; i <= 10000; i++) { + table.put(createPut(1, i)); + if (i <= 200) { + table.put(createPut(2, i)); + table.put(createPut(3, i)); + } + table.flushCommits(); + // Keep adding until we exceed the number of log files, so that we are + // able to trigger the cleaning of old log files. + int currentNumLogFiles = ((FSHLog) (desiredRegion.getWAL())).getNumLogFiles(); + if (currentNumLogFiles > maxLogs) { + LOG.info("The number of log files is now: " + currentNumLogFiles + + ". Expect a log roll and memstore flush."); + break; + } + } + table.close(); + // Wait for some time till the flush caused by log rolling happens. 
+      while (((FSHLog) (desiredRegion.getWAL())).getNumLogFiles() > maxLogs) Threads.sleep(100);
+      LOG.info("Finished waiting on flush after too many WALs...");
+
+      // We have artificially created the conditions for a log roll. When a
+      // log roll happens, we should flush all the column families. Testing that
+      // case here.
+
+      // Individual families should have been flushed.
+      assertEquals(DefaultMemStore.DEEP_OVERHEAD,
+          desiredRegion.getStore(FAMILY1).getMemStoreSize());
+      assertEquals(DefaultMemStore.DEEP_OVERHEAD,
+          desiredRegion.getStore(FAMILY2).getMemStoreSize());
+      assertEquals(DefaultMemStore.DEEP_OVERHEAD,
+          desiredRegion.getStore(FAMILY3).getMemStoreSize());
+
+      // And of course, the total memstore should also be clean.
+      assertEquals(0, desiredRegion.getMemstoreSize().get());
+    } finally {
+      TEST_UTIL.shutdownMiniCluster();
+    }
+  }
+
+  private void doPut(Table table, long memstoreFlushSize) throws IOException, InterruptedException {
+    HRegion region = getRegionWithName(table.getName()).getFirst();
+    // cf1: 100 bytes per row, cf2: 200 bytes per row and cf3: 400 bytes per row
+    byte[] qf = Bytes.toBytes("qf");
+    Random rand = new Random();
+    byte[] value1 = new byte[100];
+    byte[] value2 = new byte[200];
+    byte[] value3 = new byte[400];
+    for (int i = 0; i < 10000; i++) {
+      Put put = new Put(Bytes.toBytes("row-" + i));
+      rand.setSeed(i);
+      rand.nextBytes(value1);
+      rand.nextBytes(value2);
+      rand.nextBytes(value3);
+      put.add(FAMILY1, qf, value1);
+      put.add(FAMILY2, qf, value2);
+      put.add(FAMILY3, qf, value3);
+      table.put(put);
+      // Slow down to let the regionserver flush the region.
+      while (region.getMemstoreSize().get() > memstoreFlushSize) {
+        Thread.sleep(100);
+      }
+    }
+  }
+
+  // Under the same write load, small stores should have fewer store files when
+  // per-column-family flush is enabled. (An illustrative configuration sketch
+  // follows; the comparison test comes right after it.)
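/*
 * Editor's illustrative aside, not part of the patch: the comparison below comes down to the
 * flush-policy configuration. Only keys and classes that already appear in this patch are used
 * (FlushPolicyFactory.HBASE_FLUSH_POLICY_KEY, HConstants.HREGION_MEMSTORE_FLUSH_SIZE,
 * FlushLargeStoresPolicy.HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND, FlushAllStoresPolicy,
 * FlushLargeStoresPolicy); the helper name and the concrete values are illustrative choices
 * mirroring the test, not recommendations.
 */
static Configuration selectiveFlushConf(boolean perColumnFamilyFlush) {
  Configuration conf = HBaseConfiguration.create();
  // Region-wide flush trigger (1 MB here, as in testCompareStoreFileCount).
  conf.setLong(HConstants.HREGION_MEMSTORE_FLUSH_SIZE, 1024L * 1024);
  if (perColumnFamilyFlush) {
    // Flush only the stores whose memstore size exceeds the per-family lower bound.
    conf.set(FlushPolicyFactory.HBASE_FLUSH_POLICY_KEY, FlushLargeStoresPolicy.class.getName());
    conf.setLong(FlushLargeStoresPolicy.HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND, 400 * 1024);
  } else {
    // Flush every store whenever the region-wide limit is reached.
    conf.set(FlushPolicyFactory.HBASE_FLUSH_POLICY_KEY, FlushAllStoresPolicy.class.getName());
  }
  return conf;
}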
+ @Test (timeout=180000) + public void testCompareStoreFileCount() throws Exception { + long memstoreFlushSize = 1024L * 1024; + Configuration conf = TEST_UTIL.getConfiguration(); + conf.setLong(HConstants.HREGION_MEMSTORE_FLUSH_SIZE, memstoreFlushSize); + conf.set(FlushPolicyFactory.HBASE_FLUSH_POLICY_KEY, FlushAllStoresPolicy.class.getName()); + conf.setLong(FlushLargeStoresPolicy.HREGION_COLUMNFAMILY_FLUSH_SIZE_LOWER_BOUND, 400 * 1024); + conf.setInt(HStore.BLOCKING_STOREFILES_KEY, 10000); + conf.set(HConstants.HBASE_REGION_SPLIT_POLICY_KEY, + ConstantSizeRegionSplitPolicy.class.getName()); + + HTableDescriptor htd = new HTableDescriptor(TABLENAME); + htd.setCompactionEnabled(false); + htd.addFamily(new HColumnDescriptor(FAMILY1)); + htd.addFamily(new HColumnDescriptor(FAMILY2)); + htd.addFamily(new HColumnDescriptor(FAMILY3)); + + LOG.info("==============Test with selective flush disabled==============="); + int cf1StoreFileCount = -1; + int cf2StoreFileCount = -1; + int cf3StoreFileCount = -1; + int cf1StoreFileCount1 = -1; + int cf2StoreFileCount1 = -1; + int cf3StoreFileCount1 = -1; + try { + TEST_UTIL.startMiniCluster(1); + TEST_UTIL.getHBaseAdmin().createNamespace( + NamespaceDescriptor.create(TABLENAME.getNamespaceAsString()).build()); + TEST_UTIL.getHBaseAdmin().createTable(htd); + TEST_UTIL.waitTableAvailable(TABLENAME); + Connection conn = ConnectionFactory.createConnection(conf); + Table table = conn.getTable(TABLENAME); + doPut(table, memstoreFlushSize); + table.close(); + conn.close(); + + HRegion region = getRegionWithName(TABLENAME).getFirst(); + cf1StoreFileCount = region.getStore(FAMILY1).getStorefilesCount(); + cf2StoreFileCount = region.getStore(FAMILY2).getStorefilesCount(); + cf3StoreFileCount = region.getStore(FAMILY3).getStorefilesCount(); + } finally { + TEST_UTIL.shutdownMiniCluster(); + } + + LOG.info("==============Test with selective flush enabled==============="); + conf.set(FlushPolicyFactory.HBASE_FLUSH_POLICY_KEY, FlushLargeStoresPolicy.class.getName()); + try { + TEST_UTIL.startMiniCluster(1); + TEST_UTIL.getHBaseAdmin().createNamespace( + NamespaceDescriptor.create(TABLENAME.getNamespaceAsString()).build()); + TEST_UTIL.getHBaseAdmin().createTable(htd); + Connection conn = ConnectionFactory.createConnection(conf); + Table table = conn.getTable(TABLENAME); + doPut(table, memstoreFlushSize); + table.close(); + conn.close(); + + region = getRegionWithName(TABLENAME).getFirst(); + cf1StoreFileCount1 = region.getStore(FAMILY1).getStorefilesCount(); + cf2StoreFileCount1 = region.getStore(FAMILY2).getStorefilesCount(); + cf3StoreFileCount1 = region.getStore(FAMILY3).getStorefilesCount(); + } finally { + TEST_UTIL.shutdownMiniCluster(); + } + + LOG.info("disable selective flush: " + Bytes.toString(FAMILY1) + "=>" + cf1StoreFileCount + + ", " + Bytes.toString(FAMILY2) + "=>" + cf2StoreFileCount + ", " + + Bytes.toString(FAMILY3) + "=>" + cf3StoreFileCount); + LOG.info("enable selective flush: " + Bytes.toString(FAMILY1) + "=>" + cf1StoreFileCount1 + + ", " + Bytes.toString(FAMILY2) + "=>" + cf2StoreFileCount1 + ", " + + Bytes.toString(FAMILY3) + "=>" + cf3StoreFileCount1); + // small CF will have less store files. 
+ assertTrue(cf1StoreFileCount1 < cf1StoreFileCount); + assertTrue(cf2StoreFileCount1 < cf2StoreFileCount); + } + + public static void main(String[] args) throws Exception { + int numRegions = Integer.parseInt(args[0]); + long numRows = Long.parseLong(args[1]); + + HTableDescriptor htd = new HTableDescriptor(TABLENAME); + htd.setMaxFileSize(10L * 1024 * 1024 * 1024); + htd.setValue(HTableDescriptor.SPLIT_POLICY, ConstantSizeRegionSplitPolicy.class.getName()); + htd.addFamily(new HColumnDescriptor(FAMILY1)); + htd.addFamily(new HColumnDescriptor(FAMILY2)); + htd.addFamily(new HColumnDescriptor(FAMILY3)); + + Configuration conf = HBaseConfiguration.create(); + Connection conn = ConnectionFactory.createConnection(conf); + Admin admin = conn.getAdmin(); + if (admin.tableExists(TABLENAME)) { + admin.disableTable(TABLENAME); + admin.deleteTable(TABLENAME); + } + if (numRegions >= 3) { + byte[] startKey = new byte[16]; + byte[] endKey = new byte[16]; + Arrays.fill(endKey, (byte) 0xFF); + admin.createTable(htd, startKey, endKey, numRegions); + } else { + admin.createTable(htd); + } + admin.close(); + + Table table = conn.getTable(TABLENAME); + byte[] qf = Bytes.toBytes("qf"); + Random rand = new Random(); + byte[] value1 = new byte[16]; + byte[] value2 = new byte[256]; + byte[] value3 = new byte[4096]; + for (long i = 0; i < numRows; i++) { + Put put = new Put(Hashing.md5().hashLong(i).asBytes()); + rand.setSeed(i); + rand.nextBytes(value1); + rand.nextBytes(value2); + rand.nextBytes(value3); + put.add(FAMILY1, qf, value1); + put.add(FAMILY2, qf, value2); + put.add(FAMILY3, qf, value3); + table.put(put); + if (i % 10000 == 0) { + LOG.info(i + " rows put"); + } + } + table.close(); + conn.close(); + } +} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPriorityRpc.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPriorityRpc.java index d54d889..88aa4d1 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPriorityRpc.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPriorityRpc.java @@ -24,13 +24,14 @@ import static org.junit.Assert.assertTrue; import java.io.IOException; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.CoordinatedStateManagerFactory; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.ipc.PriorityFunction; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.Get; @@ -49,7 +50,7 @@ import com.google.protobuf.ByteString; /** * Tests that verify certain RPCs get a higher QoS. 
*/ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestPriorityRpc { private HRegionServer regionServer = null; private PriorityFunction priority = null; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestQosFunction.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestQosFunction.java index 0998e1a..fcc5019 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestQosFunction.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestQosFunction.java @@ -22,6 +22,7 @@ import static org.mockito.Mockito.when; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MultiRequest; import org.apache.hadoop.hbase.protobuf.generated.RPCProtos.RequestHeader; @@ -35,7 +36,7 @@ import com.google.protobuf.Message; * Basic test that qos function is sort of working; i.e. a change in method naming style * over in pb doesn't break it. */ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestQosFunction { @Test public void testPriority() { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java index 04246f1..6476288 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestQueryMatcher.java @@ -33,15 +33,18 @@ import org.apache.hadoop.hbase.KeepDeletedCells; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValue.KVComparator; import org.apache.hadoop.hbase.KeyValue.Type; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.MatchCode; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.junit.Before; +import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestQueryMatcher extends HBaseTestCase { private static final boolean PRINT = false; @@ -64,6 +67,7 @@ public class TestQueryMatcher extends HBaseTestCase { KVComparator rowComparator; private Scan scan; + @Before public void setUp() throws Exception { super.setUp(); row1 = Bytes.toBytes("row1"); @@ -125,6 +129,7 @@ public class TestQueryMatcher extends HBaseTestCase { } } + @Test public void testMatch_ExplicitColumns() throws IOException { //Moving up from the Tracker by using Gets and List instead @@ -142,6 +147,7 @@ public class TestQueryMatcher extends HBaseTestCase { _testMatch_ExplicitColumns(scan, expected); } + @Test public void testMatch_ExplicitColumnsWithLookAhead() throws IOException { //Moving up from the Tracker by using Gets and List instead @@ -162,6 +168,7 @@ public class TestQueryMatcher extends HBaseTestCase { } + @Test public void testMatch_Wildcard() throws IOException { //Moving up from the Tracker by using Gets and 
List instead @@ -217,6 +224,7 @@ public class TestQueryMatcher extends HBaseTestCase { * * @throws IOException */ + @Test public void testMatch_ExpiredExplicit() throws IOException { @@ -271,6 +279,7 @@ public class TestQueryMatcher extends HBaseTestCase { * * @throws IOException */ + @Test public void testMatch_ExpiredWildcard() throws IOException { @@ -316,6 +325,7 @@ public class TestQueryMatcher extends HBaseTestCase { } } + @Test public void testMatch_PartialRangeDropDeletes() throws Exception { // Some ranges. testDropDeletes( diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java index 4ad2c31..97e69b7 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSKilledWhenInitializing.java @@ -29,7 +29,6 @@ import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.LocalHBaseCluster; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.ServerName; @@ -37,6 +36,8 @@ import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.master.ServerManager; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameStringPair; import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionServerStartupResponse; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.JVMClusterUtil.MasterThread; import org.apache.hadoop.hbase.util.Threads; import org.junit.Test; @@ -45,7 +46,7 @@ import org.junit.experimental.categories.Category; /** * Tests region server termination during startup. 
*/ -@Category(LargeTests.class) +@Category({RegionServerTests.class, LargeTests.class}) public class TestRSKilledWhenInitializing { private static boolean masterActive = false; private static AtomicBoolean firstRS = new AtomicBoolean(true); @@ -56,7 +57,7 @@ public class TestRSKilledWhenInitializing { * @throws Exception */ @Test(timeout = 180000) - public void testRSTermnationAfterRegisteringToMasterBeforeCreatingEphemeralNod() throws Exception { + public void testRSTerminationAfterRegisteringToMasterBeforeCreatingEphemeralNod() throws Exception { final int NUM_MASTERS = 1; final int NUM_RS = 2; @@ -76,7 +77,7 @@ public class TestRSKilledWhenInitializing { master.start(); try { long startTime = System.currentTimeMillis(); - while (!master.getMaster().isActiveMaster()) { + while (!master.getMaster().isInitialized()) { try { Thread.sleep(100); } catch (InterruptedException ignored) { @@ -91,11 +92,11 @@ public class TestRSKilledWhenInitializing { Thread.sleep(10000); List onlineServersList = master.getMaster().getServerManager().getOnlineServersList(); - while (onlineServersList.size() > 1) { + while (onlineServersList.size() > 2) { Thread.sleep(100); onlineServersList = master.getMaster().getServerManager().getOnlineServersList(); } - assertEquals(onlineServersList.size(), 1); + assertEquals(onlineServersList.size(), 2); cluster.shutdown(); } finally { masterActive = false; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSStatusServlet.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSStatusServlet.java index 2f1fee8..22a3546 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSStatusServlet.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSStatusServlet.java @@ -26,12 +26,13 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.protobuf.ResponseConverter; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetOnlineRegionRequest; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetServerInfoRequest; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetServerInfoResponse; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmpl; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.MasterAddressTracker; @@ -48,7 +49,7 @@ import com.google.protobuf.ServiceException; /** * Tests for the region server status page and its template. 
*/ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestRSStatusServlet { private HRegionServer rs; private RSRpcServices rpcServices; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionFavoredNodes.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionFavoredNodes.java index 46a4062..c89c0df 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionFavoredNodes.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionFavoredNodes.java @@ -33,8 +33,9 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.permission.FsPermission; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hdfs.DistributedFileSystem; import org.apache.hadoop.hdfs.server.datanode.DataNode; @@ -48,7 +49,7 @@ import org.junit.experimental.categories.Category; /** * Tests the ability to specify favored nodes for a region. */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestRegionFavoredNodes { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransaction.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransaction.java index 8c67656..8b5b4a3 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransaction.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransaction.java @@ -40,12 +40,13 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.Server; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.wal.WALFactory; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.zookeeper.KeeperException; @@ -61,7 +62,7 @@ import com.google.common.collect.ImmutableList; * Test the {@link RegionMergeTransaction} class against two HRegions (as * opposed to running cluster). 
*/ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestRegionMergeTransaction { private final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private final Path testdir = TEST_UTIL.getDataTestDir(this.getClass() diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java index 9337786..f4b6f02 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionMergeTransactionOnCluster.java @@ -24,25 +24,29 @@ import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; import java.io.IOException; +import java.util.ArrayList; import java.util.List; +import java.util.concurrent.atomic.AtomicBoolean; import org.apache.commons.lang.math.RandomUtils; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.UnknownRegionException; -import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.RegionReplicaUtil; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; @@ -50,20 +54,30 @@ import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.exceptions.MergeRegionException; import org.apache.hadoop.hbase.master.AssignmentManager; import org.apache.hadoop.hbase.master.HMaster; +import org.apache.hadoop.hbase.master.MasterRpcServices; +import org.apache.hadoop.hbase.master.RegionState; import org.apache.hadoop.hbase.master.RegionState.State; import org.apache.hadoop.hbase.master.RegionStates; +import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCode; +import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.ReportRegionStateTransitionRequest; +import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.ReportRegionStateTransitionResponse; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.Pair; import org.apache.hadoop.hbase.util.PairOfSameType; import org.apache.hadoop.util.StringUtils; +import org.apache.zookeeper.KeeperException; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; import com.google.common.base.Joiner; +import 
com.google.protobuf.RpcController; +import com.google.protobuf.ServiceException; /** * Like {@link TestRegionMergeTransaction} in that we're testing @@ -71,7 +85,7 @@ import com.google.common.base.Joiner; * cluster where {@link TestRegionMergeTransaction} is tests against bare * {@link HRegion}. */ -@Category(LargeTests.class) +@Category({RegionServerTests.class, LargeTests.class}) public class TestRegionMergeTransactionOnCluster { private static final Log LOG = LogFactory .getLog(TestRegionMergeTransactionOnCluster.class); @@ -92,22 +106,16 @@ public class TestRegionMergeTransactionOnCluster { private static HMaster master; private static Admin admin; - static void setupOnce() throws Exception { + @BeforeClass + public static void beforeAllTests() throws Exception { // Start a cluster - TEST_UTIL.startMiniCluster(NB_SERVERS); + TEST_UTIL.startMiniCluster(1, NB_SERVERS, null, MyMaster.class, null); MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); master = cluster.getMaster(); master.balanceSwitch(false); admin = TEST_UTIL.getHBaseAdmin(); } - @BeforeClass - public static void beforeAllTests() throws Exception { - // Use ZK for region assignment - TEST_UTIL.getConfiguration().setBoolean("hbase.assignment.usezk", true); - setupOnce(); - } - @AfterClass public static void afterAllTests() throws Exception { TEST_UTIL.shutdownMiniCluster(); @@ -146,13 +154,13 @@ public class TestRegionMergeTransactionOnCluster { } // We should not be able to assign it again - am.assign(hri, true, true); + am.assign(hri, true); assertFalse("Merged region can't be assigned", regionStates.isRegionInTransition(hri)); assertTrue(regionStates.isRegionInState(hri, State.MERGED)); // We should not be able to unassign it either - am.unassign(hri, true, null); + am.unassign(hri, null); assertFalse("Merged region can't be unassigned", regionStates.isRegionInTransition(hri)); assertTrue(regionStates.isRegionInState(hri, State.MERGED)); @@ -160,6 +168,32 @@ public class TestRegionMergeTransactionOnCluster { table.close(); } + /** + * Not really restarting the master. Simulate it by clear of new region + * state since it is not persisted, will be lost after master restarts. + */ + @Test + public void testMergeAndRestartingMaster() throws Exception { + LOG.info("Starting testMergeAndRestartingMaster"); + final TableName tableName = TableName.valueOf("testMergeAndRestartingMaster"); + + // Create table and load data. 
+ Table table = createTableAndLoadData(master, tableName); + + try { + MyMasterRpcServices.enabled.set(true); + + // Merge 1st and 2nd region + mergeRegionsAndVerifyRegionNum(master, tableName, 0, 1, + INITIAL_REGION_NUM - 1); + } finally { + MyMasterRpcServices.enabled.set(false); + } + + table.close(); + } + + @SuppressWarnings("deprecation") @Test public void testCleanMergeReference() throws Exception { LOG.info("Starting testCleanMergeReference"); @@ -176,7 +210,7 @@ public class TestRegionMergeTransactionOnCluster { table.close(); List> tableRegions = MetaTableAccessor - .getTableRegionsAndLocations(master.getZooKeeper(), master.getConnection(), tableName); + .getTableRegionsAndLocations(master.getConnection(), tableName); HRegionInfo mergedRegionInfo = tableRegions.get(0).getFirst(); HTableDescriptor tableDescritor = master.getTableDescriptors().get( tableName); @@ -288,6 +322,45 @@ public class TestRegionMergeTransactionOnCluster { } } + @Test + public void testMergeWithReplicas() throws Exception { + final TableName tableName = TableName.valueOf("testMergeWithReplicas"); + // Create table and load data. + createTableAndLoadData(master, tableName, 5, 2); + List> initialRegionToServers = + MetaTableAccessor.getTableRegionsAndLocations( + master.getConnection(), tableName); + // Merge 1st and 2nd region + PairOfSameType mergedRegions = mergeRegionsAndVerifyRegionNum(master, tableName, + 0, 2, 5 * 2 - 2); + List> currentRegionToServers = + MetaTableAccessor.getTableRegionsAndLocations( + master.getConnection(), tableName); + List initialRegions = new ArrayList(); + for (Pair p : initialRegionToServers) { + initialRegions.add(p.getFirst()); + } + List currentRegions = new ArrayList(); + for (Pair p : currentRegionToServers) { + currentRegions.add(p.getFirst()); + } + assertTrue(initialRegions.contains(mergedRegions.getFirst())); //this is the first region + assertTrue(initialRegions.contains(RegionReplicaUtil.getRegionInfoForReplica( + mergedRegions.getFirst(), 1))); //this is the replica of the first region + assertTrue(initialRegions.contains(mergedRegions.getSecond())); //this is the second region + assertTrue(initialRegions.contains(RegionReplicaUtil.getRegionInfoForReplica( + mergedRegions.getSecond(), 1))); //this is the replica of the second region + assertTrue(!initialRegions.contains(currentRegions.get(0))); //this is the new region + assertTrue(!initialRegions.contains(RegionReplicaUtil.getRegionInfoForReplica( + currentRegions.get(0), 1))); //replica of the new region + assertTrue(currentRegions.contains(RegionReplicaUtil.getRegionInfoForReplica( + currentRegions.get(0), 1))); //replica of the new region + assertTrue(!currentRegions.contains(RegionReplicaUtil.getRegionInfoForReplica( + mergedRegions.getFirst(), 1))); //replica of the merged region + assertTrue(!currentRegions.contains(RegionReplicaUtil.getRegionInfoForReplica( + mergedRegions.getSecond(), 1))); //replica of the merged region + } + private PairOfSameType mergeRegionsAndVerifyRegionNum( HMaster master, TableName tablename, int regionAnum, int regionBnum, int expectedRegionNum) throws Exception { @@ -301,7 +374,7 @@ public class TestRegionMergeTransactionOnCluster { HMaster master, TableName tablename, int regionAnum, int regionBnum) throws Exception { List> tableRegions = MetaTableAccessor - .getTableRegionsAndLocations(master.getZooKeeper(), + .getTableRegionsAndLocations( master.getConnection(), tablename); HRegionInfo regionA = tableRegions.get(regionAnum).getFirst(); HRegionInfo regionB = 
tableRegions.get(regionBnum).getFirst(); @@ -317,7 +390,7 @@ public class TestRegionMergeTransactionOnCluster { List tableRegionsInMaster; long timeout = System.currentTimeMillis() + waitTime; while (System.currentTimeMillis() < timeout) { - tableRegionsInMeta = MetaTableAccessor.getTableRegionsAndLocations(master.getZooKeeper(), + tableRegionsInMeta = MetaTableAccessor.getTableRegionsAndLocations( master.getConnection(), tablename); tableRegionsInMaster = master.getAssignmentManager().getRegionStates() .getRegionsOfTable(tablename); @@ -328,7 +401,7 @@ public class TestRegionMergeTransactionOnCluster { Thread.sleep(250); } - tableRegionsInMeta = MetaTableAccessor.getTableRegionsAndLocations(master.getZooKeeper(), + tableRegionsInMeta = MetaTableAccessor.getTableRegionsAndLocations( master.getConnection(), tablename); LOG.info("Regions after merge:" + Joiner.on(',').join(tableRegionsInMeta)); assertEquals(expectedRegionNum, tableRegionsInMeta.size()); @@ -336,11 +409,11 @@ public class TestRegionMergeTransactionOnCluster { private Table createTableAndLoadData(HMaster master, TableName tablename) throws Exception { - return createTableAndLoadData(master, tablename, INITIAL_REGION_NUM); + return createTableAndLoadData(master, tablename, INITIAL_REGION_NUM, 1); } private Table createTableAndLoadData(HMaster master, TableName tablename, - int numRegions) throws Exception { + int numRegions, int replication) throws Exception { assertTrue("ROWSIZE must > numregions:" + numRegions, ROWSIZE > numRegions); byte[][] splitRows = new byte[numRegions - 1][]; for (int i = 0; i < splitRows.length; i++) { @@ -348,6 +421,9 @@ public class TestRegionMergeTransactionOnCluster { } Table table = TEST_UTIL.createTable(tablename, FAMILYNAME, splitRows); + if (replication > 1) { + HBaseTestingUtility.setReplicas(admin, tablename, replication); + } loadData(table); verifyRowCount(table, ROWSIZE); @@ -355,18 +431,17 @@ public class TestRegionMergeTransactionOnCluster { long timeout = System.currentTimeMillis() + waitTime; List> tableRegions; while (System.currentTimeMillis() < timeout) { - tableRegions = MetaTableAccessor.getTableRegionsAndLocations(master.getZooKeeper(), + tableRegions = MetaTableAccessor.getTableRegionsAndLocations( master.getConnection(), tablename); - if (tableRegions.size() == numRegions) + if (tableRegions.size() == numRegions * replication) break; Thread.sleep(250); } tableRegions = MetaTableAccessor.getTableRegionsAndLocations( - master.getZooKeeper(), master.getConnection(), tablename); LOG.info("Regions after load: " + Joiner.on(',').join(tableRegions)); - assertEquals(numRegions, tableRegions.size()); + assertEquals(numRegions * replication, tableRegions.size()); return table; } @@ -396,4 +471,45 @@ public class TestRegionMergeTransactionOnCluster { assertEquals(expectedRegionNum, rowCount); scanner.close(); } + + // Make it public so that JVMClusterUtil can access it. 
+ public static class MyMaster extends HMaster { + public MyMaster(Configuration conf, CoordinatedStateManager cp) + throws IOException, KeeperException, + InterruptedException { + super(conf, cp); + } + + @Override + protected RSRpcServices createRpcServices() throws IOException { + return new MyMasterRpcServices(this); + } + } + + static class MyMasterRpcServices extends MasterRpcServices { + static AtomicBoolean enabled = new AtomicBoolean(false); + + private HMaster myMaster; + public MyMasterRpcServices(HMaster master) throws IOException { + super(master); + myMaster = master; + } + + @Override + public ReportRegionStateTransitionResponse reportRegionStateTransition(RpcController c, + ReportRegionStateTransitionRequest req) throws ServiceException { + ReportRegionStateTransitionResponse resp = super.reportRegionStateTransition(c, req); + if (enabled.get() && req.getTransition(0).getTransitionCode() + == TransitionCode.READY_TO_MERGE && !resp.hasErrorMessage()) { + RegionStates regionStates = myMaster.getAssignmentManager().getRegionStates(); + for (RegionState regionState: regionStates.getRegionsInTransition().values()) { + // Find the merging_new region and remove it + if (regionState.isMergingNew()) { + regionStates.deleteRegion(regionState.getRegion()); + } + } + } + return resp; + } + } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionReplicas.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionReplicas.java index afeec4d..cbe79fe 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionReplicas.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionReplicas.java @@ -18,6 +18,7 @@ package org.apache.hadoop.hbase.regionserver; +import static org.apache.hadoop.hbase.regionserver.TestRegionServerNoMaster.*; import java.io.IOException; import java.util.Random; import java.util.concurrent.ExecutorService; @@ -33,7 +34,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.NotServingRegionException; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TestMetaTableAccessor; import org.apache.hadoop.hbase.client.Consistency; @@ -45,14 +46,11 @@ import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.io.hfile.HFileScanner; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.RequestConverter; -import org.apache.hadoop.hbase.protobuf.generated.AdminProtos; import org.apache.hadoop.hbase.protobuf.generated.ClientProtos; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Threads; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; import org.apache.hadoop.hdfs.DFSConfigKeys; import org.apache.hadoop.util.StringUtils; -import org.junit.After; import org.junit.AfterClass; import org.junit.Assert; import org.junit.BeforeClass; @@ -65,7 +63,7 @@ import com.google.protobuf.ServiceException; * Tests for region replicas. Sad that we cannot isolate these without bringing up a whole * cluster. See {@link TestRegionServerNoMaster}. 
*/ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestRegionReplicas { private static final Log LOG = LogFactory.getLog(TestRegionReplicas.class); @@ -105,77 +103,18 @@ public class TestRegionReplicas { @AfterClass public static void afterClass() throws Exception { + HRegionServer.TEST_SKIP_REPORTING_TRANSITION = false; table.close(); HTU.shutdownMiniCluster(); } - @After - public void after() throws Exception { - // Clean the state if the test failed before cleaning the znode - // It does not manage all bad failures, so if there are multiple failures, only - // the first one should be looked at. - ZKAssign.deleteNodeFailSilent(HTU.getZooKeeperWatcher(), hriPrimary); - } - private HRegionServer getRS() { return HTU.getMiniHBaseCluster().getRegionServer(0); } - private void openRegion(HRegionInfo hri) throws Exception { - ZKAssign.createNodeOffline(HTU.getZooKeeperWatcher(), hri, getRS().getServerName()); - // first version is '0' - AdminProtos.OpenRegionRequest orr = RequestConverter.buildOpenRegionRequest(getRS().getServerName(), hri, 0, null, null); - AdminProtos.OpenRegionResponse responseOpen = getRS().getRSRpcServices().openRegion(null, orr); - Assert.assertTrue(responseOpen.getOpeningStateCount() == 1); - Assert.assertTrue(responseOpen.getOpeningState(0). - equals(AdminProtos.OpenRegionResponse.RegionOpeningState.OPENED)); - checkRegionIsOpened(hri.getEncodedName()); - } - - private void closeRegion(HRegionInfo hri) throws Exception { - ZKAssign.createNodeClosing(HTU.getZooKeeperWatcher(), hri, getRS().getServerName()); - - AdminProtos.CloseRegionRequest crr = RequestConverter.buildCloseRegionRequest(getRS().getServerName(), - hri.getEncodedName(), true); - AdminProtos.CloseRegionResponse responseClose = getRS().getRSRpcServices().closeRegion(null, crr); - Assert.assertTrue(responseClose.getClosed()); - - checkRegionIsClosed(hri.getEncodedName()); - - ZKAssign.deleteClosedNode(HTU.getZooKeeperWatcher(), hri.getEncodedName(), getRS().getServerName()); - } - - private void checkRegionIsOpened(String encodedRegionName) throws Exception { - - while (!getRS().getRegionsInTransitionInRS().isEmpty()) { - Thread.sleep(1); - } - - Assert.assertTrue(getRS().getRegionByEncodedName(encodedRegionName).isAvailable()); - - Assert.assertTrue( - ZKAssign.deleteOpenedNode(HTU.getZooKeeperWatcher(), encodedRegionName, getRS().getServerName())); - } - - - private void checkRegionIsClosed(String encodedRegionName) throws Exception { - - while (!getRS().getRegionsInTransitionInRS().isEmpty()) { - Thread.sleep(1); - } - - try { - Assert.assertFalse(getRS().getRegionByEncodedName(encodedRegionName).isAvailable()); - } catch (NotServingRegionException expected) { - // That's how it work: if the region is closed we have an exception. - } - - // We don't delete the znode here, because there is not always a znode. 
- } - @Test(timeout = 60000) public void testOpenRegionReplica() throws Exception { - openRegion(hriSecondary); + openRegion(HTU, getRS(), hriSecondary); try { //load some data to primary HTU.loadNumericRows(table, f, 0, 1000); @@ -184,14 +123,14 @@ public class TestRegionReplicas { Assert.assertEquals(1000, HTU.countRows(table)); } finally { HTU.deleteNumericRows(table, f, 0, 1000); - closeRegion(hriSecondary); + closeRegion(HTU, getRS(), hriSecondary); } } /** Tests that the meta location is saved for secondary regions */ @Test(timeout = 60000) public void testRegionReplicaUpdatesMetaLocation() throws Exception { - openRegion(hriSecondary); + openRegion(HTU, getRS(), hriSecondary); Table meta = null; try { meta = new HTable(HTU.getConfiguration(), TableName.META_TABLE_NAME); @@ -199,7 +138,7 @@ public class TestRegionReplicas { , getRS().getServerName(), -1, 1, false); } finally { if (meta != null ) meta.close(); - closeRegion(hriSecondary); + closeRegion(HTU, getRS(), hriSecondary); } } @@ -213,7 +152,7 @@ public class TestRegionReplicas { // flush so that region replica can read getRS().getRegionByEncodedName(hriPrimary.getEncodedName()).flushcache(); - openRegion(hriSecondary); + openRegion(HTU, getRS(), hriSecondary); // first try directly against region HRegion region = getRS().getFromOnlineRegions(hriSecondary.getEncodedName()); @@ -222,7 +161,7 @@ public class TestRegionReplicas { assertGetRpc(hriSecondary, 42, true); } finally { HTU.deleteNumericRows(table, HConstants.CATALOG_FAMILY, 0, 1000); - closeRegion(hriSecondary); + closeRegion(HTU, getRS(), hriSecondary); } } @@ -236,7 +175,7 @@ public class TestRegionReplicas { // flush so that region replica can read getRS().getRegionByEncodedName(hriPrimary.getEncodedName()).flushcache(); - openRegion(hriSecondary); + openRegion(HTU, getRS(), hriSecondary); // try directly Get against region replica byte[] row = Bytes.toBytes(String.valueOf(42)); @@ -247,7 +186,7 @@ public class TestRegionReplicas { Assert.assertArrayEquals(row, result.getValue(f, null)); } finally { HTU.deleteNumericRows(table, HConstants.CATALOG_FAMILY, 0, 1000); - closeRegion(hriSecondary); + closeRegion(HTU, getRS(), hriSecondary); } } @@ -263,7 +202,8 @@ public class TestRegionReplicas { } // build a mock rpc - private void assertGetRpc(HRegionInfo info, int value, boolean expect) throws IOException, ServiceException { + private void assertGetRpc(HRegionInfo info, int value, boolean expect) + throws IOException, ServiceException { byte[] row = Bytes.toBytes(String.valueOf(value)); Get get = new Get(row); ClientProtos.GetRequest getReq = RequestConverter.buildGetRequest(info.getRegionName(), get); @@ -286,13 +226,14 @@ public class TestRegionReplicas { // enable store file refreshing final int refreshPeriod = 2000; // 2 sec HTU.getConfiguration().setInt("hbase.hstore.compactionThreshold", 100); - HTU.getConfiguration().setInt(StorefileRefresherChore.REGIONSERVER_STOREFILE_REFRESH_PERIOD, refreshPeriod); + HTU.getConfiguration().setInt(StorefileRefresherChore.REGIONSERVER_STOREFILE_REFRESH_PERIOD, + refreshPeriod); // restart the region server so that it starts the refresher chore restartRegionServer(); try { LOG.info("Opening the secondary region " + hriSecondary.getEncodedName()); - openRegion(hriSecondary); + openRegion(HTU, getRS(), hriSecondary); //load some data to primary LOG.info("Loading data to primary region"); @@ -348,7 +289,7 @@ public class TestRegionReplicas { } finally { HTU.deleteNumericRows(table, HConstants.CATALOG_FAMILY, 0, 1000); - 
closeRegion(hriSecondary); + closeRegion(HTU, getRS(), hriSecondary); } } @@ -365,7 +306,7 @@ public class TestRegionReplicas { final int startKey = 0, endKey = 1000; try { - openRegion(hriSecondary); + openRegion(HTU, getRS(), hriSecondary); //load some data to primary so that reader won't fail HTU.loadNumericRows(table, f, startKey, endKey); @@ -429,13 +370,13 @@ public class TestRegionReplicas { // whether to do a close and open if (random.nextInt(10) == 0) { try { - closeRegion(hriSecondary); + closeRegion(HTU, getRS(), hriSecondary); } catch (Exception ex) { LOG.warn("Failed closing the region " + hriSecondary + " " + StringUtils.stringifyException(ex)); exceptions[2].compareAndSet(null, ex); } try { - openRegion(hriSecondary); + openRegion(HTU, getRS(), hriSecondary); } catch (Exception ex) { LOG.warn("Failed opening the region " + hriSecondary + " " + StringUtils.stringifyException(ex)); exceptions[2].compareAndSet(null, ex); @@ -469,7 +410,7 @@ public class TestRegionReplicas { } } finally { HTU.deleteNumericRows(table, HConstants.CATALOG_FAMILY, startKey, endKey); - closeRegion(hriSecondary); + closeRegion(HTU, getRS(), hriSecondary); } } @@ -481,7 +422,7 @@ public class TestRegionReplicas { try { LOG.info("Opening the secondary region " + hriSecondary.getEncodedName()); - openRegion(hriSecondary); + openRegion(HTU, getRS(), hriSecondary); // load some data to primary LOG.info("Loading data to primary region"); @@ -528,7 +469,7 @@ public class TestRegionReplicas { Assert.assertEquals(4498500, sum); } finally { HTU.deleteNumericRows(table, HConstants.CATALOG_FAMILY, 0, 1000); - closeRegion(hriSecondary); + closeRegion(HTU, getRS(), hriSecondary); } } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java index a6e38f5..d3285a3 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerMetrics.java @@ -22,6 +22,7 @@ import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.client.*; import org.apache.hadoop.hbase.test.MetricsAssertHelper; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Threads; import org.apache.log4j.Level; @@ -37,7 +38,7 @@ import java.util.ArrayList; import java.util.List; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestRegionServerMetrics { private static MetricsAssertHelper metricsHelper; @@ -159,11 +160,6 @@ public class TestRegionServerMetrics { } table.get(gets); - // By default, master doesn't host meta now. 
- // Adding some meta related requests - requests += 3; - readRequests ++; - metricsRegionServer.getRegionServerWrapper().forceRecompute(); metricsHelper.assertCounter("totalRequestCount", requests + 50, serverSource); metricsHelper.assertCounter("readRequestCount", readRequests + 20, serverSource); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerNoMaster.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerNoMaster.java index 381feb7..65aed5b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerNoMaster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerNoMaster.java @@ -25,27 +25,23 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.NotServingRegionException; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; -import org.apache.hadoop.hbase.coordination.BaseCoordinatedStateManager; -import org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager; -import org.apache.hadoop.hbase.coordination.ZkOpenRegionCoordination; -import org.apache.hadoop.hbase.executor.EventType; +import org.apache.hadoop.hbase.master.HMaster; +import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.RequestConverter; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.CloseRegionRequest; import org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler; -import org.apache.hadoop.hbase.util.Threads; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.zookeeper.KeeperException; -import org.apache.zookeeper.KeeperException.NodeExistsException; -import org.junit.After; +import org.apache.hadoop.hbase.util.Threads; +import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; +import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.AfterClass; import org.junit.Assert; import org.junit.BeforeClass; @@ -55,11 +51,10 @@ import org.mortbay.log.Log; import com.google.protobuf.ServiceException; - /** * Tests on the region server, without the master. 
*/ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestRegionServerNoMaster { private static final int NB_SERVERS = 1; @@ -74,7 +69,6 @@ public class TestRegionServerNoMaster { @BeforeClass public static void before() throws Exception { - HTU.getConfiguration().setBoolean("hbase.assignment.usezk", true); HTU.startMiniCluster(NB_SERVERS); final TableName tableName = TableName.valueOf(TestRegionServerNoMaster.class.getSimpleName()); @@ -91,15 +85,41 @@ public class TestRegionServerNoMaster { } public static void stopMasterAndAssignMeta(HBaseTestingUtility HTU) - throws NodeExistsException, KeeperException, IOException, InterruptedException { - // No master - HTU.getHBaseCluster().getMaster().stopMaster(); + throws IOException, InterruptedException { + // Stop master + HMaster master = HTU.getHBaseCluster().getMaster(); + ServerName masterAddr = master.getServerName(); + master.stopMaster(); Log.info("Waiting until master thread exits"); while (HTU.getHBaseCluster().getMasterThread() != null && HTU.getHBaseCluster().getMasterThread().isAlive()) { Threads.sleep(100); } + + HRegionServer.TEST_SKIP_REPORTING_TRANSITION = true; + // Master is down, so is the meta. We need to assign it somewhere + // so that regions can be assigned during the mocking phase. + HRegionServer hrs = HTU.getHBaseCluster() + .getLiveRegionServerThreads().get(0).getRegionServer(); + ZooKeeperWatcher zkw = hrs.getZooKeeper(); + MetaTableLocator mtl = new MetaTableLocator(); + ServerName sn = mtl.getMetaRegionLocation(zkw); + if (sn != null && !masterAddr.equals(sn)) { + return; + } + + ProtobufUtil.openRegion(hrs.getRSRpcServices(), + hrs.getServerName(), HRegionInfo.FIRST_META_REGIONINFO); + while (true) { + sn = mtl.getMetaRegionLocation(zkw); + if (sn != null && sn.equals(hrs.getServerName()) + && hrs.onlineRegions.containsKey( + HRegionInfo.FIRST_META_REGIONINFO.getEncodedName())) { + break; + } + Thread.sleep(100); + } } /** Flush the given region in the mini cluster. Since no master, we cannot use HBaseAdmin.flush() */ @@ -116,215 +136,99 @@ public class TestRegionServerNoMaster { @AfterClass public static void afterClass() throws Exception { + HRegionServer.TEST_SKIP_REPORTING_TRANSITION = false; table.close(); HTU.shutdownMiniCluster(); } - @After - public void after() throws Exception { - // Clean the state if the test failed before cleaning the znode - // It does not manage all bad failures, so if there are multiple failures, only - // the first one should be looked at. - ZKAssign.deleteNodeFailSilent(HTU.getZooKeeperWatcher(), hri); - } - - private static HRegionServer getRS() { return HTU.getHBaseCluster().getLiveRegionServerThreads().get(0).getRegionServer(); } - /** - * Reopen the region. Reused in multiple tests as we always leave the region open after a test. - */ - private void reopenRegion() throws Exception { - // We reopen. We need a ZK node here, as a open is always triggered by a master. 
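// A condensed restatement of the stopMasterAndAssignMeta() flow introduced above, using only
// calls that appear in this hunk. Assumptions: HTU is the test's HBaseTestingUtility, the mini
// cluster has a single live region server, and masterAddr is the ServerName captured from the
// master before it was stopped, as in the method above. This is a reading sketch, not the
// committed helper itself.
HRegionServer.TEST_SKIP_REPORTING_TRANSITION = true;  // no master left to report transitions to

HRegionServer hrs = HTU.getHBaseCluster()
    .getLiveRegionServerThreads().get(0).getRegionServer();
ZooKeeperWatcher zkw = hrs.getZooKeeper();
MetaTableLocator mtl = new MetaTableLocator();

// Only assign meta if it is unassigned or still points at the dead master.
ServerName current = mtl.getMetaRegionLocation(zkw);
if (current == null || current.equals(masterAddr)) {
  // Ask the region server directly to open hbase:meta; normally the master would do this.
  ProtobufUtil.openRegion(hrs.getRSRpcServices(),
      hrs.getServerName(), HRegionInfo.FIRST_META_REGIONINFO);
  // Poll until the locator reports meta as hosted by this region server.
  while (!hrs.getServerName().equals(mtl.getMetaRegionLocation(zkw))) {
    Threads.sleep(100);
  }
}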
- ZKAssign.createNodeOffline(HTU.getZooKeeperWatcher(), hri, getRS().getServerName()); - // first version is '0' + public static void openRegion(HBaseTestingUtility HTU, HRegionServer rs, HRegionInfo hri) + throws Exception { AdminProtos.OpenRegionRequest orr = - RequestConverter.buildOpenRegionRequest(getRS().getServerName(), hri, 0, null, null); - AdminProtos.OpenRegionResponse responseOpen = getRS().rpcServices.openRegion(null, orr); + RequestConverter.buildOpenRegionRequest(rs.getServerName(), hri, null, null); + AdminProtos.OpenRegionResponse responseOpen = rs.rpcServices.openRegion(null, orr); + Assert.assertTrue(responseOpen.getOpeningStateCount() == 1); Assert.assertTrue(responseOpen.getOpeningState(0). equals(AdminProtos.OpenRegionResponse.RegionOpeningState.OPENED)); - checkRegionIsOpened(); + checkRegionIsOpened(HTU, rs, hri); } - private void checkRegionIsOpened() throws Exception { - - while (!getRS().getRegionsInTransitionInRS().isEmpty()) { + public static void checkRegionIsOpened(HBaseTestingUtility HTU, HRegionServer rs, + HRegionInfo hri) throws Exception { + while (!rs.getRegionsInTransitionInRS().isEmpty()) { Thread.sleep(1); } - Assert.assertTrue(getRS().getRegion(regionName).isAvailable()); - - Assert.assertTrue( - ZKAssign.deleteOpenedNode(HTU.getZooKeeperWatcher(), hri.getEncodedName(), - getRS().getServerName())); + Assert.assertTrue(rs.getRegion(hri.getRegionName()).isAvailable()); } + public static void closeRegion(HBaseTestingUtility HTU, HRegionServer rs, HRegionInfo hri) + throws Exception { + AdminProtos.CloseRegionRequest crr = RequestConverter.buildCloseRegionRequest( + rs.getServerName(), hri.getEncodedName()); + AdminProtos.CloseRegionResponse responseClose = rs.rpcServices.closeRegion(null, crr); + Assert.assertTrue(responseClose.getClosed()); + checkRegionIsClosed(HTU, rs, hri); + } - private void checkRegionIsClosed() throws Exception { - - while (!getRS().getRegionsInTransitionInRS().isEmpty()) { + public static void checkRegionIsClosed(HBaseTestingUtility HTU, HRegionServer rs, + HRegionInfo hri) throws Exception { + while (!rs.getRegionsInTransitionInRS().isEmpty()) { Thread.sleep(1); } try { - Assert.assertFalse(getRS().getRegion(regionName).isAvailable()); + Assert.assertFalse(rs.getRegion(hri.getRegionName()).isAvailable()); } catch (NotServingRegionException expected) { // That's how it work: if the region is closed we have an exception. } - - // We don't delete the znode here, because there is not always a znode. } - /** * Close the region without using ZK */ - private void closeNoZK() throws Exception { + private void closeRegionNoZK() throws Exception { // no transition in ZK AdminProtos.CloseRegionRequest crr = - RequestConverter.buildCloseRegionRequest(getRS().getServerName(), regionName, false); + RequestConverter.buildCloseRegionRequest(getRS().getServerName(), regionName); AdminProtos.CloseRegionResponse responseClose = getRS().rpcServices.closeRegion(null, crr); Assert.assertTrue(responseClose.getClosed()); // now waiting & checking. After a while, the transition should be done and the region closed - checkRegionIsClosed(); + checkRegionIsClosed(HTU, getRS(), hri); } @Test(timeout = 60000) public void testCloseByRegionServer() throws Exception { - closeNoZK(); - reopenRegion(); - } - - @Test(timeout = 60000) - public void testCloseByMasterWithoutZNode() throws Exception { - - // Transition in ZK on. 
This should fail, as there is no znode - AdminProtos.CloseRegionRequest crr = RequestConverter.buildCloseRegionRequest( - getRS().getServerName(), regionName, true); - AdminProtos.CloseRegionResponse responseClose = getRS().rpcServices.closeRegion(null, crr); - Assert.assertTrue(responseClose.getClosed()); - - // now waiting. After a while, the transition should be done - while (!getRS().getRegionsInTransitionInRS().isEmpty()) { - Thread.sleep(1); - } - - // the region is still available, the close got rejected at the end - Assert.assertTrue("The close should have failed", getRS().getRegion(regionName).isAvailable()); - } - - @Test(timeout = 60000) - public void testOpenCloseByMasterWithZNode() throws Exception { - - ZKAssign.createNodeClosing(HTU.getZooKeeperWatcher(), hri, getRS().getServerName()); - - AdminProtos.CloseRegionRequest crr = RequestConverter.buildCloseRegionRequest( - getRS().getServerName(), regionName, true); - AdminProtos.CloseRegionResponse responseClose = getRS().rpcServices.closeRegion(null, crr); - Assert.assertTrue(responseClose.getClosed()); - - checkRegionIsClosed(); - - ZKAssign.deleteClosedNode(HTU.getZooKeeperWatcher(), hri.getEncodedName(), - getRS().getServerName()); - - reopenRegion(); - } - - /** - * Test that we can send multiple openRegion to the region server. - * This is used when: - * - there is a SocketTimeout: in this case, the master does not know if the region server - * received the request before the timeout. - * - We have a socket error during the operation: same stuff: we don't know - * - a master failover: if we find a znode in thz M_ZK_REGION_OFFLINE, we don't know if - * the region server has received the query or not. Only solution to be efficient: re-ask - * immediately. - */ - @Test(timeout = 60000) - public void testMultipleOpen() throws Exception { - - // We close - closeNoZK(); - checkRegionIsClosed(); - - // We reopen. We need a ZK node here, as a open is always triggered by a master. - ZKAssign.createNodeOffline(HTU.getZooKeeperWatcher(), hri, getRS().getServerName()); - - // We're sending multiple requests in a row. The region server must handle this nicely. 
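// With the static helpers introduced above, the open/close round trip never touches ZooKeeper:
// requests go straight to the region server's admin RPC and the test only waits for the
// server's own regions-in-transition map to drain. A condensed sketch under the same
// assumptions as the helpers (rs and hri are the test's region server and region info, and the
// enclosing method is declared to throw Exception):
AdminProtos.OpenRegionRequest orr =
    RequestConverter.buildOpenRegionRequest(rs.getServerName(), hri, null, null);
AdminProtos.OpenRegionResponse open = rs.getRSRpcServices().openRegion(null, orr);
Assert.assertEquals(AdminProtos.OpenRegionResponse.RegionOpeningState.OPENED,
    open.getOpeningState(0));

// Wait for the region server to finish the transition, then check availability.
while (!rs.getRegionsInTransitionInRS().isEmpty()) {
  Thread.sleep(1);
}
Assert.assertTrue(rs.getRegion(hri.getRegionName()).isAvailable());

// Closing works the same way; the request now carries only the encoded region name.
AdminProtos.CloseRegionRequest crr =
    RequestConverter.buildCloseRegionRequest(rs.getServerName(), hri.getEncodedName());
Assert.assertTrue(rs.getRSRpcServices().closeRegion(null, crr).getClosed());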
- for (int i = 0; i < 10; i++) { - AdminProtos.OpenRegionRequest orr = RequestConverter.buildOpenRegionRequest( - getRS().getServerName(), hri, 0, null, null); - AdminProtos.OpenRegionResponse responseOpen = getRS().rpcServices.openRegion(null, orr); - Assert.assertTrue(responseOpen.getOpeningStateCount() == 1); - - AdminProtos.OpenRegionResponse.RegionOpeningState ors = responseOpen.getOpeningState(0); - Assert.assertTrue("request " + i + " failed", - ors.equals(AdminProtos.OpenRegionResponse.RegionOpeningState.OPENED) || - ors.equals(AdminProtos.OpenRegionResponse.RegionOpeningState.ALREADY_OPENED) - ); - } - - checkRegionIsOpened(); - } - - @Test - public void testOpenClosingRegion() throws Exception { - Assert.assertTrue(getRS().getRegion(regionName).isAvailable()); - - try { - // we re-opened meta so some of its data is lost - ServerName sn = getRS().getServerName(); - MetaTableAccessor.updateRegionLocation(getRS().getConnection(), - hri, sn, getRS().getRegion(regionName).getOpenSeqNum()); - // fake region to be closing now, need to clear state afterwards - getRS().regionsInTransitionInRS.put(hri.getEncodedNameAsBytes(), Boolean.FALSE); - AdminProtos.OpenRegionRequest orr = - RequestConverter.buildOpenRegionRequest(sn, hri, 0, null, null); - getRS().rpcServices.openRegion(null, orr); - Assert.fail("The closing region should not be opened"); - } catch (ServiceException se) { - Assert.assertTrue("The region should be already in transition", - se.getCause() instanceof RegionAlreadyInTransitionException); - } finally { - getRS().regionsInTransitionInRS.remove(hri.getEncodedNameAsBytes()); - } + closeRegionNoZK(); + openRegion(HTU, getRS(), hri); } @Test(timeout = 60000) public void testMultipleCloseFromMaster() throws Exception { - - // As opening, we must support multiple requests on the same region - ZKAssign.createNodeClosing(HTU.getZooKeeperWatcher(), hri, getRS().getServerName()); for (int i = 0; i < 10; i++) { AdminProtos.CloseRegionRequest crr = - RequestConverter.buildCloseRegionRequest(getRS().getServerName(), regionName, 0, null, true); + RequestConverter.buildCloseRegionRequest(getRS().getServerName(), regionName, null); try { AdminProtos.CloseRegionResponse responseClose = getRS().rpcServices.closeRegion(null, crr); - Assert.assertEquals("The first request should succeeds", 0, i); Assert.assertTrue("request " + i + " failed", responseClose.getClosed() || responseClose.hasClosed()); } catch (ServiceException se) { - Assert.assertTrue("The next queries should throw an exception.", i > 0); + Assert.assertTrue("The next queries may throw an exception.", i > 0); } } - checkRegionIsClosed(); - - Assert.assertTrue( - ZKAssign.deleteClosedNode(HTU.getZooKeeperWatcher(), hri.getEncodedName(), - getRS().getServerName()) - ); + checkRegionIsClosed(HTU, getRS(), hri); - reopenRegion(); + openRegion(HTU, getRS(), hri); } /** @@ -333,16 +237,15 @@ public class TestRegionServerNoMaster { @Test(timeout = 60000) public void testCancelOpeningWithoutZK() throws Exception { // We close - closeNoZK(); - checkRegionIsClosed(); + closeRegionNoZK(); + checkRegionIsClosed(HTU, getRS(), hri); // Let do the initial steps, without having a handler - ZKAssign.createNodeOffline(HTU.getZooKeeperWatcher(), hri, getRS().getServerName()); getRS().getRegionsInTransitionInRS().put(hri.getEncodedNameAsBytes(), Boolean.TRUE); // That's a close without ZK. 
AdminProtos.CloseRegionRequest crr = - RequestConverter.buildCloseRegionRequest(getRS().getServerName(), regionName, false); + RequestConverter.buildCloseRegionRequest(getRS().getServerName(), regionName); try { getRS().rpcServices.closeRegion(null, crr); Assert.assertTrue(false); @@ -356,90 +259,12 @@ public class TestRegionServerNoMaster { // Let's start the open handler HTableDescriptor htd = getRS().tableDescriptors.get(hri.getTable()); - BaseCoordinatedStateManager csm = new ZkCoordinatedStateManager(); - csm.initialize(getRS()); - csm.start(); - - ZkOpenRegionCoordination.ZkOpenRegionDetails zkCrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkCrd.setServerName(getRS().getServerName()); - zkCrd.setVersionOfOfflineNode(0); - - getRS().service.submit(new OpenRegionHandler(getRS(), getRS(), hri, htd, - csm.getOpenRegionCoordination(), zkCrd)); + getRS().service.submit(new OpenRegionHandler(getRS(), getRS(), hri, htd)); // The open handler should have removed the region from RIT but kept the region closed - checkRegionIsClosed(); - - // The open handler should have updated the value in ZK. - Assert.assertTrue(ZKAssign.deleteNode( - getRS().getZooKeeper(), hri.getEncodedName(), - EventType.RS_ZK_REGION_FAILED_OPEN, 1) - ); - - reopenRegion(); - } - - /** - * Test an open then a close with ZK. This is going to mess-up the ZK states, so - * the opening will fail as well because it doesn't find what it expects in ZK. - */ - @Test(timeout = 60000) - public void testCancelOpeningWithZK() throws Exception { - // We close - closeNoZK(); - checkRegionIsClosed(); - - // Let do the initial steps, without having a handler - getRS().getRegionsInTransitionInRS().put(hri.getEncodedNameAsBytes(), Boolean.TRUE); - - // That's a close without ZK. - ZKAssign.createNodeClosing(HTU.getZooKeeperWatcher(), hri, getRS().getServerName()); - AdminProtos.CloseRegionRequest crr = - RequestConverter.buildCloseRegionRequest(getRS().getServerName(), regionName, false); - try { - getRS().rpcServices.closeRegion(null, crr); - Assert.assertTrue(false); - } catch (ServiceException expected) { - Assert.assertTrue(expected.getCause() instanceof RegionAlreadyInTransitionException); - } - - // The close should have left the ZK state as it is: it's the job the AM to delete it - Assert.assertTrue(ZKAssign.deleteNode( - getRS().getZooKeeper(), hri.getEncodedName(), - EventType.M_ZK_REGION_CLOSING, 0) - ); - - // The state in RIT should have changed to close - Assert.assertEquals(Boolean.FALSE, getRS().getRegionsInTransitionInRS().get( - hri.getEncodedNameAsBytes())); - - // Let's start the open handler - // It should not succeed for two reasons: - // 1) There is no ZK node - // 2) The region in RIT was changed. - // The order is more or less implementation dependant. - HTableDescriptor htd = getRS().tableDescriptors.get(hri.getTable()); - - BaseCoordinatedStateManager csm = new ZkCoordinatedStateManager(); - csm.initialize(getRS()); - csm.start(); - - ZkOpenRegionCoordination.ZkOpenRegionDetails zkCrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkCrd.setServerName(getRS().getServerName()); - zkCrd.setVersionOfOfflineNode(0); - - getRS().service.submit(new OpenRegionHandler(getRS(), getRS(), hri, htd, - csm.getOpenRegionCoordination(), zkCrd)); - - // The open handler should have removed the region from RIT but kept the region closed - checkRegionIsClosed(); - - // We should not find any znode here. 
- Assert.assertEquals(-1, ZKAssign.getVersion(HTU.getZooKeeperWatcher(), hri)); + checkRegionIsClosed(HTU, getRS(), hri); - reopenRegion(); + openRegion(HTU, getRS(), hri); } /** @@ -454,7 +279,7 @@ public class TestRegionServerNoMaster { ServerName earlierServerName = ServerName.valueOf(sn.getHostname(), sn.getPort(), 1); try { - CloseRegionRequest request = RequestConverter.buildCloseRegionRequest(earlierServerName, regionName, true); + CloseRegionRequest request = RequestConverter.buildCloseRegionRequest(earlierServerName, regionName); getRS().getRSRpcServices().closeRegion(null, request); Assert.fail("The closeRegion should have been rejected"); } catch (ServiceException se) { @@ -463,17 +288,17 @@ public class TestRegionServerNoMaster { } //actual close - closeNoZK(); + closeRegionNoZK(); try { AdminProtos.OpenRegionRequest orr = RequestConverter.buildOpenRegionRequest( - earlierServerName, hri, 0, null, null); + earlierServerName, hri, null, null); getRS().getRSRpcServices().openRegion(null, orr); Assert.fail("The openRegion should have been rejected"); } catch (ServiceException se) { Assert.assertTrue(se.getCause() instanceof IOException); Assert.assertTrue(se.getCause().getMessage().contains("This RPC was intended for a different server")); } finally { - reopenRegion(); + openRegion(HTU, getRS(), hri); } } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerOnlineConfigChange.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerOnlineConfigChange.java index 7ffdc11..c58e9c6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerOnlineConfigChange.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerOnlineConfigChange.java @@ -28,9 +28,9 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.regionserver.compactions.CompactionConfiguration; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -123,7 +123,7 @@ public class TestRegionServerOnlineConfigChange { HStore hstore = (HStore)s; // Set the new compaction ratio to a different value. 
- double newCompactionRatio = + double newCompactionRatio = hstore.getStoreEngine().getCompactionPolicy().getConf().getCompactionRatio() + 0.1; conf.setFloat(strPrefix + "ratio", (float)newCompactionRatio); @@ -209,4 +209,4 @@ public class TestRegionServerOnlineConfigChange { assertEquals(newMajorCompactionJitter, hstore.getStoreEngine().getCompactionPolicy().getConf().getMajorCompactionJitter(), 0.00001); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java index d906160..924a196 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionSplitPolicy.java @@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Before; @@ -40,7 +41,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestRegionSplitPolicy { private Configuration conf; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestResettingCounters.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestResettingCounters.java index cb2a3ba..27bdda0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestResettingCounters.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestResettingCounters.java @@ -30,12 +30,13 @@ import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.client.Increment; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Durability; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestResettingCounters { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java index 5c27ff5..5a95df11 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java @@ -42,6 +42,7 @@ import org.apache.hadoop.hbase.KeepDeletedCells; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; @@ -65,7 +66,7 @@ import com.google.common.collect.Lists; /** * Test cases against ReversibleKeyValueScanner */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, 
MediumTests.class}) public class TestReversibleScanners { private static final Log LOG = LogFactory.getLog(TestReversibleScanners.class); HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRowTooBig.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRowTooBig.java index a307445..5bc77b5 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRowTooBig.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRowTooBig.java @@ -22,7 +22,9 @@ package org.apache.hadoop.hbase.regionserver; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.RowTooBigException; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -32,10 +34,10 @@ import org.junit.experimental.categories.Category; import java.io.IOException; /** - * Test case to check HRS throws {@link RowTooBigException} + * Test case to check HRS throws {@link org.apache.hadoop.hbase.client.RowTooBigException} * when row size exceeds configured limits. */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestRowTooBig { private final static HBaseTestingUtility HTU = HBaseTestingUtility.createLocalHTU(); private static final HTableDescriptor TEST_HTD = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSCVFWithMiniCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSCVFWithMiniCluster.java index f369d21..309efbf 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSCVFWithMiniCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSCVFWithMiniCluster.java @@ -30,6 +30,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableExistsException; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotFoundException; @@ -52,7 +53,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) /* * This test verifies that the scenarios illustrated by HBASE-10850 work * w.r.t. 
essential column family optimization diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanDeleteTracker.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanDeleteTracker.java index 234330f..2854832 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanDeleteTracker.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanDeleteTracker.java @@ -21,24 +21,28 @@ package org.apache.hadoop.hbase.regionserver; import org.apache.hadoop.hbase.HBaseTestCase; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.regionserver.DeleteTracker.DeleteResult; import org.apache.hadoop.hbase.util.Bytes; +import org.junit.Before; +import org.junit.Test; import org.junit.experimental.categories.Category; - -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestScanDeleteTracker extends HBaseTestCase { private ScanDeleteTracker sdt; private long timestamp = 10L; private byte deleteType = 0; + @Before public void setUp() throws Exception { super.setUp(); sdt = new ScanDeleteTracker(); } + @Test public void testDeletedBy_Delete() { KeyValue kv = new KeyValue(Bytes.toBytes("row"), Bytes.toBytes("f"), Bytes.toBytes("qualifier"), timestamp, KeyValue.Type.Delete); @@ -47,6 +51,7 @@ public class TestScanDeleteTracker extends HBaseTestCase { assertEquals(DeleteResult.VERSION_DELETED, ret); } + @Test public void testDeletedBy_DeleteColumn() { KeyValue kv = new KeyValue(Bytes.toBytes("row"), Bytes.toBytes("f"), Bytes.toBytes("qualifier"), timestamp, KeyValue.Type.DeleteColumn); @@ -58,6 +63,7 @@ public class TestScanDeleteTracker extends HBaseTestCase { assertEquals(DeleteResult.COLUMN_DELETED, ret); } + @Test public void testDeletedBy_DeleteFamily() { KeyValue kv = new KeyValue(Bytes.toBytes("row"), Bytes.toBytes("f"), Bytes.toBytes("qualifier"), timestamp, KeyValue.Type.DeleteFamily); @@ -69,6 +75,7 @@ public class TestScanDeleteTracker extends HBaseTestCase { assertEquals(DeleteResult.FAMILY_DELETED, ret); } + @Test public void testDeletedBy_DeleteFamilyVersion() { byte [] qualifier1 = Bytes.toBytes("qualifier1"); byte [] qualifier2 = Bytes.toBytes("qualifier2"); @@ -113,6 +120,7 @@ public class TestScanDeleteTracker extends HBaseTestCase { } + @Test public void testDelete_DeleteColumn() { byte [] qualifier = Bytes.toBytes("qualifier"); deleteType = KeyValue.Type.Delete.getCode(); @@ -134,6 +142,7 @@ public class TestScanDeleteTracker extends HBaseTestCase { } + @Test public void testDeleteColumn_Delete() { byte [] qualifier = Bytes.toBytes("qualifier"); deleteType = KeyValue.Type.DeleteColumn.getCode(); @@ -154,6 +163,7 @@ public class TestScanDeleteTracker extends HBaseTestCase { //Testing new way where we save the Delete in case of a Delete for specific //ts, could have just added the last line to the first test, but rather keep //them separated + @Test public void testDelete_KeepDelete(){ byte [] qualifier = Bytes.toBytes("qualifier"); deleteType = KeyValue.Type.Delete.getCode(); @@ -164,6 +174,7 @@ public class TestScanDeleteTracker extends HBaseTestCase { assertEquals(false ,sdt.isEmpty()); } + @Test public void testDelete_KeepVersionZero(){ byte [] qualifier = Bytes.toBytes("qualifier"); deleteType = KeyValue.Type.Delete.getCode(); diff --git 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java index c6e5986..c0dcee6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java @@ -25,11 +25,12 @@ import java.util.List; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.MatchCode; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestScanWildcardColumnTracker extends HBaseTestCase { final static int VERSIONS = 2; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWithBloomError.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWithBloomError.java index bf6e4b1..afe02be 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWithBloomError.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWithBloomError.java @@ -42,6 +42,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueTestUtil; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Scan; @@ -61,7 +62,7 @@ import org.junit.runners.Parameterized.Parameters; * This is needed for the multi-column Bloom filter optimization. */ @RunWith(Parameterized.class) -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestScanWithBloomError { private static final Log LOG = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanner.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanner.java index cea09f2..08b8dcc 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanner.java @@ -43,6 +43,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.UnknownScannerException; @@ -61,11 +62,10 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.rules.TestName; - /** * Test of a long-lived scanner validating as we go. 
*/ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestScanner { @Rule public TestName name = new TestName(); private final Log LOG = LogFactory.getLog(this.getClass()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScannerWithBulkload.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScannerWithBulkload.java index 8e3ca88..49ded21 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScannerWithBulkload.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScannerWithBulkload.java @@ -26,12 +26,14 @@ import junit.framework.Assert; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableNotFoundException; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HBaseAdmin; @@ -49,7 +51,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestScannerWithBulkload { private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @@ -93,13 +95,13 @@ public class TestScannerWithBulkload { scanner = table.getScanner(scan); result = scanner.next(); while (result != null) { - List kvs = result.getColumn(Bytes.toBytes("col"), Bytes.toBytes("q")); - for (KeyValue _kv : kvs) { - if (Bytes.toString(_kv.getRow()).equals("row1")) { - System.out.println(Bytes.toString(_kv.getRow())); - System.out.println(Bytes.toString(_kv.getQualifier())); - System.out.println(Bytes.toString(_kv.getValue())); - Assert.assertEquals("version3", Bytes.toString(_kv.getValue())); + List cells = result.getColumnCells(Bytes.toBytes("col"), Bytes.toBytes("q")); + for (Cell _c : cells) { + if (Bytes.toString(_c.getRow()).equals("row1")) { + System.out.println(Bytes.toString(_c.getRow())); + System.out.println(Bytes.toString(_c.getQualifier())); + System.out.println(Bytes.toString(_c.getValue())); + Assert.assertEquals("version3", Bytes.toString(_c.getValue())); } } result = scanner.next(); @@ -111,13 +113,13 @@ public class TestScannerWithBulkload { private Result scanAfterBulkLoad(ResultScanner scanner, Result result, String expctedVal) throws IOException { while (result != null) { - List kvs = result.getColumn(Bytes.toBytes("col"), Bytes.toBytes("q")); - for (KeyValue _kv : kvs) { - if (Bytes.toString(_kv.getRow()).equals("row1")) { - System.out.println(Bytes.toString(_kv.getRow())); - System.out.println(Bytes.toString(_kv.getQualifier())); - System.out.println(Bytes.toString(_kv.getValue())); - Assert.assertEquals(expctedVal, Bytes.toString(_kv.getValue())); + List cells = result.getColumnCells(Bytes.toBytes("col"), Bytes.toBytes("q")); + for (Cell _c : cells) { + if (Bytes.toString(_c.getRow()).equals("row1")) { + System.out.println(Bytes.toString(_c.getRow())); + System.out.println(Bytes.toString(_c.getQualifier())); + 
System.out.println(Bytes.toString(_c.getValue())); + Assert.assertEquals(expctedVal, Bytes.toString(_c.getValue())); } } result = scanner.next(); @@ -187,9 +189,9 @@ public class TestScannerWithBulkload { ResultScanner scanner = table.getScanner(scan); Result result = scanner.next(); - List kvs = result.getColumn(Bytes.toBytes("col"), Bytes.toBytes("q")); - Assert.assertEquals(1, kvs.size()); - Assert.assertEquals("version1", Bytes.toString(kvs.get(0).getValue())); + List cells = result.getColumnCells(Bytes.toBytes("col"), Bytes.toBytes("q")); + Assert.assertEquals(1, cells.size()); + Assert.assertEquals("version1", Bytes.toString(cells.get(0).getValue())); scanner.close(); return table; } @@ -265,13 +267,13 @@ public class TestScannerWithBulkload { scanner = table.getScanner(scan); result = scanner.next(); while (result != null) { - List kvs = result.getColumn(Bytes.toBytes("col"), Bytes.toBytes("q")); - for (KeyValue _kv : kvs) { - if (Bytes.toString(_kv.getRow()).equals("row1")) { - System.out.println(Bytes.toString(_kv.getRow())); - System.out.println(Bytes.toString(_kv.getQualifier())); - System.out.println(Bytes.toString(_kv.getValue())); - Assert.assertEquals("version3", Bytes.toString(_kv.getValue())); + List cells = result.getColumnCells(Bytes.toBytes("col"), Bytes.toBytes("q")); + for (Cell _c : cells) { + if (Bytes.toString(_c.getRow()).equals("row1")) { + System.out.println(Bytes.toString(_c.getRow())); + System.out.println(Bytes.toString(_c.getQualifier())); + System.out.println(Bytes.toString(_c.getValue())); + Assert.assertEquals("version3", Bytes.toString(_c.getValue())); } } result = scanner.next(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSeekOptimizations.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSeekOptimizations.java index 748f94b..ec81ac1 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSeekOptimizations.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSeekOptimizations.java @@ -41,6 +41,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Scan; @@ -59,7 +60,7 @@ import org.junit.runners.Parameterized.Parameters; * actually saving I/O operations. 
*/ @RunWith(Parameterized.class) -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestSeekOptimizations { private static final Log LOG = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestServerCustomProtocol.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestServerCustomProtocol.java index 1f0ab99..c6c3cb7 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestServerCustomProtocol.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestServerCustomProtocol.java @@ -34,8 +34,9 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; @@ -71,7 +72,7 @@ import com.google.protobuf.RpcController; import com.google.protobuf.Service; import com.google.protobuf.ServiceException; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestServerCustomProtocol { private static final Log LOG = LogFactory.getLog(TestServerCustomProtocol.class); static final String WHOAREYOU = "Who are you?"; @@ -307,91 +308,93 @@ public class TestServerCustomProtocol { @Test public void testSingleMethod() throws Throwable { - HTable table = new HTable(util.getConfiguration(), TEST_TABLE); - Map results = table.coprocessorService(PingProtos.PingService.class, - null, ROW_A, - new Batch.Call() { - @Override - public String call(PingProtos.PingService instance) throws IOException { - BlockingRpcCallback rpcCallback = - new BlockingRpcCallback(); - instance.ping(null, PingProtos.PingRequest.newBuilder().build(), rpcCallback); - return rpcCallback.get().getPong(); - } - }); - // Should have gotten results for 1 of the three regions only since we specified - // rows from 1 region - assertEquals(1, results.size()); - verifyRegionResults(table, results, ROW_A); - - final String name = "NAME"; - results = hello(table, name, null, ROW_A); - // Should have gotten results for 1 of the three regions only since we specified - // rows from 1 region - assertEquals(1, results.size()); - verifyRegionResults(table, results, "Hello, NAME", ROW_A); - table.close(); + try (HTable table = new HTable(util.getConfiguration(), TEST_TABLE)) { + RegionLocator locator = table.getRegionLocator(); + Map results = table.coprocessorService(PingProtos.PingService.class, + null, ROW_A, + new Batch.Call() { + @Override + public String call(PingProtos.PingService instance) throws IOException { + BlockingRpcCallback rpcCallback = + new BlockingRpcCallback(); + instance.ping(null, PingProtos.PingRequest.newBuilder().build(), rpcCallback); + return rpcCallback.get().getPong(); + } + }); + // Should have gotten results for 1 of the three regions only since we specified + // rows from 1 region + assertEquals(1, results.size()); + verifyRegionResults(locator, results, ROW_A); + + final String name = "NAME"; + results = hello(table, name, null, ROW_A); + // Should have gotten results for 1 of the three regions only since we specified + // rows from 1 region + assertEquals(1, 
results.size()); + verifyRegionResults(locator, results, "Hello, NAME", ROW_A); + } } @Test public void testRowRange() throws Throwable { - HTable table = new HTable(util.getConfiguration(), TEST_TABLE); - for (Entry e: table.getRegionLocations().entrySet()) { - LOG.info("Region " + e.getKey().getRegionNameAsString() + ", servername=" + e.getValue()); - } - // Here are what regions looked like on a run: - // - // test,,1355943549657.c65d4822d8bdecc033a96451f3a0f55d. - // test,bbb,1355943549661.110393b070dd1ed93441e0bc9b3ffb7e. - // test,ccc,1355943549665.c3d6d125141359cbbd2a43eaff3cdf74. - - Map results = ping(table, null, ROW_A); - // Should contain first region only. - assertEquals(1, results.size()); - verifyRegionResults(table, results, ROW_A); - - // Test start row + empty end - results = ping(table, ROW_BC, null); - assertEquals(2, results.size()); - // should contain last 2 regions - HRegionLocation loc = table.getRegionLocation(ROW_A, true); - assertNull("Should be missing region for row aaa (prior to start row)", - results.get(loc.getRegionInfo().getRegionName())); - verifyRegionResults(table, results, ROW_B); - verifyRegionResults(table, results, ROW_C); - - // test empty start + end - results = ping(table, null, ROW_BC); - // should contain the first 2 regions - assertEquals(2, results.size()); - verifyRegionResults(table, results, ROW_A); - verifyRegionResults(table, results, ROW_B); - loc = table.getRegionLocation(ROW_C, true); - assertNull("Should be missing region for row ccc (past stop row)", - results.get(loc.getRegionInfo().getRegionName())); - - // test explicit start + end - results = ping(table, ROW_AB, ROW_BC); - // should contain first 2 regions - assertEquals(2, results.size()); - verifyRegionResults(table, results, ROW_A); - verifyRegionResults(table, results, ROW_B); - loc = table.getRegionLocation(ROW_C, true); - assertNull("Should be missing region for row ccc (past stop row)", + try (HTable table = new HTable(util.getConfiguration(), TEST_TABLE)) { + RegionLocator locator = table.getRegionLocator(); + for (Entry e: table.getRegionLocations().entrySet()) { + LOG.info("Region " + e.getKey().getRegionNameAsString() + ", servername=" + e.getValue()); + } + // Here are what regions looked like on a run: + // + // test,,1355943549657.c65d4822d8bdecc033a96451f3a0f55d. + // test,bbb,1355943549661.110393b070dd1ed93441e0bc9b3ffb7e. + // test,ccc,1355943549665.c3d6d125141359cbbd2a43eaff3cdf74. + + Map results = ping(table, null, ROW_A); + // Should contain first region only. 
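// For readability, the coprocessorService() call rewritten above with its type parameters
// spelled out; the generics are inferred from the ping/pong callback it wraps, and util,
// TEST_TABLE, ROW_A and verifyRegionResults are this test class's own members. This is a
// sketch of the call's shape, not a verbatim copy of the committed code.
try (HTable table = new HTable(util.getConfiguration(), TEST_TABLE)) {
  RegionLocator locator = table.getRegionLocator();
  Map<byte[], String> results = table.coprocessorService(
      PingProtos.PingService.class, null, ROW_A,
      new Batch.Call<PingProtos.PingService, String>() {
        @Override
        public String call(PingProtos.PingService instance) throws IOException {
          BlockingRpcCallback<PingProtos.PingResponse> rpcCallback =
              new BlockingRpcCallback<PingProtos.PingResponse>();
          instance.ping(null, PingProtos.PingRequest.newBuilder().build(), rpcCallback);
          return rpcCallback.get().getPong();
        }
      });
  // Only one region was addressed (ROW_A), so one pong is expected back.
  assertEquals(1, results.size());
  verifyRegionResults(locator, results, ROW_A);
}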
+ assertEquals(1, results.size()); + verifyRegionResults(locator, results, ROW_A); + + // Test start row + empty end + results = ping(table, ROW_BC, null); + assertEquals(2, results.size()); + // should contain last 2 regions + HRegionLocation loc = table.getRegionLocation(ROW_A, true); + assertNull("Should be missing region for row aaa (prior to start row)", results.get(loc.getRegionInfo().getRegionName())); - - // test single region - results = ping(table, ROW_B, ROW_BC); - // should only contain region bbb - assertEquals(1, results.size()); - verifyRegionResults(table, results, ROW_B); - loc = table.getRegionLocation(ROW_A, true); - assertNull("Should be missing region for row aaa (prior to start)", - results.get(loc.getRegionInfo().getRegionName())); - loc = table.getRegionLocation(ROW_C, true); - assertNull("Should be missing region for row ccc (past stop row)", - results.get(loc.getRegionInfo().getRegionName())); - table.close(); + verifyRegionResults(locator, results, ROW_B); + verifyRegionResults(locator, results, ROW_C); + + // test empty start + end + results = ping(table, null, ROW_BC); + // should contain the first 2 regions + assertEquals(2, results.size()); + verifyRegionResults(locator, results, ROW_A); + verifyRegionResults(locator, results, ROW_B); + loc = table.getRegionLocation(ROW_C, true); + assertNull("Should be missing region for row ccc (past stop row)", + results.get(loc.getRegionInfo().getRegionName())); + + // test explicit start + end + results = ping(table, ROW_AB, ROW_BC); + // should contain first 2 regions + assertEquals(2, results.size()); + verifyRegionResults(locator, results, ROW_A); + verifyRegionResults(locator, results, ROW_B); + loc = table.getRegionLocation(ROW_C, true); + assertNull("Should be missing region for row ccc (past stop row)", + results.get(loc.getRegionInfo().getRegionName())); + + // test single region + results = ping(table, ROW_B, ROW_BC); + // should only contain region bbb + assertEquals(1, results.size()); + verifyRegionResults(locator, results, ROW_B); + loc = table.getRegionLocation(ROW_A, true); + assertNull("Should be missing region for row aaa (prior to start)", + results.get(loc.getRegionInfo().getRegionName())); + loc = table.getRegionLocation(ROW_C, true); + assertNull("Should be missing region for row ccc (past stop row)", + results.get(loc.getRegionInfo().getRegionName())); + } } private Map ping(final Table table, final byte [] start, final byte [] end) @@ -414,40 +417,46 @@ public class TestServerCustomProtocol { @Test public void testCompoundCall() throws Throwable { - HTable table = new HTable(util.getConfiguration(), TEST_TABLE); - Map results = compoundOfHelloAndPing(table, ROW_A, ROW_C); - verifyRegionResults(table, results, "Hello, pong", ROW_A); - verifyRegionResults(table, results, "Hello, pong", ROW_B); - verifyRegionResults(table, results, "Hello, pong", ROW_C); - table.close(); + try (HTable table = new HTable(util.getConfiguration(), TEST_TABLE)) { + RegionLocator locator = table.getRegionLocator(); + Map results = compoundOfHelloAndPing(table, ROW_A, ROW_C); + verifyRegionResults(locator, results, "Hello, pong", ROW_A); + verifyRegionResults(locator, results, "Hello, pong", ROW_B); + verifyRegionResults(locator, results, "Hello, pong", ROW_C); + } } @Test public void testNullCall() throws Throwable { - HTable table = new HTable(util.getConfiguration(), TEST_TABLE); - Map results = hello(table, null, ROW_A, ROW_C); - verifyRegionResults(table, results, "Who are you?", ROW_A); - verifyRegionResults(table, 
results, "Who are you?", ROW_B); - verifyRegionResults(table, results, "Who are you?", ROW_C); + try(HTable table = new HTable(util.getConfiguration(), TEST_TABLE)) { + RegionLocator locator = table.getRegionLocator(); + Map results = hello(table, null, ROW_A, ROW_C); + verifyRegionResults(locator, results, "Who are you?", ROW_A); + verifyRegionResults(locator, results, "Who are you?", ROW_B); + verifyRegionResults(locator, results, "Who are you?", ROW_C); + } } @Test public void testNullReturn() throws Throwable { - HTable table = new HTable(util.getConfiguration(), TEST_TABLE); - Map results = hello(table, "nobody", ROW_A, ROW_C); - verifyRegionResults(table, results, null, ROW_A); - verifyRegionResults(table, results, null, ROW_B); - verifyRegionResults(table, results, null, ROW_C); + try (HTable table = new HTable(util.getConfiguration(), TEST_TABLE)) { + RegionLocator locator = table.getRegionLocator(); + Map results = hello(table, "nobody", ROW_A, ROW_C); + verifyRegionResults(locator, results, null, ROW_A); + verifyRegionResults(locator, results, null, ROW_B); + verifyRegionResults(locator, results, null, ROW_C); + } } @Test public void testEmptyReturnType() throws Throwable { - Table table = new HTable(util.getConfiguration(), TEST_TABLE); - Map results = noop(table, ROW_A, ROW_C); - assertEquals("Should have results from three regions", 3, results.size()); - // all results should be null - for (Object v : results.values()) { - assertNull(v); + try (HTable table = new HTable(util.getConfiguration(), TEST_TABLE)) { + Map results = noop(table, ROW_A, ROW_C); + assertEquals("Should have results from three regions", 3, results.size()); + // all results should be null + for (Object v : results.values()) { + assertNull(v); + } } } @@ -456,7 +465,7 @@ public class TestServerCustomProtocol { verifyRegionResults(table, results, "pong", row); } - private void verifyRegionResults(RegionLocator table, + private void verifyRegionResults(RegionLocator regionLocator, Map results, String expected, byte[] row) throws Exception { for (Map.Entry e: results.entrySet()) { @@ -464,7 +473,7 @@ public class TestServerCustomProtocol { ", result key=" + Bytes.toString(e.getKey()) + ", value=" + e.getValue()); } - HRegionLocation loc = table.getRegionLocation(row, true); + HRegionLocation loc = regionLocator.getRegionLocation(row, true); byte[] region = loc.getRegionInfo().getRegionName(); assertTrue("Results should contain region " + Bytes.toStringBinary(region) + " for row '" + Bytes.toStringBinary(row)+ "'", diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestServerNonceManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestServerNonceManager.java index 9a97df2..9b3c6c1 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestServerNonceManager.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestServerNonceManager.java @@ -27,6 +27,7 @@ import java.util.concurrent.atomic.AtomicInteger; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Chore; import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Stoppable; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; @@ -38,7 +39,7 @@ import org.mockito.Mockito; import org.mockito.invocation.InvocationOnMock; import org.mockito.stubbing.Answer; -@Category(SmallTests.class) 
+@Category({RegionServerTests.class, SmallTests.class}) public class TestServerNonceManager { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitLogWorker.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitLogWorker.java index 84d3ea8..44d7464 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitLogWorker.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitLogWorker.java @@ -37,7 +37,6 @@ import org.apache.hadoop.hbase.CoordinatedStateManagerFactory; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.SplitLogCounters; @@ -47,6 +46,8 @@ import org.apache.hadoop.hbase.client.ClusterConnection; import org.apache.hadoop.hbase.executor.ExecutorService; import org.apache.hadoop.hbase.executor.ExecutorType; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.SplitLogTask.RecoveryMode; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.CancelableProgressable; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; import org.apache.hadoop.hbase.zookeeper.ZKSplitLog; @@ -61,7 +62,7 @@ import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestSplitLogWorker { private static final Log LOG = LogFactory.getLog(TestSplitLogWorker.class); private static final int WAIT_TIME = 15000; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java index 9126c4d..66375e9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java @@ -24,6 +24,7 @@ import static org.junit.Assert.assertTrue; import static org.mockito.Matchers.any; import static org.mockito.Matchers.anyInt; import static org.mockito.Matchers.eq; +import static org.mockito.Mockito.doCallRealMethod; import static org.mockito.Mockito.doNothing; import static org.mockito.Mockito.doThrow; import static org.mockito.Mockito.spy; @@ -43,7 +44,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.Server; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.io.hfile.CacheConfig; import org.apache.hadoop.hbase.client.Scan; @@ -53,6 +53,8 @@ import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.coprocessor.ObserverContext; import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment; import org.apache.hadoop.hbase.wal.WALFactory; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import 
org.apache.hadoop.hbase.util.PairOfSameType; @@ -69,7 +71,7 @@ import com.google.common.collect.ImmutableList; * Test the {@link SplitTransaction} class against an HRegion (as opposed to * running cluster). */ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestSplitTransaction { private final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private final Path testdir = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java index 99de513..4138027 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java @@ -22,7 +22,6 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertNotSame; -import static org.junit.Assert.assertNull; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; @@ -32,6 +31,7 @@ import java.util.Collection; import java.util.List; import java.util.Map; import java.util.concurrent.CountDownLatch; +import java.util.concurrent.atomic.AtomicBoolean; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; @@ -46,21 +46,19 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MasterNotRunningException; +import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.MiniHBaseCluster; -import org.apache.hadoop.hbase.RegionTransition; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.UnknownRegionException; import org.apache.hadoop.hbase.Waiter; import org.apache.hadoop.hbase.ZooKeeperConnectionException; -import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.client.Admin; -import org.apache.hadoop.hbase.client.Connection; -import org.apache.hadoop.hbase.client.ConnectionFactory; +import org.apache.hadoop.hbase.client.Consistency; import org.apache.hadoop.hbase.client.Delete; +import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Mutation; @@ -68,23 +66,25 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; -import org.apache.hadoop.hbase.coordination.ZKSplitTransactionCoordination; -import org.apache.hadoop.hbase.coordination.ZkCloseRegionCoordination; -import org.apache.hadoop.hbase.coordination.ZkOpenRegionCoordination; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.client.TestReplicasClient.SlowMeCopro; import org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager; import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver; import org.apache.hadoop.hbase.coprocessor.ObserverContext; import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment; -import 
org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.executor.EventType; import org.apache.hadoop.hbase.master.AssignmentManager; import org.apache.hadoop.hbase.master.HMaster; +import org.apache.hadoop.hbase.master.MasterRpcServices; import org.apache.hadoop.hbase.master.RegionState; import org.apache.hadoop.hbase.master.RegionState.State; import org.apache.hadoop.hbase.master.RegionStates; import org.apache.hadoop.hbase.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.ReportRegionStateTransitionRequest; +import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.ReportRegionStateTransitionResponse; +import org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos.RegionStateTransition.TransitionCode; import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.FSUtils; @@ -92,12 +92,9 @@ import org.apache.hadoop.hbase.util.HBaseFsck; import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread; import org.apache.hadoop.hbase.util.PairOfSameType; import org.apache.hadoop.hbase.util.Threads; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZKUtil; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.zookeeper.KeeperException; import org.apache.zookeeper.KeeperException.NodeExistsException; -import org.apache.zookeeper.data.Stat; import org.junit.After; import org.junit.AfterClass; import org.junit.Assert; @@ -106,6 +103,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; +import com.google.protobuf.RpcController; import com.google.protobuf.ServiceException; /** @@ -113,7 +111,7 @@ import com.google.protobuf.ServiceException; * only the below tests are against a running cluster where TestSplitTransaction * is tests against a bare {@link HRegion}. 
*/ -@Category(LargeTests.class) +@Category({RegionServerTests.class, LargeTests.class}) @SuppressWarnings("deprecation") public class TestSplitTransactionOnCluster { private static final Log LOG = @@ -123,24 +121,13 @@ public class TestSplitTransactionOnCluster { private static final int NB_SERVERS = 3; private static CountDownLatch latch = new CountDownLatch(1); private static volatile boolean secondSplit = false; - private static volatile boolean callRollBack = false; - private static volatile boolean firstSplitCompleted = false; - private static boolean useZKForAssignment = true; static final HBaseTestingUtility TESTING_UTIL = new HBaseTestingUtility(); - static void setupOnce() throws Exception { - TESTING_UTIL.getConfiguration().setInt("hbase.balancer.period", 60000); - useZKForAssignment = TESTING_UTIL.getConfiguration().getBoolean( - "hbase.assignment.usezk", true); - TESTING_UTIL.startMiniCluster(NB_SERVERS); - } - @BeforeClass public static void before() throws Exception { - // Use ZK for region assignment - TESTING_UTIL.getConfiguration().setBoolean("hbase.assignment.usezk", true); - setupOnce(); + TESTING_UTIL.getConfiguration().setInt("hbase.balancer.period", 60000); + TESTING_UTIL.startMiniCluster(1, NB_SERVERS, null, MyMaster.class, null); } @AfterClass public static void after() throws Exception { @@ -185,99 +172,6 @@ public class TestSplitTransactionOnCluster { } @Test(timeout = 60000) - public void testShouldFailSplitIfZNodeDoesNotExistDueToPrevRollBack() throws Exception { - final TableName tableName = - TableName.valueOf("testShouldFailSplitIfZNodeDoesNotExistDueToPrevRollBack"); - - if (!useZKForAssignment) { - // This test doesn't apply if not using ZK for assignment - return; - } - - try { - // Create table then get the single region for our new table. - HTable t = createTableAndWait(tableName, Bytes.toBytes("cf")); - final List regions = cluster.getRegions(tableName); - HRegionInfo hri = getAndCheckSingleTableRegion(regions); - int regionServerIndex = cluster.getServerWith(regions.get(0).getRegionName()); - final HRegionServer regionServer = cluster.getRegionServer(regionServerIndex); - insertData(tableName, admin, t); - t.close(); - - // Turn off balancer so it doesn't cut in and mess up our placements. - this.admin.setBalancerRunning(false, true); - // Turn off the meta scanner so it don't remove parent on us. 
- cluster.getMaster().setCatalogJanitorEnabled(false); - - // find a splittable region - final HRegion region = findSplittableRegion(regions); - assertTrue("not able to find a splittable region", region != null); - MockedCoordinatedStateManager cp = new MockedCoordinatedStateManager(); - cp.initialize(regionServer, region); - cp.start(); - regionServer.csm = cp; - - new Thread() { - @Override - public void run() { - SplitTransaction st = null; - st = new MockedSplitTransaction(region, Bytes.toBytes("row2")); - try { - st.prepare(); - st.execute(regionServer, regionServer); - } catch (IOException e) { - - } - } - }.start(); - for (int i = 0; !callRollBack && i < 100; i++) { - Thread.sleep(100); - } - assertTrue("Waited too long for rollback", callRollBack); - SplitTransaction st = new MockedSplitTransaction(region, Bytes.toBytes("row3")); - try { - secondSplit = true; - // make region splittable - region.initialize(); - st.prepare(); - st.execute(regionServer, regionServer); - } catch (IOException e) { - LOG.debug("Rollback started :"+ e.getMessage()); - st.rollback(regionServer, regionServer); - } - for (int i=0; !firstSplitCompleted && i<100; i++) { - Thread.sleep(100); - } - assertTrue("fist split did not complete", firstSplitCompleted); - - RegionStates regionStates = cluster.getMaster().getAssignmentManager().getRegionStates(); - Map rit = regionStates.getRegionsInTransition(); - - for (int i=0; rit.containsKey(hri.getTable()) && i<100; i++) { - Thread.sleep(100); - } - assertFalse("region still in transition", rit.containsKey( - rit.containsKey(hri.getTable()))); - - List onlineRegions = regionServer.getOnlineRegions(tableName); - // Region server side split is successful. - assertEquals("The parent region should be splitted", 2, onlineRegions.size()); - //Should be present in RIT - List regionsOfTable = cluster.getMaster().getAssignmentManager() - .getRegionStates().getRegionsOfTable(tableName); - // Master side should also reflect the same - assertEquals("No of regions in master", 2, regionsOfTable.size()); - } finally { - admin.setBalancerRunning(true, false); - secondSplit = false; - firstSplitCompleted = false; - callRollBack = false; - cluster.getMaster().setCatalogJanitorEnabled(true); - TESTING_UTIL.deleteTable(tableName); - } - } - - @Test(timeout = 60000) public void testRITStateForRollback() throws Exception { final TableName tableName = TableName.valueOf("testRITStateForRollback"); @@ -383,20 +277,15 @@ public class TestSplitTransactionOnCluster { /** * A test that intentionally has master fail the processing of the split message. - * Tests that the regionserver split ephemeral node gets cleaned up if it - * crashes and that after we process server shutdown, the daughters are up on - * line. + * Tests that after we process server shutdown, the daughters are up on line. 
* @throws IOException * @throws InterruptedException - * @throws NodeExistsException - * @throws KeeperException - * @throws DeserializationException + * @throws ServiceException */ - @Test (timeout = 300000) public void testRSSplitEphemeralsDisappearButDaughtersAreOnlinedAfterShutdownHandling() - throws IOException, InterruptedException, NodeExistsException, KeeperException, - DeserializationException, ServiceException { + @Test (timeout = 300000) public void testRSSplitDaughtersAreOnlinedAfterShutdownHandling() + throws IOException, InterruptedException, ServiceException { final TableName tableName = - TableName.valueOf("testRSSplitEphemeralsDisappearButDaughtersAreOnlinedAfterShutdownHandling"); + TableName.valueOf("testRSSplitDaughtersAreOnlinedAfterShutdownHandling"); // Create table then get the single region for our new table. HTable t = createTableAndWait(tableName, HConstants.CATALOG_FAMILY); @@ -419,48 +308,15 @@ public class TestSplitTransactionOnCluster { // Now, before we split, set special flag in master, a flag that has // it FAIL the processing of split. AssignmentManager.TEST_SKIP_SPLIT_HANDLING = true; - // Now try splitting and it should work. - split(hri, server, regionCount); - - String path = ZKAssign.getNodeName(TESTING_UTIL.getZooKeeperWatcher(), - hri.getEncodedName()); - RegionTransition rt = null; - Stat stats = null; - List daughters = null; - if (useZKForAssignment) { - daughters = checkAndGetDaughters(tableName); - - // Wait till the znode moved to SPLIT - for (int i=0; i<100; i++) { - stats = TESTING_UTIL.getZooKeeperWatcher().getRecoverableZooKeeper().exists(path, false); - rt = RegionTransition.parseFrom(ZKAssign.getData(TESTING_UTIL.getZooKeeperWatcher(), - hri.getEncodedName())); - if (rt.getEventType().equals(EventType.RS_ZK_REGION_SPLIT)) break; - Thread.sleep(100); - } - LOG.info("EPHEMERAL NODE BEFORE SERVER ABORT, path=" + path + ", stats=" + stats); - assertTrue(rt != null && rt.getEventType().equals(EventType.RS_ZK_REGION_SPLIT)); - // Now crash the server, for ZK-less assignment, the server is auto aborted - cluster.abortRegionServer(tableRegionIndex); + try { + // Now try splitting and it should work. + split(hri, server, regionCount); + } catch (RegionServerStoppedException rsse) { + // Expected. The regionserver should crash } + waitUntilRegionServerDead(); awaitDaughters(tableName, 2); - if (useZKForAssignment) { - regions = cluster.getRegions(tableName); - for (HRegion r: regions) { - assertTrue(daughters.contains(r)); - } - - // Finally assert that the ephemeral SPLIT znode was cleaned up. - for (int i=0; i<100; i++) { - // wait a bit (10s max) for the node to disappear - stats = TESTING_UTIL.getZooKeeperWatcher().getRecoverableZooKeeper().exists(path, false); - if (stats == null) break; - Thread.sleep(100); - } - LOG.info("EPHEMERAL NODE AFTER SERVER ABORT, path=" + path + ", stats=" + stats); - assertTrue(stats == null); - } } finally { // Set this flag back. AssignmentManager.TEST_SKIP_SPLIT_HANDLING = false; @@ -496,15 +352,8 @@ public class TestSplitTransactionOnCluster { HRegionServer server = cluster.getRegionServer(tableRegionIndex); printOutRegions(server, "Initial regions: "); int regionCount = ProtobufUtil.getOnlineRegions(server.getRSRpcServices()).size(); - // Insert into zk a blocking znode, a znode of same name as region - // so it gets in way of our splitting. 
- ServerName fakedServer = ServerName.valueOf("any.old.server", 1234, -1); - if (useZKForAssignment) { - ZKAssign.createNodeClosing(TESTING_UTIL.getZooKeeperWatcher(), - hri, fakedServer); - } else { - regionStates.updateRegionState(hri, RegionState.State.CLOSING); - } + regionStates.updateRegionState(hri, RegionState.State.CLOSING); + // Now try splitting.... should fail. And each should successfully // rollback. this.admin.split(hri.getRegionNameAsString()); @@ -516,13 +365,8 @@ public class TestSplitTransactionOnCluster { assertEquals(regionCount, ProtobufUtil.getOnlineRegions( server.getRSRpcServices()).size()); } - if (useZKForAssignment) { - // Now clear the zknode - ZKAssign.deleteClosingNode(TESTING_UTIL.getZooKeeperWatcher(), - hri, fakedServer); - } else { - regionStates.regionOnline(hri, server.getServerName()); - } + regionStates.regionOnline(hri, server.getServerName()); + // Now try splitting and it should work. split(hri, server, regionCount); // Get daughters @@ -542,7 +386,7 @@ public class TestSplitTransactionOnCluster { * @throws InterruptedException */ @Test (timeout=300000) public void testShutdownFixupWhenDaughterHasSplit() - throws IOException, InterruptedException, ServiceException { + throws IOException, InterruptedException { final TableName tableName = TableName.valueOf("testShutdownFixupWhenDaughterHasSplit"); @@ -699,103 +543,6 @@ public class TestSplitTransactionOnCluster { } /** - * Verifies HBASE-5806. When splitting is partially done and the master goes down - * when the SPLIT node is in either SPLIT or SPLITTING state. - * - * @throws IOException - * @throws InterruptedException - * @throws NodeExistsException - * @throws KeeperException - * @throws DeserializationException - */ - @Test(timeout = 400000) - public void testMasterRestartWhenSplittingIsPartial() - throws IOException, InterruptedException, NodeExistsException, - KeeperException, DeserializationException, ServiceException { - final TableName tableName = TableName.valueOf("testMasterRestartWhenSplittingIsPartial"); - - if (!useZKForAssignment) { - // This test doesn't apply if not using ZK for assignment - return; - } - - // Create table then get the single region for our new table. - HTable t = createTableAndWait(tableName, HConstants.CATALOG_FAMILY); - List regions = cluster.getRegions(tableName); - HRegionInfo hri = getAndCheckSingleTableRegion(regions); - - int tableRegionIndex = ensureTableRegionNotOnSameServerAsMeta(admin, hri); - - // Turn off balancer so it doesn't cut in and mess up our placements. - this.admin.setBalancerRunning(false, true); - // Turn off the meta scanner so it don't remove parent on us. - cluster.getMaster().setCatalogJanitorEnabled(false); - ZooKeeperWatcher zkw = new ZooKeeperWatcher(t.getConfiguration(), - "testMasterRestartWhenSplittingIsPartial", new UselessTestAbortable()); - try { - // Add a bit of load up into the table so splittable. - TESTING_UTIL.loadTable(t, HConstants.CATALOG_FAMILY, false); - // Get region pre-split. - HRegionServer server = cluster.getRegionServer(tableRegionIndex); - printOutRegions(server, "Initial regions: "); - // Now, before we split, set special flag in master, a flag that has - // it FAIL the processing of split. - AssignmentManager.TEST_SKIP_SPLIT_HANDLING = true; - // Now try splitting and it should work. - - this.admin.split(hri.getRegionNameAsString()); - checkAndGetDaughters(tableName); - // Assert the ephemeral node is up in zk. 
- String path = ZKAssign.getNodeName(zkw, hri.getEncodedName()); - Stat stats = zkw.getRecoverableZooKeeper().exists(path, false); - LOG.info("EPHEMERAL NODE BEFORE SERVER ABORT, path=" + path + ", stats=" - + stats); - byte[] bytes = ZKAssign.getData(zkw, hri.getEncodedName()); - RegionTransition rtd = RegionTransition.parseFrom(bytes); - // State could be SPLIT or SPLITTING. - assertTrue(rtd.getEventType().equals(EventType.RS_ZK_REGION_SPLIT) - || rtd.getEventType().equals(EventType.RS_ZK_REGION_SPLITTING)); - - // abort and wait for new master. - MockMasterWithoutCatalogJanitor master = abortAndWaitForMaster(); - - this.admin = new HBaseAdmin(TESTING_UTIL.getConfiguration()); - - // Update the region to be offline and split, so that HRegionInfo#equals - // returns true in checking rebuilt region states map. - hri.setOffline(true); - hri.setSplit(true); - ServerName regionServerOfRegion = master.getAssignmentManager() - .getRegionStates().getRegionServerOfRegion(hri); - assertTrue(regionServerOfRegion != null); - - // Remove the block so that split can move ahead. - AssignmentManager.TEST_SKIP_SPLIT_HANDLING = false; - String node = ZKAssign.getNodeName(zkw, hri.getEncodedName()); - Stat stat = new Stat(); - byte[] data = ZKUtil.getDataNoWatch(zkw, node, stat); - // ZKUtil.create - for (int i=0; data != null && i<60; i++) { - Thread.sleep(1000); - data = ZKUtil.getDataNoWatch(zkw, node, stat); - } - assertNull("Waited too long for ZK node to be removed: "+node, data); - RegionStates regionStates = master.getAssignmentManager().getRegionStates(); - assertTrue("Split parent should be in SPLIT state", - regionStates.isRegionInState(hri, State.SPLIT)); - regionServerOfRegion = regionStates.getRegionServerOfRegion(hri); - assertTrue(regionServerOfRegion == null); - } finally { - // Set this flag back. - AssignmentManager.TEST_SKIP_SPLIT_HANDLING = false; - admin.setBalancerRunning(true, false); - cluster.getMaster().setCatalogJanitorEnabled(true); - t.close(); - zkw.close(); - } - } - - /** * Verifies HBASE-5806. Here the case is that splitting is completed but before the * CJ could remove the parent region the master is killed and restarted. * @throws IOException @@ -832,22 +579,8 @@ public class TestSplitTransactionOnCluster { this.admin.split(hri.getRegionNameAsString()); checkAndGetDaughters(tableName); - // Assert the ephemeral node is up in zk. 
- String path = ZKAssign.getNodeName(zkw, hri.getEncodedName()); - Stat stats = zkw.getRecoverableZooKeeper().exists(path, false); - LOG.info("EPHEMERAL NODE BEFORE SERVER ABORT, path=" + path + ", stats=" - + stats); - String node = ZKAssign.getNodeName(zkw, hri.getEncodedName()); - Stat stat = new Stat(); - byte[] data = ZKUtil.getDataNoWatch(zkw, node, stat); - // ZKUtil.create - for (int i=0; data != null && i<60; i++) { - Thread.sleep(1000); - data = ZKUtil.getDataNoWatch(zkw, node, stat); - } - assertNull("Waited too long for ZK node to be removed: "+node, data); - MockMasterWithoutCatalogJanitor master = abortAndWaitForMaster(); + HMaster master = abortAndWaitForMaster(); this.admin = new HBaseAdmin(TESTING_UTIL.getConfiguration()); @@ -887,7 +620,6 @@ public class TestSplitTransactionOnCluster { @Test(timeout = 60000) public void testTableExistsIfTheSpecifiedTableRegionIsSplitParent() throws Exception { - ZooKeeperWatcher zkw = HBaseTestingUtility.getZooKeeperWatcher(TESTING_UTIL); final TableName tableName = TableName.valueOf("testTableExistsIfTheSpecifiedTableRegionIsSplitParent"); // Create table then get the single region for our new table. @@ -924,6 +656,82 @@ public class TestSplitTransactionOnCluster { } } + @Test + public void testSplitWithRegionReplicas() throws Exception { + final TableName tableName = + TableName.valueOf("foobar"); + HTableDescriptor htd = TESTING_UTIL.createTableDescriptor("foobar"); + htd.setRegionReplication(2); + htd.addCoprocessor(SlowMeCopro.class.getName()); + // Create table then get the single region for our new table. + Table t = TESTING_UTIL.createTable(htd, new byte[][]{Bytes.toBytes("cf")}, + TESTING_UTIL.getConfiguration()); + List oldRegions; + do { + oldRegions = cluster.getRegions(tableName); + Thread.sleep(10); + } while (oldRegions.size() != 2); + for (HRegion h : oldRegions) LOG.debug("OLDREGION " + h.getRegionInfo()); + try { + int regionServerIndex = cluster.getServerWith(oldRegions.get(0).getRegionName()); + HRegionServer regionServer = cluster.getRegionServer(regionServerIndex); + insertData(tableName, admin, t); + // Turn off balancer so it doesn't cut in and mess up our placements. + admin.setBalancerRunning(false, true); + // Turn off the meta scanner so it don't remove parent on us. 
+ cluster.getMaster().setCatalogJanitorEnabled(false); + boolean tableExists = MetaTableAccessor.tableExists(regionServer.getConnection(), + tableName); + assertEquals("The specified table should be present.", true, tableExists); + final HRegion region = findSplittableRegion(oldRegions); + regionServerIndex = cluster.getServerWith(region.getRegionName()); + regionServer = cluster.getRegionServer(regionServerIndex); + assertTrue("not able to find a splittable region", region != null); + SplitTransaction st = new SplitTransaction(region, Bytes.toBytes("row2")); + try { + st.prepare(); + st.execute(regionServer, regionServer); + } catch (IOException e) { + e.printStackTrace(); + fail("Split execution should have succeeded with no exceptions thrown " + e); + } + //TESTING_UTIL.waitUntilAllRegionsAssigned(tableName); + List newRegions; + do { + newRegions = cluster.getRegions(tableName); + for (HRegion h : newRegions) LOG.debug("NEWREGION " + h.getRegionInfo()); + Thread.sleep(1000); + } while ((newRegions.contains(oldRegions.get(0)) || newRegions.contains(oldRegions.get(1))) + || newRegions.size() != 4); + tableExists = MetaTableAccessor.tableExists(regionServer.getConnection(), + tableName); + assertEquals("The specified table should be present.", true, tableExists); + // exists works on stale and we see the put after the flush + byte[] b1 = "row1".getBytes(); + Get g = new Get(b1); + g.setConsistency(Consistency.STRONG); + // The following GET will make a trip to the meta to get the new location of the 1st daughter + // In the process it will also get the location of the replica of the daughter (initially + // pointing to the parent's replica) + Result r = t.get(g); + Assert.assertFalse(r.isStale()); + LOG.info("exists stale after flush done"); + + SlowMeCopro.getCdl().set(new CountDownLatch(1)); + g = new Get(b1); + g.setConsistency(Consistency.TIMELINE); + // This will succeed because in the previous GET we get the location of the replica + r = t.get(g); + Assert.assertTrue(r.isStale()); + SlowMeCopro.getCdl().get().countDown(); + } finally { + SlowMeCopro.getCdl().get().countDown(); + admin.setBalancerRunning(true, false); + cluster.getMaster().setCatalogJanitorEnabled(true); + t.close(); + } + } + private void insertData(final TableName tableName, HBaseAdmin admin, Table t) throws IOException, InterruptedException { Put p = new Put(Bytes.toBytes("row1")); @@ -1009,13 +817,13 @@ public class TestSplitTransactionOnCluster { } // We should not be able to assign it again - am.assign(hri, true, true); + am.assign(hri, true); assertFalse("Split region can't be assigned", regionStates.isRegionInTransition(hri)); assertTrue(regionStates.isRegionInState(hri, State.SPLIT)); // We should not be able to unassign it either - am.unassign(hri, true, null); + am.unassign(hri, null); assertFalse("Split region can't be unassigned", regionStates.isRegionInTransition(hri)); assertTrue(regionStates.isRegionInState(hri, State.SPLIT)); @@ -1025,6 +833,52 @@ public class TestSplitTransactionOnCluster { } } + /** + * Not really restarting the master. Simulate it by clear of new region + * state since it is not persisted, will be lost after master restarts. + */ + @Test(timeout = 180000) + public void testSplitAndRestartingMaster() throws Exception { + LOG.info("Starting testSplitAndRestartingMaster"); + final TableName tableName = TableName.valueOf("testSplitAndRestartingMaster"); + // Create table then get the single region for our new table. 
+ createTableAndWait(tableName, HConstants.CATALOG_FAMILY); + List regions = cluster.getRegions(tableName); + HRegionInfo hri = getAndCheckSingleTableRegion(regions); + ensureTableRegionNotOnSameServerAsMeta(admin, hri); + int regionServerIndex = cluster.getServerWith(regions.get(0).getRegionName()); + HRegionServer regionServer = cluster.getRegionServer(regionServerIndex); + // Turn off balancer so it doesn't cut in and mess up our placements. + this.admin.setBalancerRunning(false, true); + // Turn off the meta scanner so it don't remove parent on us. + cluster.getMaster().setCatalogJanitorEnabled(false); + try { + MyMasterRpcServices.enabled.set(true); + // find a splittable region. Refresh the regions list + regions = cluster.getRegions(tableName); + final HRegion region = findSplittableRegion(regions); + assertTrue("not able to find a splittable region", region != null); + + // Now split. + SplitTransaction st = new SplitTransaction(region, Bytes.toBytes("row2")); + try { + st.prepare(); + st.execute(regionServer, regionServer); + } catch (IOException e) { + fail("Split execution should have succeeded with no exceptions thrown"); + } + + // Postcondition + List daughters = cluster.getRegions(tableName); + LOG.info("xxx " + regions.size() + AssignmentManager.TEST_SKIP_SPLIT_HANDLING); + assertTrue(daughters.size() == 2); + } finally { + MyMasterRpcServices.enabled.set(false); + admin.setBalancerRunning(true, false); + cluster.getMaster().setCatalogJanitorEnabled(true); + } + } + @Test(timeout = 180000) public void testSplitHooksBeforeAndAfterPONR() throws Exception { TableName firstTable = TableName.valueOf("testSplitHooksBeforeAndAfterPONR_1"); @@ -1101,17 +955,6 @@ public class TestSplitTransactionOnCluster { throw new SplittingNodeCreationFailedException (); } }; - String node = ZKAssign.getNodeName(regionServer.getZooKeeper(), - region.getRegionInfo().getEncodedName()); - regionServer.getZooKeeper().sync(node); - for (int i = 0; i < 100; i++) { - // We expect the znode to be deleted by this time. Here the - // znode could be in OPENED state and the - // master has not yet deleted the znode. 
- if (ZKUtil.checkExists(regionServer.getZooKeeper(), node) != -1) { - Thread.sleep(100); - } - } try { st.prepare(); st.execute(regionServer, regionServer); @@ -1121,13 +964,7 @@ public class TestSplitTransactionOnCluster { // This will at least make the test to fail; assertTrue("Should be instance of CreateSplittingNodeFailedException", e instanceof SplittingNodeCreationFailedException ); - node = ZKAssign.getNodeName(regionServer.getZooKeeper(), - region.getRegionInfo().getEncodedName()); - { - assertTrue(ZKUtil.checkExists(regionServer.getZooKeeper(), node) == -1); - } assertTrue(st.rollback(regionServer, regionServer)); - assertTrue(ZKUtil.checkExists(regionServer.getZooKeeper(), node) == -1); } } finally { TESTING_UTIL.deleteTable(tableName); @@ -1163,51 +1000,12 @@ public class TestSplitTransactionOnCluster { TESTING_UTIL.deleteTable(tableName); } } - - @Test - public void testFailedSplit() throws Exception { - TableName tableName = TableName.valueOf("testFailedSplit"); - byte[] colFamily = Bytes.toBytes("info"); - TESTING_UTIL.createTable(tableName, colFamily); - Connection connection = ConnectionFactory.createConnection(TESTING_UTIL.getConfiguration()); - HTable table = (HTable) connection.getTable(tableName); - try { - TESTING_UTIL.loadTable(table, colFamily); - List regions = TESTING_UTIL.getHBaseAdmin().getTableRegions(tableName); - assertTrue(regions.size() == 1); - final HRegion actualRegion = cluster.getRegions(tableName).get(0); - actualRegion.getCoprocessorHost().load(FailingSplitRegionObserver.class, - Coprocessor.PRIORITY_USER, actualRegion.getBaseConf()); - - // The following split would fail. - admin.split(tableName); - FailingSplitRegionObserver.latch.await(); - LOG.info("Waiting for region to come out of RIT"); - TESTING_UTIL.waitFor(60000, 1000, new Waiter.Predicate() { - @Override - public boolean evaluate() throws Exception { - RegionStates regionStates = cluster.getMaster().getAssignmentManager().getRegionStates(); - Map rit = regionStates.getRegionsInTransition(); - return !rit.containsKey(actualRegion.getRegionInfo().getEncodedName()); - } - }); - regions = TESTING_UTIL.getHBaseAdmin().getTableRegions(tableName); - assertTrue(regions.size() == 1); - } finally { - table.close(); - connection.close(); - TESTING_UTIL.deleteTable(tableName); - } - } public static class MockedCoordinatedStateManager extends ZkCoordinatedStateManager { public void initialize(Server server, HRegion region) { this.server = server; this.watcher = server.getZooKeeper(); - splitTransactionCoordination = new MockedSplitTransactionCoordination(this, watcher, region); - closeRegionCoordination = new ZkCloseRegionCoordination(this, watcher); - openRegionCoordination = new ZkOpenRegionCoordination(this, watcher); } } @@ -1230,46 +1028,12 @@ public class TestSplitTransactionOnCluster { } return super.rollback(server, services); } - - - } - - public static class MockedSplitTransactionCoordination extends ZKSplitTransactionCoordination { - - private HRegion currentRegion; - - public MockedSplitTransactionCoordination(CoordinatedStateManager coordinationProvider, - ZooKeeperWatcher watcher, HRegion region) { - super(coordinationProvider, watcher); - currentRegion = region; - } - - @Override - public void completeSplitTransaction(RegionServerServices services, HRegion a, HRegion b, - SplitTransactionDetails std, HRegion parent) throws IOException { - if (this.currentRegion.getRegionInfo().getTable().getNameAsString() - .equals("testShouldFailSplitIfZNodeDoesNotExistDueToPrevRollBack")) { - try { - 
if (!secondSplit){ - callRollBack = true; - latch.await(); - } - } catch (InterruptedException e) { - } - - } - super.completeSplitTransaction(services, a, b, std, parent); - if (this.currentRegion.getRegionInfo().getTable().getNameAsString() - .equals("testShouldFailSplitIfZNodeDoesNotExistDueToPrevRollBack")) { - firstSplitCompleted = true; } - } - } private HRegion findSplittableRegion(final List regions) throws InterruptedException { for (int i = 0; i < 5; ++i) { for (HRegion r: regions) { - if (r.isSplittable()) { + if (r.isSplittable() && r.getRegionInfo().getReplicaId() == 0) { return(r); } } @@ -1291,14 +1055,11 @@ public class TestSplitTransactionOnCluster { return daughters; } - private MockMasterWithoutCatalogJanitor abortAndWaitForMaster() + private HMaster abortAndWaitForMaster() throws IOException, InterruptedException { cluster.abortMaster(0); cluster.waitOnMaster(0); - cluster.getConfiguration().setClass(HConstants.MASTER_IMPL, - MockMasterWithoutCatalogJanitor.class, HMaster.class); - MockMasterWithoutCatalogJanitor master = null; - master = (MockMasterWithoutCatalogJanitor) cluster.startMaster().getMaster(); + HMaster master = cluster.startMaster().getMaster(); cluster.waitForActiveAndReadyMaster(); return master; } @@ -1306,22 +1067,14 @@ public class TestSplitTransactionOnCluster { private void split(final HRegionInfo hri, final HRegionServer server, final int regionCount) throws IOException, InterruptedException { this.admin.split(hri.getRegionNameAsString()); - try { - for (int i = 0; ProtobufUtil.getOnlineRegions( - server.getRSRpcServices()).size() <= regionCount && i < 300; i++) { - LOG.debug("Waiting on region to split"); - Thread.sleep(100); - } - - assertFalse("Waited too long for split", - ProtobufUtil.getOnlineRegions(server.getRSRpcServices()).size() <= regionCount); - } catch (RegionServerStoppedException e) { - if (useZKForAssignment) { - // If not using ZK for assignment, the exception may be expected. - LOG.error(e); - throw e; - } + for (int i = 0; ProtobufUtil.getOnlineRegions( + server.getRSRpcServices()).size() <= regionCount && i < 300; i++) { + LOG.debug("Waiting on region to split"); + Thread.sleep(100); } + + assertFalse("Waited too long for split", + ProtobufUtil.getOnlineRegions(server.getRSRpcServices()).size() <= regionCount); } /** @@ -1343,8 +1096,8 @@ public class TestSplitTransactionOnCluster { // hbase:meta We don't want hbase:meta replay polluting our test when we later crash // the table region serving server. int metaServerIndex = cluster.getServerWithMeta(); - assertTrue(metaServerIndex != -1); - HRegionServer metaRegionServer = cluster.getRegionServer(metaServerIndex); + assertTrue(metaServerIndex == -1); // meta is on master now + HRegionServer metaRegionServer = cluster.getMaster(); int tableRegionIndex = cluster.getServerWith(hri.getRegionName()); assertTrue(tableRegionIndex != -1); HRegionServer tableRegionServer = cluster.getRegionServer(tableRegionIndex); @@ -1358,12 +1111,12 @@ public class TestSplitTransactionOnCluster { admin.move(hri.getEncodedNameAsBytes(), Bytes.toBytes(hrs.getServerName().toString())); } // Wait till table region is up on the server that is NOT carrying hbase:meta. 
- for (int i = 0; i < 20; i++) { + for (int i = 0; i < 100; i++) { tableRegionIndex = cluster.getServerWith(hri.getRegionName()); if (tableRegionIndex != -1 && tableRegionIndex != metaServerIndex) break; LOG.debug("Waiting on region move off the hbase:meta server; current index " + tableRegionIndex + " and metaServerIndex=" + metaServerIndex); - Thread.sleep(1000); + Thread.sleep(100); } assertTrue("Region not moved off hbase:meta server", tableRegionIndex != -1 && tableRegionIndex != metaServerIndex); @@ -1404,13 +1157,14 @@ public class TestSplitTransactionOnCluster { private void waitUntilRegionServerDead() throws InterruptedException, InterruptedIOException { // Wait until the master processes the RS shutdown - for (int i=0; cluster.getMaster().getClusterStatus(). - getServers().size() > NB_SERVERS && i<100; i++) { + for (int i=0; (cluster.getMaster().getClusterStatus().getServers().size() > NB_SERVERS + || cluster.getLiveRegionServerThreads().size() > NB_SERVERS) && i<100; i++) { LOG.info("Waiting on server to go down"); Thread.sleep(100); } - assertFalse("Waited too long for RS to die", cluster.getMaster().getClusterStatus(). - getServers().size() > NB_SERVERS); + assertFalse("Waited too long for RS to die", + cluster.getMaster().getClusterStatus(). getServers().size() > NB_SERVERS + || cluster.getLiveRegionServerThreads().size() > NB_SERVERS); } private void awaitDaughters(TableName tableName, int numDaughters) throws InterruptedException { @@ -1443,20 +1197,52 @@ public class TestSplitTransactionOnCluster { return t; } - public static class MockMasterWithoutCatalogJanitor extends HMaster { + private static class SplittingNodeCreationFailedException extends IOException { + private static final long serialVersionUID = 1652404976265623004L; - public MockMasterWithoutCatalogJanitor(Configuration conf, CoordinatedStateManager cp) + public SplittingNodeCreationFailedException () { + super(); + } + } + + // Make it public so that JVMClusterUtil can access it. 
+ public static class MyMaster extends HMaster { + public MyMaster(Configuration conf, CoordinatedStateManager cp) throws IOException, KeeperException, InterruptedException { super(conf, cp); } + + @Override + protected RSRpcServices createRpcServices() throws IOException { + return new MyMasterRpcServices(this); + } } - private static class SplittingNodeCreationFailedException extends IOException { - private static final long serialVersionUID = 1652404976265623004L; + static class MyMasterRpcServices extends MasterRpcServices { + static AtomicBoolean enabled = new AtomicBoolean(false); - public SplittingNodeCreationFailedException () { - super(); + private HMaster myMaster; + public MyMasterRpcServices(HMaster master) throws IOException { + super(master); + myMaster = master; + } + + @Override + public ReportRegionStateTransitionResponse reportRegionStateTransition(RpcController c, + ReportRegionStateTransitionRequest req) throws ServiceException { + ReportRegionStateTransitionResponse resp = super.reportRegionStateTransition(c, req); + if (enabled.get() && req.getTransition(0).getTransitionCode().equals( + TransitionCode.READY_TO_SPLIT) && !resp.hasErrorMessage()) { + RegionStates regionStates = myMaster.getAssignmentManager().getRegionStates(); + for (RegionState regionState: regionStates.getRegionsInTransition().values()) { + // Find the merging_new region and remove it + if (regionState.isSplittingNew()) { + regionStates.deleteRegion(regionState.getRegion()); + } + } + } + return resp; } } @@ -1513,7 +1299,6 @@ public class TestSplitTransactionOnCluster { HRegionServer rs = (HRegionServer) environment.getRegionServerServices(); st.stepsAfterPONR(rs, rs, daughterRegions); } - } static class CustomSplitPolicy extends RegionSplitPolicy { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java index 6ee785d..b5bc927 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStore.java @@ -58,6 +58,7 @@ import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValue.KVComparator; import org.apache.hadoop.hbase.KeyValueUtil; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.io.compress.Compression; @@ -93,7 +94,7 @@ import com.google.common.collect.Lists; /** * Test class for the Store */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestStore { public static final Log LOG = LogFactory.getLog(TestStore.class); @Rule public TestName name = new TestName(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java index 36a7f77..e5a5022 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java @@ -40,6 +40,7 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import 
org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Scan; @@ -57,6 +58,9 @@ import org.apache.hadoop.hbase.util.BloomFilterFactory; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.ChecksumType; import org.apache.hadoop.hbase.util.FSUtils; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; @@ -67,7 +71,7 @@ import com.google.common.collect.Lists; /** * Test HStoreFile */ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestStoreFile extends HBaseTestCase { static final Log LOG = LogFactory.getLog(TestStoreFile.class); private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @@ -77,12 +81,12 @@ public class TestStoreFile extends HBaseTestCase { private static final int CKBYTES = 512; private static String TEST_FAMILY = "cf"; - @Override + @Before public void setUp() throws Exception { super.setUp(); } - @Override + @After public void tearDown() throws Exception { super.tearDown(); } @@ -92,11 +96,12 @@ public class TestStoreFile extends HBaseTestCase { * using two HalfMapFiles. * @throws Exception */ + @Test public void testBasicHalfMapFile() throws Exception { final HRegionInfo hri = new HRegionInfo(TableName.valueOf("testBasicHalfMapFileTb")); HRegionFileSystem regionFs = HRegionFileSystem.createRegionOnFileSystem( - conf, fs, new Path(this.testDir, hri.getTable().getNameAsString()), hri); + conf, fs, new Path(testDir, hri.getTable().getNameAsString()), hri); HFileContext meta = new HFileContextBuilder().withBlockSize(2*1024).build(); StoreFile.Writer writer = new StoreFile.WriterBuilder(conf, cacheConf, this.fs) @@ -144,10 +149,11 @@ public class TestStoreFile extends HBaseTestCase { * store files in other regions works. * @throws IOException */ + @Test public void testReference() throws IOException { final HRegionInfo hri = new HRegionInfo(TableName.valueOf("testReferenceTb")); HRegionFileSystem regionFs = HRegionFileSystem.createRegionOnFileSystem( - conf, fs, new Path(this.testDir, hri.getTable().getNameAsString()), hri); + conf, fs, new Path(testDir, hri.getTable().getNameAsString()), hri); HFileContext meta = new HFileContextBuilder().withBlockSize(8 * 1024).build(); // Make a store file and write data to it. @@ -187,13 +193,14 @@ public class TestStoreFile extends HBaseTestCase { assertTrue(Bytes.equals(kv.getRow(), finalRow)); } + @Test public void testHFileLink() throws IOException { final HRegionInfo hri = new HRegionInfo(TableName.valueOf("testHFileLinkTb")); // force temp data in hbase/target/test-data instead of /tmp/hbase-xxxx/ Configuration testConf = new Configuration(this.conf); - FSUtils.setRootDir(testConf, this.testDir); + FSUtils.setRootDir(testConf, testDir); HRegionFileSystem regionFs = HRegionFileSystem.createRegionOnFileSystem( - testConf, fs, FSUtils.getTableDir(this.testDir, hri.getTable()), hri); + testConf, fs, FSUtils.getTableDir(testDir, hri.getTable()), hri); HFileContext meta = new HFileContextBuilder().withBlockSize(8 * 1024).build(); // Make a store file and write data to it. @@ -229,15 +236,16 @@ public class TestStoreFile extends HBaseTestCase { * This test creates an hfile and then the dir structures and files to verify that references * to hfilelinks (created by snapshot clones) can be properly interpreted. 
*/ + @Test public void testReferenceToHFileLink() throws IOException { // force temp data in hbase/target/test-data instead of /tmp/hbase-xxxx/ Configuration testConf = new Configuration(this.conf); - FSUtils.setRootDir(testConf, this.testDir); + FSUtils.setRootDir(testConf, testDir); // adding legal table name chars to verify regex handles it. HRegionInfo hri = new HRegionInfo(TableName.valueOf("_original-evil-name")); HRegionFileSystem regionFs = HRegionFileSystem.createRegionOnFileSystem( - testConf, fs, FSUtils.getTableDir(this.testDir, hri.getTable()), hri); + testConf, fs, FSUtils.getTableDir(testDir, hri.getTable()), hri); HFileContext meta = new HFileContextBuilder().withBlockSize(8 * 1024).build(); // Make a store file and write data to it. //// @@ -251,7 +259,7 @@ public class TestStoreFile extends HBaseTestCase { // create link to store file. /clone/region//--

[Detached hunks from the master and regionserver status templates (their diff headers are missing at this point): the web UI rows are changed to render region information through the display-safe HRegionInfo accessors, i.e. HRegionInfo.getDescriptiveNameFromRegionStateForDisplay(regionState, conf) for the regions-in-transition table, and HRegionInfo.getRegionNameAsStringForDisplay(r, conf), Bytes.toStringBinary(HRegionInfo.getRegionNameForDisplay(regionInfo, conf)), Bytes.toStringBinary(HRegionInfo.getStartKeyForDisplay(r, conf)) and Bytes.toStringBinary(HRegionInfo.getEndKeyForDisplay(r, conf)) in place of the raw region name, start key and end key; the replica id, read/write request, store file, compaction and memstore columns are unchanged.]
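The surviving fragments above all funnel region names and keys through HRegionInfo's *ForDisplay helpers instead of printing them directly. Below is a minimal illustrative sketch of that usage pattern, not part of the patch: the class and method names (RegionDisplayExample, renderRegionRow) are hypothetical, and it assumes the static HRegionInfo accessors are available with exactly the signatures the template fragments call them with.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical helper, not part of the patch: renders one region row roughly the
    // way the status templates do after this change.
    public class RegionDisplayExample {
      static String renderRegionRow(HRegionInfo r, Configuration conf) {
        // The *ForDisplay accessors take the server Configuration, presumably so the
        // UI can be told to mask key material; otherwise they degrade to the plain
        // region name / start key / end key values.
        String name = HRegionInfo.getRegionNameAsStringForDisplay(r, conf);
        String startKey = Bytes.toStringBinary(HRegionInfo.getStartKeyForDisplay(r, conf));
        String endKey = Bytes.toStringBinary(HRegionInfo.getEndKeyForDisplay(r, conf));
        return name + " [" + startKey + ", " + endKey + ")";
      }
    }
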
    HRegionInfo hriClone = new HRegionInfo(TableName.valueOf("clone")); HRegionFileSystem cloneRegionFs = HRegionFileSystem.createRegionOnFileSystem( - testConf, fs, FSUtils.getTableDir(this.testDir, hri.getTable()), + testConf, fs, FSUtils.getTableDir(testDir, hri.getTable()), hriClone); Path dstPath = cloneRegionFs.getStoreDir(TEST_FAMILY); HFileLink.create(testConf, this.fs, dstPath, hri, storeFilePath.getName()); @@ -268,7 +276,7 @@ public class TestStoreFile extends HBaseTestCase { Path pathB = splitStoreFile(cloneRegionFs, splitHriB, TEST_FAMILY, f, SPLITKEY, false);// bottom // OK test the thing - FSUtils.logFileSystemState(fs, this.testDir, LOG); + FSUtils.logFileSystemState(fs, testDir, LOG); // There is a case where a file with the hfilelink pattern is actually a daughter // reference to a hfile link. This code in StoreFile that handles this case. @@ -493,6 +501,7 @@ public class TestStoreFile extends HBaseTestCase { private static final int BLOCKSIZE_SMALL = 8192; + @Test public void testBloomFilter() throws Exception { FileSystem fs = FileSystem.getLocal(conf); conf.setFloat(BloomFilterFactory.IO_STOREFILE_BLOOM_ERROR_RATE, (float) 0.01); @@ -513,6 +522,7 @@ public class TestStoreFile extends HBaseTestCase { bloomWriteRead(writer, fs); } + @Test public void testDeleteFamilyBloomFilter() throws Exception { FileSystem fs = FileSystem.getLocal(conf); conf.setFloat(BloomFilterFactory.IO_STOREFILE_BLOOM_ERROR_RATE, (float) 0.01); @@ -575,6 +585,7 @@ public class TestStoreFile extends HBaseTestCase { /** * Test for HBASE-8012 */ + @Test public void testReseek() throws Exception { // write the file Path f = new Path(ROOT_DIR, getName()); @@ -599,6 +610,7 @@ public class TestStoreFile extends HBaseTestCase { assertNotNull("Intial reseek should position at the beginning of the file", s.peek()); } + @Test public void testBloomTypes() throws Exception { float err = (float) 0.01; FileSystem fs = FileSystem.getLocal(conf); @@ -687,6 +699,7 @@ public class TestStoreFile extends HBaseTestCase { } } + @Test public void testSeqIdComparator() { assertOrdering(StoreFile.Comparators.SEQ_ID, mockStoreFile(true, 100, 1000, -1, "/foo/123"), @@ -765,6 +778,7 @@ public class TestStoreFile extends HBaseTestCase { * Test to ensure correctness when using StoreFile with multiple timestamps * @throws IOException */ + @Test public void testMultipleTimestamps() throws IOException { byte[] family = Bytes.toBytes("familyname"); byte[] qualifier = Bytes.toBytes("qualifier"); @@ -773,7 +787,7 @@ public class TestStoreFile extends HBaseTestCase { Scan scan = new Scan(); // Make up a directory hierarchy that has a regiondir ("7e0102") and familyname. - Path storedir = new Path(new Path(this.testDir, "7e0102"), "familyname"); + Path storedir = new Path(new Path(testDir, "7e0102"), "familyname"); Path dir = new Path(storedir, "1234567890"); HFileContext meta = new HFileContextBuilder().withBlockSize(8 * 1024).build(); // Make a store file and write data to it. @@ -815,11 +829,12 @@ public class TestStoreFile extends HBaseTestCase { assertTrue(!scanner.shouldUseScanner(scan, columns, Long.MIN_VALUE)); } + @Test public void testCacheOnWriteEvictOnClose() throws Exception { Configuration conf = this.conf; // Find a home for our files (regiondir ("7e0102") and familyname). 
- Path baseDir = new Path(new Path(this.testDir, "7e0102"),"twoCOWEOC"); + Path baseDir = new Path(new Path(testDir, "7e0102"),"twoCOWEOC"); // Grab the block cache and get the initial hit/miss counts BlockCache bc = new CacheConfig(conf).getBlockCache(); @@ -987,9 +1002,10 @@ public class TestStoreFile extends HBaseTestCase { * Check if data block encoding information is saved correctly in HFile's * file info. */ + @Test public void testDataBlockEncodingMetaData() throws IOException { // Make up a directory hierarchy that has a regiondir ("7e0102") and familyname. - Path dir = new Path(new Path(this.testDir, "7e0102"), "familyname"); + Path dir = new Path(new Path(testDir, "7e0102"), "familyname"); Path path = new Path(dir, "1234567890"); DataBlockEncoding dataBlockEncoderAlgo = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java index 955996c..da39f59 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java @@ -20,20 +20,23 @@ package org.apache.hadoop.hbase.regionserver; import org.apache.hadoop.hbase.HBaseTestCase; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.HFileLink; +import org.junit.Test; import org.junit.experimental.categories.Category; /** * Test HStoreFile */ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestStoreFileInfo extends HBaseTestCase { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); /** * Validate that we can handle valid tables with '.', '_', and '-' chars. 
*/ + @Test public void testStoreFileNames() { String[] legalHFileLink = { "MyTable_02=abc012-def345", "MyTable_02.300=abc012-def345", "MyTable_02-400=abc012-def345", "MyTable_02-400.200=abc012-def345", diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileRefresherChore.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileRefresherChore.java index d8e56a2..0319051 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileRefresherChore.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileRefresherChore.java @@ -38,6 +38,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Stoppable; import org.apache.hadoop.hbase.TableName; @@ -54,7 +55,7 @@ import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestStoreFileRefresherChore { private HBaseTestingUtility TEST_UTIL; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileScannerWithTagCompression.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileScannerWithTagCompression.java index 4a6b2e7..1bcb7c9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileScannerWithTagCompression.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileScannerWithTagCompression.java @@ -31,6 +31,7 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; @@ -42,7 +43,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestStoreFileScannerWithTagCompression { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java index 2c0b35f..bf9fed6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreScanner.java @@ -36,6 +36,7 @@ import org.apache.hadoop.hbase.KeepDeletedCells; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueTestUtil; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdge; @@ -43,7 +44,7 @@ import org.apache.hadoop.hbase.util.EnvironmentEdgeManagerTestHelper; import org.junit.experimental.categories.Category; // 
Can't be small as it plays with EnvironmentEdgeManager -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestStoreScanner extends TestCase { private static final String CF_STR = "cf"; final byte [] CF = Bytes.toBytes(CF_STR); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeCompactor.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeCompactor.java index 1c20305..ed8b819 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeCompactor.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeCompactor.java @@ -38,6 +38,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValue.KVComparator; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.io.compress.Compression; @@ -51,7 +52,7 @@ import org.mockito.invocation.InvocationOnMock; import org.mockito.stubbing.Answer; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestStripeCompactor { private static final byte[] NAME_OF_THINGS = Bytes.toBytes("foo"); private static final TableName TABLE_NAME = TableName.valueOf(NAME_OF_THINGS, NAME_OF_THINGS); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreEngine.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreEngine.java index c9a8839..d8cdc90 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreEngine.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreEngine.java @@ -27,6 +27,7 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.KeyValue.KVComparator; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext; import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest; @@ -35,7 +36,7 @@ import org.apache.hadoop.hbase.regionserver.compactions.StripeCompactor; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestStripeStoreEngine { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreFileManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreFileManager.java index 36a726d..48f93e0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreFileManager.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStripeStoreFileManager.java @@ -41,6 +41,7 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValue.KVComparator; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; @@ -49,8 +50,7 @@ 
import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; - -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestStripeStoreFileManager { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private static final Path BASEDIR = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java index 581b987..eaea83e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTags.java @@ -35,6 +35,7 @@ import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.client.Admin; @@ -67,7 +68,7 @@ import org.junit.rules.TestName; /** * Class that test tags */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestTags { static boolean useFilter = false; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTimeRangeTracker.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTimeRangeTracker.java index 85fb5dc..edec023 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTimeRangeTracker.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestTimeRangeTracker.java @@ -21,10 +21,11 @@ import static org.junit.Assert.assertTrue; import org.apache.hadoop.hbase.io.TimeRange; import org.apache.hadoop.hbase.testclassification.SmallTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category({SmallTests.class}) +@Category({RegionServerTests.class, SmallTests.class}) public class TestTimeRangeTracker { @Test public void testAlwaysDecrementingSetsMaximum() { @@ -107,4 +108,4 @@ public class TestTimeRangeTracker { System.out.println(trr.getMinimumTimestamp() + " " + trr.getMaximumTimestamp() + " " + (System.currentTimeMillis() - start)); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWideScanner.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWideScanner.java index 7ed85ca..b929cfe 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWideScanner.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestWideScanner.java @@ -32,15 +32,17 @@ import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestCase; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.util.Bytes; +import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, 
SmallTests.class}) public class TestWideScanner extends HBaseTestCase { private final Log LOG = LogFactory.getLog(this.getClass()); @@ -84,6 +86,7 @@ public class TestWideScanner extends HBaseTestCase { return count; } + @Test public void testWideScanBatching() throws IOException { final int batch = 256; try { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestZKLessMergeOnCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestZKLessMergeOnCluster.java deleted file mode 100644 index 4900af8..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestZKLessMergeOnCluster.java +++ /dev/null @@ -1,45 +0,0 @@ -/** - * Copyright The Apache Software Foundation - * - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with this - * work for additional information regarding copyright ownership. The ASF - * licenses this file to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT - * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - * License for the specific language governing permissions and limitations - * under the License. - */ -package org.apache.hadoop.hbase.regionserver; - -import org.apache.hadoop.hbase.testclassification.LargeTests; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.experimental.categories.Category; - -/** - * Like {@link TestRegionMergeTransaction} in that we're testing - * {@link RegionMergeTransaction} only the below tests are against a running - * cluster where {@link TestRegionMergeTransaction} is tests against bare - * {@link HRegion}. - */ -@Category(LargeTests.class) -public class TestZKLessMergeOnCluster extends TestRegionMergeTransactionOnCluster { - @BeforeClass - public static void beforeAllTests() throws Exception { - // Don't use ZK for region assignment - TEST_UTIL.getConfiguration().setBoolean("hbase.assignment.usezk", false); - setupOnce(); - } - - @AfterClass - public static void afterAllTests() throws Exception { - TestRegionMergeTransactionOnCluster.afterAllTests(); - } -} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestZKLessSplitOnCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestZKLessSplitOnCluster.java deleted file mode 100644 index 1201c01..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestZKLessSplitOnCluster.java +++ /dev/null @@ -1,45 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.regionserver; - -import org.apache.hadoop.hbase.testclassification.LargeTests; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.experimental.categories.Category; - -/** - * Like {@link TestSplitTransaction} in that we're testing {@link SplitTransaction} - * only the below tests are against a running cluster where {@link TestSplitTransaction} - * is tests against a bare {@link HRegion}. - */ -@Category(LargeTests.class) -public class TestZKLessSplitOnCluster extends TestSplitTransactionOnCluster { - @BeforeClass - public static void before() throws Exception { - // Don't use ZK for region assignment - TESTING_UTIL.getConfiguration().setBoolean("hbase.assignment.usezk", false); - setupOnce(); - } - - @AfterClass - public static void after() throws Exception { - TestSplitTransactionOnCluster.after(); - } -} - diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/PerfTestCompactionPolicies.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/PerfTestCompactionPolicies.java index 1e96aa0..3fcd3fe 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/PerfTestCompactionPolicies.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/PerfTestCompactionPolicies.java @@ -23,6 +23,7 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.regionserver.HStore; import org.apache.hadoop.hbase.regionserver.StoreConfigInformation; import org.apache.hadoop.hbase.regionserver.StoreFile; @@ -41,7 +42,7 @@ import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) @RunWith(Parameterized.class) public class PerfTestCompactionPolicies extends MockStoreFileGenerator { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestOffPeakHours.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestOffPeakHours.java index 194e1f8..f43c29a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestOffPeakHours.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestOffPeakHours.java @@ -22,13 +22,14 @@ import static org.junit.Assert.assertTrue; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Before; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestOffPeakHours { private static 
HBaseTestingUtility testUtil; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestStripeCompactionPolicy.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestStripeCompactionPolicy.java index c2f1739..0685568 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestStripeCompactionPolicy.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestStripeCompactionPolicy.java @@ -48,6 +48,7 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.hfile.HFile; @@ -75,7 +76,7 @@ import org.mockito.ArgumentMatcher; import com.google.common.collect.ImmutableList; import com.google.common.collect.Lists; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestStripeCompactionPolicy { private static final byte[] KEY_A = Bytes.toBytes("aaa"); private static final byte[] KEY_B = Bytes.toBytes("bbb"); @@ -131,6 +132,8 @@ public class TestStripeCompactionPolicy { public void testSingleStripeCompaction() throws Exception { // Create a special policy that only compacts single stripes, using standard methods. Configuration conf = HBaseConfiguration.create(); + // Test depends on this not being set to pass. Default breaks test. TODO: Revisit. + conf.unset("hbase.hstore.compaction.min.size"); conf.setFloat(CompactionConfiguration.HBASE_HSTORE_COMPACTION_RATIO_KEY, 1.0F); conf.setInt(StripeStoreConfig.MIN_FILES_KEY, 3); conf.setInt(StripeStoreConfig.MAX_FILES_KEY, 4); @@ -251,6 +254,8 @@ public class TestStripeCompactionPolicy { @Test public void testSplitOffStripe() throws Exception { Configuration conf = HBaseConfiguration.create(); + // Test depends on this not being set to pass. Default breaks test. TODO: Revisit. + conf.unset("hbase.hstore.compaction.min.size"); // First test everything with default split count of 2, then split into more. conf.setInt(StripeStoreConfig.MIN_FILES_KEY, 2); Long[] toSplit = new Long[] { defaultSplitSize - 2, 1L, 1L }; @@ -281,6 +286,10 @@ public class TestStripeCompactionPolicy { public void testSplitOffStripeOffPeak() throws Exception { // for HBASE-11439 Configuration conf = HBaseConfiguration.create(); + + // Test depends on this not being set to pass. Default breaks test. TODO: Revisit. + conf.unset("hbase.hstore.compaction.min.size"); + conf.setInt(StripeStoreConfig.MIN_FILES_KEY, 2); // Select the last 2 files. StripeCompactionPolicy.StripeInformationProvider si = @@ -391,6 +400,8 @@ public class TestStripeCompactionPolicy { @Test public void testSingleStripeDropDeletes() throws Exception { Configuration conf = HBaseConfiguration.create(); + // Test depends on this not being set to pass. Default breaks test. TODO: Revisit. + conf.unset("hbase.hstore.compaction.min.size"); StripeCompactionPolicy policy = createPolicy(conf); // Verify the deletes can be dropped if there are no L0 files. 
Long[][] stripes = new Long[][] { new Long[] { 3L, 2L, 2L, 2L }, new Long[] { 6L } }; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/handler/TestCloseRegionHandler.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/handler/TestCloseRegionHandler.java deleted file mode 100644 index 75d4b3d..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/handler/TestCloseRegionHandler.java +++ /dev/null @@ -1,255 +0,0 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.hbase.regionserver.handler; - -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertTrue; - -import java.io.IOException; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.RegionTransition; -import org.apache.hadoop.hbase.Server; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.coordination.OpenRegionCoordination; -import org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager; -import org.apache.hadoop.hbase.exceptions.DeserializationException; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.regionserver.RegionServerServices; -import org.apache.hadoop.hbase.coordination.ZkCloseRegionCoordination; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.MockServer; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.zookeeper.KeeperException; -import org.apache.zookeeper.KeeperException.NodeExistsException; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; -import org.junit.experimental.categories.Category; -import org.mockito.Mockito; - -/** - * Test of the {@link CloseRegionHandler}. 
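The conf.unset("hbase.hstore.compaction.min.size") lines added to TestStripeCompactionPolicy all serve the same purpose: clear the default size floor before building the test configuration so that stripe selection is driven only by the ratio and file-count keys the test sets explicitly. A minimal sketch of that setup, reusing only the keys and helpers visible in the hunks above (everything else is illustrative):

    // Sketch of the configuration pattern added to the stripe compaction policy tests.
    Configuration conf = HBaseConfiguration.create();
    // The default minimum compaction size would otherwise short-circuit stripe selection.
    conf.unset("hbase.hstore.compaction.min.size");
    conf.setFloat(CompactionConfiguration.HBASE_HSTORE_COMPACTION_RATIO_KEY, 1.0F);
    conf.setInt(StripeStoreConfig.MIN_FILES_KEY, 3);
    conf.setInt(StripeStoreConfig.MAX_FILES_KEY, 4);
    StripeCompactionPolicy policy = createPolicy(conf);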
- */ -@Category(MediumTests.class) -public class TestCloseRegionHandler { - static final Log LOG = LogFactory.getLog(TestCloseRegionHandler.class); - private final static HBaseTestingUtility HTU = HBaseTestingUtility.createLocalHTU(); - private static final HTableDescriptor TEST_HTD = - new HTableDescriptor(TableName.valueOf("TestCloseRegionHandler")); - private HRegionInfo TEST_HRI; - private int testIndex = 0; - - @BeforeClass public static void before() throws Exception { - HTU.getConfiguration().setBoolean("hbase.assignment.usezk", true); - HTU.startMiniZKCluster(); - } - - @AfterClass public static void after() throws IOException { - HTU.shutdownMiniZKCluster(); - } - - /** - * Before each test, use a different HRI, so the different tests - * don't interfere with each other. This allows us to use just - * a single ZK cluster for the whole suite. - */ - @Before - public void setupHRI() { - TEST_HRI = new HRegionInfo(TEST_HTD.getTableName(), - Bytes.toBytes(testIndex), - Bytes.toBytes(testIndex + 1)); - testIndex++; - } - - /** - * Test that if we fail a flush, abort gets set on close. - * @see HBASE-4270 - * @throws IOException - * @throws NodeExistsException - * @throws KeeperException - */ - @Test public void testFailedFlushAborts() - throws IOException, NodeExistsException, KeeperException { - final Server server = new MockServer(HTU, false); - final RegionServerServices rss = HTU.createMockRegionServerService(); - HTableDescriptor htd = TEST_HTD; - final HRegionInfo hri = - new HRegionInfo(htd.getTableName(), HConstants.EMPTY_END_ROW, - HConstants.EMPTY_END_ROW); - HRegion region = HTU.createLocalHRegion(hri, htd); - try { - assertNotNull(region); - // Spy on the region so can throw exception when close is called. - HRegion spy = Mockito.spy(region); - final boolean abort = false; - Mockito.when(spy.close(abort)). - thenThrow(new IOException("Mocked failed close!")); - // The CloseRegionHandler will try to get an HRegion that corresponds - // to the passed hri -- so insert the region into the online region Set. - rss.addToOnlineRegions(spy); - // Assert the Server is NOT stopped before we call close region. - assertFalse(server.isStopped()); - - ZkCoordinatedStateManager consensusProvider = new ZkCoordinatedStateManager(); - consensusProvider.initialize(server); - consensusProvider.start(); - - ZkCloseRegionCoordination.ZkCloseRegionDetails zkCrd = - new ZkCloseRegionCoordination.ZkCloseRegionDetails(); - zkCrd.setPublishStatusInZk(false); - zkCrd.setExpectedVersion(-1); - - CloseRegionHandler handler = new CloseRegionHandler(server, rss, hri, false, - consensusProvider.getCloseRegionCoordination(), zkCrd); - boolean throwable = false; - try { - handler.process(); - } catch (Throwable t) { - throwable = true; - } finally { - assertTrue(throwable); - // Abort calls stop so stopped flag should be set. 
- assertTrue(server.isStopped()); - } - } finally { - HRegion.closeHRegion(region); - } - } - - /** - * Test if close region can handle ZK closing node version mismatch - * @throws IOException - * @throws NodeExistsException - * @throws KeeperException - * @throws DeserializationException - */ - @Test public void testZKClosingNodeVersionMismatch() - throws IOException, NodeExistsException, KeeperException, DeserializationException { - final Server server = new MockServer(HTU); - final RegionServerServices rss = HTU.createMockRegionServerService(); - - HTableDescriptor htd = TEST_HTD; - final HRegionInfo hri = TEST_HRI; - - ZkCoordinatedStateManager coordinationProvider = new ZkCoordinatedStateManager(); - coordinationProvider.initialize(server); - coordinationProvider.start(); - - // open a region first so that it can be closed later - OpenRegion(server, rss, htd, hri, coordinationProvider.getOpenRegionCoordination()); - - // close the region - // Create it CLOSING, which is what Master set before sending CLOSE RPC - int versionOfClosingNode = ZKAssign.createNodeClosing(server.getZooKeeper(), - hri, server.getServerName()); - - // The CloseRegionHandler will validate the expected version - // Given it is set to invalid versionOfClosingNode+1, - // CloseRegionHandler should be M_ZK_REGION_CLOSING - - ZkCloseRegionCoordination.ZkCloseRegionDetails zkCrd = - new ZkCloseRegionCoordination.ZkCloseRegionDetails(); - zkCrd.setPublishStatusInZk(true); - zkCrd.setExpectedVersion(versionOfClosingNode+1); - - CloseRegionHandler handler = new CloseRegionHandler(server, rss, hri, false, - coordinationProvider.getCloseRegionCoordination(), zkCrd); - handler.process(); - - // Handler should remain in M_ZK_REGION_CLOSING - RegionTransition rt = - RegionTransition.parseFrom(ZKAssign.getData(server.getZooKeeper(), hri.getEncodedName())); - assertTrue(rt.getEventType().equals(EventType.M_ZK_REGION_CLOSING )); - } - - /** - * Test if the region can be closed properly - * @throws IOException - * @throws NodeExistsException - * @throws KeeperException - * @throws org.apache.hadoop.hbase.exceptions.DeserializationException - */ - @Test public void testCloseRegion() - throws IOException, NodeExistsException, KeeperException, DeserializationException { - final Server server = new MockServer(HTU); - final RegionServerServices rss = HTU.createMockRegionServerService(); - - HTableDescriptor htd = TEST_HTD; - HRegionInfo hri = TEST_HRI; - - ZkCoordinatedStateManager coordinationProvider = new ZkCoordinatedStateManager(); - coordinationProvider.initialize(server); - coordinationProvider.start(); - - // open a region first so that it can be closed later - OpenRegion(server, rss, htd, hri, coordinationProvider.getOpenRegionCoordination()); - - // close the region - // Create it CLOSING, which is what Master set before sending CLOSE RPC - int versionOfClosingNode = ZKAssign.createNodeClosing(server.getZooKeeper(), - hri, server.getServerName()); - - // The CloseRegionHandler will validate the expected version - // Given it is set to correct versionOfClosingNode, - // CloseRegionHandlerit should be RS_ZK_REGION_CLOSED - - ZkCloseRegionCoordination.ZkCloseRegionDetails zkCrd = - new ZkCloseRegionCoordination.ZkCloseRegionDetails(); - zkCrd.setPublishStatusInZk(true); - zkCrd.setExpectedVersion(versionOfClosingNode); - - CloseRegionHandler handler = new CloseRegionHandler(server, rss, hri, false, - coordinationProvider.getCloseRegionCoordination(), zkCrd); - handler.process(); - // Handler should have transitioned it to 
RS_ZK_REGION_CLOSED - RegionTransition rt = RegionTransition.parseFrom( - ZKAssign.getData(server.getZooKeeper(), hri.getEncodedName())); - assertTrue(rt.getEventType().equals(EventType.RS_ZK_REGION_CLOSED)); - } - - private void OpenRegion(Server server, RegionServerServices rss, - HTableDescriptor htd, HRegionInfo hri, OpenRegionCoordination coordination) - throws IOException, NodeExistsException, KeeperException, DeserializationException { - // Create it OFFLINE node, which is what Master set before sending OPEN RPC - ZKAssign.createNodeOffline(server.getZooKeeper(), hri, server.getServerName()); - - OpenRegionCoordination.OpenRegionDetails ord = - coordination.getDetailsForNonCoordinatedOpening(); - OpenRegionHandler openHandler = - new OpenRegionHandler(server, rss, hri, htd, coordination, ord); - rss.getRegionsInTransitionInRS().put(hri.getEncodedNameAsBytes(), Boolean.TRUE); - openHandler.process(); - // This parse is not used? - RegionTransition.parseFrom(ZKAssign.getData(server.getZooKeeper(), hri.getEncodedName())); - // delete the node, which is what Master do after the region is opened - ZKAssign.deleteNode(server.getZooKeeper(), hri.getEncodedName(), - EventType.RS_ZK_REGION_OPENED, server.getServerName()); - } - -} - diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/handler/TestOpenRegionHandler.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/handler/TestOpenRegionHandler.java deleted file mode 100644 index 8346787..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/handler/TestOpenRegionHandler.java +++ /dev/null @@ -1,361 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.apache.hadoop.hbase.regionserver.handler; - -import static org.junit.Assert.*; - -import java.io.IOException; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.*; -import org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager; -import org.apache.hadoop.hbase.coordination.ZkOpenRegionCoordination; -import org.apache.hadoop.hbase.executor.EventType; -import org.apache.hadoop.hbase.regionserver.HRegion; -import org.apache.hadoop.hbase.regionserver.RegionServerServices; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.MockServer; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZKUtil; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; -import org.apache.zookeeper.KeeperException; -import org.apache.zookeeper.KeeperException.NodeExistsException; -import org.junit.AfterClass; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; -import org.junit.experimental.categories.Category; - -/** - * Test of the {@link OpenRegionHandler}. - */ -@Category(MediumTests.class) -public class TestOpenRegionHandler { - static final Log LOG = LogFactory.getLog(TestOpenRegionHandler.class); - private final static HBaseTestingUtility HTU = HBaseTestingUtility.createLocalHTU(); - private static HTableDescriptor TEST_HTD; - private HRegionInfo TEST_HRI; - - private int testIndex = 0; - - @BeforeClass public static void before() throws Exception { - HTU.getConfiguration().setBoolean("hbase.assignment.usezk", true); - HTU.startMiniZKCluster(); - TEST_HTD = new HTableDescriptor(TableName.valueOf("TestOpenRegionHandler.java")); - } - - @AfterClass public static void after() throws IOException { - TEST_HTD = null; - HTU.shutdownMiniZKCluster(); - } - - /** - * Before each test, use a different HRI, so the different tests - * don't interfere with each other. This allows us to use just - * a single ZK cluster for the whole suite. - */ - @Before - public void setupHRI() { - TEST_HRI = new HRegionInfo(TEST_HTD.getTableName(), - Bytes.toBytes(testIndex), - Bytes.toBytes(testIndex + 1)); - testIndex++; - } - - /** - * Test the openregionhandler can deal with its znode being yanked out from - * under it. - * @see HBASE-3627 - * @throws IOException - * @throws NodeExistsException - * @throws KeeperException - */ - @Test public void testYankingRegionFromUnderIt() - throws IOException, NodeExistsException, KeeperException { - final Server server = new MockServer(HTU); - final RegionServerServices rss = HTU.createMockRegionServerService(); - - HTableDescriptor htd = TEST_HTD; - final HRegionInfo hri = TEST_HRI; - HRegion region = - HRegion.createHRegion(hri, HTU.getDataTestDir(), HTU - .getConfiguration(), htd); - assertNotNull(region); - try { - ZkCoordinatedStateManager csm = new ZkCoordinatedStateManager(); - csm.initialize(server); - csm.start(); - - ZkOpenRegionCoordination.ZkOpenRegionDetails zkCrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkCrd.setServerName(server.getServerName()); - - OpenRegionHandler handler = new OpenRegionHandler(server, rss, hri, - htd, csm.getOpenRegionCoordination(), zkCrd) { - HRegion openRegion() { - // Open region first, then remove znode as though it'd been hijacked. - HRegion region = super.openRegion(); - - // Don't actually open region BUT remove the znode as though it'd - // been hijacked on us. 
- ZooKeeperWatcher zkw = this.server.getZooKeeper(); - String node = ZKAssign.getNodeName(zkw, hri.getEncodedName()); - try { - ZKUtil.deleteNodeFailSilent(zkw, node); - } catch (KeeperException e) { - throw new RuntimeException("Ugh failed delete of " + node, e); - } - return region; - } - }; - rss.getRegionsInTransitionInRS().put( - hri.getEncodedNameAsBytes(), Boolean.TRUE); - // Call process without first creating OFFLINE region in zk, see if - // exception or just quiet return (expected). - handler.process(); - rss.getRegionsInTransitionInRS().put( - hri.getEncodedNameAsBytes(), Boolean.TRUE); - ZKAssign.createNodeOffline(server.getZooKeeper(), hri, server.getServerName()); - // Call process again but this time yank the zk znode out from under it - // post OPENING; again will expect it to come back w/o NPE or exception. - handler.process(); - } finally { - HRegion.closeHRegion(region); - } - } - - /** - * Test the openregionhandler can deal with perceived failure of transitioning to OPENED state - * due to intermittent zookeeper malfunctioning. - * @see HBASE-9387 - * @throws IOException - * @throws NodeExistsException - * @throws KeeperException - */ - @Test - public void testRegionServerAbortionDueToFailureTransitioningToOpened() - throws IOException, NodeExistsException, KeeperException { - final Server server = new MockServer(HTU); - final RegionServerServices rss = HTU.createMockRegionServerService(); - - HTableDescriptor htd = TEST_HTD; - final HRegionInfo hri = TEST_HRI; - HRegion region = - HRegion.createHRegion(hri, HTU.getDataTestDir(), HTU - .getConfiguration(), htd); - assertNotNull(region); - try { - - ZkCoordinatedStateManager csm = new ZkCoordinatedStateManager(); - csm.initialize(server); - csm.start(); - - ZkOpenRegionCoordination.ZkOpenRegionDetails zkCrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkCrd.setServerName(server.getServerName()); - - ZkOpenRegionCoordination openRegionCoordination = - new ZkOpenRegionCoordination(csm, server.getZooKeeper()) { - @Override - public boolean transitionToOpened(final HRegion r, OpenRegionDetails ord) - throws IOException { - // remove znode simulating intermittent zookeeper connection issue - ZooKeeperWatcher zkw = server.getZooKeeper(); - String node = ZKAssign.getNodeName(zkw, hri.getEncodedName()); - try { - ZKUtil.deleteNodeFailSilent(zkw, node); - } catch (KeeperException e) { - throw new RuntimeException("Ugh failed delete of " + node, e); - } - // then try to transition to OPENED - return super.transitionToOpened(r, ord); - } - }; - - OpenRegionHandler handler = new OpenRegionHandler(server, rss, hri, htd, - openRegionCoordination, zkCrd); - rss.getRegionsInTransitionInRS().put( - hri.getEncodedNameAsBytes(), Boolean.TRUE); - // Call process without first creating OFFLINE region in zk, see if - // exception or just quiet return (expected). - handler.process(); - rss.getRegionsInTransitionInRS().put( - hri.getEncodedNameAsBytes(), Boolean.TRUE); - ZKAssign.createNodeOffline(server.getZooKeeper(), hri, server.getServerName()); - // Call process again but this time yank the zk znode out from under it - // post OPENING; again will expect it to come back w/o NPE or exception. - handler.process(); - } catch (IOException ioe) { - } finally { - HRegion.closeHRegion(region); - } - // Region server is expected to abort due to OpenRegionHandler perceiving transitioning - // to OPENED as failed - // This was corresponding to the second handler.process() call above. 
- assertTrue("region server should have aborted", server.isAborted()); - } - - @Test - public void testFailedOpenRegion() throws Exception { - Server server = new MockServer(HTU); - RegionServerServices rsServices = HTU.createMockRegionServerService(); - - // Create it OFFLINE, which is what it expects - ZKAssign.createNodeOffline(server.getZooKeeper(), TEST_HRI, server.getServerName()); - - ZkCoordinatedStateManager csm = new ZkCoordinatedStateManager(); - csm.initialize(server); - csm.start(); - - ZkOpenRegionCoordination.ZkOpenRegionDetails zkCrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkCrd.setServerName(server.getServerName()); - - // Create the handler - OpenRegionHandler handler = - new OpenRegionHandler(server, rsServices, TEST_HRI, TEST_HTD, - csm.getOpenRegionCoordination(), zkCrd) { - @Override - HRegion openRegion() { - // Fake failure of opening a region due to an IOE, which is caught - return null; - } - }; - rsServices.getRegionsInTransitionInRS().put( - TEST_HRI.getEncodedNameAsBytes(), Boolean.TRUE); - handler.process(); - - // Handler should have transitioned it to FAILED_OPEN - RegionTransition rt = RegionTransition.parseFrom( - ZKAssign.getData(server.getZooKeeper(), TEST_HRI.getEncodedName())); - assertEquals(EventType.RS_ZK_REGION_FAILED_OPEN, rt.getEventType()); - } - - @Test - public void testFailedUpdateMeta() throws Exception { - Server server = new MockServer(HTU); - RegionServerServices rsServices = HTU.createMockRegionServerService(); - - // Create it OFFLINE, which is what it expects - ZKAssign.createNodeOffline(server.getZooKeeper(), TEST_HRI, server.getServerName()); - - // Create the handler - ZkCoordinatedStateManager csm = new ZkCoordinatedStateManager(); - csm.initialize(server); - csm.start(); - - ZkOpenRegionCoordination.ZkOpenRegionDetails zkCrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkCrd.setServerName(server.getServerName()); - - OpenRegionHandler handler = new OpenRegionHandler(server, rsServices, TEST_HRI, TEST_HTD, - csm.getOpenRegionCoordination(), zkCrd) { - @Override - boolean updateMeta(final HRegion r) { - // Fake failure of updating META - return false; - } - }; - rsServices.getRegionsInTransitionInRS().put( - TEST_HRI.getEncodedNameAsBytes(), Boolean.TRUE); - handler.process(); - - // Handler should have transitioned it to FAILED_OPEN - RegionTransition rt = RegionTransition.parseFrom( - ZKAssign.getData(server.getZooKeeper(), TEST_HRI.getEncodedName())); - assertEquals(EventType.RS_ZK_REGION_FAILED_OPEN, rt.getEventType()); - } - - @Test - public void testTransitionToFailedOpenEvenIfCleanupFails() throws Exception { - Server server = new MockServer(HTU); - RegionServerServices rsServices = HTU.createMockRegionServerService(); - // Create it OFFLINE, which is what it expects - ZKAssign.createNodeOffline(server.getZooKeeper(), TEST_HRI, server.getServerName()); - // Create the handler - ZkCoordinatedStateManager csm = new ZkCoordinatedStateManager(); - csm.initialize(server); - csm.start(); - - ZkOpenRegionCoordination.ZkOpenRegionDetails zkCrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkCrd.setServerName(server.getServerName()); - - OpenRegionHandler handler = new OpenRegionHandler(server, rsServices, TEST_HRI, TEST_HTD, - csm.getOpenRegionCoordination(), zkCrd) { - @Override - boolean updateMeta(HRegion r) { - return false; - }; - - @Override - void cleanupFailedOpen(HRegion region) throws IOException { - throw new IOException("FileSystem got closed."); - } - }; - 
rsServices.getRegionsInTransitionInRS().put(TEST_HRI.getEncodedNameAsBytes(), Boolean.TRUE); - try { - handler.process(); - } catch (Exception e) { - // Ignore the IOException that we have thrown from cleanupFailedOpen - } - RegionTransition rt = RegionTransition.parseFrom(ZKAssign.getData(server.getZooKeeper(), - TEST_HRI.getEncodedName())); - assertEquals(EventType.RS_ZK_REGION_FAILED_OPEN, rt.getEventType()); - } - - @Test - public void testTransitionToFailedOpenFromOffline() throws Exception { - Server server = new MockServer(HTU); - RegionServerServices rsServices = HTU.createMockRegionServerService(server.getServerName()); - // Create it OFFLINE, which is what it expects - ZKAssign.createNodeOffline(server.getZooKeeper(), TEST_HRI, server.getServerName()); - // Create the handler - ZkCoordinatedStateManager csm = new ZkCoordinatedStateManager(); - csm.initialize(server); - csm.start(); - - ZkOpenRegionCoordination.ZkOpenRegionDetails zkCrd = - new ZkOpenRegionCoordination.ZkOpenRegionDetails(); - zkCrd.setServerName(server.getServerName()); - - ZkOpenRegionCoordination openRegionCoordination = - new ZkOpenRegionCoordination(csm, server.getZooKeeper()) { - @Override - public boolean transitionFromOfflineToOpening(HRegionInfo regionInfo, - OpenRegionDetails ord) { - return false; - } - }; - - OpenRegionHandler handler = new OpenRegionHandler(server, rsServices, TEST_HRI, TEST_HTD, - openRegionCoordination, zkCrd); - rsServices.getRegionsInTransitionInRS().put(TEST_HRI.getEncodedNameAsBytes(), Boolean.TRUE); - - handler.process(); - - RegionTransition rt = RegionTransition.parseFrom(ZKAssign.getData(server.getZooKeeper(), - TEST_HRI.getEncodedName())); - assertEquals(EventType.RS_ZK_REGION_FAILED_OPEN, rt.getEventType()); - } - -} - diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestCompressor.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestCompressor.java index 4e46ed7..03baf48 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestCompressor.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestCompressor.java @@ -27,6 +27,7 @@ import java.io.DataInputStream; import java.io.DataOutputStream; import java.io.IOException; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.io.util.Dictionary; import org.apache.hadoop.hbase.io.util.LRUDictionary; @@ -38,7 +39,7 @@ import org.junit.experimental.categories.Category; /** * Test our compressor class. 
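Most hunks in this patch make the same mechanical change: each test class gains a component category (RegionServerTests, MiscTests, VerySlowRegionServerTests and so on) next to its existing size category. A rough sketch of the resulting annotation, with a hypothetical class name used purely for illustration:

    import org.apache.hadoop.hbase.testclassification.RegionServerTests;
    import org.apache.hadoop.hbase.testclassification.SmallTests;
    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Component category plus size category, as applied throughout these diffs.
    @Category({RegionServerTests.class, SmallTests.class})
    public class TestSomeRegionServerThing {   // hypothetical name, not part of this patch
      @Test
      public void testSomething() {
      }
    }

Surefire can then filter on these marker interfaces through its groups parameter (for example -Dgroups=org.apache.hadoop.hbase.testclassification.RegionServerTests), though the actual profile wiring lives in the poms rather than in this patch.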
*/ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestCompressor { @BeforeClass public static void setUpBeforeClass() throws Exception { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestCustomWALCellCodec.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestCustomWALCellCodec.java index 7f48f9b..624f2c2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestCustomWALCellCodec.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestCustomWALCellCodec.java @@ -21,6 +21,7 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -28,7 +29,7 @@ import org.junit.experimental.categories.Category; /** * Test that we can create, load, setup our own custom codec */ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestCustomWALCellCodec { public static class CustomWALCellCodec extends WALCellCodec { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestDurability.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestDurability.java index 0081eb1..10e7e3d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestDurability.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestDurability.java @@ -30,6 +30,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Increment; @@ -50,7 +51,7 @@ import org.junit.experimental.categories.Category; /** * Tests for WAL write durability */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestDurability { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private static FileSystem FS; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestFSHLog.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestFSHLog.java index 669060c..970b0f2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestFSHLog.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestFSHLog.java @@ -30,26 +30,27 @@ import java.util.Comparator; import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.Set; import java.util.UUID; import java.util.concurrent.atomic.AtomicLong; import org.apache.commons.lang.mutable.MutableBoolean; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.commons.logging.impl.Log4JLogger; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.Coprocessor; -import 
org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Put; @@ -57,6 +58,8 @@ import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.coprocessor.SampleRegionWALObserver; import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdge; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; @@ -65,6 +68,10 @@ import org.apache.hadoop.hbase.util.Threads; import org.apache.hadoop.hbase.wal.DefaultWALProvider; import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.wal.WALKey; +import org.apache.hadoop.hdfs.DFSClient; +import org.apache.hadoop.hdfs.server.datanode.DataNode; +import org.apache.hadoop.hdfs.server.namenode.LeaseManager; +import org.apache.log4j.Level; import org.junit.After; import org.junit.AfterClass; import org.junit.Before; @@ -77,7 +84,7 @@ import org.junit.rules.TestName; /** * Provides FSHLog test cases. */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestFSHLog { protected static final Log LOG = LogFactory.getLog(TestFSHLog.class); @@ -152,18 +159,15 @@ public class TestFSHLog { } } - protected void addEdits(WAL log, HRegionInfo hri, TableName tableName, - int times, AtomicLong sequenceId) throws IOException { - HTableDescriptor htd = new HTableDescriptor(); - htd.addFamily(new HColumnDescriptor("row")); - - final byte [] row = Bytes.toBytes("row"); + protected void addEdits(WAL log, HRegionInfo hri, HTableDescriptor htd, int times, + AtomicLong sequenceId) throws IOException { + final byte[] row = Bytes.toBytes("row"); for (int i = 0; i < times; i++) { long timestamp = System.currentTimeMillis(); WALEdit cols = new WALEdit(); cols.add(new KeyValue(row, row, row, timestamp, row)); - log.append(htd, hri, new WALKey(hri.getEncodedNameAsBytes(), tableName, timestamp), cols, - sequenceId, true, null); + log.append(htd, hri, new WALKey(hri.getEncodedNameAsBytes(), htd.getTableName(), timestamp), + cols, sequenceId, true, null); } log.sync(); } @@ -173,8 +177,8 @@ public class TestFSHLog { * @param wal * @param regionEncodedName */ - protected void flushRegion(WAL wal, byte[] regionEncodedName) { - wal.startCacheFlush(regionEncodedName); + protected void flushRegion(WAL wal, byte[] regionEncodedName, Set flushedFamilyNames) { + wal.startCacheFlush(regionEncodedName, flushedFamilyNames); wal.completeCacheFlush(regionEncodedName); } @@ -248,10 +252,14 @@ public class TestFSHLog { conf1.setInt("hbase.regionserver.maxlogs", 1); FSHLog wal = new FSHLog(fs, FSUtils.getRootDir(conf1), dir.toString(), HConstants.HREGION_OLDLOGDIR_NAME, conf1, null, true, null, null); - TableName t1 = TableName.valueOf("t1"); - TableName t2 = TableName.valueOf("t2"); - HRegionInfo hri1 = new HRegionInfo(t1, HConstants.EMPTY_START_ROW, 
HConstants.EMPTY_END_ROW); - HRegionInfo hri2 = new HRegionInfo(t2, HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW); + HTableDescriptor t1 = + new HTableDescriptor(TableName.valueOf("t1")).addFamily(new HColumnDescriptor("row")); + HTableDescriptor t2 = + new HTableDescriptor(TableName.valueOf("t2")).addFamily(new HColumnDescriptor("row")); + HRegionInfo hri1 = + new HRegionInfo(t1.getTableName(), HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW); + HRegionInfo hri2 = + new HRegionInfo(t2.getTableName(), HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW); // variables to mock region sequenceIds final AtomicLong sequenceId1 = new AtomicLong(1); final AtomicLong sequenceId2 = new AtomicLong(1); @@ -278,12 +286,12 @@ public class TestFSHLog { assertEquals(hri1.getEncodedNameAsBytes(), regionsToFlush[0]); // flush region 1, and roll the wal file. Only last wal which has entries for region1 should // remain. - flushRegion(wal, hri1.getEncodedNameAsBytes()); + flushRegion(wal, hri1.getEncodedNameAsBytes(), t1.getFamiliesKeys()); wal.rollWriter(); // only one wal should remain now (that is for the second region). assertEquals(1, wal.getNumRolledLogFiles()); // flush the second region - flushRegion(wal, hri2.getEncodedNameAsBytes()); + flushRegion(wal, hri2.getEncodedNameAsBytes(), t2.getFamiliesKeys()); wal.rollWriter(true); // no wal should remain now. assertEquals(0, wal.getNumRolledLogFiles()); @@ -300,14 +308,14 @@ public class TestFSHLog { regionsToFlush = wal.findRegionsToForceFlush(); assertEquals(2, regionsToFlush.length); // flush both regions - flushRegion(wal, hri1.getEncodedNameAsBytes()); - flushRegion(wal, hri2.getEncodedNameAsBytes()); + flushRegion(wal, hri1.getEncodedNameAsBytes(), t1.getFamiliesKeys()); + flushRegion(wal, hri2.getEncodedNameAsBytes(), t2.getFamiliesKeys()); wal.rollWriter(true); assertEquals(0, wal.getNumRolledLogFiles()); // Add an edit to region1, and roll the wal. addEdits(wal, hri1, t1, 2, sequenceId1); // tests partial flush: roll on a partial flush, and ensure that wal is not archived. 
- wal.startCacheFlush(hri1.getEncodedNameAsBytes()); + wal.startCacheFlush(hri1.getEncodedNameAsBytes(), t1.getFamiliesKeys()); wal.rollWriter(); wal.completeCacheFlush(hri1.getEncodedNameAsBytes()); assertEquals(1, wal.getNumRolledLogFiles()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestKeyValueCompression.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestKeyValueCompression.java index 0417083..0450904 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestKeyValueCompression.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestKeyValueCompression.java @@ -24,6 +24,7 @@ import java.util.List; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.io.util.LRUDictionary; @@ -36,7 +37,7 @@ import static org.junit.Assert.*; import com.google.common.collect.Lists; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestKeyValueCompression { private static final byte[] VALUE = Bytes.toBytes("fake value"); private static final int BUF_SIZE = 256*1024; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollAbort.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollAbort.java index 7f4ee80..b4cb213 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollAbort.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollAbort.java @@ -20,8 +20,9 @@ package org.apache.hadoop.hbase.regionserver.wal; import java.io.FileNotFoundException; import java.io.IOException; import java.util.concurrent.atomic.AtomicLong; - import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.junit.Assert; import static org.junit.Assert.assertTrue; @@ -36,7 +37,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; @@ -61,7 +61,7 @@ import org.junit.experimental.categories.Category; * Tests for conditions that should trigger RegionServer aborts when * rolling the current WAL fails. 
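The TestFSHLog hunks above track a WAL interface change: a cache flush is now announced per column family, so startCacheFlush takes the set of families being flushed and addEdits works from the HTableDescriptor rather than a bare table name. A minimal sketch of the new handshake, where wal, htd and hri stand for the objects the test already constructs:

    // Sketch of the per-family flush handshake these tests now exercise.
    Set<byte[]> families = htd.getFamiliesKeys();           // families about to be flushed
    wal.startCacheFlush(hri.getEncodedNameAsBytes(), families);
    // ... flush the memstores for those families ...
    wal.completeCacheFlush(hri.getEncodedNameAsBytes());
    wal.rollWriter();   // rolled files whose regions are fully flushed become eligible for archiving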
*/ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestLogRollAbort { private static final Log LOG = LogFactory.getLog(TestLogRolling.class); private static MiniDFSCluster dfsCluster; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java index 423d8d2..cdbdf6f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollPeriod.java @@ -27,8 +27,11 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Table; @@ -43,7 +46,7 @@ import org.junit.experimental.categories.Category; /** * Tests that verifies that the log is forced to be rolled every "hbase.regionserver.logroll.period" */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestLogRollPeriod { private static final Log LOG = LogFactory.getLog(TestLogRolling.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java index 0890c48..86e77ad 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRolling.java @@ -41,7 +41,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; @@ -57,6 +56,8 @@ import org.apache.hadoop.hbase.fs.HFileSystem; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.Store; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.JVMClusterUtil; @@ -77,7 +78,7 @@ import org.junit.experimental.categories.Category; /** * Test log deletion as logs are rolled. 
*/ -@Category(LargeTests.class) +@Category({VerySlowRegionServerTests.class, LargeTests.class}) public class TestLogRolling { private static final Log LOG = LogFactory.getLog(TestLogRolling.class); private HRegionServer server; @@ -428,7 +429,7 @@ public class TestLogRolling { desc.addFamily(new HColumnDescriptor(HConstants.CATALOG_FAMILY)); admin.createTable(desc); - Table table = new HTable(TEST_UTIL.getConfiguration(), desc.getTableName()); + HTable table = new HTable(TEST_UTIL.getConfiguration(), desc.getTableName()); server = TEST_UTIL.getRSForFirstRegionInTable(desc.getTableName()); final WAL log = server.getWAL(null); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollingNoCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollingNoCluster.java index 8727e23..41e05ae 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollingNoCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollingNoCluster.java @@ -32,10 +32,10 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; -import org.apache.hadoop.hbase.util.FSTableDescriptors; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.wal.WALFactory; @@ -46,7 +46,7 @@ import org.junit.experimental.categories.Category; /** * Test many concurrent appenders to an {@link #WAL} while rolling the log. 
*/ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestLogRollingNoCluster { private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private final static byte [] EMPTY_1K_ARRAY = new byte[1024]; @@ -135,8 +135,7 @@ public class TestLogRollingNoCluster { byte[] bytes = Bytes.toBytes(i); edit.add(new KeyValue(bytes, bytes, bytes, now, EMPTY_1K_ARRAY)); final HRegionInfo hri = HRegionInfo.FIRST_META_REGIONINFO; - final FSTableDescriptors fts = new FSTableDescriptors(TEST_UTIL.getConfiguration()); - final HTableDescriptor htd = fts.get(TableName.META_TABLE_NAME); + final HTableDescriptor htd = TEST_UTIL.getMetaTableDescriptor(); final long txid = wal.append(htd, hri, new WALKey(hri.getEncodedNameAsBytes(), TableName.META_TABLE_NAME, now), edit, sequenceId, true, null); wal.sync(txid); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestMetricsWAL.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestMetricsWAL.java index ffb20db..d9183d0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestMetricsWAL.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestMetricsWAL.java @@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.regionserver.wal; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -29,7 +30,7 @@ import static org.mockito.Mockito.mock; import static org.mockito.Mockito.times; import static org.mockito.Mockito.verify; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestMetricsWAL { @Test public void testLogRollRequested() throws Exception { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestProtobufLog.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestProtobufLog.java index 20924b2..04cb2ce 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestProtobufLog.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestProtobufLog.java @@ -38,10 +38,11 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.coprocessor.SampleRegionWALObserver; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.wal.WALFactory; @@ -59,7 +60,7 @@ import org.junit.rules.TestName; /** * WAL tests that can be reused across providers. 
*/ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestProtobufLog { protected static final Log LOG = LogFactory.getLog(TestProtobufLog.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestReadOldRootAndMetaEdits.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestReadOldRootAndMetaEdits.java index 1138f65..b256651 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestReadOldRootAndMetaEdits.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestReadOldRootAndMetaEdits.java @@ -32,11 +32,13 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.wal.WAL; @@ -51,7 +53,7 @@ import org.junit.experimental.categories.Category; /** * Tests to read old ROOT, Meta edits. */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestReadOldRootAndMetaEdits { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestSecureWALReplay.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestSecureWALReplay.java index 746a4e2..be5d951 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestSecureWALReplay.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestSecureWALReplay.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.regionserver.wal; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.io.crypto.KeyProviderForTesting; import org.apache.hadoop.hbase.wal.WAL.Reader; import org.apache.hadoop.hbase.wal.WALProvider.Writer; @@ -27,7 +28,7 @@ import org.apache.hadoop.hbase.wal.WALProvider.Writer; import org.junit.BeforeClass; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestSecureWALReplay extends TestWALReplay { @BeforeClass diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALActionsListener.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALActionsListener.java index 87fcd2e..c8629d0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALActionsListener.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALActionsListener.java @@ -28,6 +28,7 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.*; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import 
org.apache.hadoop.hbase.util.FSUtils; @@ -45,7 +46,7 @@ import static org.junit.Assert.*; /** * Test that the actions are called while playing with an WAL */ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestWALActionsListener { protected static final Log LOG = LogFactory.getLog(TestWALActionsListener.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALCellCodecWithCompression.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALCellCodecWithCompression.java index 151c293..501fdda 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALCellCodecWithCompression.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALCellCodecWithCompression.java @@ -30,15 +30,16 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.Tag; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.codec.Codec.Decoder; import org.apache.hadoop.hbase.codec.Codec.Encoder; import org.apache.hadoop.hbase.io.util.LRUDictionary; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestWALCellCodecWithCompression { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java index 3f551e4..6cdfe3b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplay.java @@ -27,7 +27,10 @@ import static org.mockito.Mockito.when; import java.io.IOException; import java.security.PrivilegedExceptionAction; import java.util.ArrayList; +import java.util.Collection; +import java.util.HashSet; import java.util.List; +import java.util.Set; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicLong; @@ -48,7 +51,6 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.MasterNotRunningException; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; @@ -75,6 +77,8 @@ import org.apache.hadoop.hbase.regionserver.RegionScanner; import org.apache.hadoop.hbase.regionserver.RegionServerServices; import org.apache.hadoop.hbase.regionserver.Store; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdge; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; @@ -99,7 +103,7 @@ import org.mockito.Mockito; /** * Test replay of edits out of a WAL split. 
*/ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestWALReplay { public static final Log LOG = LogFactory.getLog(TestWALReplay.class); static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); @@ -786,13 +790,15 @@ public class TestWALReplay { // Add 1k to each family. final int countPerFamily = 1000; + Set<byte[]> familyNames = new HashSet<byte[]>(); for (HColumnDescriptor hcd: htd.getFamilies()) { addWALEdits(tableName, hri, rowName, hcd.getName(), countPerFamily, ee, wal, htd, sequenceId); + familyNames.add(hcd.getName()); } // Add a cache flush, shouldn't have any effect - wal.startCacheFlush(regionName); + wal.startCacheFlush(regionName, familyNames); wal.completeCacheFlush(regionName); // Add an edit to another family, should be skipped. @@ -832,11 +838,11 @@ public class TestWALReplay { final HRegion region = new HRegion(basedir, newWal, newFS, newConf, hri, htd, null) { @Override - protected FlushResult internalFlushcache( - final WAL wal, final long myseqid, MonitoredTask status) + protected FlushResult internalFlushcache(final WAL wal, final long myseqid, + Collection<Store> storesToFlush, MonitoredTask status) throws IOException { LOG.info("InternalFlushCache Invoked"); - FlushResult fs = super.internalFlushcache(wal, myseqid, + FlushResult fs = super.internalFlushcache(wal, myseqid, storesToFlush, Mockito.mock(MonitoredTask.class)); flushcount.incrementAndGet(); return fs; @@ -958,16 +964,16 @@ public class TestWALReplay { private HRegion r; @Override - public void requestFlush(HRegion region) { + public void requestFlush(HRegion region, boolean forceFlushAllStores) { try { - r.flushcache(); + r.flushcache(forceFlushAllStores); } catch (IOException e) { throw new RuntimeException("Exception flushing", e); } } @Override - public void requestDelayedFlush(HRegion region, long when) { + public void requestDelayedFlush(HRegion region, long when, boolean forceFlushAllStores) { // TODO Auto-generated method stub } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplayCompressed.java hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplayCompressed.java index 69407c1..4987fd4 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplayCompressed.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALReplayCompressed.java @@ -20,13 +20,14 @@ package org.apache.hadoop.hbase.regionserver.wal; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.junit.BeforeClass; import org.junit.experimental.categories.Category; /** * Enables compression and runs the TestWALReplay tests. 
*/ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestWALReplayCompressed extends TestWALReplay { @BeforeClass diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java index 8e59c2a..a501af9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMasterReplication.java @@ -35,7 +35,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Delete; @@ -52,6 +51,8 @@ import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.coprocessor.ObserverContext; import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; @@ -60,7 +61,7 @@ import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({ReplicationTests.class, LargeTests.class}) public class TestMasterReplication { private static final Log LOG = LogFactory.getLog(TestReplicationBase.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMultiSlaveReplication.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMultiSlaveReplication.java index c2bb2ce..f9aeb6f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMultiSlaveReplication.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestMultiSlaveReplication.java @@ -46,6 +46,7 @@ import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; @@ -53,7 +54,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({ReplicationTests.class, LargeTests.class}) public class TestMultiSlaveReplication { private static final Log LOG = LogFactory.getLog(TestReplicationBase.class); @@ -139,7 +140,7 @@ public class TestMultiSlaveReplication { htable2.setWriteBufferSize(1024); Table htable3 = new HTable(conf3, tableName); htable3.setWriteBufferSize(1024); - + admin1.addPeer("1", utility2.getClusterKey()); // put "row" and wait 'til it got around, then delete @@ -183,7 +184,7 @@ public class TestMultiSlaveReplication { // Even if the log was rolled in the middle of the replication // "row" is still replication. 
checkRow(row, 1, htable2); - // Replication thread of cluster 2 may be sleeping, and since row2 is not there in it, + // Replication thread of cluster 2 may be sleeping, and since row2 is not there in it, // we should wait before checking. checkWithWait(row, 1, htable3); @@ -260,7 +261,7 @@ public class TestMultiSlaveReplication { } } } - + private void checkRow(byte[] row, int count, Table... tables) throws IOException { Get get = new Get(row); for (Table table : tables) { @@ -292,7 +293,7 @@ public class TestMultiSlaveReplication { if (removedFromAll) { break; } else { - Thread.sleep(SLEEP_TIME); + Thread.sleep(SLEEP_TIME); } } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestPerTableCFReplication.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestPerTableCFReplication.java index 2c9fc0f..169feba 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestPerTableCFReplication.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestPerTableCFReplication.java @@ -37,7 +37,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Connection; @@ -49,6 +48,8 @@ import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.client.replication.ReplicationAdmin; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; +import org.apache.hadoop.hbase.testclassification.FlakeyTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; @@ -57,7 +58,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({FlakeyTests.class, LargeTests.class}) public class TestPerTableCFReplication { private static final Log LOG = LogFactory.getLog(TestPerTableCFReplication.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationChangingPeerRegionservers.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationChangingPeerRegionservers.java index d0caa45..67f2031 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationChangingPeerRegionservers.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationChangingPeerRegionservers.java @@ -22,9 +22,10 @@ import static org.junit.Assert.assertArrayEquals; import static org.junit.Assert.assertEquals; import static org.junit.Assert.fail; +import java.io.IOException; + import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HTable; @@ -32,18 +33,18 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; +import org.apache.hadoop.hbase.testclassification.LargeTests; 
+import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.JVMClusterUtil; import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -import java.io.IOException; - /** * Test handling of changes to the number of a peer's regionservers. */ -@Category(LargeTests.class) +@Category({ReplicationTests.class, LargeTests.class}) public class TestReplicationChangingPeerRegionservers extends TestReplicationBase { private static final Log LOG = LogFactory.getLog(TestReplicationChangingPeerRegionservers.class); @@ -60,9 +61,9 @@ public class TestReplicationChangingPeerRegionservers extends TestReplicationBas utility1.getHBaseCluster().getRegionServerThreads()) { utility1.getHBaseAdmin().rollWALWriter(r.getRegionServer().getServerName()); } - utility1.truncateTable(tableName); + utility1.deleteTableData(tableName); // truncating the table will send one Delete per row to the slave cluster - // in an async fashion, which is why we cannot just call truncateTable on + // in an async fashion, which is why we cannot just call deleteTableData on // utility2 since late writes could make it to the slave in some way. // Instead, we truncate the first table and wait for all the Deletes to // make it to the slave. diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationDisableInactivePeer.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationDisableInactivePeer.java index d73d7f8..3378c3f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationDisableInactivePeer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationDisableInactivePeer.java @@ -20,10 +20,11 @@ package org.apache.hadoop.hbase.replication; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -31,7 +32,7 @@ import org.junit.experimental.categories.Category; import static org.junit.Assert.assertArrayEquals; import static org.junit.Assert.fail; -@Category(LargeTests.class) +@Category({ReplicationTests.class, LargeTests.class}) public class TestReplicationDisableInactivePeer extends TestReplicationBase { private static final Log LOG = LogFactory.getLog(TestReplicationDisableInactivePeer.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationEndpoint.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationEndpoint.java index 884d809..633dcc9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationEndpoint.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationEndpoint.java @@ -29,17 +29,18 @@ import java.util.concurrent.atomic.AtomicReference; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.Cell; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Waiter; import 
org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException; -import org.apache.hadoop.hbase.wal.WAL.Entry; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread; import org.apache.hadoop.hbase.util.Threads; +import org.apache.hadoop.hbase.wal.WAL.Entry; import org.apache.hadoop.hbase.zookeeper.ZKUtil; import org.junit.AfterClass; import org.junit.Assert; @@ -51,7 +52,7 @@ import org.junit.experimental.categories.Category; /** * Tests ReplicationSource and ReplicationEndpoint interactions */ -@Category(MediumTests.class) +@Category({ReplicationTests.class, MediumTests.class}) public class TestReplicationEndpoint extends TestReplicationBase { static final Log LOG = LogFactory.getLog(TestReplicationEndpoint.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillMasterRS.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillMasterRS.java index 9677e71..51a39a6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillMasterRS.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillMasterRS.java @@ -18,6 +18,7 @@ package org.apache.hadoop.hbase.replication; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -25,7 +26,7 @@ import org.junit.experimental.categories.Category; * Runs the TestReplicationKillRS test and selects the RS to kill in the master cluster * Do not add other tests in this class. */ -@Category(LargeTests.class) +@Category({ReplicationTests.class, LargeTests.class}) public class TestReplicationKillMasterRS extends TestReplicationKillRS { @Test(timeout=300000) diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillMasterRSCompressed.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillMasterRSCompressed.java index 15dbfa8..8deffd9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillMasterRSCompressed.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillMasterRSCompressed.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.replication; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.junit.BeforeClass; import org.junit.experimental.categories.Category; @@ -27,7 +28,7 @@ import org.junit.experimental.categories.Category; * Run the same test as TestReplicationKillMasterRS but with WAL compression enabled * Do not add other tests in this class. 
*/ -@Category(LargeTests.class) +@Category({ReplicationTests.class, LargeTests.class}) public class TestReplicationKillMasterRSCompressed extends TestReplicationKillMasterRS { /** diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillRS.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillRS.java index 0f83943..6a6cf21 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillRS.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillRS.java @@ -18,21 +18,21 @@ */ package org.apache.hadoop.hbase.replication; - import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.UnknownScannerException; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.junit.experimental.categories.Category; import static org.junit.Assert.fail; -@Category(LargeTests.class) +@Category({ReplicationTests.class, LargeTests.class}) public class TestReplicationKillRS extends TestReplicationBase { private static final Log LOG = LogFactory.getLog(TestReplicationKillRS.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillSlaveRS.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillSlaveRS.java index 3c77760..07e18b2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillSlaveRS.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationKillSlaveRS.java @@ -18,6 +18,7 @@ package org.apache.hadoop.hbase.replication; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -25,7 +26,7 @@ import org.junit.experimental.categories.Category; * Runs the TestReplicationKillRS test and selects the RS to kill in the slave cluster * Do not add other tests in this class. 
*/ -@Category(LargeTests.class) +@Category({ReplicationTests.class, LargeTests.class}) public class TestReplicationKillSlaveRS extends TestReplicationKillRS { @Test(timeout=300000) diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSmallTests.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSmallTests.java index d1c42f2..4377082 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSmallTests.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSmallTests.java @@ -34,7 +34,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Get; @@ -50,6 +49,8 @@ import org.apache.hadoop.hbase.protobuf.generated.WALProtos; import org.apache.hadoop.hbase.wal.WALKey; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; import org.apache.hadoop.hbase.replication.regionserver.Replication; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.JVMClusterUtil; @@ -58,7 +59,7 @@ import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({ReplicationTests.class, LargeTests.class}) public class TestReplicationSmallTests extends TestReplicationBase { private static final Log LOG = LogFactory.getLog(TestReplicationSmallTests.class); @@ -75,9 +76,9 @@ public class TestReplicationSmallTests extends TestReplicationBase { utility1.getHBaseCluster().getRegionServerThreads()) { utility1.getHBaseAdmin().rollWALWriter(r.getRegionServer().getServerName()); } - utility1.truncateTable(tableName); + utility1.deleteTableData(tableName); // truncating the table will send one Delete per row to the slave cluster - // in an async fashion, which is why we cannot just call truncateTable on + // in an async fashion, which is why we cannot just call deleteTableData on // utility2 since late writes could make it to the slave in some way. // Instead, we truncate the first table and wait for all the Deletes to // make it to the slave. 
@@ -386,7 +387,7 @@ public class TestReplicationSmallTests extends TestReplicationBase { public void testLoading() throws Exception { LOG.info("Writing out rows to table1 in testLoading"); htable1.setWriteBufferSize(1024); - htable1.setAutoFlushTo(false); + ((HTable)htable1).setAutoFlushTo(false); for (int i = 0; i < NB_ROWS_IN_BIG_BATCH; i++) { Put put = new Put(Bytes.toBytes(i)); put.add(famName, row, row); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSource.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSource.java index 458819d..aac966e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSource.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSource.java @@ -31,19 +31,20 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.wal.WALProvider; import org.apache.hadoop.hbase.wal.WALFactory; import org.apache.hadoop.hbase.wal.WALKey; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({ReplicationTests.class, MediumTests.class}) public class TestReplicationSource { private static final Log LOG = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationStateZKImpl.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationStateZKImpl.java index a07b708..f8060ba 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationStateZKImpl.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationStateZKImpl.java @@ -15,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ - package org.apache.hadoop.hbase.replication; import static org.junit.Assert.assertFalse; @@ -30,10 +29,11 @@ import org.apache.hadoop.hbase.ClusterId; import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.client.ClusterConnection; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; import org.apache.hadoop.hbase.zookeeper.ZKClusterId; import org.apache.hadoop.hbase.zookeeper.ZKUtil; @@ -46,7 +46,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({ReplicationTests.class, MediumTests.class}) public class TestReplicationStateZKImpl extends TestReplicationStateBasic { private static final Log LOG = LogFactory.getLog(TestReplicationStateZKImpl.class); @@ -184,4 +184,4 @@ public class TestReplicationStateZKImpl extends TestReplicationStateBasic { return this.isStopped; } } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSyncUpTool.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSyncUpTool.java index 8399ccc..58eb19f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSyncUpTool.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSyncUpTool.java @@ -33,13 +33,14 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.client.replication.ReplicationAdmin; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.replication.regionserver.ReplicationSyncUp; import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({ReplicationTests.class, LargeTests.class}) public class TestReplicationSyncUpTool extends TestReplicationBase { private static final Log LOG = LogFactory.getLog(TestReplicationSyncUpTool.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationTrackerZKImpl.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationTrackerZKImpl.java index 2e3fe08..38e6fcf 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationTrackerZKImpl.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationTrackerZKImpl.java @@ -15,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ - package org.apache.hadoop.hbase.replication; import static org.junit.Assert.assertEquals; @@ -33,10 +32,11 @@ import org.apache.hadoop.hbase.ClusterId; import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.client.ClusterConnection; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; import org.apache.hadoop.hbase.zookeeper.ZKClusterId; import org.apache.hadoop.hbase.zookeeper.ZKUtil; @@ -54,7 +54,7 @@ import org.junit.experimental.categories.Category; * interfaces (i.e. ReplicationPeers, etc.). Each test case in this class should ensure that the * MiniZKCluster is cleaned and returned to it's initial state (i.e. nothing but the rsZNode). */ -@Category(MediumTests.class) +@Category({ReplicationTests.class, MediumTests.class}) public class TestReplicationTrackerZKImpl { private static final Log LOG = LogFactory.getLog(TestReplicationTrackerZKImpl.class); @@ -183,7 +183,7 @@ public class TestReplicationTrackerZKImpl { int exists = 0; int hyphen = 0; rp.addPeer("6", new ReplicationPeerConfig().setClusterKey(utility.getClusterKey()), null); - + try{ rp.addPeer("6", new ReplicationPeerConfig().setClusterKey(utility.getClusterKey()), null); }catch(IllegalArgumentException e){ @@ -197,11 +197,11 @@ public class TestReplicationTrackerZKImpl { } assertEquals(1, exists); assertEquals(1, hyphen); - + // clean up rp.removePeer("6"); } - + private class DummyReplicationListener implements ReplicationListener { @Override @@ -287,4 +287,4 @@ public class TestReplicationTrackerZKImpl { return this.isStopped; } } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationWALEntryFilters.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationWALEntryFilters.java index 30fc603..3710fd6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationWALEntryFilters.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationWALEntryFilters.java @@ -26,7 +26,9 @@ import java.util.TreeMap; import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.wal.WAL.Entry; @@ -42,7 +44,7 @@ import com.google.common.collect.Lists; import static org.junit.Assert.*; import static org.mockito.Mockito.*; -@Category(SmallTests.class) +@Category({ReplicationTests.class, SmallTests.class}) public class TestReplicationWALEntryFilters { static byte[] a = new byte[] {'a'}; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationWithTags.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationWithTags.java index 2cca99c..fc06d15 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationWithTags.java +++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationWithTags.java @@ -36,10 +36,11 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HBaseAdmin; @@ -54,6 +55,8 @@ import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.coprocessor.ObserverContext; import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster; import org.junit.AfterClass; @@ -61,7 +64,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({ReplicationTests.class, LargeTests.class}) public class TestReplicationWithTags { private static final Log LOG = LogFactory.getLog(TestReplicationWithTags.class); @@ -72,6 +75,9 @@ public class TestReplicationWithTags { private static ReplicationAdmin replicationAdmin; + private static Connection connection1; + private static Connection connection2; + private static Table htable1; private static Table htable2; @@ -136,22 +142,13 @@ public class TestReplicationWithTags { fam.setMaxVersions(3); fam.setScope(HConstants.REPLICATION_SCOPE_GLOBAL); table.addFamily(fam); - Admin admin = null; - try { - admin = new HBaseAdmin(conf1); + try (Connection conn = ConnectionFactory.createConnection(conf1); + Admin admin = conn.getAdmin()) { admin.createTable(table, HBaseTestingUtility.KEYS_FOR_HBA_CREATE_TABLE); - } finally { - if (admin != null) { - admin.close(); - } } - try { - admin = new HBaseAdmin(conf2); + try (Connection conn = ConnectionFactory.createConnection(conf2); + Admin admin = conn.getAdmin()) { admin.createTable(table, HBaseTestingUtility.KEYS_FOR_HBA_CREATE_TABLE); - } finally { - if(admin != null){ - admin.close(); - } } htable1 = new HTable(conf1, TABLE_NAME); htable1.setWriteBufferSize(1024); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestRegionReplicaReplicationEndpoint.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestRegionReplicaReplicationEndpoint.java new file mode 100644 index 0000000..7ca12f0 --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestRegionReplicaReplicationEndpoint.java @@ -0,0 +1,359 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.replication.regionserver; + +import static org.junit.Assert.*; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +import java.io.IOException; +import java.util.List; +import java.util.concurrent.Executors; +import java.util.concurrent.atomic.AtomicLong; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.commons.logging.impl.Log4JLogger; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.HRegionLocation; +import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.Waiter; +import org.apache.hadoop.hbase.client.ClusterConnection; +import org.apache.hadoop.hbase.client.Connection; +import org.apache.hadoop.hbase.client.ConnectionFactory; +import org.apache.hadoop.hbase.client.HConnection; +import org.apache.hadoop.hbase.client.HConnectionManager; +import org.apache.hadoop.hbase.client.RpcRetryingCaller; +import org.apache.hadoop.hbase.client.RpcRetryingCallerImpl; +import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.client.replication.ReplicationAdmin; +import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.wal.WAL.Entry; +import org.apache.hadoop.hbase.wal.WALKey; +import org.apache.hadoop.hbase.regionserver.wal.WALEdit; +import org.apache.hadoop.hbase.replication.ReplicationException; +import org.apache.hadoop.hbase.replication.ReplicationPeerConfig; +import org.apache.hadoop.hbase.testclassification.FlakeyTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.util.ServerRegionReplicaUtil; +import org.apache.hadoop.hbase.zookeeper.ZKUtil; +import org.apache.log4j.Level; +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.Ignore; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +import com.google.common.collect.Lists; + +/** + * Tests RegionReplicaReplicationEndpoint class by setting up region replicas and verifying + * async wal replication replays the edits to the secondary region in various scenarios. 
+ */ +@Category({FlakeyTests.class, MediumTests.class}) +public class TestRegionReplicaReplicationEndpoint { + + private static final Log LOG = LogFactory.getLog(TestRegionReplicaReplicationEndpoint.class); + + static { + ((Log4JLogger) RpcRetryingCallerImpl.LOG).getLogger().setLevel(Level.ALL); + } + + private static final int NB_SERVERS = 2; + + private static final HBaseTestingUtility HTU = new HBaseTestingUtility(); + + @BeforeClass + public static void beforeClass() throws Exception { + /* + Configuration conf = HTU.getConfiguration(); + conf.setFloat("hbase.regionserver.logroll.multiplier", 0.0003f); + conf.setInt("replication.source.size.capacity", 10240); + conf.setLong("replication.source.sleepforretries", 100); + conf.setInt("hbase.regionserver.maxlogs", 10); + conf.setLong("hbase.master.logcleaner.ttl", 10); + conf.setInt("zookeeper.recovery.retry", 1); + conf.setInt("zookeeper.recovery.retry.intervalmill", 10); + conf.setBoolean(HConstants.REPLICATION_ENABLE_KEY, true); + conf.setBoolean(ServerRegionReplicaUtil.REGION_REPLICA_REPLICATION_CONF_KEY, true); + conf.setLong(HConstants.THREAD_WAKE_FREQUENCY, 100); + conf.setInt("replication.stats.thread.period.seconds", 5); + conf.setBoolean("hbase.tests.use.shortcircuit.reads", false); + conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 3); // less number of retries is needed + conf.setInt("hbase.client.serverside.retries.multiplier", 1); + + HTU.startMiniCluster(NB_SERVERS);*/ + } + + @AfterClass + public static void afterClass() throws Exception { + /* + HTU.shutdownMiniCluster(); + */ + } + + @Ignore("To be fixed before 1.0") + @Test + public void testRegionReplicaReplicationPeerIsCreated() throws IOException, ReplicationException { + // create a table with region replicas. Check whether the replication peer is created + // and replication started. + ReplicationAdmin admin = new ReplicationAdmin(HTU.getConfiguration()); + String peerId = "region_replica_replication"; + + if (admin.getPeerConfig(peerId) != null) { + admin.removePeer(peerId); + } + + HTableDescriptor htd = HTU.createTableDescriptor( + "testReplicationPeerIsCreated_no_region_replicas"); + HTU.getHBaseAdmin().createTable(htd); + ReplicationPeerConfig peerConfig = admin.getPeerConfig(peerId); + assertNull(peerConfig); + + htd = HTU.createTableDescriptor("testReplicationPeerIsCreated"); + htd.setRegionReplication(2); + HTU.getHBaseAdmin().createTable(htd); + + // assert peer configuration is correct + peerConfig = admin.getPeerConfig(peerId); + assertNotNull(peerConfig); + assertEquals(peerConfig.getClusterKey(), ZKUtil.getZooKeeperClusterKey(HTU.getConfiguration())); + assertEquals(peerConfig.getReplicationEndpointImpl(), + RegionReplicaReplicationEndpoint.class.getName()); + admin.close(); + } + + + public void testRegionReplicaReplication(int regionReplication) throws Exception { + // test region replica replication. 
Create a table with single region, write some data + // ensure that data is replicated to the secondary region + TableName tableName = TableName.valueOf("testRegionReplicaReplicationWithReplicas_" + + regionReplication); + HTableDescriptor htd = HTU.createTableDescriptor(tableName.toString()); + htd.setRegionReplication(regionReplication); + HTU.getHBaseAdmin().createTable(htd); + TableName tableNameNoReplicas = + TableName.valueOf("testRegionReplicaReplicationWithReplicas_NO_REPLICAS"); + HTU.deleteTableIfAny(tableNameNoReplicas); + HTU.createTable(tableNameNoReplicas, HBaseTestingUtility.fam1); + + Connection connection = ConnectionFactory.createConnection(HTU.getConfiguration()); + Table table = connection.getTable(tableName); + Table tableNoReplicas = connection.getTable(tableNameNoReplicas); + + try { + // load some data to the non-replicated table + HTU.loadNumericRows(tableNoReplicas, HBaseTestingUtility.fam1, 6000, 7000); + + // load the data to the table + HTU.loadNumericRows(table, HBaseTestingUtility.fam1, 0, 1000); + + verifyReplication(tableName, regionReplication, 0, 6000); + + } finally { + table.close(); + tableNoReplicas.close(); + HTU.deleteTableIfAny(tableNameNoReplicas); + connection.close(); + } + } + + private void verifyReplication(TableName tableName, int regionReplication, + final int startRow, final int endRow) throws Exception { + // find the regions + final HRegion[] regions = new HRegion[regionReplication]; + + for (int i=0; i < NB_SERVERS; i++) { + HRegionServer rs = HTU.getMiniHBaseCluster().getRegionServer(i); + List<HRegion> onlineRegions = rs.getOnlineRegions(tableName); + for (HRegion region : onlineRegions) { + regions[region.getRegionInfo().getReplicaId()] = region; + } + } + + for (HRegion region : regions) { + assertNotNull(region); + } + + for (int i = 1; i < regionReplication; i++) { + final HRegion region = regions[i]; + // wait until all the data is replicated to all secondary regions + Waiter.waitFor(HTU.getConfiguration(), 60000, new Waiter.Predicate<Exception>() { + @Override + public boolean evaluate() throws Exception { + LOG.info("verifying replication for region replica:" + region.getRegionInfo()); + try { + HTU.verifyNumericRows(region, HBaseTestingUtility.fam1, startRow, endRow); + } catch(Throwable ex) { + LOG.warn("Verification from secondary region is not complete yet. Got:" + ex + + " " + ex.getMessage()); + // still wait + return false; + } + return true; + } + }); + } + } + + @Ignore("To be fixed before 1.0") + @Test(timeout = 60000) + public void testRegionReplicaReplicationWith2Replicas() throws Exception { + testRegionReplicaReplication(2); + } + + @Ignore("To be fixed before 1.0") + @Test(timeout = 60000) + public void testRegionReplicaReplicationWith3Replicas() throws Exception { + testRegionReplicaReplication(3); + } + + @Ignore("To be fixed before 1.0") + @Test(timeout = 60000) + public void testRegionReplicaReplicationWith10Replicas() throws Exception { + testRegionReplicaReplication(10); + } + + @Ignore("To be fixed before 1.0") + @Test (timeout = 60000) + public void testRegionReplicaReplicationForFlushAndCompaction() throws Exception { + // Tests a table with region replication 3. Writes some data, and causes flushes and + // compactions. Verifies that the data is readable from the replicas. 
Note that this + // does not test whether the replicas actually pick up flushed files and apply compaction + // to their stores + int regionReplication = 3; + TableName tableName = TableName.valueOf("testRegionReplicaReplicationForFlushAndCompaction"); + HTableDescriptor htd = HTU.createTableDescriptor(tableName.toString()); + htd.setRegionReplication(regionReplication); + HTU.getHBaseAdmin().createTable(htd); + + + Connection connection = ConnectionFactory.createConnection(HTU.getConfiguration()); + Table table = connection.getTable(tableName); + + try { + // load the data to the table + + for (int i = 0; i < 6000; i += 1000) { + LOG.info("Writing data from " + i + " to " + (i+1000)); + HTU.loadNumericRows(table, HBaseTestingUtility.fam1, i, i+1000); + LOG.info("flushing table"); + HTU.flush(tableName); + LOG.info("compacting table"); + HTU.compact(tableName, false); + } + + verifyReplication(tableName, regionReplication, 0, 6000); + } finally { + table.close(); + connection.close(); + } + } + + @Ignore("To be fixed before 1.0") + @Test (timeout = 60000) + public void testRegionReplicaReplicationIgnoresDisabledTables() throws Exception { + testRegionReplicaReplicationIgnoresDisabledTables(false); + } + + @Ignore("To be fixed before 1.0") + @Test (timeout = 60000) + public void testRegionReplicaReplicationIgnoresDroppedTables() throws Exception { + testRegionReplicaReplicationIgnoresDisabledTables(true); + } + + public void testRegionReplicaReplicationIgnoresDisabledTables(boolean dropTable) + throws Exception { + // tests having edits from a disabled or dropped table is handled correctly by skipping those + // entries and further edits after the edits from dropped/disabled table can be replicated + // without problems. + TableName tableName = TableName.valueOf("testRegionReplicaReplicationIgnoresDisabledTables" + + dropTable); + HTableDescriptor htd = HTU.createTableDescriptor(tableName.toString()); + int regionReplication = 3; + htd.setRegionReplication(regionReplication); + HTU.deleteTableIfAny(tableName); + HTU.getHBaseAdmin().createTable(htd); + TableName toBeDisabledTable = TableName.valueOf(dropTable ? "droppedTable" : "disabledTable"); + HTU.deleteTableIfAny(toBeDisabledTable); + htd = HTU.createTableDescriptor(toBeDisabledTable.toString()); + htd.setRegionReplication(regionReplication); + HTU.getHBaseAdmin().createTable(htd); + + // both tables are created, now pause replication + ReplicationAdmin admin = new ReplicationAdmin(HTU.getConfiguration()); + admin.disablePeer(ServerRegionReplicaUtil.getReplicationPeerId()); + + // now that the replication is disabled, write to the table to be dropped, then drop the table. 
+ + HConnection connection = HConnectionManager.createConnection(HTU.getConfiguration()); + Table table = connection.getTable(tableName); + Table tableToBeDisabled = connection.getTable(toBeDisabledTable); + + HTU.loadNumericRows(tableToBeDisabled, HBaseTestingUtility.fam1, 6000, 7000); + + AtomicLong skippedEdits = new AtomicLong(); + RegionReplicaReplicationEndpoint.RegionReplicaOutputSink sink = + mock(RegionReplicaReplicationEndpoint.RegionReplicaOutputSink.class); + when(sink.getSkippedEditsCounter()).thenReturn(skippedEdits); + RegionReplicaReplicationEndpoint.RegionReplicaSinkWriter sinkWriter = + new RegionReplicaReplicationEndpoint.RegionReplicaSinkWriter(sink, + (ClusterConnection) connection, + Executors.newSingleThreadExecutor(), 1000); + + HRegionLocation hrl = connection.locateRegion(toBeDisabledTable, HConstants.EMPTY_BYTE_ARRAY); + byte[] encodedRegionName = hrl.getRegionInfo().getEncodedNameAsBytes(); + + Entry entry = new Entry( + new WALKey(encodedRegionName, toBeDisabledTable, 1), + new WALEdit()); + + HTU.getHBaseAdmin().disableTable(toBeDisabledTable); // disable the table + if (dropTable) { + HTU.getHBaseAdmin().deleteTable(toBeDisabledTable); + } + + sinkWriter.append(toBeDisabledTable, encodedRegionName, + HConstants.EMPTY_BYTE_ARRAY, Lists.newArrayList(entry, entry)); + + assertEquals(2, skippedEdits.get()); + + try { + // load some data to the to-be-dropped table + + // load the data to the table + HTU.loadNumericRows(table, HBaseTestingUtility.fam1, 0, 1000); + + // now enable the replication + admin.enablePeer(ServerRegionReplicaUtil.getReplicationPeerId()); + + verifyReplication(tableName, regionReplication, 0, 6000); + + } finally { + admin.close(); + table.close(); + tableToBeDisabled.close(); + HTU.deleteTableIfAny(toBeDisabledTable); + connection.close(); + } + } +} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestRegionReplicaReplicationEndpointNoMaster.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestRegionReplicaReplicationEndpointNoMaster.java new file mode 100644 index 0000000..a191bdd --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestRegionReplicaReplicationEndpointNoMaster.java @@ -0,0 +1,264 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.hadoop.hbase.replication.regionserver; + +import static org.apache.hadoop.hbase.regionserver.TestRegionServerNoMaster.closeRegion; +import static org.apache.hadoop.hbase.regionserver.TestRegionServerNoMaster.openRegion; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +import java.io.IOException; +import java.util.Queue; +import java.util.concurrent.ConcurrentLinkedQueue; +import java.util.concurrent.atomic.AtomicLong; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.RegionLocations; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.ClusterConnection; +import org.apache.hadoop.hbase.client.ConnectionFactory; +import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.client.RpcRetryingCallerFactory; +import org.apache.hadoop.hbase.coprocessor.BaseWALObserver; +import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; +import org.apache.hadoop.hbase.coprocessor.ObserverContext; +import org.apache.hadoop.hbase.coprocessor.WALCoprocessorEnvironment; +import org.apache.hadoop.hbase.ipc.RpcControllerFactory; +import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.ReplicateWALEntryResponse; +import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hbase.regionserver.HRegionServer; +import org.apache.hadoop.hbase.regionserver.TestRegionServerNoMaster; +import org.apache.hadoop.hbase.wal.WAL.Entry; +import org.apache.hadoop.hbase.wal.WALKey; +import org.apache.hadoop.hbase.regionserver.wal.WALEdit; +import org.apache.hadoop.hbase.replication.ReplicationEndpoint; +import org.apache.hadoop.hbase.replication.ReplicationEndpoint.ReplicateContext; +import org.apache.hadoop.hbase.replication.regionserver.RegionReplicaReplicationEndpoint.RegionReplicaReplayCallable; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; +import org.apache.hadoop.hbase.util.ServerRegionReplicaUtil; +import org.junit.After; +import org.junit.AfterClass; +import org.junit.Assert; +import org.junit.Before; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +import com.google.common.collect.Lists; + +/** + * Tests RegionReplicaReplicationEndpoint. Unlike TestRegionReplicaReplicationEndpoint this + * class contains lower level tests using callables. 
+ */ +@Category({ReplicationTests.class, MediumTests.class}) +public class TestRegionReplicaReplicationEndpointNoMaster { + + private static final Log LOG = LogFactory.getLog( + TestRegionReplicaReplicationEndpointNoMaster.class); + + private static final int NB_SERVERS = 2; + private static TableName tableName = TableName.valueOf( + TestRegionReplicaReplicationEndpointNoMaster.class.getSimpleName()); + private static HTable table; + private static final byte[] row = "TestRegionReplicaReplicator".getBytes(); + + private static HRegionServer rs0; + private static HRegionServer rs1; + + private static HRegionInfo hriPrimary; + private static HRegionInfo hriSecondary; + + private static final HBaseTestingUtility HTU = new HBaseTestingUtility(); + private static final byte[] f = HConstants.CATALOG_FAMILY; + + @BeforeClass + public static void beforeClass() throws Exception { + Configuration conf = HTU.getConfiguration(); + conf.setBoolean(HConstants.REPLICATION_ENABLE_KEY, true); + conf.setBoolean(ServerRegionReplicaUtil.REGION_REPLICA_REPLICATION_CONF_KEY, true); + + // install WALObserver coprocessor for tests + String walCoprocs = HTU.getConfiguration().get(CoprocessorHost.WAL_COPROCESSOR_CONF_KEY); + if (walCoprocs == null) { + walCoprocs = WALEditCopro.class.getName(); + } else { + walCoprocs += "," + WALEditCopro.class.getName(); + } + HTU.getConfiguration().set(CoprocessorHost.WAL_COPROCESSOR_CONF_KEY, + walCoprocs); + HTU.startMiniCluster(NB_SERVERS); + + // Create table then get the single region for our new table. + HTableDescriptor htd = HTU.createTableDescriptor(tableName.toString()); + table = HTU.createTable(htd, new byte[][]{f}, HTU.getConfiguration()); + + hriPrimary = table.getRegionLocation(row, false).getRegionInfo(); + + // mock a secondary region info to open + hriSecondary = new HRegionInfo(hriPrimary.getTable(), hriPrimary.getStartKey(), + hriPrimary.getEndKey(), hriPrimary.isSplit(), hriPrimary.getRegionId(), 1); + + // No master + TestRegionServerNoMaster.stopMasterAndAssignMeta(HTU); + rs0 = HTU.getMiniHBaseCluster().getRegionServer(0); + rs1 = HTU.getMiniHBaseCluster().getRegionServer(1); + } + + @AfterClass + public static void afterClass() throws Exception { + table.close(); + HTU.shutdownMiniCluster(); + } + + @Before + public void before() throws Exception{ + entries.clear(); + } + + @After + public void after() throws Exception { + } + + static ConcurrentLinkedQueue<Entry> entries = new ConcurrentLinkedQueue<Entry>(); + + public static class WALEditCopro extends BaseWALObserver { + public WALEditCopro() { + entries.clear(); + } + @Override + public void postWALWrite(ObserverContext<? extends WALCoprocessorEnvironment> ctx, + HRegionInfo info, WALKey logKey, WALEdit logEdit) throws IOException { + // only keep primary region's edits + if (logKey.getTablename().equals(tableName) && info.getReplicaId() == 0) { + entries.add(new Entry(logKey, logEdit)); + } + } + } + + @Test + public void testReplayCallable() throws Exception { + // tests replaying the edits to a secondary region replica using the Callable directly + openRegion(HTU, rs0, hriSecondary); + ClusterConnection connection = + (ClusterConnection) ConnectionFactory.createConnection(HTU.getConfiguration()); + + //load some data to primary + HTU.loadNumericRows(table, f, 0, 1000); + + Assert.assertEquals(1000, entries.size()); + // replay the edits to the secondary using replay callable + replicateUsingCallable(connection, entries); + + HRegion region = rs0.getFromOnlineRegions(hriSecondary.getEncodedName()); + HTU.verifyNumericRows(region, f, 0, 1000); + + 
HTU.deleteNumericRows(table, f, 0, 1000); + closeRegion(HTU, rs0, hriSecondary); + connection.close(); + } + + private void replicateUsingCallable(ClusterConnection connection, Queue entries) + throws IOException, RuntimeException { + Entry entry; + while ((entry = entries.poll()) != null) { + byte[] row = entry.getEdit().getCells().get(0).getRow(); + RegionLocations locations = connection.locateRegion(tableName, row, true, true); + RegionReplicaReplayCallable callable = new RegionReplicaReplayCallable(connection, + RpcControllerFactory.instantiate(connection.getConfiguration()), + table.getName(), locations.getRegionLocation(1), + locations.getRegionLocation(1).getRegionInfo(), row, Lists.newArrayList(entry), + new AtomicLong()); + + RpcRetryingCallerFactory factory = RpcRetryingCallerFactory.instantiate( + connection.getConfiguration()); + factory. newCaller().callWithRetries(callable, 10000); + } + } + + @Test + public void testReplayCallableWithRegionMove() throws Exception { + // tests replaying the edits to a secondary region replica using the Callable directly while + // the region is moved to another location.It tests handling of RME. + openRegion(HTU, rs0, hriSecondary); + ClusterConnection connection = + (ClusterConnection) ConnectionFactory.createConnection(HTU.getConfiguration()); + //load some data to primary + HTU.loadNumericRows(table, f, 0, 1000); + + Assert.assertEquals(1000, entries.size()); + // replay the edits to the secondary using replay callable + replicateUsingCallable(connection, entries); + + HRegion region = rs0.getFromOnlineRegions(hriSecondary.getEncodedName()); + HTU.verifyNumericRows(region, f, 0, 1000); + + HTU.loadNumericRows(table, f, 1000, 2000); // load some more data to primary + + // move the secondary region from RS0 to RS1 + closeRegion(HTU, rs0, hriSecondary); + openRegion(HTU, rs1, hriSecondary); + + // replicate the new data + replicateUsingCallable(connection, entries); + + region = rs1.getFromOnlineRegions(hriSecondary.getEncodedName()); + // verify the new data. 
old data may or may not be there + HTU.verifyNumericRows(region, f, 1000, 2000); + + HTU.deleteNumericRows(table, f, 0, 2000); + closeRegion(HTU, rs1, hriSecondary); + connection.close(); + } + + @Test + public void testRegionReplicaReplicationEndpointReplicate() throws Exception { + // tests replaying the edits to a secondary region replica using the RRRE.replicate() + openRegion(HTU, rs0, hriSecondary); + ClusterConnection connection = + (ClusterConnection) ConnectionFactory.createConnection(HTU.getConfiguration()); + RegionReplicaReplicationEndpoint replicator = new RegionReplicaReplicationEndpoint(); + + ReplicationEndpoint.Context context = mock(ReplicationEndpoint.Context.class); + when(context.getConfiguration()).thenReturn(HTU.getConfiguration()); + + replicator.init(context); + replicator.start(); + + //load some data to primary + HTU.loadNumericRows(table, f, 0, 1000); + + Assert.assertEquals(1000, entries.size()); + // replay the edits to the secondary using replay callable + replicator.replicate(new ReplicateContext().setEntries(Lists.newArrayList(entries))); + + HRegion region = rs0.getFromOnlineRegions(hriSecondary.getEncodedName()); + HTU.verifyNumericRows(region, f, 0, 1000); + + HTU.deleteNumericRows(table, f, 0, 1000); + closeRegion(HTU, rs0, hriSecondary); + connection.close(); + } + +} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSink.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSink.java index 7efb4e3..b87e7ef 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSink.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSink.java @@ -26,6 +26,8 @@ import java.util.List; import java.util.concurrent.atomic.AtomicBoolean; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.util.ByteStringer; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; @@ -35,7 +37,6 @@ import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Stoppable; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Get; @@ -52,7 +53,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({ReplicationTests.class, MediumTests.class}) public class TestReplicationSink { private static final Log LOG = LogFactory.getLog(TestReplicationSink.class); private static final int BATCH_SIZE = 10; @@ -118,8 +119,8 @@ public class TestReplicationSink { */ @Before public void setUp() throws Exception { - table1 = TEST_UTIL.truncateTable(TABLE_NAME1); - table2 = TEST_UTIL.truncateTable(TABLE_NAME2); + table1 = TEST_UTIL.deleteTableData(TABLE_NAME1); + table2 = TEST_UTIL.deleteTableData(TABLE_NAME2); } /** diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.java index d725d21..a2ea258 100644 --- 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSinkManager.java @@ -25,6 +25,7 @@ import java.util.List; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.HConnection; import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.AdminService; @@ -37,7 +38,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.Lists; -@Category(SmallTests.class) +@Category({ReplicationTests.class, SmallTests.class}) public class TestReplicationSinkManager { private static final String PEER_CLUSTER_ID = "PEER_CLUSTER_ID"; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java index 3b56617..f745f8c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationSourceManager.java @@ -37,8 +37,8 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.ClusterId; -import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; @@ -46,11 +46,12 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.CoordinatedStateManager; +import org.apache.hadoop.hbase.client.HConnection; import org.apache.hadoop.hbase.client.ClusterConnection; +import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; import org.apache.hadoop.hbase.wal.WAL; @@ -63,6 +64,8 @@ import org.apache.hadoop.hbase.replication.ReplicationQueues; import org.apache.hadoop.hbase.replication.ReplicationSourceDummy; import org.apache.hadoop.hbase.replication.ReplicationStateZKBase; import org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.NodeFailoverWorker; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.zookeeper.MetaTableLocator; @@ -78,7 +81,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.Sets; -@Category(MediumTests.class) +@Category({ReplicationTests.class, MediumTests.class}) public class TestReplicationSourceManager { private static final Log LOG = diff --git 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationThrottler.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationThrottler.java index 9d56f77..692e9be 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationThrottler.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationThrottler.java @@ -18,18 +18,18 @@ package org.apache.hadoop.hbase.replication.regionserver; -import static org.junit.Assert.assertArrayEquals; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({ReplicationTests.class, SmallTests.class}) public class TestReplicationThrottler { private static final Log LOG = LogFactory.getLog(TestReplicationThrottler.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationWALReaderManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationWALReaderManager.java index bec6d45..577f0ba 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationWALReaderManager.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicationWALReaderManager.java @@ -15,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ - package org.apache.hadoop.hbase.replication.regionserver; import org.apache.hadoop.conf.Configuration; @@ -26,13 +25,14 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.wal.WALFactory; import org.apache.hadoop.hbase.wal.WALKey; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.ReplicationTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hdfs.MiniDFSCluster; import org.junit.After; @@ -53,7 +53,7 @@ import java.util.Collection; import java.util.List; import java.util.concurrent.atomic.AtomicLong; -@Category(LargeTests.class) +@Category({ReplicationTests.class, LargeTests.class}) @RunWith(Parameterized.class) public class TestReplicationWALReaderManager { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestHBaseSaslRpcClient.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestHBaseSaslRpcClient.java index 7125632..21450a2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestHBaseSaslRpcClient.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestHBaseSaslRpcClient.java @@ -43,6 +43,7 @@ import javax.security.sasl.RealmCallback; import javax.security.sasl.RealmChoiceCallback; import javax.security.sasl.SaslClient; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.security.HBaseSaslRpcClient.SaslClientCallbackHandler; import org.apache.hadoop.io.DataInputBuffer; @@ -58,7 +59,7 @@ import org.mockito.Mockito; import com.google.common.base.Strings; -@Category(SmallTests.class) +@Category({SecurityTests.class, SmallTests.class}) public class TestHBaseSaslRpcClient { static { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestSecureRPC.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestSecureRPC.java index a3cae76..b28a1ef 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestSecureRPC.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestSecureRPC.java @@ -35,10 +35,11 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeys; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.ipc.RpcClientFactory; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.ipc.FifoRpcScheduler; import org.apache.hadoop.hbase.ipc.RpcClient; -import org.apache.hadoop.hbase.ipc.RpcClientFactory; import org.apache.hadoop.hbase.ipc.RpcServer; import org.apache.hadoop.hbase.ipc.RpcServerInterface; import org.apache.hadoop.hbase.ipc.TestDelayedRpc.TestDelayedImplementation; @@ -54,7 +55,7 @@ import com.google.common.collect.Lists; import com.google.protobuf.BlockingRpcChannel; import com.google.protobuf.BlockingService; -@Category(SmallTests.class) +@Category({SecurityTests.class, SmallTests.class}) public class TestSecureRPC { 
public static RpcServerInterface rpcServer; /** @@ -119,4 +120,4 @@ public class TestSecureRPC { rpcClient.close(); } } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestUser.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestUser.java index 8ee29de..f85832e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestUser.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestUser.java @@ -32,13 +32,14 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.CommonConfigurationKeys; import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; import com.google.common.collect.ImmutableSet; -@Category(SmallTests.class) +@Category({SecurityTests.class, SmallTests.class}) public class TestUser { private static Log LOG = LogFactory.getLog(TestUser.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestUsersOperationsWithSecureHadoop.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestUsersOperationsWithSecureHadoop.java index ba920ac..a66c124 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestUsersOperationsWithSecureHadoop.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestUsersOperationsWithSecureHadoop.java @@ -31,12 +31,13 @@ import static org.junit.Assume.assumeTrue; import java.io.IOException; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.security.UserGroupInformation; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({SecurityTests.class, SmallTests.class}) public class TestUsersOperationsWithSecureHadoop { /** * test login with security enabled configuration diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessControlFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessControlFilter.java index 1d7cb1e..2eec09d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessControlFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessControlFilter.java @@ -29,6 +29,7 @@ import java.util.UUID; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; @@ -47,7 +48,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.rules.TestName; -@Category(LargeTests.class) +@Category({SecurityTests.class, LargeTests.class}) public class TestAccessControlFilter extends SecureTestUtil { @Rule public TestName name = new TestName(); private static HBaseTestingUtility TEST_UTIL; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java index 4024317..27ee915 100644 --- 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java @@ -15,12 +15,12 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.security.access; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertNull; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; @@ -38,6 +38,7 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.fs.permission.FsPermission; import org.apache.hadoop.hbase.Coprocessor; import org.apache.hadoop.hbase.CoprocessorEnvironment; +import org.apache.hadoop.hbase.HBaseIOException; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; @@ -45,7 +46,6 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.ServerName; @@ -94,6 +94,7 @@ import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos; import org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService; import org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.CheckPermissionsRequest; +import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost; @@ -101,6 +102,8 @@ import org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost; import org.apache.hadoop.hbase.regionserver.ScanType; import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.security.access.Permission.Action; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.JVMClusterUtil; import org.apache.hadoop.hbase.util.TestTableName; @@ -124,7 +127,7 @@ import com.google.protobuf.ServiceException; * Performs authorization checks for common operations, according to different * levels of authorized users. 
*/ -@Category(LargeTests.class) +@Category({SecurityTests.class, LargeTests.class}) public class TestAccessController extends SecureTestUtil { private static final Log LOG = LogFactory.getLog(TestAccessController.class); @@ -282,7 +285,9 @@ public class TestAccessController extends SecureTestUtil { // Test deleted the table, no problem LOG.info("Test deleted table " + TEST_TABLE.getTableName()); } + // Verify all table/namespace permissions are erased assertEquals(0, AccessControlLists.getTablePermissions(conf, TEST_TABLE.getTableName()).size()); + assertEquals(0, AccessControlLists.getNamespacePermissions(conf, TEST_TABLE.getTableName().getNameAsString()).size()); } @Test @@ -1868,11 +1873,17 @@ public class TestAccessController extends SecureTestUtil { @Test public void testSnapshot() throws Exception { + Admin admin = TEST_UTIL.getHBaseAdmin(); + final HTableDescriptor htd = admin.getTableDescriptor(TEST_TABLE.getTableName()); + SnapshotDescription.Builder builder = SnapshotDescription.newBuilder(); + builder.setName(TEST_TABLE.getTableName().getNameAsString() + "-snapshot"); + builder.setTable(TEST_TABLE.getTableName().getNameAsString()); + final SnapshotDescription snapshot = builder.build(); AccessTestAction snapshotAction = new AccessTestAction() { @Override public Object run() throws Exception { ACCESS_CONTROLLER.preSnapshot(ObserverContext.createAndPrepare(CP_ENV, null), - null, null); + snapshot, htd); return null; } }; @@ -1881,7 +1892,7 @@ public class TestAccessController extends SecureTestUtil { @Override public Object run() throws Exception { ACCESS_CONTROLLER.preDeleteSnapshot(ObserverContext.createAndPrepare(CP_ENV, null), - null); + snapshot); return null; } }; @@ -1890,7 +1901,7 @@ public class TestAccessController extends SecureTestUtil { @Override public Object run() throws Exception { ACCESS_CONTROLLER.preRestoreSnapshot(ObserverContext.createAndPrepare(CP_ENV, null), - null, null); + snapshot, htd); return null; } }; @@ -1904,8 +1915,8 @@ public class TestAccessController extends SecureTestUtil { } }; - verifyAllowed(snapshotAction, SUPERUSER, USER_ADMIN); - verifyDenied(snapshotAction, USER_CREATE, USER_RW, USER_RO, USER_NONE, USER_OWNER); + verifyAllowed(snapshotAction, SUPERUSER, USER_ADMIN, USER_OWNER); + verifyDenied(snapshotAction, USER_CREATE, USER_RW, USER_RO, USER_NONE); verifyAllowed(cloneAction, SUPERUSER, USER_ADMIN); verifyDenied(deleteAction, USER_CREATE, USER_RW, USER_RO, USER_NONE, USER_OWNER); @@ -1918,6 +1929,62 @@ public class TestAccessController extends SecureTestUtil { } @Test + public void testSnapshotWithOwner() throws Exception { + Admin admin = TEST_UTIL.getHBaseAdmin(); + final HTableDescriptor htd = admin.getTableDescriptor(TEST_TABLE.getTableName()); + SnapshotDescription.Builder builder = SnapshotDescription.newBuilder(); + builder.setName(TEST_TABLE.getTableName().getNameAsString() + "-snapshot"); + builder.setTable(TEST_TABLE.getTableName().getNameAsString()); + builder.setOwner(USER_OWNER.getName()); + final SnapshotDescription snapshot = builder.build(); + AccessTestAction snapshotAction = new AccessTestAction() { + @Override + public Object run() throws Exception { + ACCESS_CONTROLLER.preSnapshot(ObserverContext.createAndPrepare(CP_ENV, null), + snapshot, htd); + return null; + } + }; + verifyAllowed(snapshotAction, SUPERUSER, USER_ADMIN, USER_OWNER); + verifyDenied(snapshotAction, USER_CREATE, USER_RW, USER_RO, USER_NONE); + + AccessTestAction deleteAction = new AccessTestAction() { + @Override + public Object run() throws 
Exception { + ACCESS_CONTROLLER.preDeleteSnapshot(ObserverContext.createAndPrepare(CP_ENV, null), + snapshot); + return null; + } + }; + verifyAllowed(deleteAction, SUPERUSER, USER_ADMIN, USER_OWNER); + verifyDenied(deleteAction, USER_CREATE, USER_RW, USER_RO, USER_NONE); + + AccessTestAction restoreAction = new AccessTestAction() { + @Override + public Object run() throws Exception { + ACCESS_CONTROLLER.preRestoreSnapshot(ObserverContext.createAndPrepare(CP_ENV, null), + snapshot, htd); + return null; + } + }; + verifyAllowed(restoreAction, SUPERUSER, USER_ADMIN, USER_OWNER); + verifyDenied(restoreAction, USER_CREATE, USER_RW, USER_RO, USER_NONE); + + AccessTestAction cloneAction = new AccessTestAction() { + @Override + public Object run() throws Exception { + ACCESS_CONTROLLER.preCloneSnapshot(ObserverContext.createAndPrepare(CP_ENV, null), + null, null); + return null; + } + }; + // Clone by snapshot owner is not allowed , because clone operation creates a new table, + // which needs global admin permission. + verifyAllowed(cloneAction, SUPERUSER, USER_ADMIN); + verifyDenied(cloneAction, USER_CREATE, USER_RW, USER_RO, USER_NONE, USER_OWNER); + } + + @Test public void testGlobalAuthorizationForNewRegisteredRS() throws Exception { LOG.debug("Test for global authorization for a new registered RegionServer."); MiniHBaseCluster hbaseCluster = TEST_UTIL.getHBaseCluster(); @@ -2358,39 +2425,101 @@ public class TestAccessController extends SecureTestUtil { verifyDenied(putWithReservedTag, USER_OWNER, USER_ADMIN, USER_CREATE, USER_RW, USER_RO); } - @Test - public void testGetNamespacePermission() throws Exception { - String namespace = "testNamespace"; - NamespaceDescriptor desc = NamespaceDescriptor.create(namespace).build(); - TEST_UTIL.getMiniHBaseCluster().getMaster().createNamespace(desc); - grantOnNamespace(TEST_UTIL, USER_NONE.getShortName(), namespace, Permission.Action.READ); - try { - List namespacePermissions = AccessControlClient.getUserPermissions(conf, - AccessControlLists.toNamespaceEntry(namespace)); - assertTrue(namespacePermissions != null); - assertTrue(namespacePermissions.size() == 1); - } catch (Throwable thw) { - throw new HBaseException(thw); - } - TEST_UTIL.getMiniHBaseCluster().getMaster().deleteNamespace(namespace); - } + @Test + public void testSetQuota() throws Exception { + AccessTestAction setUserQuotaAction = new AccessTestAction() { + @Override + public Object run() throws Exception { + ACCESS_CONTROLLER.preSetUserQuota(ObserverContext.createAndPrepare(CP_ENV, null), + null, null); + return null; + } + }; + + AccessTestAction setUserTableQuotaAction = new AccessTestAction() { + @Override + public Object run() throws Exception { + ACCESS_CONTROLLER.preSetUserQuota(ObserverContext.createAndPrepare(CP_ENV, null), + null, TEST_TABLE.getTableName(), null); + return null; + } + }; + + AccessTestAction setUserNamespaceQuotaAction = new AccessTestAction() { + @Override + public Object run() throws Exception { + ACCESS_CONTROLLER.preSetUserQuota(ObserverContext.createAndPrepare(CP_ENV, null), + null, (String)null, null); + return null; + } + }; + + AccessTestAction setTableQuotaAction = new AccessTestAction() { + @Override + public Object run() throws Exception { + ACCESS_CONTROLLER.preSetTableQuota(ObserverContext.createAndPrepare(CP_ENV, null), + TEST_TABLE.getTableName(), null); + return null; + } + }; + + AccessTestAction setNamespaceQuotaAction = new AccessTestAction() { + @Override + public Object run() throws Exception { + 
ACCESS_CONTROLLER.preSetNamespaceQuota(ObserverContext.createAndPrepare(CP_ENV, null), + null, null); + return null; + } + }; + + verifyAllowed(setUserQuotaAction, SUPERUSER, USER_ADMIN); + verifyDenied(setUserQuotaAction, USER_CREATE, USER_RW, USER_RO, USER_NONE, USER_OWNER); + + verifyAllowed(setUserTableQuotaAction, SUPERUSER, USER_ADMIN, USER_OWNER); + verifyDenied(setUserTableQuotaAction, USER_CREATE, USER_RW, USER_RO, USER_NONE); + + verifyAllowed(setUserNamespaceQuotaAction, SUPERUSER, USER_ADMIN); + verifyDenied(setUserNamespaceQuotaAction, USER_CREATE, USER_RW, USER_RO, USER_NONE, USER_OWNER); + + verifyAllowed(setTableQuotaAction, SUPERUSER, USER_ADMIN, USER_OWNER); + verifyDenied(setTableQuotaAction, USER_CREATE, USER_RW, USER_RO, USER_NONE); + + verifyAllowed(setNamespaceQuotaAction, SUPERUSER, USER_ADMIN); + verifyDenied(setNamespaceQuotaAction, USER_CREATE, USER_RW, USER_RO, USER_NONE, USER_OWNER); + } + + @Test + public void testGetNamespacePermission() throws Exception { + String namespace = "testNamespace"; + NamespaceDescriptor desc = NamespaceDescriptor.create(namespace).build(); + TEST_UTIL.getMiniHBaseCluster().getMaster().createNamespace(desc); + grantOnNamespace(TEST_UTIL, USER_NONE.getShortName(), namespace, Permission.Action.READ); + try { + List namespacePermissions = AccessControlClient.getUserPermissions(conf, + AccessControlLists.toNamespaceEntry(namespace)); + assertTrue(namespacePermissions != null); + assertTrue(namespacePermissions.size() == 1); + } catch (Throwable thw) { + throw new HBaseException(thw); + } + TEST_UTIL.getMiniHBaseCluster().getMaster().deleteNamespace(namespace); + } @Test - public void testTruncatePerms() throws Throwable { - try (Connection connection = ConnectionFactory.createConnection(TEST_UTIL.getConfiguration())) { - List existingPerms = - AccessControlClient.getUserPermissions(connection, - TEST_TABLE.getTableName().getNameAsString()); + public void testTruncatePerms() throws Exception { + try { + List existingPerms = AccessControlClient.getUserPermissions(conf, + TEST_TABLE.getTableName().getNameAsString()); assertTrue(existingPerms != null); assertTrue(existingPerms.size() > 1); - try (Admin admin = connection.getAdmin()) { - admin.disableTable(TEST_TABLE.getTableName()); - admin.truncateTable(TEST_TABLE.getTableName(), true); - } - List perms = AccessControlClient.getUserPermissions(connection, + TEST_UTIL.getHBaseAdmin().disableTable(TEST_TABLE.getTableName()); + TEST_UTIL.truncateTable(TEST_TABLE.getTableName()); + List perms = AccessControlClient.getUserPermissions(conf, TEST_TABLE.getTableName().getNameAsString()); assertTrue(perms != null); assertEquals(existingPerms.size(), perms.size()); + } catch (Throwable e) { + throw new HBaseIOException(e); } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController2.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController2.java index 2736164..2bde357 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController2.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController2.java @@ -27,7 +27,6 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.client.Admin; import 
org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; @@ -38,6 +37,8 @@ import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.security.access.Permission.Action; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.TestTableName; import org.junit.AfterClass; @@ -46,7 +47,7 @@ import org.junit.Rule; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({SecurityTests.class, LargeTests.class}) public class TestAccessController2 extends SecureTestUtil { private static final byte[] TEST_ROW = Bytes.toBytes("test"); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLWithMultipleVersions.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLWithMultipleVersions.java index 0edc1e9..3a8d662 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLWithMultipleVersions.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLWithMultipleVersions.java @@ -31,7 +31,6 @@ import org.apache.hadoop.hbase.Coprocessor; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableNotFoundException; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Connection; @@ -45,6 +44,8 @@ import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.master.MasterCoprocessorHost; import org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.TestTableName; @@ -59,7 +60,7 @@ import org.junit.Rule; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestCellACLWithMultipleVersions extends SecureTestUtil { private static final Log LOG = LogFactory.getLog(TestCellACLWithMultipleVersions.class); @@ -912,4 +913,4 @@ public class TestCellACLWithMultipleVersions extends SecureTestUtil { } assertEquals(0, AccessControlLists.getTablePermissions(conf, TEST_TABLE.getTableName()).size()); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java index ae08a15..4bc819e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestCellACLs.java @@ -29,7 +29,6 @@ import org.apache.hadoop.hbase.Coprocessor; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import 
org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableNotFoundException; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Delete; @@ -45,6 +44,8 @@ import org.apache.hadoop.hbase.master.MasterCoprocessorHost; import org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost; import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.security.access.Permission.Action; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.TestTableName; import org.apache.hadoop.hbase.util.Threads; @@ -60,7 +61,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.Lists; -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestCellACLs extends SecureTestUtil { private static final Log LOG = LogFactory.getLog(TestCellACLs.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java index 80f5a97..3270247 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestNamespaceCommands.java @@ -29,7 +29,6 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.NamespaceDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Connection; @@ -44,6 +43,8 @@ import org.apache.hadoop.hbase.protobuf.ProtobufUtil; import org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService; import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.security.access.Permission.Action; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -53,7 +54,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.ListMultimap; import com.google.protobuf.BlockingRpcChannel; -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestNamespaceCommands extends SecureTestUtil { private static HBaseTestingUtility UTIL = new HBaseTestingUtility(); private static String TEST_NAMESPACE = "ns1"; @@ -64,14 +65,33 @@ public class TestNamespaceCommands extends SecureTestUtil { // user with all permissions private static User SUPERUSER; + + // user with A permission on global + private static User USER_GLOBAL_ADMIN; + // user with C permission on global + private static User USER_GLOBAL_CREATE; + // user with W permission on global + private static User USER_GLOBAL_WRITE; + // user with R permission on global + private static User USER_GLOBAL_READ; + // user with X permission on global + private static User USER_GLOBAL_EXEC; + + // user with A permission on namespace + private static User USER_NS_ADMIN; + // user with C permission on namespace + private static User USER_NS_CREATE; + // user with W permission on namespace + 
private static User USER_NS_WRITE; + // user with R permission on namespace. + private static User USER_NS_READ; + // user with X permission on namespace. + private static User USER_NS_EXEC; + // user with rw permissions - private static User USER_RW; - // user with create table permissions alone - private static User USER_CREATE; - // user with permission on namespace for testing all operations. - private static User USER_NSP_WRITE; - // user with admin permission on namespace. - private static User USER_NSP_ADMIN; + private static User USER_TABLE_WRITE; // TODO: WE DO NOT GIVE ANY PERMS TO THIS USER + //user with create table permissions alone + private static User USER_TABLE_CREATE; // TODO: WE DO NOT GIVE ANY PERMS TO THIS USER private static String TEST_TABLE = TEST_NAMESPACE + ":testtable"; private static byte[] TEST_FAMILY = Bytes.toBytes("f1"); @@ -82,27 +102,49 @@ public class TestNamespaceCommands extends SecureTestUtil { enableSecurity(conf); SUPERUSER = User.createUserForTesting(conf, "admin", new String[] { "supergroup" }); - USER_RW = User.createUserForTesting(conf, "rw_user", new String[0]); - USER_CREATE = User.createUserForTesting(conf, "create_user", new String[0]); - USER_NSP_WRITE = User.createUserForTesting(conf, "namespace_write", new String[0]); - USER_NSP_ADMIN = User.createUserForTesting(conf, "namespace_admin", new String[0]); + // Users with global permissions + USER_GLOBAL_ADMIN = User.createUserForTesting(conf, "global_admin", new String[0]); + USER_GLOBAL_CREATE = User.createUserForTesting(conf, "global_create", new String[0]); + USER_GLOBAL_WRITE = User.createUserForTesting(conf, "global_write", new String[0]); + USER_GLOBAL_READ = User.createUserForTesting(conf, "global_read", new String[0]); + USER_GLOBAL_EXEC = User.createUserForTesting(conf, "global_exec", new String[0]); + + USER_NS_ADMIN = User.createUserForTesting(conf, "namespace_admin", new String[0]); + USER_NS_CREATE = User.createUserForTesting(conf, "namespace_create", new String[0]); + USER_NS_WRITE = User.createUserForTesting(conf, "namespace_write", new String[0]); + USER_NS_READ = User.createUserForTesting(conf, "namespace_read", new String[0]); + USER_NS_EXEC = User.createUserForTesting(conf, "namespace_exec", new String[0]); + + USER_TABLE_CREATE = User.createUserForTesting(conf, "table_create", new String[0]); + USER_TABLE_WRITE = User.createUserForTesting(conf, "table_write", new String[0]); + // TODO: other table perms UTIL.startMiniCluster(); // Wait for the ACL table to become available UTIL.waitTableAvailable(AccessControlLists.ACL_TABLE_NAME.getName(), 30 * 1000); ACCESS_CONTROLLER = (AccessController) UTIL.getMiniHBaseCluster().getMaster() - .getMasterCoprocessorHost() + .getRegionServerCoprocessorHost() .findCoprocessor(AccessController.class.getName()); UTIL.getHBaseAdmin().createNamespace(NamespaceDescriptor.create(TEST_NAMESPACE).build()); UTIL.getHBaseAdmin().createNamespace(NamespaceDescriptor.create(TEST_NAMESPACE2).build()); - grantOnNamespace(UTIL, USER_NSP_WRITE.getShortName(), - TEST_NAMESPACE, Permission.Action.WRITE, Permission.Action.CREATE); - - grantOnNamespace(UTIL, USER_NSP_ADMIN.getShortName(), TEST_NAMESPACE, Permission.Action.ADMIN); - grantOnNamespace(UTIL, USER_NSP_ADMIN.getShortName(), TEST_NAMESPACE2, Permission.Action.ADMIN); + // grants on global + grantGlobal(UTIL, USER_GLOBAL_ADMIN.getShortName(), Permission.Action.ADMIN); + grantGlobal(UTIL, USER_GLOBAL_CREATE.getShortName(), Permission.Action.CREATE); + grantGlobal(UTIL, 
USER_GLOBAL_WRITE.getShortName(), Permission.Action.WRITE); + grantGlobal(UTIL, USER_GLOBAL_READ.getShortName(), Permission.Action.READ); + grantGlobal(UTIL, USER_GLOBAL_EXEC.getShortName(), Permission.Action.EXEC); + + // grants on namespace + grantOnNamespace(UTIL, USER_NS_ADMIN.getShortName(), TEST_NAMESPACE, Permission.Action.ADMIN); + grantOnNamespace(UTIL, USER_NS_CREATE.getShortName(), TEST_NAMESPACE, Permission.Action.CREATE); + grantOnNamespace(UTIL, USER_NS_WRITE.getShortName(), TEST_NAMESPACE, Permission.Action.WRITE); + grantOnNamespace(UTIL, USER_NS_READ.getShortName(), TEST_NAMESPACE, Permission.Action.READ); + grantOnNamespace(UTIL, USER_NS_EXEC.getShortName(), TEST_NAMESPACE, Permission.Action.EXEC); + + grantOnNamespace(UTIL, USER_NS_ADMIN.getShortName(), TEST_NAMESPACE2, Permission.Action.ADMIN); } @AfterClass @@ -117,15 +159,20 @@ public class TestNamespaceCommands extends SecureTestUtil { String userTestNamespace = "userTestNsp"; Table acl = new HTable(conf, AccessControlLists.ACL_TABLE_NAME); try { + ListMultimap perms = + AccessControlLists.getNamespacePermissions(conf, TEST_NAMESPACE); + + perms = AccessControlLists.getNamespacePermissions(conf, TEST_NAMESPACE); + assertEquals(5, perms.size()); + // Grant and check state in ACL table grantOnNamespace(UTIL, userTestNamespace, TEST_NAMESPACE, Permission.Action.WRITE); Result result = acl.get(new Get(Bytes.toBytes(userTestNamespace))); assertTrue(result != null); - ListMultimap perms = - AccessControlLists.getNamespacePermissions(conf, TEST_NAMESPACE); - assertEquals(3, perms.size()); + perms = AccessControlLists.getNamespacePermissions(conf, TEST_NAMESPACE); + assertEquals(6, perms.size()); List namespacePerms = perms.get(userTestNamespace); assertTrue(perms.containsKey(userTestNamespace)); assertEquals(1, namespacePerms.size()); @@ -141,7 +188,7 @@ public class TestNamespaceCommands extends SecureTestUtil { Permission.Action.WRITE); perms = AccessControlLists.getNamespacePermissions(conf, TEST_NAMESPACE); - assertEquals(2, perms.size()); + assertEquals(5, perms.size()); } finally { acl.close(); } @@ -156,15 +203,28 @@ public class TestNamespaceCommands extends SecureTestUtil { return null; } }; - // verify that superuser or hbase admin can modify namespaces. - verifyAllowed(modifyNamespace, SUPERUSER, USER_NSP_ADMIN); - // all others should be denied - verifyDenied(modifyNamespace, USER_NSP_WRITE, USER_CREATE, USER_RW); + + // modifyNamespace: superuser | global(A) | NS(A) + verifyAllowed(modifyNamespace, + SUPERUSER, + USER_GLOBAL_ADMIN); + + verifyDeniedWithException(modifyNamespace, + USER_GLOBAL_CREATE, + USER_GLOBAL_WRITE, + USER_GLOBAL_READ, + USER_GLOBAL_EXEC, + USER_NS_ADMIN, + USER_NS_CREATE, + USER_NS_WRITE, + USER_NS_READ, + USER_NS_EXEC); } @Test public void testCreateAndDeleteNamespace() throws Exception { AccessTestAction createNamespace = new AccessTestAction() { + @Override public Object run() throws Exception { ACCESS_CONTROLLER.preCreateNamespace(ObserverContext.createAndPrepare(CP_ENV, null), NamespaceDescriptor.create(TEST_NAMESPACE2).build()); @@ -173,6 +233,7 @@ public class TestNamespaceCommands extends SecureTestUtil { }; AccessTestAction deleteNamespace = new AccessTestAction() { + @Override public Object run() throws Exception { ACCESS_CONTROLLER.preDeleteNamespace(ObserverContext.createAndPrepare(CP_ENV, null), TEST_NAMESPACE2); @@ -180,29 +241,71 @@ public class TestNamespaceCommands extends SecureTestUtil { } }; - // verify that only superuser can create namespaces. 
- verifyAllowed(createNamespace, SUPERUSER); - // verify that superuser or hbase admin can delete namespaces. - verifyAllowed(deleteNamespace, SUPERUSER, USER_NSP_ADMIN); + // createNamespace: superuser | global(A) + verifyAllowed(createNamespace, + SUPERUSER, + USER_GLOBAL_ADMIN); // all others should be denied - verifyDenied(createNamespace, USER_NSP_WRITE, USER_CREATE, USER_RW, USER_NSP_ADMIN); - verifyDenied(deleteNamespace, USER_NSP_WRITE, USER_CREATE, USER_RW); + verifyDeniedWithException(createNamespace, + USER_GLOBAL_CREATE, + USER_GLOBAL_WRITE, + USER_GLOBAL_READ, + USER_GLOBAL_EXEC, + USER_NS_ADMIN, + USER_NS_CREATE, + USER_NS_WRITE, + USER_NS_READ, + USER_NS_EXEC, + USER_TABLE_CREATE, + USER_TABLE_WRITE); + + // deleteNamespace: superuser | global(A) | NS(A) + verifyAllowed(deleteNamespace, + SUPERUSER, + USER_GLOBAL_ADMIN); + + verifyDeniedWithException(deleteNamespace, + USER_GLOBAL_CREATE, + USER_GLOBAL_WRITE, + USER_GLOBAL_READ, + USER_GLOBAL_EXEC, + USER_NS_ADMIN, + USER_NS_CREATE, + USER_NS_WRITE, + USER_NS_READ, + USER_NS_EXEC, + USER_TABLE_CREATE, + USER_TABLE_WRITE); } @Test public void testGetNamespaceDescriptor() throws Exception { AccessTestAction getNamespaceAction = new AccessTestAction() { + @Override public Object run() throws Exception { ACCESS_CONTROLLER.preGetNamespaceDescriptor(ObserverContext.createAndPrepare(CP_ENV, null), TEST_NAMESPACE); return null; } }; - // verify that superuser or hbase admin can get the namespace descriptor. - verifyAllowed(getNamespaceAction, SUPERUSER, USER_NSP_ADMIN); - // all others should be denied - verifyDenied(getNamespaceAction, USER_NSP_WRITE, USER_CREATE, USER_RW); + // getNamespaceDescriptor : superuser | global(A) | NS(A) + verifyAllowed(getNamespaceAction, + SUPERUSER, + USER_GLOBAL_ADMIN, + USER_NS_ADMIN); + + verifyDeniedWithException(getNamespaceAction, + USER_GLOBAL_CREATE, + USER_GLOBAL_WRITE, + USER_GLOBAL_READ, + USER_GLOBAL_EXEC, + USER_NS_CREATE, + USER_NS_WRITE, + USER_NS_READ, + USER_NS_EXEC, + USER_TABLE_CREATE, + USER_TABLE_WRITE); } @Test @@ -222,12 +325,30 @@ public class TestNamespaceCommands extends SecureTestUtil { } }; - verifyAllowed(listAction, SUPERUSER, USER_NSP_ADMIN); - verifyDenied(listAction, USER_NSP_WRITE, USER_CREATE, USER_RW); + // listNamespaces : All access* + // * Returned list will only show what you can call getNamespaceDescriptor() + + verifyAllowed(listAction, + SUPERUSER, + USER_GLOBAL_ADMIN, + USER_NS_ADMIN); // we have 3 namespaces: [default, hbase, TEST_NAMESPACE, TEST_NAMESPACE2] assertEquals(4, ((List)SUPERUSER.runAs(listAction)).size()); - assertEquals(2, ((List)USER_NSP_ADMIN.runAs(listAction)).size()); + assertEquals(4, ((List)USER_GLOBAL_ADMIN.runAs(listAction)).size()); + + assertEquals(2, ((List)USER_NS_ADMIN.runAs(listAction)).size()); + + assertEquals(0, ((List)USER_GLOBAL_CREATE.runAs(listAction)).size()); + assertEquals(0, ((List)USER_GLOBAL_WRITE.runAs(listAction)).size()); + assertEquals(0, ((List)USER_GLOBAL_READ.runAs(listAction)).size()); + assertEquals(0, ((List)USER_GLOBAL_EXEC.runAs(listAction)).size()); + assertEquals(0, ((List)USER_NS_CREATE.runAs(listAction)).size()); + assertEquals(0, ((List)USER_NS_WRITE.runAs(listAction)).size()); + assertEquals(0, ((List)USER_NS_READ.runAs(listAction)).size()); + assertEquals(0, ((List)USER_NS_EXEC.runAs(listAction)).size()); + assertEquals(0, ((List)USER_TABLE_CREATE.runAs(listAction)).size()); + assertEquals(0, ((List)USER_TABLE_WRITE.runAs(listAction)).size()); } @Test @@ -237,6 +358,7 @@ public class 
TestNamespaceCommands extends SecureTestUtil { // Test if client API actions are authorized AccessTestAction grantAction = new AccessTestAction() { + @Override public Object run() throws Exception { Table acl = new HTable(conf, AccessControlLists.ACL_TABLE_NAME); try { @@ -284,16 +406,56 @@ public class TestNamespaceCommands extends SecureTestUtil { } }; - // Only HBase super user should be able to grant and revoke permissions to - // namespaces - verifyAllowed(grantAction, SUPERUSER, USER_NSP_ADMIN); - verifyDenied(grantAction, USER_CREATE, USER_RW); - verifyAllowed(revokeAction, SUPERUSER, USER_NSP_ADMIN); - verifyDenied(revokeAction, USER_CREATE, USER_RW); - - // Only an admin should be able to get the user permission - verifyAllowed(revokeAction, SUPERUSER, USER_NSP_ADMIN); - verifyDeniedWithException(revokeAction, USER_CREATE, USER_RW); + verifyAllowed(grantAction, + SUPERUSER, + USER_GLOBAL_ADMIN); + + verifyDeniedWithException(grantAction, + USER_GLOBAL_CREATE, + USER_GLOBAL_WRITE, + USER_GLOBAL_READ, + USER_GLOBAL_EXEC, + USER_NS_ADMIN, + USER_NS_CREATE, + USER_NS_WRITE, + USER_NS_READ, + USER_NS_EXEC, + USER_TABLE_CREATE, + USER_TABLE_WRITE); + + verifyAllowed(revokeAction, + SUPERUSER, + USER_GLOBAL_ADMIN); + + verifyDeniedWithException(revokeAction, + USER_GLOBAL_CREATE, + USER_GLOBAL_WRITE, + USER_GLOBAL_READ, + USER_GLOBAL_EXEC, + USER_NS_ADMIN, + USER_NS_CREATE, + USER_NS_WRITE, + USER_NS_READ, + USER_NS_EXEC, + USER_TABLE_CREATE, + USER_TABLE_WRITE); + + verifyAllowed(getPermissionsAction, + SUPERUSER, + USER_GLOBAL_ADMIN, + USER_NS_ADMIN); + + verifyDeniedWithException(getPermissionsAction, + USER_GLOBAL_CREATE, + USER_GLOBAL_WRITE, + USER_GLOBAL_READ, + USER_GLOBAL_EXEC, + USER_NS_CREATE, + USER_NS_WRITE, + USER_NS_READ, + USER_NS_EXEC, + USER_TABLE_CREATE, + USER_TABLE_WRITE); } @Test @@ -308,10 +470,22 @@ public class TestNamespaceCommands extends SecureTestUtil { } }; - // Only users with create permissions on namespace should be able to create a new table - verifyAllowed(createTable, SUPERUSER, USER_NSP_WRITE); - - // all others should be denied - verifyDenied(createTable, USER_CREATE, USER_RW); + //createTable : superuser | global(C) | NS(C) + verifyAllowed(createTable, + SUPERUSER, + USER_GLOBAL_CREATE, + USER_NS_CREATE); + + verifyDeniedWithException(createTable, + USER_GLOBAL_ADMIN, + USER_GLOBAL_WRITE, + USER_GLOBAL_READ, + USER_GLOBAL_EXEC, + USER_NS_ADMIN, + USER_NS_WRITE, + USER_NS_READ, + USER_NS_EXEC, + USER_TABLE_CREATE, + USER_TABLE_WRITE); } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestScanEarlyTermination.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestScanEarlyTermination.java index b14c706..7b53a37 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestScanEarlyTermination.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestScanEarlyTermination.java @@ -28,7 +28,6 @@ import org.apache.hadoop.hbase.Coprocessor; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableNotFoundException; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HTable; @@ -40,6 +39,8 @@ import org.apache.hadoop.hbase.master.MasterCoprocessorHost; import org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost; import 
org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.security.access.Permission.Action; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.TestTableName; import org.apache.log4j.Level; @@ -52,7 +53,7 @@ import org.junit.Rule; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestScanEarlyTermination extends SecureTestUtil { private static final Log LOG = LogFactory.getLog(TestScanEarlyTermination.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestTablePermissions.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestTablePermissions.java index b732e67..b795127 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestTablePermissions.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestTablePermissions.java @@ -15,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.security.access; import static org.junit.Assert.assertEquals; @@ -43,10 +42,11 @@ import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.exceptions.DeserializationException; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.apache.hadoop.io.Text; @@ -62,7 +62,7 @@ import com.google.common.collect.ListMultimap; /** * Test the reading and writing of access permissions on {@code _acl_} table. */ -@Category(LargeTests.class) +@Category({SecurityTests.class, LargeTests.class}) public class TestTablePermissions { private static final Log LOG = LogFactory.getLog(TestTablePermissions.class); private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestZKPermissionsWatcher.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestZKPermissionsWatcher.java index 669bcab..9c2bc3c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestZKPermissionsWatcher.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestZKPermissionsWatcher.java @@ -15,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ - package org.apache.hadoop.hbase.security.access; import static org.junit.Assert.assertFalse; @@ -31,9 +30,10 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Abortable; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.Waiter.Predicate; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -43,7 +43,7 @@ import org.junit.experimental.categories.Category; /** * Test the reading and writing of access permissions to and from zookeeper. */ -@Category(LargeTests.class) +@Category({SecurityTests.class, LargeTests.class}) public class TestZKPermissionsWatcher { private static final Log LOG = LogFactory.getLog(TestZKPermissionsWatcher.class); private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestAuthenticationKey.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestAuthenticationKey.java index 10fe291..9734159 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestAuthenticationKey.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestAuthenticationKey.java @@ -24,12 +24,13 @@ import java.io.UnsupportedEncodingException; import javax.crypto.SecretKey; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; import org.mockito.Mockito; -@Category(SmallTests.class) +@Category({SecurityTests.class, SmallTests.class}) public class TestAuthenticationKey { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestTokenAuthentication.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestTokenAuthentication.java index e36d6e0..041e112 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestTokenAuthentication.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestTokenAuthentication.java @@ -15,7 +15,6 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ - package org.apache.hadoop.hbase.security.token; import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION; @@ -40,7 +39,6 @@ import org.apache.hadoop.hbase.CoordinatedStateManager; import org.apache.hadoop.hbase.Coprocessor; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.Server; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; @@ -63,6 +61,8 @@ import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.RegionServerServices; import org.apache.hadoop.hbase.security.SecurityInfo; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.util.Sleeper; @@ -92,7 +92,7 @@ import com.google.protobuf.ServiceException; /** * Tests for authentication token creation and usage */ -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestTokenAuthentication { static { // Setting whatever system properties after recommendation from diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestZKSecretWatcher.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestZKSecretWatcher.java index b8a07a4..9552ad3 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestZKSecretWatcher.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/token/TestZKSecretWatcher.java @@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.Abortable; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.AfterClass; @@ -44,7 +45,7 @@ import org.junit.experimental.categories.Category; * Test the synchronization of token authentication master keys through * ZKSecretWatcher */ -@Category(LargeTests.class) +@Category({SecurityTests.class, LargeTests.class}) public class TestZKSecretWatcher { private static Log LOG = LogFactory.getLog(TestZKSecretWatcher.class); private static HBaseTestingUtility TEST_UTIL; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestDefaultScanLabelGeneratorStack.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestDefaultScanLabelGeneratorStack.java index a8ccbc7..2897048 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestDefaultScanLabelGeneratorStack.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestDefaultScanLabelGeneratorStack.java @@ -30,10 +30,10 @@ import org.apache.hadoop.hbase.Cell; import org.apache.hadoop.hbase.CellScanner; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; +import 
org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; @@ -41,6 +41,8 @@ import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -49,7 +51,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.rules.TestName; -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestDefaultScanLabelGeneratorStack { public static final String CONFIDENTIAL = "confidential"; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestEnforcingScanLabelGenerator.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestEnforcingScanLabelGenerator.java index 2284f88..a06f03d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestEnforcingScanLabelGenerator.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestEnforcingScanLabelGenerator.java @@ -27,7 +27,6 @@ import java.security.PrivilegedExceptionAction; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; @@ -37,6 +36,8 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -45,7 +46,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.rules.TestName; -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestEnforcingScanLabelGenerator { public static final String CONFIDENTIAL = "confidential"; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestExpressionExpander.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestExpressionExpander.java index aac1132..e0c0b98 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestExpressionExpander.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestExpressionExpander.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.security.visibility; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.security.visibility.expression.ExpressionNode; import org.apache.hadoop.hbase.security.visibility.expression.LeafExpressionNode; @@ -28,7 +29,7 @@ import org.apache.hadoop.hbase.security.visibility.expression.Operator; 
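// For context, a minimal sketch of how the component categories added throughout this
// patch can be consumed: JUnit 4's stock Categories runner can assemble a suite that
// only runs classes tagged with SecurityTests. The suite name and the member classes
// listed below are illustrative assumptions, not part of this change.
import org.apache.hadoop.hbase.testclassification.SecurityTests;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Categories.class)
@IncludeCategory(SecurityTests.class)
@SuiteClasses({ TestExpressionExpander.class, TestExpressionParser.class })
public class SecuritySmallTestsSuite {
  // Running this suite executes only the listed classes that carry
  // @Category(SecurityTests.class); anything else is filtered out by the runner.
}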
import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({SecurityTests.class, SmallTests.class}) public class TestExpressionExpander { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestExpressionParser.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestExpressionParser.java index a620f8f..7c7f54b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestExpressionParser.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestExpressionParser.java @@ -21,6 +21,7 @@ import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.security.visibility.expression.ExpressionNode; import org.apache.hadoop.hbase.security.visibility.expression.LeafExpressionNode; @@ -29,7 +30,7 @@ import org.apache.hadoop.hbase.security.visibility.expression.Operator; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({SecurityTests.class, SmallTests.class}) public class TestExpressionParser { private ExpressionParser parser = new ExpressionParser(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelReplicationWithExpAsString.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelReplicationWithExpAsString.java index 9241c2c..33583de 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelReplicationWithExpAsString.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelReplicationWithExpAsString.java @@ -36,7 +36,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.client.Get; @@ -48,13 +47,15 @@ import org.apache.hadoop.hbase.codec.KeyValueCodecWithTags; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.security.visibility.VisibilityController.VisibilityReplication; -import org.junit.experimental.categories.Category; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.Before; +import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({ SecurityTests.class, MediumTests.class }) public class TestVisibilityLabelReplicationWithExpAsString extends TestVisibilityLabelsReplication { private static final Log LOG = LogFactory .getLog(TestVisibilityLabelReplicationWithExpAsString.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsOpWithDifferentUsersNoACL.java 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsOpWithDifferentUsersNoACL.java index c0dcf41..2c4955c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsOpWithDifferentUsersNoACL.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsOpWithDifferentUsersNoACL.java @@ -28,10 +28,11 @@ import java.util.List; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos.GetAuthsResponse; import org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos.VisibilityLabelsResponse; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -42,7 +43,7 @@ import org.junit.rules.TestName; import com.google.protobuf.ByteString; -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestVisibilityLabelsOpWithDifferentUsersNoACL { private static final String PRIVATE = "private"; private static final String CONFIDENTIAL = "confidential"; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsReplication.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsReplication.java index 1b1312a..899e63d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsReplication.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsReplication.java @@ -41,7 +41,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.Tag; import org.apache.hadoop.hbase.TagRewriteCell; @@ -66,7 +65,8 @@ import org.apache.hadoop.hbase.regionserver.wal.WALEdit; import org.apache.hadoop.hbase.replication.ReplicationEndpoint; import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.security.visibility.VisibilityController.VisibilityReplication; -import org.junit.experimental.categories.Category; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.wal.WAL.Entry; import org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster; @@ -75,9 +75,10 @@ import org.junit.Assert; import org.junit.Before; import org.junit.Rule; import org.junit.Test; +import org.junit.experimental.categories.Category; import org.junit.rules.TestName; -@Category(MediumTests.class) +@Category({ SecurityTests.class, MediumTests.class }) public class TestVisibilityLabelsReplication { private static final Log LOG = LogFactory.getLog(TestVisibilityLabelsReplication.class); protected static final int NON_VIS_TAG_TYPE = 100; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithACL.java 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithACL.java index e3cf3b0..d4f5d67 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithACL.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithACL.java @@ -30,7 +30,6 @@ import java.util.List; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HTable; @@ -46,6 +45,8 @@ import org.apache.hadoop.hbase.security.access.AccessControlLists; import org.apache.hadoop.hbase.security.access.AccessController; import org.apache.hadoop.hbase.security.access.Permission; import org.apache.hadoop.hbase.security.access.SecureTestUtil; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -56,7 +57,7 @@ import org.junit.rules.TestName; import com.google.protobuf.ByteString; -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestVisibilityLabelsWithACL { private static final String PRIVATE = "private"; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithCustomVisLabService.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithCustomVisLabService.java index 7202c1a..5cc72d2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithCustomVisLabService.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithCustomVisLabService.java @@ -25,16 +25,17 @@ import java.util.List; import java.util.NavigableMap; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestVisibilityLabelsWithCustomVisLabService extends TestVisibilityLabels { @BeforeClass diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDefaultVisLabelService.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDefaultVisLabelService.java index 0ef34b1..dcfed5f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDefaultVisLabelService.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDefaultVisLabelService.java @@ -32,7 +32,6 @@ import java.util.concurrent.atomic.AtomicBoolean; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.HConstants; -import 
org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; @@ -44,6 +43,8 @@ import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.NameBytesPair; import org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos.ListLabelsResponse; import org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos.VisibilityLabelsResponse; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.JVMClusterUtil.RegionServerThread; import org.apache.hadoop.hbase.util.Threads; @@ -51,9 +52,10 @@ import org.junit.Assert; import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; + import com.google.protobuf.ByteString; -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestVisibilityLabelsWithDefaultVisLabelService extends TestVisibilityLabels { final Log LOG = LogFactory.getLog(getClass()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDeletes.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDeletes.java index 6eb272c..5b718f0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDeletes.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDeletes.java @@ -34,7 +34,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Delete; @@ -47,6 +46,8 @@ import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos.VisibilityLabelsResponse; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.After; import org.junit.AfterClass; @@ -59,7 +60,7 @@ import org.junit.rules.TestName; /** * Tests visibility labels with deletes */ -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestVisibilityLabelsWithDeletes { private static final String TOPSECRET = "TOPSECRET"; private static final String PUBLIC = "PUBLIC"; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDistributedLogReplay.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDistributedLogReplay.java index ad6e45a..8c00db4 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDistributedLogReplay.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithDistributedLogReplay.java @@ -20,15 +20,16 @@ package org.apache.hadoop.hbase.security.visibility; import static 
org.apache.hadoop.hbase.security.visibility.VisibilityConstants.LABELS_TABLE_NAME; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.junit.BeforeClass; import org.junit.experimental.categories.Category; /** * Test class that tests the visibility labels with distributed log replay feature ON. */ -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestVisibilityLabelsWithDistributedLogReplay extends TestVisibilityLabelsWithDefaultVisLabelService { @@ -42,8 +43,6 @@ public class TestVisibilityLabelsWithDistributedLogReplay extends conf.setClass(VisibilityUtils.VISIBILITY_LABEL_GENERATOR_CLASS, SimpleScanLabelGenerator.class, ScanLabelGenerator.class); conf.set("hbase.superuser", "admin"); - // Put meta on master to avoid meta server shutdown handling - conf.set("hbase.balancer.tablesOnMaster", "hbase:meta"); TEST_UTIL.startMiniCluster(2); SUPERUSER = User.createUserForTesting(conf, "admin", new String[] { "supergroup" }); USER1 = User.createUserForTesting(conf, "user1", new String[] {}); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithSLGStack.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithSLGStack.java index f782896..371d25a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithSLGStack.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityLabelsWithSLGStack.java @@ -27,7 +27,6 @@ import java.security.PrivilegedExceptionAction; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; @@ -36,6 +35,8 @@ import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos.VisibilityLabelsResponse; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -44,7 +45,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.rules.TestName; -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) public class TestVisibilityLabelsWithSLGStack { public static final String CONFIDENTIAL = "confidential"; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityWithCheckAuths.java hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityWithCheckAuths.java index 1d15be6..828c89b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityWithCheckAuths.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/security/visibility/TestVisibilityWithCheckAuths.java @@ -28,7 +28,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import 
org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Append; @@ -37,6 +36,8 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos.VisibilityLabelsResponse; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.SecurityTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.Assert; @@ -46,7 +47,7 @@ import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.rules.TestName; -@Category(MediumTests.class) +@Category({SecurityTests.class, MediumTests.class}) /** * Test visibility by setting 'hbase.security.visibility.mutations.checkauths' to true */ diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java index 0d87dc2..44f411f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java @@ -42,6 +42,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotEnabledException; import org.apache.hadoop.hbase.Waiter; @@ -424,7 +425,7 @@ public class SnapshotTestingUtils { final SnapshotRegionManifest.StoreFile storeFile) throws IOException { String region = regionInfo.getEncodedName(); String hfile = storeFile.getName(); - HFileLink link = HFileLink.create(conf, table, region, family, hfile); + HFileLink link = HFileLink.build(conf, table, region, family, hfile); if (corruptedFiles.size() % 2 == 0) { fs.delete(link.getAvailablePath(fs), true); corruptedFiles.add(hfile); @@ -481,7 +482,8 @@ public class SnapshotTestingUtils { this.tableRegions = tableRegions; this.snapshotDir = SnapshotDescriptionUtils.getWorkingSnapshotDir(desc, rootDir); new FSTableDescriptors(conf) - .createTableDescriptorForTableDirectory(snapshotDir, htd, false); + .createTableDescriptorForTableDirectory(snapshotDir, + new TableDescriptor(htd), false); } public HTableDescriptor getTableDescriptor() { @@ -575,7 +577,8 @@ public class SnapshotTestingUtils { private RegionData[] createTable(final HTableDescriptor htd, final int nregions) throws IOException { Path tableDir = FSUtils.getTableDir(rootDir, htd.getTableName()); - new FSTableDescriptors(conf).createTableDescriptorForTableDirectory(tableDir, htd, false); + new FSTableDescriptors(conf).createTableDescriptorForTableDirectory(tableDir, + new TableDescriptor(htd), false); assertTrue(nregions % 2 == 0); RegionData[] regions = new RegionData[nregions]; @@ -673,7 +676,7 @@ public class SnapshotTestingUtils { loadData(util, new HTable(util.getConfiguration(), tableName), rows, families); } - public static void loadData(final HBaseTestingUtility util, final Table table, int rows, + public static void loadData(final 
HBaseTestingUtility util, final HTable table, int rows, byte[]... families) throws IOException, InterruptedException { table.setAutoFlushTo(false); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java index 05a3d22..192009b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestExportSnapshot.java @@ -47,6 +47,8 @@ import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescriptio import org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos.SnapshotFileInfo; import org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos.SnapshotRegionManifest; import org.apache.hadoop.hbase.snapshot.SnapshotTestingUtils.SnapshotMock; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; +import org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.Pair; @@ -60,7 +62,7 @@ import org.junit.experimental.categories.Category; /** * Test Export Snapshot Tool */ -@Category(MediumTests.class) +@Category({VerySlowRegionServerTests.class, MediumTests.class}) public class TestExportSnapshot { private final Log LOG = LogFactory.getLog(getClass()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestFlushSnapshotFromClient.java hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestFlushSnapshotFromClient.java index a1f4605..b96fab6 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestFlushSnapshotFromClient.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestFlushSnapshotFromClient.java @@ -31,24 +31,29 @@ import java.util.concurrent.CountDownLatch; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.commons.logging.impl.Log4JLogger; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotFoundException; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.client.ScannerCallable; import org.apache.hadoop.hbase.ipc.AbstractRpcClient; +import org.apache.hadoop.hbase.ipc.RpcServer; import org.apache.hadoop.hbase.master.HMaster; import org.apache.hadoop.hbase.master.snapshot.SnapshotManager; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; import org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; +import org.apache.log4j.Level; import org.junit.After; import org.junit.AfterClass; import org.junit.Before; @@ -64,7 +69,7 @@ import org.junit.experimental.categories.Category; * TODO This is essentially a clone of TestSnapshotFromClient. 
This is worth refactoring this * because there will be a few more flavors of snapshots that need to run these tests. */ -@Category(LargeTests.class) +@Category({RegionServerTests.class, LargeTests.class}) public class TestFlushSnapshotFromClient { private static final Log LOG = LogFactory.getLog(TestFlushSnapshotFromClient.class); private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestRestoreFlushSnapshotFromClient.java hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestRestoreFlushSnapshotFromClient.java index 6b0f5e4..e6bc205 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestRestoreFlushSnapshotFromClient.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestRestoreFlushSnapshotFromClient.java @@ -23,7 +23,6 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HTable; @@ -31,6 +30,8 @@ import org.apache.hadoop.hbase.master.MasterFileSystem; import org.apache.hadoop.hbase.master.snapshot.SnapshotManager; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; import org.apache.hadoop.hbase.regionserver.snapshot.RegionServerSnapshotManager; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.junit.After; @@ -46,7 +47,7 @@ import org.junit.experimental.categories.Category; * TODO This is essentially a clone of TestRestoreSnapshotFromClient. This is worth refactoring * this because there will be a few more flavors of snapshots that need to run these tests. */ -@Category(LargeTests.class) +@Category({RegionServerTests.class, LargeTests.class}) public class TestRestoreFlushSnapshotFromClient { final Log LOG = LogFactory.getLog(getClass()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestRestoreSnapshotHelper.java hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestRestoreSnapshotHelper.java index 8726c59..7309580 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestRestoreSnapshotHelper.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestRestoreSnapshotHelper.java @@ -30,6 +30,7 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher; import org.apache.hadoop.hbase.io.HFileLink; @@ -48,7 +49,7 @@ import org.mockito.Mockito; /** * Test the restore/clone operation from a file-system point of view. 
*/ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestRestoreSnapshotHelper { final Log LOG = LogFactory.getLog(getClass()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSecureExportSnapshot.java hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSecureExportSnapshot.java index b10e090..19d5965 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSecureExportSnapshot.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSecureExportSnapshot.java @@ -19,19 +19,20 @@ */ package org.apache.hadoop.hbase.snapshot; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.mapreduce.HadoopSecurityEnabledUserProviderForTesting; import org.apache.hadoop.hbase.security.UserProvider; import org.apache.hadoop.hbase.security.access.AccessControlLists; import org.apache.hadoop.hbase.security.access.SecureTestUtil; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests; import org.junit.BeforeClass; import org.junit.experimental.categories.Category; /** * Reruns TestExportSnapshot using ExportSnapshot in secure mode. */ -@Category(LargeTests.class) +@Category({VerySlowRegionServerTests.class, LargeTests.class}) public class TestSecureExportSnapshot extends TestExportSnapshot { @BeforeClass public static void setUpBeforeClass() throws Exception { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSnapshotDescriptionUtils.java hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSnapshotDescriptionUtils.java index 7827aad..f55bb2d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSnapshotDescriptionUtils.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/TestSnapshotDescriptionUtils.java @@ -17,7 +17,6 @@ */ package org.apache.hadoop.hbase.snapshot; -import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.fail; @@ -30,8 +29,9 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.EnvironmentEdgeManagerTestHelper; import org.junit.After; import org.junit.BeforeClass; @@ -41,7 +41,7 @@ import org.junit.experimental.categories.Category; /** * Test that the {@link SnapshotDescription} helper is helping correctly. 
*/ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestSnapshotDescriptionUtils { private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); private static FileSystem fs; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/trace/TestHTraceHooks.java hbase-server/src/test/java/org/apache/hadoop/hbase/trace/TestHTraceHooks.java index e34d44d..c5a3d2e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/trace/TestHTraceHooks.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/trace/TestHTraceHooks.java @@ -24,11 +24,12 @@ import static org.junit.Assert.assertTrue; import java.util.Collection; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.Waiter; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.htrace.Sampler; import org.htrace.Span; import org.htrace.Trace; @@ -42,7 +43,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.Multimap; -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestHTraceHooks { private static final byte[] FAMILY_BYTES = "family".getBytes(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedAction.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedAction.java index 0293ea1..5b04ab9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedAction.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedAction.java @@ -32,8 +32,13 @@ import java.util.concurrent.atomic.AtomicLong; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.Cell; +import org.apache.hadoop.hbase.CellUtil; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.HRegionLocation; +import org.apache.hadoop.hbase.RegionLocations; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.ClusterConnection; import org.apache.hadoop.hbase.client.HConnection; import org.apache.hadoop.hbase.client.HConnectionManager; import org.apache.hadoop.hbase.client.Result; @@ -322,10 +327,10 @@ public abstract class MultiThreadedAction { public boolean verifyResultAgainstDataGenerator(Result result, boolean verifyValues, boolean verifyCfAndColumnIntegrity) { String rowKeyStr = Bytes.toString(result.getRow()); - // See if we have any data at all. 
 if (result.isEmpty()) {
       LOG.error("Error checking data for key [" + rowKeyStr + "], no data returned");
+      printLocations(result);
       return false;
     }
@@ -338,6 +343,7 @@ public abstract class MultiThreadedAction {
     if (verifyCfAndColumnIntegrity && (expectedCfs.length != result.getMap().size())) {
       LOG.error("Error checking data for key [" + rowKeyStr + "], bad family count: " + result.getMap().size());
+      printLocations(result);
       return false;
     }
@@ -348,6 +354,7 @@ public abstract class MultiThreadedAction {
       if (columnValues == null) {
         LOG.error("Error checking data for key [" + rowKeyStr + "], no data for family [" + cfStr + "]]");
+        printLocations(result);
         return false;
       }
@@ -356,6 +363,7 @@ public abstract class MultiThreadedAction {
         if (!columnValues.containsKey(MUTATE_INFO)) {
           LOG.error("Error checking data for key [" + rowKeyStr + "], column family [" + cfStr + "], column [" + Bytes.toString(MUTATE_INFO) + "]; value is not found");
+          printLocations(result);
           return false;
         }
@@ -372,6 +380,7 @@ public abstract class MultiThreadedAction {
             if (columnValues.containsKey(column)) {
               LOG.error("Error checking data for key [" + rowKeyStr + "], column family [" + cfStr + "], column [" + mutate.getKey() + "]; should be deleted");
+              printLocations(result);
               return false;
             }
             byte[] hashCodeBytes = Bytes.toBytes(hashCode);
@@ -384,6 +393,7 @@ public abstract class MultiThreadedAction {
           if (!columnValues.containsKey(INCREMENT)) {
             LOG.error("Error checking data for key [" + rowKeyStr + "], column family [" + cfStr + "], column [" + Bytes.toString(INCREMENT) + "]; value is not found");
+            printLocations(result);
             return false;
           }
           long currentValue = Bytes.toLong(columnValues.remove(INCREMENT));
@@ -394,6 +404,7 @@ public abstract class MultiThreadedAction {
           if (extra != 0 && (amount == 0 || extra % amount != 0)) {
             LOG.error("Error checking data for key [" + rowKeyStr + "], column family [" + cfStr + "], column [increment], extra [" + extra + "], amount [" + amount + "]");
+            printLocations(result);
             return false;
           }
           if (amount != 0 && extra != amount) {
@@ -414,6 +425,7 @@ public abstract class MultiThreadedAction {
         }
         LOG.error("Error checking data for key [" + rowKeyStr + "], bad columns for family [" + cfStr + "]: " + colsStr);
+        printLocations(result);
         return false;
       }
       // See if values check out.
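// The failure branches above now all call printLocations(result), so a verification
// miss also records whether the Result was stale and where the row's region replicas
// currently live. The helper itself is added in the next hunk; as a standalone sketch
// under the same assumptions (an open ClusterConnection held in "connection" and the
// target "tableName", as in MultiThreadedAction), the lookup boils down to:
RegionLocations rl =
    ((ClusterConnection) connection).locateRegion(tableName, result.getRow(), true, true);
for (HRegionLocation location : rl.getRegionLocations()) {
  LOG.info("LOCATION " + location);
}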
@@ -461,6 +473,7 @@ public abstract class MultiThreadedAction {
               + column + "]; mutation [" + mutation + "], hashCode [" + hashCode + "], verificationNeeded [" + verificationNeeded + "]");
+          printLocations(result);
           return false;
         }
       } // end of mutation checking
@@ -469,6 +482,7 @@ public abstract class MultiThreadedAction {
         LOG.error("Error checking data for key [" + rowKeyStr + "], column family [" + cfStr + "], column [" + column + "], mutation [" + mutation + "]; value of length " + bytes.length);
+        printLocations(result);
         return false;
       }
     }
@@ -478,6 +492,48 @@ public abstract class MultiThreadedAction {
     return true;
   }
 
+  private void printLocations(Result r) {
+    RegionLocations rl = null;
+    if (r == null) {
+      LOG.info("FAILED FOR null Result");
+      return;
+    }
+    LOG.info("FAILED FOR " + resultToString(r) + " Stale " + r.isStale());
+    if (r.getRow() == null) {
+      return;
+    }
+    try {
+      rl = ((ClusterConnection)connection).locateRegion(tableName, r.getRow(), true, true);
+    } catch (IOException e) {
+      LOG.warn("Couldn't get locations for row " + Bytes.toString(r.getRow()));
+    }
+    HRegionLocation locations[] = rl.getRegionLocations();
+    for (HRegionLocation h : locations) {
+      LOG.info("LOCATION " + h);
+    }
+  }
+
+  private String resultToString(Result result) {
+    StringBuilder sb = new StringBuilder();
+    sb.append("cells=");
+    if(result.isEmpty()) {
+      sb.append("NONE");
+      return sb.toString();
+    }
+    sb.append("{");
+    boolean moreThanOne = false;
+    for(Cell cell : result.listCells()) {
+      if(moreThanOne) {
+        sb.append(", ");
+      } else {
+        moreThanOne = true;
+      }
+      sb.append(CellUtil.toString(cell, true));
+    }
+    sb.append("}");
+    return sb.toString();
+  }
+
   // Parse mutate info into a map of =>
   private Map parseMutateInfo(byte[] mutateInfo) {
     Map mi = new HashMap();
diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/ProcessBasedLocalHBaseCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/ProcessBasedLocalHBaseCluster.java
index a51e532..be7ec79 100644
--- hbase-server/src/test/java/org/apache/hadoop/hbase/util/ProcessBasedLocalHBaseCluster.java
+++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/ProcessBasedLocalHBaseCluster.java
@@ -44,10 +44,11 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;
-import org.apache.hadoop.hbase.testclassification.LargeTests;
 import org.apache.hadoop.hbase.MiniHBaseCluster;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.testclassification.LargeTests;
+import org.apache.hadoop.hbase.testclassification.MiscTests;
 import org.apache.hadoop.hbase.zookeeper.ZKUtil;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.junit.experimental.categories.Category;
@@ -57,7 +58,7 @@ import org.junit.experimental.categories.Category;
  * {@link MiniHBaseCluster}, starts daemons as separate processes, allowing to
  * do real kill testing.
*/ -@Category(LargeTests.class) +@Category({MiscTests.class, LargeTests.class}) public class ProcessBasedLocalHBaseCluster { private final String hbaseHome, workDir; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBoundedPriorityBlockingQueue.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBoundedPriorityBlockingQueue.java index 57d4a02..34c4ec0 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBoundedPriorityBlockingQueue.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestBoundedPriorityBlockingQueue.java @@ -30,6 +30,7 @@ import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.TimeUnit; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.After; @@ -37,7 +38,7 @@ import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestBoundedPriorityBlockingQueue { private final static int CAPACITY = 16; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestByteBloomFilter.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestByteBloomFilter.java index 8f7e633..21d7490 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestByteBloomFilter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestByteBloomFilter.java @@ -24,10 +24,11 @@ import java.io.DataOutputStream; import java.nio.ByteBuffer; import junit.framework.TestCase; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestByteBloomFilter extends TestCase { public void testBasicBloom() throws Exception { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java index c17f2ac..8a48d32 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestByteBufferUtils.java @@ -34,13 +34,14 @@ import java.util.Set; import java.util.SortedSet; import java.util.TreeSet; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.io.WritableUtils; import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestByteBufferUtils { private byte[] array; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java index b65337e..c5bd284 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCompressionTest.java @@ -22,6 +22,7 @@ package org.apache.hadoop.hbase.util; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; 
import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.io.DataOutputBuffer; @@ -38,7 +39,7 @@ import java.io.IOException; import static org.junit.Assert.*; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestCompressionTest { static final Log LOG = LogFactory.getLog(TestCompressionTest.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java index 2f99cd5..034d6bc 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java @@ -40,7 +40,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.KeyValueUtil; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.IsolationLevel; @@ -61,6 +60,8 @@ import org.apache.hadoop.hbase.regionserver.Store; import org.apache.hadoop.hbase.regionserver.ScanInfo; import org.apache.hadoop.hbase.regionserver.StoreScanner; import org.apache.hadoop.hbase.regionserver.wal.WALEdit; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; @@ -70,7 +71,7 @@ import org.junit.runner.RunWith; import org.junit.runners.Parameterized; import org.junit.runners.Parameterized.Parameters; -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) @RunWith(Parameterized.class) public class TestCoprocessorScanPolicy { final Log LOG = LogFactory.getLog(getClass()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestDefaultEnvironmentEdge.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestDefaultEnvironmentEdge.java index bc19af0..3cb1f18 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestDefaultEnvironmentEdge.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestDefaultEnvironmentEdge.java @@ -19,6 +19,7 @@ package org.apache.hadoop.hbase.util; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -29,7 +30,7 @@ import static junit.framework.Assert.fail; * Tests to make sure that the default environment edge conforms to appropriate * behaviour. 
*/ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestDefaultEnvironmentEdge { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestEncryptionTest.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestEncryptionTest.java index f42bb2e..d615a29 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestEncryptionTest.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestEncryptionTest.java @@ -31,11 +31,12 @@ import org.apache.hadoop.hbase.io.crypto.CipherProvider; import org.apache.hadoop.hbase.io.crypto.DefaultCipherProvider; import org.apache.hadoop.hbase.io.crypto.KeyProvider; import org.apache.hadoop.hbase.io.crypto.KeyProviderForTesting; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestEncryptionTest { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java index 0f7f504..ea19ea7 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSHDFSUtils.java @@ -29,6 +29,7 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hdfs.DistributedFileSystem; import org.junit.Before; import org.junit.Test; @@ -38,7 +39,7 @@ import org.mockito.Mockito; /** * Test our recoverLease loop against mocked up filesystem. 
*/ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestFSHDFSUtils { private static final Log LOG = LogFactory.getLog(TestFSHDFSUtils.class); private static final HBaseTestingUtility HTU = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSTableDescriptors.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSTableDescriptors.java index df01d71..a99daf2 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSTableDescriptors.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSTableDescriptors.java @@ -30,31 +30,36 @@ import java.util.Arrays; import java.util.Comparator; import java.util.Map; +import org.apache.hadoop.fs.FSDataInputStream; +import org.apache.hadoop.fs.FSDataOutputStream; +import org.apache.hadoop.hbase.client.TableState; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableDescriptors; import org.apache.hadoop.hbase.TableExistsException; +import org.apache.hadoop.hbase.exceptions.DeserializationException; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.junit.Test; import org.junit.experimental.categories.Category; - /** * Tests for {@link FSTableDescriptors}. 
*/ // Do not support to be executed in he same JVM as other tests -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestFSTableDescriptors { private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); - private static final Log LOG = LogFactory.getLog(TestFSTableDescriptors.class); @Test (expected=IllegalArgumentException.class) @@ -71,14 +76,15 @@ public class TestFSTableDescriptors { public void testCreateAndUpdate() throws IOException { Path testdir = UTIL.getDataTestDir("testCreateAndUpdate"); HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("testCreate")); + TableDescriptor td = new TableDescriptor(htd, TableState.State.ENABLED); FileSystem fs = FileSystem.get(UTIL.getConfiguration()); FSTableDescriptors fstd = new FSTableDescriptors(UTIL.getConfiguration(), fs, testdir); - assertTrue(fstd.createTableDescriptor(htd)); - assertFalse(fstd.createTableDescriptor(htd)); - FileStatus[] statuses = fs.listStatus(testdir); - assertTrue("statuses.length=" + statuses.length, statuses.length == 1); + assertTrue(fstd.createTableDescriptor(td)); + assertFalse(fstd.createTableDescriptor(td)); + FileStatus [] statuses = fs.listStatus(testdir); + assertTrue("statuses.length="+statuses.length, statuses.length == 1); for (int i = 0; i < 10; i++) { - fstd.updateTableDescriptor(htd); + fstd.updateTableDescriptor(td); } statuses = fs.listStatus(testdir); assertTrue(statuses.length == 1); @@ -92,20 +98,29 @@ public class TestFSTableDescriptors { Path testdir = UTIL.getDataTestDir("testSequenceidAdvancesOnTableInfo"); HTableDescriptor htd = new HTableDescriptor( TableName.valueOf("testSequenceidAdvancesOnTableInfo")); + TableDescriptor td = new TableDescriptor(htd); FileSystem fs = FileSystem.get(UTIL.getConfiguration()); FSTableDescriptors fstd = new FSTableDescriptors(UTIL.getConfiguration(), fs, testdir); - Path p0 = fstd.updateTableDescriptor(htd); + Path p0 = fstd.updateTableDescriptor(td); int i0 = FSTableDescriptors.getTableInfoSequenceId(p0); - Path p1 = fstd.updateTableDescriptor(htd); + Path p1 = fstd.updateTableDescriptor(td); // Assert we cleaned up the old file. assertTrue(!fs.exists(p0)); int i1 = FSTableDescriptors.getTableInfoSequenceId(p1); assertTrue(i1 == i0 + 1); - Path p2 = fstd.updateTableDescriptor(htd); + Path p2 = fstd.updateTableDescriptor(td); // Assert we cleaned up the old file. assertTrue(!fs.exists(p1)); int i2 = FSTableDescriptors.getTableInfoSequenceId(p2); assertTrue(i2 == i1 + 1); + td = new TableDescriptor(htd, TableState.State.DISABLED); + Path p3 = fstd.updateTableDescriptor(td); + // Assert we cleaned up the old file. + assertTrue(!fs.exists(p2)); + int i3 = FSTableDescriptors.getTableInfoSequenceId(p3); + assertTrue(i3 == i2 + 1); + TableDescriptor descriptor = fstd.getDescriptor(htd.getTableName()); + assertEquals(descriptor, td); } @Test @@ -116,7 +131,8 @@ public class TestFSTableDescriptors { for (int i = 0; i < FSTableDescriptors.WIDTH_OF_SEQUENCE_ID; i++) { sb.append("0"); } - assertEquals(FSTableDescriptors.TABLEINFO_FILE_PREFIX + "." + sb.toString(), p0.getName()); + assertEquals(FSTableDescriptors.TABLEINFO_FILE_PREFIX + "." + sb.toString(), + p0.getName()); // Check a few more. 
Path p2 = assertWriteAndReadSequenceId(2); Path p10000 = assertWriteAndReadSequenceId(10000); @@ -152,68 +168,98 @@ public class TestFSTableDescriptors { assertNull(htds.remove(htd.getTableName())); } - @Test - public void testReadingHTDFromFS() - throws IOException { + @Test public void testReadingHTDFromFS() throws IOException { final String name = "testReadingHTDFromFS"; FileSystem fs = FileSystem.get(UTIL.getConfiguration()); HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(name)); + TableDescriptor td = new TableDescriptor(htd, TableState.State.ENABLED); Path rootdir = UTIL.getDataTestDir(name); FSTableDescriptors fstd = new FSTableDescriptors(UTIL.getConfiguration(), fs, rootdir); - fstd.createTableDescriptor(htd); - HTableDescriptor htd2 = + fstd.createTableDescriptor(td); + TableDescriptor td2 = FSTableDescriptors.getTableDescriptorFromFs(fs, rootdir, htd.getTableName()); - assertTrue(htd.equals(htd2)); + assertTrue(td.equals(td2)); } - @Test - public void testHTableDescriptors() + @Test public void testReadingOldHTDFromFS() throws IOException, DeserializationException { + final String name = "testReadingOldHTDFromFS"; + FileSystem fs = FileSystem.get(UTIL.getConfiguration()); + Path rootdir = UTIL.getDataTestDir(name); + FSTableDescriptors fstd = new FSTableDescriptors(UTIL.getConfiguration(), fs, rootdir); + HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(name)); + TableDescriptor td = new TableDescriptor(htd, TableState.State.ENABLED); + Path descriptorFile = fstd.updateTableDescriptor(td); + try (FSDataOutputStream out = fs.create(descriptorFile, true)) { + out.write(htd.toByteArray()); + } + FSTableDescriptors fstd2 = new FSTableDescriptors(UTIL.getConfiguration(), fs, rootdir); + TableDescriptor td2 = fstd2.getDescriptor(htd.getTableName()); + assertEquals(td, td2); + FileStatus descriptorFile2 = + FSTableDescriptors.getTableInfoPath(fs, fstd2.getTableDir(htd.getTableName())); + byte[] buffer = td.toByteArray(); + try (FSDataInputStream in = fs.open(descriptorFile2.getPath())) { + in.readFully(buffer); + } + TableDescriptor td3 = TableDescriptor.parseFrom(buffer); + assertEquals(td, td3); + } + + @Test public void testHTableDescriptors() throws IOException, InterruptedException { final String name = "testHTableDescriptors"; FileSystem fs = FileSystem.get(UTIL.getConfiguration()); // Cleanup old tests if any debris laying around. Path rootdir = new Path(UTIL.getDataTestDir(), name); - FSTableDescriptors htds = new FSTableDescriptorsTest(fs, rootdir); + FSTableDescriptors htds = new FSTableDescriptors(UTIL.getConfiguration(), fs, rootdir) { + @Override + public HTableDescriptor get(TableName tablename) + throws TableExistsException, FileNotFoundException, IOException { + LOG.info(tablename + ", cachehits=" + this.cachehits); + return super.get(tablename); + } + }; final int count = 10; // Write out table infos. 
for (int i = 0; i < count; i++) { - HTableDescriptor htd = new HTableDescriptor(name + i); + TableDescriptor htd = new TableDescriptor(new HTableDescriptor(name + i), + TableState.State.ENABLED); htds.createTableDescriptor(htd); } for (int i = 0; i < count; i++) { - assertTrue(htds.get(TableName.valueOf(name + i)) != null); + assertTrue(htds.get(TableName.valueOf(name + i)) != null); } for (int i = 0; i < count; i++) { - assertTrue(htds.get(TableName.valueOf(name + i)) != null); + assertTrue(htds.get(TableName.valueOf(name + i)) != null); } // Update the table infos for (int i = 0; i < count; i++) { HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(name + i)); htd.addFamily(new HColumnDescriptor("" + i)); - htds.updateTableDescriptor(htd); + htds.updateTableDescriptor(new TableDescriptor(htd)); } // Wait a while so mod time we write is for sure different. Thread.sleep(100); for (int i = 0; i < count; i++) { - assertTrue(htds.get(TableName.valueOf(name + i)) != null); + assertTrue(htds.get(TableName.valueOf(name + i)) != null); } for (int i = 0; i < count; i++) { - assertTrue(htds.get(TableName.valueOf(name + i)) != null); + assertTrue(htds.get(TableName.valueOf(name + i)) != null); } assertEquals(count * 4, htds.invocations); assertTrue("expected=" + (count * 2) + ", actual=" + htds.cachehits, - htds.cachehits >= (count * 2)); + htds.cachehits >= (count * 2)); } @Test public void testHTableDescriptorsNoCache() - throws IOException, InterruptedException { + throws IOException, InterruptedException { final String name = "testHTableDescriptorsNoCache"; FileSystem fs = FileSystem.get(UTIL.getConfiguration()); // Cleanup old tests if any debris laying around. Path rootdir = new Path(UTIL.getDataTestDir(), name); - FSTableDescriptors htds = new FSTableDescriptors(UTIL.getConfiguration(), fs, rootdir, + FSTableDescriptors htds = new FSTableDescriptorsTest(UTIL.getConfiguration(), fs, rootdir, false, false); final int count = 10; // Write out table infos. @@ -222,38 +268,32 @@ public class TestFSTableDescriptors { htds.createTableDescriptor(htd); } - for (int i = 0; i < count; i++) { - assertTrue(htds.get(TableName.valueOf(name + i)) != null); - } - for (int i = 0; i < count; i++) { - assertTrue(htds.get(TableName.valueOf(name + i)) != null); + for (int i = 0; i < 2 * count; i++) { + assertNotNull("Expected HTD, got null instead", htds.get(TableName.valueOf(name + i % 2))); } // Update the table infos for (int i = 0; i < count; i++) { HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(name + i)); htd.addFamily(new HColumnDescriptor("" + i)); - htds.updateTableDescriptor(htd); - } - // Wait a while so mod time we write is for sure different. 
- Thread.sleep(100); - for (int i = 0; i < count; i++) { - assertTrue(htds.get(TableName.valueOf(name + i)) != null); + htds.updateTableDescriptor(new TableDescriptor(htd)); } for (int i = 0; i < count; i++) { - assertTrue(htds.get(TableName.valueOf(name + i)) != null); + assertNotNull("Expected HTD, got null instead", htds.get(TableName.valueOf(name + i))); + assertTrue("Column Family " + i + " missing", + htds.get(TableName.valueOf(name + i)).hasFamily(Bytes.toBytes("" + i))); } assertEquals(count * 4, htds.invocations); - assertTrue("expected=0, actual=" + htds.cachehits, htds.cachehits == 0); + assertEquals("expected=0, actual=" + htds.cachehits, 0, htds.cachehits); } @Test public void testGetAll() - throws IOException, InterruptedException { + throws IOException, InterruptedException { final String name = "testGetAll"; FileSystem fs = FileSystem.get(UTIL.getConfiguration()); // Cleanup old tests if any debris laying around. Path rootdir = new Path(UTIL.getDataTestDir(), name); - FSTableDescriptors htds = new FSTableDescriptorsTest(fs, rootdir); + FSTableDescriptors htds = new FSTableDescriptorsTest(UTIL.getConfiguration(), fs, rootdir); final int count = 4; // Write out table infos. for (int i = 0; i < count; i++) { @@ -264,19 +304,21 @@ public class TestFSTableDescriptors { HTableDescriptor htd = new HTableDescriptor(HTableDescriptor.META_TABLEDESC.getTableName()); htds.createTableDescriptor(htd); - assertTrue(htds.getAll().size() == count + 1); + assertEquals("getAll() didn't return all TableDescriptors, expected: " + + (count + 1) + " got: " + htds.getAll().size(), + count + 1, htds.getAll().size()); } @Test public void testCacheConsistency() - throws IOException, InterruptedException { + throws IOException, InterruptedException { final String name = "testCacheConsistency"; FileSystem fs = FileSystem.get(UTIL.getConfiguration()); // Cleanup old tests if any debris laying around. Path rootdir = new Path(UTIL.getDataTestDir(), name); - FSTableDescriptors chtds = new FSTableDescriptorsTest(fs, rootdir); - FSTableDescriptors nonchtds = new FSTableDescriptorsTest(fs, + FSTableDescriptors chtds = new FSTableDescriptorsTest(UTIL.getConfiguration(), fs, rootdir); + FSTableDescriptors nonchtds = new FSTableDescriptorsTest(UTIL.getConfiguration(), fs, rootdir, false, false); final int count = 10; @@ -288,7 +330,7 @@ public class TestFSTableDescriptors { // Calls to getAll() won't increase the cache counter, do per table. for (int i = 0; i < count; i++) { - assertTrue(chtds.get(TableName.valueOf(name + i)) != null); + assertTrue(chtds.get(TableName.valueOf(name + i)) != null); } assertTrue(nonchtds.getAll().size() == chtds.getAll().size()); @@ -300,12 +342,12 @@ public class TestFSTableDescriptors { // hbase:meta will only increase the cachehit by 1 assertTrue(nonchtds.getAll().size() == chtds.getAll().size()); - for (Map.Entry entry : nonchtds.getAll().entrySet()) { + for (Map.Entry entry: nonchtds.getAll().entrySet()) { String t = (String) entry.getKey(); HTableDescriptor nchtd = (HTableDescriptor) entry.getValue(); assertTrue("expected " + htd.toString() + - " got: " + - chtds.get(TableName.valueOf(t)).toString(), (nchtd.equals(chtds.get(TableName.valueOf(t))))); + " got: " + chtds.get(TableName.valueOf(t)).toString(), + (nchtd.equals(chtds.get(TableName.valueOf(t))))); } } @@ -316,7 +358,8 @@ public class TestFSTableDescriptors { // Cleanup old tests if any detrius laying around. 
Path rootdir = new Path(UTIL.getDataTestDir(), name); TableDescriptors htds = new FSTableDescriptors(UTIL.getConfiguration(), fs, rootdir); - assertNull("There shouldn't be any HTD for this table", htds.get(TableName.valueOf("NoSuchTable"))); + assertNull("There shouldn't be any HTD for this table", + htds.get(TableName.valueOf("NoSuchTable"))); } @Test @@ -334,18 +377,18 @@ public class TestFSTableDescriptors { @Test public void testTableInfoFileStatusComparator() { - FileStatus bare = new FileStatus( - 0, false, 0, 0, -1, + FileStatus bare = + new FileStatus(0, false, 0, 0, -1, new Path("/tmp", FSTableDescriptors.TABLEINFO_FILE_PREFIX)); - FileStatus future = new FileStatus( - 0, false, 0, 0, -1, + FileStatus future = + new FileStatus(0, false, 0, 0, -1, new Path("/tmp/tablinfo." + System.currentTimeMillis())); - FileStatus farFuture = new FileStatus( - 0, false, 0, 0, -1, + FileStatus farFuture = + new FileStatus(0, false, 0, 0, -1, new Path("/tmp/tablinfo." + System.currentTimeMillis() + 1000)); - FileStatus[] alist = {bare, future, farFuture}; - FileStatus[] blist = {bare, farFuture, future}; - FileStatus[] clist = {farFuture, bare, future}; + FileStatus [] alist = {bare, future, farFuture}; + FileStatus [] blist = {bare, farFuture, future}; + FileStatus [] clist = {farFuture, bare, future}; Comparator c = FSTableDescriptors.TABLEINFO_FILESTATUS_COMPARATOR; Arrays.sort(alist, c); Arrays.sort(blist, c); @@ -354,7 +397,7 @@ public class TestFSTableDescriptors { for (int i = 0; i < alist.length; i++) { assertTrue(alist[i].equals(blist[i])); assertTrue(blist[i].equals(clist[i])); - assertTrue(clist[i].equals(i == 0 ? farFuture : i == 1 ? future : bare)); + assertTrue(clist[i].equals(i == 0? farFuture: i == 1? future: bare)); } } @@ -362,9 +405,8 @@ public class TestFSTableDescriptors { public void testReadingInvalidDirectoryFromFS() throws IOException { FileSystem fs = FileSystem.get(UTIL.getConfiguration()); try { - // .tmp dir is an invalid table name new FSTableDescriptors(UTIL.getConfiguration(), fs, - FSUtils.getRootDir(UTIL.getConfiguration())) + FSUtils.getRootDir(UTIL.getConfiguration())) .get(TableName.valueOf(HConstants.HBASE_TEMP_DIRECTORY)); fail("Shouldn't be able to read a table descriptor for the archive directory."); } catch (Exception e) { @@ -378,38 +420,38 @@ public class TestFSTableDescriptors { Path testdir = UTIL.getDataTestDir("testCreateTableDescriptorUpdatesIfThereExistsAlready"); HTableDescriptor htd = new HTableDescriptor(TableName.valueOf( "testCreateTableDescriptorUpdatesIfThereExistsAlready")); + TableDescriptor td = new TableDescriptor(htd, TableState.State.ENABLED); FileSystem fs = FileSystem.get(UTIL.getConfiguration()); FSTableDescriptors fstd = new FSTableDescriptors(UTIL.getConfiguration(), fs, testdir); - assertTrue(fstd.createTableDescriptor(htd)); - assertFalse(fstd.createTableDescriptor(htd)); + assertTrue(fstd.createTableDescriptor(td)); + assertFalse(fstd.createTableDescriptor(td)); htd.setValue(Bytes.toBytes("mykey"), Bytes.toBytes("myValue")); - assertTrue(fstd.createTableDescriptor(htd)); //this will re-create + assertTrue(fstd.createTableDescriptor(td)); //this will re-create Path tableDir = fstd.getTableDir(htd.getTableName()); Path tmpTableDir = new Path(tableDir, FSTableDescriptors.TMP_DIR); FileStatus[] statuses = fs.listStatus(tmpTableDir); assertTrue(statuses.length == 0); - assertEquals(htd, FSTableDescriptors.getTableDescriptorFromFs(fs, tableDir)); + assertEquals(td, FSTableDescriptors.getTableDescriptorFromFs(fs, tableDir)); } - 
private static class FSTableDescriptorsTest - extends FSTableDescriptors { + private static class FSTableDescriptorsTest extends FSTableDescriptors { - public FSTableDescriptorsTest(FileSystem fs, Path rootdir) - throws IOException { - this(fs, rootdir, false, true); + public FSTableDescriptorsTest(Configuration conf, FileSystem fs, Path rootdir) + throws IOException { + this(conf, fs, rootdir, false, true); } - public FSTableDescriptorsTest(FileSystem fs, Path rootdir, boolean fsreadonly, boolean usecache) - throws IOException { - super(UTIL.getConfiguration(), fs, rootdir, fsreadonly, usecache); + public FSTableDescriptorsTest(Configuration conf, FileSystem fs, Path rootdir, + boolean fsreadonly, boolean usecache) throws IOException { + super(conf, fs, rootdir, fsreadonly, usecache); } @Override public HTableDescriptor get(TableName tablename) - throws TableExistsException, FileNotFoundException, IOException { + throws TableExistsException, FileNotFoundException, IOException { LOG.info((super.isUsecache() ? "Cached" : "Non-Cached") + - " HTableDescriptor.get() on " + tablename + ", cachehits=" + this.cachehits); + " HTableDescriptor.get() on " + tablename + ", cachehits=" + this.cachehits); return super.get(tablename); } } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java index 51784e9..c8b2285 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSUtils.java @@ -25,11 +25,14 @@ import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertNull; import static org.junit.Assert.assertTrue; +import java.io.DataOutputStream; import java.io.File; import java.io.IOException; +import java.util.Random; import java.util.UUID; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FSDataInputStream; import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; @@ -39,8 +42,12 @@ import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HDFSBlocksDistribution; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.exceptions.DeserializationException; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hdfs.DFSConfigKeys; +import org.apache.hadoop.hdfs.DFSHedgedReadMetrics; +import org.apache.hadoop.hdfs.DFSTestUtil; import org.apache.hadoop.hdfs.MiniDFSCluster; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -48,7 +55,7 @@ import org.junit.experimental.categories.Category; /** * Test {@link FSUtils}. */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestFSUtils { /** * Test path compare and prefix checking. @@ -334,4 +341,153 @@ public class TestFSUtils { assertEquals(expect, fs.getFileStatus(dst).getModificationTime()); cluster.shutdown(); } -} + + /** + * Ugly test that ensures we can get at the hedged read counters in dfsclient. + * Does a bit of preading with hedged reads enabled using code taken from hdfs TestPread. 
+ * @throws Exception + */ + @Test public void testDFSHedgedReadMetrics() throws Exception { + HBaseTestingUtility htu = new HBaseTestingUtility(); + // Enable hedged reads and set it so the threshold is really low. + // Most of this test is taken from HDFS, from TestPread. + Configuration conf = htu.getConfiguration(); + conf.setInt(DFSConfigKeys.DFS_DFSCLIENT_HEDGED_READ_THREADPOOL_SIZE, 5); + conf.setLong(DFSConfigKeys.DFS_DFSCLIENT_HEDGED_READ_THRESHOLD_MILLIS, 0); + conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 4096); + conf.setLong(DFSConfigKeys.DFS_CLIENT_READ_PREFETCH_SIZE_KEY, 4096); + // Set short retry timeouts so this test runs faster + conf.setInt(DFSConfigKeys.DFS_CLIENT_RETRY_WINDOW_BASE, 0); + conf.setBoolean("dfs.datanode.transferTo.allowed", false); + MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build(); + // Get the metrics. Should be empty. + DFSHedgedReadMetrics metrics = FSUtils.getDFSHedgedReadMetrics(conf); + assertEquals(0, metrics.getHedgedReadOps()); + FileSystem fileSys = cluster.getFileSystem(); + try { + Path p = new Path("preadtest.dat"); + // We need > 1 blocks to test out the hedged reads. + DFSTestUtil.createFile(fileSys, p, 12 * blockSize, 12 * blockSize, + blockSize, (short) 3, seed); + pReadFile(fileSys, p); + cleanupFile(fileSys, p); + assertTrue(metrics.getHedgedReadOps() > 0); + } finally { + fileSys.close(); + cluster.shutdown(); + } + } + + // Below is taken from TestPread over in HDFS. + static final int blockSize = 4096; + static final long seed = 0xDEADBEEFL; + + private void pReadFile(FileSystem fileSys, Path name) throws IOException { + FSDataInputStream stm = fileSys.open(name); + byte[] expected = new byte[12 * blockSize]; + Random rand = new Random(seed); + rand.nextBytes(expected); + // do a sanity check. 
Read first 4K bytes + byte[] actual = new byte[4096]; + stm.readFully(actual); + checkAndEraseData(actual, 0, expected, "Read Sanity Test"); + // now do a pread for the first 8K bytes + actual = new byte[8192]; + doPread(stm, 0L, actual, 0, 8192); + checkAndEraseData(actual, 0, expected, "Pread Test 1"); + // Now check to see if the normal read returns 4K-8K byte range + actual = new byte[4096]; + stm.readFully(actual); + checkAndEraseData(actual, 4096, expected, "Pread Test 2"); + // Now see if we can cross a single block boundary successfully + // read 4K bytes from blockSize - 2K offset + stm.readFully(blockSize - 2048, actual, 0, 4096); + checkAndEraseData(actual, (blockSize - 2048), expected, "Pread Test 3"); + // now see if we can cross two block boundaries successfully + // read blockSize + 4K bytes from blockSize - 2K offset + actual = new byte[blockSize + 4096]; + stm.readFully(blockSize - 2048, actual); + checkAndEraseData(actual, (blockSize - 2048), expected, "Pread Test 4"); + // now see if we can cross two block boundaries that are not cached + // read blockSize + 4K bytes from 10*blockSize - 2K offset + actual = new byte[blockSize + 4096]; + stm.readFully(10 * blockSize - 2048, actual); + checkAndEraseData(actual, (10 * blockSize - 2048), expected, "Pread Test 5"); + // now check that even after all these preads, we can still read + // bytes 8K-12K + actual = new byte[4096]; + stm.readFully(actual); + checkAndEraseData(actual, 8192, expected, "Pread Test 6"); + // done + stm.close(); + // check block location caching + stm = fileSys.open(name); + stm.readFully(1, actual, 0, 4096); + stm.readFully(4*blockSize, actual, 0, 4096); + stm.readFully(7*blockSize, actual, 0, 4096); + actual = new byte[3*4096]; + stm.readFully(0*blockSize, actual, 0, 3*4096); + checkAndEraseData(actual, 0, expected, "Pread Test 7"); + actual = new byte[8*4096]; + stm.readFully(3*blockSize, actual, 0, 8*4096); + checkAndEraseData(actual, 3*blockSize, expected, "Pread Test 8"); + // read the tail + stm.readFully(11*blockSize+blockSize/2, actual, 0, blockSize/2); + IOException res = null; + try { // read beyond the end of the file + stm.readFully(11*blockSize+blockSize/2, actual, 0, blockSize); + } catch (IOException e) { + // should throw an exception + res = e; + } + assertTrue("Error reading beyond file boundary.", res != null); + + stm.close(); + } + + private void checkAndEraseData(byte[] actual, int from, byte[] expected, String message) { + for (int idx = 0; idx < actual.length; idx++) { + assertEquals(message+" byte "+(from+idx)+" differs. expected "+ + expected[from+idx]+" actual "+actual[idx], + actual[idx], expected[from+idx]); + actual[idx] = 0; + } + } + + private void doPread(FSDataInputStream stm, long position, byte[] buffer, + int offset, int length) throws IOException { + int nread = 0; + // long totalRead = 0; + // DFSInputStream dfstm = null; + + /* Disable. These counts do not add up. Some issue in original hdfs tests? + if (stm.getWrappedStream() instanceof DFSInputStream) { + dfstm = (DFSInputStream) (stm.getWrappedStream()); + totalRead = dfstm.getReadStatistics().getTotalBytesRead(); + } */ + + while (nread < length) { + int nbytes = + stm.read(position + nread, buffer, offset + nread, length - nread); + assertTrue("Error in pread", nbytes > 0); + nread += nbytes; + } + + /* Disable. These counts do not add up. Some issue in original hdfs tests? 
+ if (dfstm != null) { + if (isHedgedRead) { + assertTrue("Expected read statistic to be incremented", + length <= dfstm.getReadStatistics().getTotalBytesRead() - totalRead); + } else { + assertEquals("Expected read statistic to be incremented", length, dfstm + .getReadStatistics().getTotalBytesRead() - totalRead); + } + }*/ + } + + private void cleanupFile(FileSystem fileSys, Path name) throws IOException { + assertTrue(fileSys.exists(name)); + assertTrue(fileSys.delete(name, true)); + assertTrue(!fileSys.exists(name)); + } +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSVisitor.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSVisitor.java index 495caee..d1516ca 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSVisitor.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestFSVisitor.java @@ -19,10 +19,6 @@ package org.apache.hadoop.hbase.util; import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertTrue; import java.io.IOException; import java.util.UUID; @@ -36,15 +32,16 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.wal.WALSplitter; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.junit.*; import org.junit.experimental.categories.Category; /** * Test {@link FSUtils}. */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestFSVisitor { final Log LOG = LogFactory.getLog(getClass()); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java index d2d7928..e13d7d4 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java @@ -30,12 +30,15 @@ import static org.junit.Assert.fail; import java.io.IOException; import java.util.ArrayList; +import java.util.Arrays; import java.util.Collection; import java.util.HashMap; +import java.util.HashSet; import java.util.LinkedList; import java.util.List; import java.util.Map; -import java.util.Map.Entry; +import java.util.NavigableMap; +import java.util.Set; import java.util.concurrent.Callable; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutorService; @@ -61,12 +64,12 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.MiniHBaseCluster; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.MetaTableAccessor; import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.client.ClusterConnection; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; import org.apache.hadoop.hbase.client.Delete; @@ -74,10 +77,10 @@ import 
org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.client.HConnection; -import org.apache.hadoop.hbase.client.HConnectionManager; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.MetaScanner; import org.apache.hadoop.hbase.client.Put; +import org.apache.hadoop.hbase.client.RegionReplicaUtil; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; @@ -94,6 +97,8 @@ import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.HRegionFileSystem; import org.apache.hadoop.hbase.regionserver.HRegionServer; import org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.HBaseFsck.ErrorReporter; import org.apache.hadoop.hbase.util.HBaseFsck.ErrorReporter.ERROR_CODE; import org.apache.hadoop.hbase.util.HBaseFsck.HbckInfo; @@ -116,8 +121,10 @@ import com.google.common.collect.Multimap; /** * This tests HBaseFsck's ability to detect reasons for inconsistent tables. */ -@Category(LargeTests.class) +@Category({MiscTests.class, LargeTests.class}) public class TestHBaseFsck { + static final int POOL_SIZE = 7; + final static Log LOG = LogFactory.getLog(TestHBaseFsck.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private final static Configuration conf = TEST_UTIL.getConfiguration(); @@ -125,7 +132,10 @@ public class TestHBaseFsck { private final static byte[] FAM = Bytes.toBytes(FAM_STR); private final static int REGION_ONLINE_TIMEOUT = 800; private static RegionStates regionStates; - private static ExecutorService executorService; + private static ExecutorService tableExecutorService; + private static ScheduledThreadPoolExecutor hbfsckExecutorService; + private static ClusterConnection connection; + private static Admin admin; // for the instance, reset every test run private HTable tbl; @@ -138,21 +148,34 @@ public class TestHBaseFsck { @BeforeClass public static void setUpBeforeClass() throws Exception { - TEST_UTIL.getConfiguration().setInt("hbase.regionserver.handler.count", 2); - TEST_UTIL.getConfiguration().setInt("hbase.regionserver.metahandler.count", 2); + conf.setInt("hbase.regionserver.handler.count", 2); + conf.setInt("hbase.regionserver.metahandler.count", 2); + + conf.setInt("hbase.htable.threads.max", POOL_SIZE); + conf.setInt("hbase.hconnection.threads.max", 2 * POOL_SIZE); + conf.setInt("hbase.hconnection.threads.core", POOL_SIZE); TEST_UTIL.startMiniCluster(3); - executorService = new ThreadPoolExecutor(1, Integer.MAX_VALUE, 60, TimeUnit.SECONDS, + tableExecutorService = new ThreadPoolExecutor(1, POOL_SIZE, 60, TimeUnit.SECONDS, new SynchronousQueue(), Threads.newDaemonThreadFactory("testhbck")); + hbfsckExecutorService = new ScheduledThreadPoolExecutor(POOL_SIZE); + AssignmentManager assignmentManager = TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager(); regionStates = assignmentManager.getRegionStates(); - TEST_UTIL.getHBaseAdmin().setBalancerRunning(false, true); + + connection = (ClusterConnection) TEST_UTIL.getConnection(); + + admin = connection.getAdmin(); + admin.setBalancerRunning(false, true); } @AfterClass public static void tearDownAfterClass() throws Exception { + 
tableExecutorService.shutdown(); + hbfsckExecutorService.shutdown(); + admin.close(); TEST_UTIL.shutdownMiniCluster(); } @@ -167,8 +190,7 @@ public class TestHBaseFsck { // Now let's mess it up and change the assignment in hbase:meta to // point to a different region server - Table meta = new HTable(conf, TableName.META_TABLE_NAME, - executorService); + Table meta = connection.getTable(TableName.META_TABLE_NAME, tableExecutorService); Scan scan = new Scan(); scan.setStartRow(Bytes.toBytes(table+",,")); ResultScanner scanner = meta.getScanner(scan); @@ -196,7 +218,7 @@ public class TestHBaseFsck { put.add(HConstants.CATALOG_FAMILY, HConstants.STARTCODE_QUALIFIER, Bytes.toBytes(sn.getStartcode())); meta.put(put); - hri = HRegionInfo.getHRegionInfo(res); + hri = MetaTableAccessor.getHRegionInfo(res); break; } } @@ -212,7 +234,7 @@ public class TestHBaseFsck { assertNoErrors(doFsck(conf, false)); // comment needed - what is the purpose of this line - Table t = new HTable(conf, table, executorService); + Table t = connection.getTable(table, tableExecutorService); ResultScanner s = t.getScanner(new Scan()); s.close(); t.close(); @@ -224,11 +246,7 @@ public class TestHBaseFsck { @Test(timeout=180000) public void testFixAssignmentsWhenMETAinTransition() throws Exception { MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); - try (Connection connection = ConnectionFactory.createConnection(TEST_UTIL.getConfiguration())) { - try (Admin admin = connection.getAdmin()) { - admin.closeRegion(cluster.getServerHoldingMeta(), HRegionInfo.FIRST_META_REGIONINFO); - } - } + admin.closeRegion(cluster.getServerHoldingMeta(), HRegionInfo.FIRST_META_REGIONINFO); regionStates.regionOffline(HRegionInfo.FIRST_META_REGIONINFO); new MetaTableLocator().deleteMetaLocation(cluster.getMaster().getZooKeeper()); assertFalse(regionStates.isRegionOnline(HRegionInfo.FIRST_META_REGIONINFO)); @@ -241,10 +259,10 @@ public class TestHBaseFsck { /** * Create a new region in META. */ - private HRegionInfo createRegion(Configuration conf, final HTableDescriptor + private HRegionInfo createRegion(final HTableDescriptor htd, byte[] startKey, byte[] endKey) throws IOException { - Table meta = new HTable(conf, TableName.META_TABLE_NAME, executorService); + Table meta = connection.getTable(TableName.META_TABLE_NAME, tableExecutorService); HRegionInfo hri = new HRegionInfo(htd.getTableName(), startKey, endKey); MetaTableAccessor.addRegionToMeta(meta, hri); meta.close(); @@ -265,10 +283,10 @@ public class TestHBaseFsck { * This method is used to undeploy a region -- close it and attempt to * remove its state from the Master. 
*/ - private void undeployRegion(HBaseAdmin admin, ServerName sn, + private void undeployRegion(Connection conn, ServerName sn, HRegionInfo hri) throws IOException, InterruptedException { try { - HBaseFsckRepair.closeRegionSilentlyAndWait(admin, sn, hri); + HBaseFsckRepair.closeRegionSilentlyAndWait((HConnection) conn, sn, hri); if (!hri.isMetaTable()) { admin.offline(hri.getRegionName()); } @@ -286,7 +304,7 @@ public class TestHBaseFsck { private void deleteRegion(Configuration conf, final HTableDescriptor htd, byte[] startKey, byte[] endKey, boolean unassign, boolean metaRow, boolean hdfs) throws IOException, InterruptedException { - deleteRegion(conf, htd, startKey, endKey, unassign, metaRow, hdfs, false); + deleteRegion(conf, htd, startKey, endKey, unassign, metaRow, hdfs, false, HRegionInfo.DEFAULT_REPLICA_ID); } /** @@ -295,26 +313,29 @@ public class TestHBaseFsck { * @param metaRow if true remove region's row from META * @param hdfs if true remove region's dir in HDFS * @param regionInfoOnly if true remove a region dir's .regioninfo file + * @param replicaId replica id */ private void deleteRegion(Configuration conf, final HTableDescriptor htd, byte[] startKey, byte[] endKey, boolean unassign, boolean metaRow, - boolean hdfs, boolean regionInfoOnly) throws IOException, InterruptedException { + boolean hdfs, boolean regionInfoOnly, int replicaId) + throws IOException, InterruptedException { LOG.info("** Before delete:"); dumpMeta(htd.getTableName()); - Map hris = tbl.getRegionLocations(); - for (Entry e: hris.entrySet()) { - HRegionInfo hri = e.getKey(); - ServerName hsa = e.getValue(); + List locations = tbl.getAllRegionLocations(); + for (HRegionLocation location : locations) { + HRegionInfo hri = location.getRegionInfo(); + ServerName hsa = location.getServerName(); if (Bytes.compareTo(hri.getStartKey(), startKey) == 0 - && Bytes.compareTo(hri.getEndKey(), endKey) == 0) { + && Bytes.compareTo(hri.getEndKey(), endKey) == 0 + && hri.getReplicaId() == replicaId) { LOG.info("RegionName: " +hri.getRegionNameAsString()); byte[] deleteRow = hri.getRegionName(); if (unassign) { LOG.info("Undeploying region " + hri + " from server " + hsa); - undeployRegion(new HBaseAdmin(conf), hsa, hri); + undeployRegion(connection, hsa, hri); } if (regionInfoOnly) { @@ -340,7 +361,7 @@ public class TestHBaseFsck { } if (metaRow) { - try (Table meta = conn.getTable(TableName.META_TABLE_NAME, executorService)) { + try (Table meta = connection.getTable(TableName.META_TABLE_NAME, tableExecutorService)) { Delete delete = new Delete(deleteRow); meta.delete(delete); } @@ -357,28 +378,32 @@ public class TestHBaseFsck { /** * Setup a clean table before we start mucking with it. 
* + * It will set tbl which needs to be closed after test + * * @throws IOException * @throws InterruptedException * @throws KeeperException */ - Table setupTable(TableName tablename) throws Exception { - return setupTableWithRegionReplica(tablename, 1); + void setupTable(TableName tablename) throws Exception { + setupTableWithRegionReplica(tablename, 1); } /** * Setup a clean table with a certain region_replica count + * + * It will set tbl which needs to be closed after test + * * @param tableName * @param replicaCount - * @return * @throws Exception */ - Table setupTableWithRegionReplica(TableName tablename, int replicaCount) throws Exception { + void setupTableWithRegionReplica(TableName tablename, int replicaCount) throws Exception { HTableDescriptor desc = new HTableDescriptor(tablename); desc.setRegionReplication(replicaCount); HColumnDescriptor hcd = new HColumnDescriptor(Bytes.toString(FAM)); desc.addFamily(hcd); // If a table has no CF's it doesn't get checked - TEST_UTIL.getHBaseAdmin().createTable(desc, SPLITS); - tbl = (HTable)TEST_UTIL.getConnection().getTable(tablename, executorService); + admin.createTable(desc, SPLITS); + tbl = (HTable) connection.getTable(tablename, tableExecutorService); List puts = new ArrayList(); for (byte[] row : ROWKEYS) { Put p = new Put(row); @@ -387,7 +412,6 @@ public class TestHBaseFsck { } tbl.put(puts); tbl.flushCommits(); - return tbl; } /** @@ -409,28 +433,15 @@ public class TestHBaseFsck { * @param tablename * @throws IOException */ - void deleteTable(TableName tablename) throws IOException { - HBaseAdmin admin = new HBaseAdmin(conf); - admin.getConnection().clearRegionCache(); - if (admin.isTableEnabled(tablename)) { - admin.disableTableAsync(tablename); - } - long totalWait = 0; - long maxWait = 30*1000; - long sleepTime = 250; - while (!admin.isTableDisabled(tablename)) { - try { - Thread.sleep(sleepTime); - totalWait += sleepTime; - if (totalWait >= maxWait) { - fail("Waited too long for table to be disabled + " + tablename); - } - } catch (InterruptedException e) { - e.printStackTrace(); - fail("Interrupted when trying to disable table " + tablename); - } + void cleanupTable(TableName tablename) throws IOException { + if (tbl != null) { + tbl.close(); + tbl = null; } - admin.deleteTable(tablename); + + ((ClusterConnection) connection).clearRegionCache(); + TEST_UTIL.deleteTable(tablename); + } /** @@ -453,7 +464,7 @@ public class TestHBaseFsck { assertEquals(0, hbck.getOverlapGroups(table).size()); assertEquals(ROWKEYS.length, countRows()); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -475,7 +486,7 @@ public class TestHBaseFsck { // We should pass without triggering a RejectedExecutionException } finally { - deleteTable(table); + cleanupTable(table); } } @@ -486,7 +497,6 @@ public class TestHBaseFsck { Path tableinfo = null; try { setupTable(table); - Admin admin = TEST_UTIL.getHBaseAdmin(); Path hbaseTableDir = FSUtils.getTableDir( FSUtils.getRootDir(conf), table); @@ -517,14 +527,13 @@ public class TestHBaseFsck { htd = admin.getTableDescriptor(table); // warms up cached htd on master hbck = doFsck(conf, true); assertNoErrors(hbck); - status = null; status = FSTableDescriptors.getTableInfoPath(fs, hbaseTableDir); assertNotNull(status); htd = admin.getTableDescriptor(table); assertEquals(htd.getValue("NOT_DEFAULT"), "true"); } finally { fs.rename(new Path("/.tableinfo"), tableinfo); - deleteTable(table); + cleanupTable(table); } } @@ -586,8 +595,8 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, 
countRows()); // Now let's mess it up, by adding a region with a duplicate startkey - HRegionInfo hriDupe = createRegion(conf, tbl.getTableDescriptor(), - Bytes.toBytes("A"), Bytes.toBytes("A2")); + HRegionInfo hriDupe = + createRegion(tbl.getTableDescriptor(), Bytes.toBytes("A"), Bytes.toBytes("A2")); TEST_UTIL.getHBaseCluster().getMaster().assignRegion(hriDupe); TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager() .waitForAssignment(hriDupe); @@ -609,7 +618,7 @@ public class TestHBaseFsck { assertEquals(0, hbck2.getOverlapGroups(table).size()); assertEquals(ROWKEYS.length, countRows()); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -620,16 +629,99 @@ public class TestHBaseFsck { @Test (timeout=180000) public void testHbckWithRegionReplica() throws Exception { TableName table = - TableName.valueOf("tableWithReplica"); + TableName.valueOf("testHbckWithRegionReplica"); + try { + setupTableWithRegionReplica(table, 2); + admin.flush(table); + assertNoErrors(doFsck(conf, false)); + } finally { + cleanupTable(table); + } + } + + @Test (timeout=180000) + public void testHbckWithFewerReplica() throws Exception { + TableName table = + TableName.valueOf("testHbckWithFewerReplica"); try { setupTableWithRegionReplica(table, 2); + admin.flush(table); assertNoErrors(doFsck(conf, false)); assertEquals(ROWKEYS.length, countRows()); + deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("B"), + Bytes.toBytes("C"), true, false, false, false, 1); // unassign one replica + // check that problem exists + HBaseFsck hbck = doFsck(conf, false); + assertErrors(hbck, new ERROR_CODE[]{ERROR_CODE.NOT_DEPLOYED}); + // fix the problem + hbck = doFsck(conf, true); + // run hbck again to make sure we don't see any errors + hbck = doFsck(conf, false); + assertErrors(hbck, new ERROR_CODE[]{}); } finally { - deleteTable(table); + cleanupTable(table); } } + @Test (timeout=180000) + public void testHbckWithExcessReplica() throws Exception { + TableName table = + TableName.valueOf("testHbckWithExcessReplica"); + try { + setupTableWithRegionReplica(table, 2); + admin.flush(table); + assertNoErrors(doFsck(conf, false)); + assertEquals(ROWKEYS.length, countRows()); + // the next few lines inject a location in meta for a replica, and then + // asks the master to assign the replica (the meta needs to be injected + // for the master to treat the request for assignment as valid; the master + // checks the region is valid either from its memory or meta) + Table meta = connection.getTable(TableName.META_TABLE_NAME, tableExecutorService); + List regions = admin.getTableRegions(table); + byte[] startKey = Bytes.toBytes("B"); + byte[] endKey = Bytes.toBytes("C"); + byte[] metaKey = null; + HRegionInfo newHri = null; + for (HRegionInfo h : regions) { + if (Bytes.compareTo(h.getStartKey(), startKey) == 0 && + Bytes.compareTo(h.getEndKey(), endKey) == 0 && + h.getReplicaId() == HRegionInfo.DEFAULT_REPLICA_ID) { + metaKey = h.getRegionName(); + //create a hri with replicaId as 2 (since we already have replicas with replicaid 0 and 1) + newHri = RegionReplicaUtil.getRegionInfoForReplica(h, 2); + break; + } + } + Put put = new Put(metaKey); + Collection var = admin.getClusterStatus().getServers(); + ServerName sn = var.toArray(new ServerName[var.size()])[0]; + //add a location with replicaId as 2 (since we already have replicas with replicaid 0 and 1) + MetaTableAccessor.addLocation(put, sn, sn.getStartcode(), 2); + meta.put(put); + meta.flushCommits(); + // assign the new replica + 
HBaseFsckRepair.fixUnassigned(admin, newHri); + HBaseFsckRepair.waitUntilAssigned(admin, newHri); + // now reset the meta row to its original value + Delete delete = new Delete(metaKey); + delete.addColumns(HConstants.CATALOG_FAMILY, MetaTableAccessor.getServerColumn(2)); + delete.addColumns(HConstants.CATALOG_FAMILY, MetaTableAccessor.getStartCodeColumn(2)); + delete.addColumns(HConstants.CATALOG_FAMILY, MetaTableAccessor.getSeqNumColumn(2)); + meta.delete(delete); + meta.flushCommits(); + meta.close(); + // check that problem exists + HBaseFsck hbck = doFsck(conf, false); + assertErrors(hbck, new ERROR_CODE[]{ERROR_CODE.NOT_IN_META}); + // fix the problem + hbck = doFsck(conf, true); + // run hbck again to make sure we don't see any errors + hbck = doFsck(conf, false); + assertErrors(hbck, new ERROR_CODE[]{}); + } finally { + cleanupTable(table); + } + } /** * Get region info from local cluster. */ @@ -638,9 +730,8 @@ public class TestHBaseFsck { Collection regionServers = status.getServers(); Map> mm = new HashMap>(); - HConnection connection = admin.getConnection(); for (ServerName hsi : regionServers) { - AdminProtos.AdminService.BlockingInterface server = connection.getAdmin(hsi); + AdminProtos.AdminService.BlockingInterface server = ((HConnection) connection).getAdmin(hsi); // list all online regions from this region server List regions = ProtobufUtil.getOnlineRegions(server); @@ -679,8 +770,8 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // Now let's mess it up, by adding a region with a duplicate startkey - HRegionInfo hriDupe = createRegion(conf, tbl.getTableDescriptor(), - Bytes.toBytes("A"), Bytes.toBytes("B")); + HRegionInfo hriDupe = + createRegion(tbl.getTableDescriptor(), Bytes.toBytes("A"), Bytes.toBytes("B")); TEST_UTIL.getHBaseCluster().getMaster().assignRegion(hriDupe); TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager() @@ -692,8 +783,7 @@ public class TestHBaseFsck { // different regions with the same start/endkeys since it doesn't // differentiate on ts/regionId! We actually need to recheck // deployments! 
- HBaseAdmin admin = TEST_UTIL.getHBaseAdmin(); - while (findDeployedHSI(getDeployedHRIs(admin), hriDupe) == null) { + while (findDeployedHSI(getDeployedHRIs((HBaseAdmin) admin), hriDupe) == null) { Thread.sleep(250); } @@ -715,7 +805,7 @@ public class TestHBaseFsck { assertEquals(0, hbck2.getOverlapGroups(table).size()); assertEquals(ROWKEYS.length, countRows()); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -731,8 +821,8 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // Now let's mess it up, by adding a region with a duplicate startkey - HRegionInfo hriDupe = createRegion(conf, tbl.getTableDescriptor(), - Bytes.toBytes("B"), Bytes.toBytes("B")); + HRegionInfo hriDupe = + createRegion(tbl.getTableDescriptor(), Bytes.toBytes("B"), Bytes.toBytes("B")); TEST_UTIL.getHBaseCluster().getMaster().assignRegion(hriDupe); TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager() .waitForAssignment(hriDupe); @@ -740,8 +830,8 @@ public class TestHBaseFsck { TEST_UTIL.assertRegionOnServer(hriDupe, server, REGION_ONLINE_TIMEOUT); HBaseFsck hbck = doFsck(conf,false); - assertErrors(hbck, new ERROR_CODE[] { ERROR_CODE.DEGENERATE_REGION, - ERROR_CODE.DUPE_STARTKEYS, ERROR_CODE.DUPE_STARTKEYS}); + assertErrors(hbck, new ERROR_CODE[] { ERROR_CODE.DEGENERATE_REGION, ERROR_CODE.DUPE_STARTKEYS, + ERROR_CODE.DUPE_STARTKEYS }); assertEquals(2, hbck.getOverlapGroups(table).size()); assertEquals(ROWKEYS.length, countRows()); @@ -754,7 +844,7 @@ public class TestHBaseFsck { assertEquals(0, hbck2.getOverlapGroups(table).size()); assertEquals(ROWKEYS.length, countRows()); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -771,8 +861,8 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // Mess it up by creating an overlap in the metadata - HRegionInfo hriOverlap = createRegion(conf, tbl.getTableDescriptor(), - Bytes.toBytes("A2"), Bytes.toBytes("B")); + HRegionInfo hriOverlap = + createRegion(tbl.getTableDescriptor(), Bytes.toBytes("A2"), Bytes.toBytes("B")); TEST_UTIL.getHBaseCluster().getMaster().assignRegion(hriOverlap); TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager() .waitForAssignment(hriOverlap); @@ -794,7 +884,7 @@ public class TestHBaseFsck { assertEquals(0, hbck2.getOverlapGroups(table).size()); assertEquals(ROWKEYS.length, countRows()); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -815,12 +905,12 @@ public class TestHBaseFsck { // Mess it up by creating an overlap MiniHBaseCluster cluster = TEST_UTIL.getHBaseCluster(); HMaster master = cluster.getMaster(); - HRegionInfo hriOverlap1 = createRegion(conf, tbl.getTableDescriptor(), - Bytes.toBytes("A"), Bytes.toBytes("AB")); + HRegionInfo hriOverlap1 = + createRegion(tbl.getTableDescriptor(), Bytes.toBytes("A"), Bytes.toBytes("AB")); master.assignRegion(hriOverlap1); master.getAssignmentManager().waitForAssignment(hriOverlap1); - HRegionInfo hriOverlap2 = createRegion(conf, tbl.getTableDescriptor(), - Bytes.toBytes("AB"), Bytes.toBytes("B")); + HRegionInfo hriOverlap2 = + createRegion(tbl.getTableDescriptor(), Bytes.toBytes("AB"), Bytes.toBytes("B")); master.assignRegion(hriOverlap2); master.getAssignmentManager().waitForAssignment(hriOverlap2); @@ -849,9 +939,8 @@ public class TestHBaseFsck { } } - HBaseAdmin admin = TEST_UTIL.getHBaseAdmin(); - HBaseFsckRepair.closeRegionSilentlyAndWait(admin, - cluster.getRegionServer(k).getServerName(), hbi.getHdfsHRI()); + HBaseFsckRepair.closeRegionSilentlyAndWait((HConnection) connection, + 
cluster.getRegionServer(k).getServerName(), hbi.getHdfsHRI()); admin.offline(regionName); break; } @@ -859,14 +948,15 @@ public class TestHBaseFsck { assertNotNull(regionName); assertNotNull(serverName); - Table meta = new HTable(conf, TableName.META_TABLE_NAME, executorService); - Put put = new Put(regionName); - put.add(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER, - Bytes.toBytes(serverName.getHostAndPort())); - meta.put(put); + try (Table meta = connection.getTable(TableName.META_TABLE_NAME, tableExecutorService)) { + Put put = new Put(regionName); + put.add(HConstants.CATALOG_FAMILY, HConstants.SERVER_QUALIFIER, + Bytes.toBytes(serverName.getHostAndPort())); + meta.put(put); + } // fix the problem. - HBaseFsck fsck = new HBaseFsck(conf); + HBaseFsck fsck = new HBaseFsck(conf, hbfsckExecutorService); fsck.connect(); fsck.setDisplayFullReport(); // i.e. -details fsck.setTimeLag(0); @@ -879,6 +969,7 @@ public class TestHBaseFsck { fsck.setSidelineBigOverlaps(true); fsck.setMaxMerge(2); fsck.onlineHbck(); + fsck.close(); // verify that overlaps are fixed, and there are less rows // since one region is sidelined. @@ -887,7 +978,7 @@ public class TestHBaseFsck { assertEquals(0, hbck2.getOverlapGroups(table).size()); assertTrue(ROWKEYS.length > countRows()); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -904,13 +995,13 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // Mess it up by creating an overlap in the metadata - TEST_UTIL.getHBaseAdmin().disableTable(table); + admin.disableTable(table); deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("A"), - Bytes.toBytes("B"), true, true, false, true); - TEST_UTIL.getHBaseAdmin().enableTable(table); + Bytes.toBytes("B"), true, true, false, true, HRegionInfo.DEFAULT_REPLICA_ID); + admin.enableTable(table); - HRegionInfo hriOverlap = createRegion(conf, tbl.getTableDescriptor(), - Bytes.toBytes("A2"), Bytes.toBytes("B")); + HRegionInfo hriOverlap = + createRegion(tbl.getTableDescriptor(), Bytes.toBytes("A2"), Bytes.toBytes("B")); TEST_UTIL.getHBaseCluster().getMaster().assignRegion(hriOverlap); TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager() .waitForAssignment(hriOverlap); @@ -931,7 +1022,7 @@ public class TestHBaseFsck { assertEquals(0, hbck2.getOverlapGroups(table).size()); assertEquals(ROWKEYS.length, countRows()); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -949,8 +1040,8 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // Mess it up by creating an overlap in the metadata - HRegionInfo hriOverlap = createRegion(conf, tbl.getTableDescriptor(), - Bytes.toBytes("A2"), Bytes.toBytes("B2")); + HRegionInfo hriOverlap = + createRegion(tbl.getTableDescriptor(), Bytes.toBytes("A2"), Bytes.toBytes("B2")); TEST_UTIL.getHBaseCluster().getMaster().assignRegion(hriOverlap); TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager() .waitForAssignment(hriOverlap); @@ -958,8 +1049,7 @@ public class TestHBaseFsck { TEST_UTIL.assertRegionOnServer(hriOverlap, server, REGION_ONLINE_TIMEOUT); HBaseFsck hbck = doFsck(conf, false); - assertErrors(hbck, new ERROR_CODE[] { - ERROR_CODE.OVERLAP_IN_REGION_CHAIN, + assertErrors(hbck, new ERROR_CODE[] { ERROR_CODE.OVERLAP_IN_REGION_CHAIN, ERROR_CODE.OVERLAP_IN_REGION_CHAIN }); assertEquals(3, hbck.getOverlapGroups(table).size()); assertEquals(ROWKEYS.length, countRows()); @@ -973,7 +1063,7 @@ public class TestHBaseFsck { assertEquals(0, hbck2.getOverlapGroups(table).size()); 
assertEquals(ROWKEYS.length, countRows()); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -990,10 +1080,10 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // Mess it up by leaving a hole in the assignment, meta, and hdfs data - TEST_UTIL.getHBaseAdmin().disableTable(table); + admin.disableTable(table); deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("B"), Bytes.toBytes("C"), true, true, true); - TEST_UTIL.getHBaseAdmin().enableTable(table); + admin.enableTable(table); HBaseFsck hbck = doFsck(conf, false); assertErrors(hbck, new ERROR_CODE[] { @@ -1008,7 +1098,7 @@ public class TestHBaseFsck { assertNoErrors(doFsck(conf,false)); assertEquals(ROWKEYS.length - 2 , countRows()); // lost a region so lost a row } finally { - deleteTable(table); + cleanupTable(table); } } @@ -1018,17 +1108,16 @@ public class TestHBaseFsck { */ @Test (timeout=180000) public void testHDFSRegioninfoMissing() throws Exception { - TableName table = - TableName.valueOf("tableHDFSRegioininfoMissing"); + TableName table = TableName.valueOf("tableHDFSRegioninfoMissing"); try { setupTable(table); assertEquals(ROWKEYS.length, countRows()); // Mess it up by leaving a hole in the meta data - TEST_UTIL.getHBaseAdmin().disableTable(table); + admin.disableTable(table); deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("B"), - Bytes.toBytes("C"), true, true, false, true); - TEST_UTIL.getHBaseAdmin().enableTable(table); + Bytes.toBytes("C"), true, true, false, true, HRegionInfo.DEFAULT_REPLICA_ID); + admin.enableTable(table); HBaseFsck hbck = doFsck(conf, false); assertErrors(hbck, new ERROR_CODE[] { @@ -1045,7 +1134,7 @@ public class TestHBaseFsck { assertNoErrors(doFsck(conf, false)); assertEquals(ROWKEYS.length, countRows()); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -1062,10 +1151,10 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // Mess it up by leaving a hole in the meta data - TEST_UTIL.getHBaseAdmin().disableTable(table); + admin.disableTable(table); deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("B"), Bytes.toBytes("C"), true, true, false); // don't rm from fs - TEST_UTIL.getHBaseAdmin().enableTable(table); + admin.enableTable(table); HBaseFsck hbck = doFsck(conf, false); assertErrors(hbck, new ERROR_CODE[] { @@ -1081,7 +1170,7 @@ public class TestHBaseFsck { assertNoErrors(doFsck(conf,false)); assertEquals(ROWKEYS.length, countRows()); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -1097,10 +1186,10 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // Mess it up by leaving a hole in the meta data - TEST_UTIL.getHBaseAdmin().disableTable(table); + admin.disableTable(table); deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("B"), Bytes.toBytes("C"), false, true, false); // don't rm from fs - TEST_UTIL.getHBaseAdmin().enableTable(table); + admin.enableTable(table); HBaseFsck hbck = doFsck(conf, false); assertErrors(hbck, new ERROR_CODE[] { @@ -1116,7 +1205,7 @@ public class TestHBaseFsck { assertNoErrors(doFsck(conf,false)); assertEquals(ROWKEYS.length, countRows()); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -1133,7 +1222,7 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // make sure data in regions, if in wal only there is no data loss - TEST_UTIL.getHBaseAdmin().flush(table); + admin.flush(table); // Mess it up by leaving a hole in the hdfs data deleteRegion(conf, tbl.getTableDescriptor(), 
Bytes.toBytes("B"), @@ -1151,11 +1240,84 @@ public class TestHBaseFsck { assertNoErrors(doFsck(conf,false)); assertEquals(ROWKEYS.length - 2, countRows()); } finally { - deleteTable(table); + cleanupTable(table); } } /** + * This creates and fixes a bad table with a region that is in meta but has + * no deployment or data hdfs. The table has region_replication set to 2. + */ + @Test (timeout=180000) + public void testNotInHdfsWithReplicas() throws Exception { + TableName table = + TableName.valueOf("tableNotInHdfs"); + try { + HRegionInfo[] oldHris = new HRegionInfo[2]; + setupTableWithRegionReplica(table, 2); + assertEquals(ROWKEYS.length, countRows()); + NavigableMap map = + MetaScanner.allTableRegions(TEST_UTIL.getConnection(), + tbl.getName()); + int i = 0; + // store the HRIs of the regions we will mess up + for (Map.Entry m : map.entrySet()) { + if (m.getKey().getStartKey().length > 0 && + m.getKey().getStartKey()[0] == Bytes.toBytes("B")[0]) { + LOG.debug("Initially server hosting " + m.getKey() + " is " + m.getValue()); + oldHris[i++] = m.getKey(); + } + } + // make sure data in regions + admin.flush(table); + + // Mess it up by leaving a hole in the hdfs data + deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("B"), + Bytes.toBytes("C"), false, false, true); // don't rm meta + + HBaseFsck hbck = doFsck(conf, false); + assertErrors(hbck, new ERROR_CODE[] {ERROR_CODE.NOT_IN_HDFS}); + + // fix hole + doFsck(conf, true); + + // check that hole fixed + assertNoErrors(doFsck(conf,false)); + assertEquals(ROWKEYS.length - 2, countRows()); + + // the following code checks whether the old primary/secondary has + // been unassigned and the new primary/secondary has been assigned + i = 0; + HRegionInfo[] newHris = new HRegionInfo[2]; + // get all table's regions from meta + map = MetaScanner.allTableRegions(TEST_UTIL.getConnection(), tbl.getName()); + // get the HRIs of the new regions (hbck created new regions for fixing the hdfs mess-up) + for (Map.Entry m : map.entrySet()) { + if (m.getKey().getStartKey().length > 0 && + m.getKey().getStartKey()[0] == Bytes.toBytes("B")[0]) { + newHris[i++] = m.getKey(); + } + } + // get all the online regions in the regionservers + Collection servers = admin.getClusterStatus().getServers(); + Set onlineRegions = new HashSet(); + for (ServerName s : servers) { + List list = admin.getOnlineRegions(s); + onlineRegions.addAll(list); + } + // the new HRIs must be a subset of the online regions + assertTrue(onlineRegions.containsAll(Arrays.asList(newHris))); + // the old HRIs must not be part of the set (removeAll would return false if + // the set didn't change) + assertFalse(onlineRegions.removeAll(Arrays.asList(oldHris))); + } finally { + cleanupTable(table); + admin.close(); + } + } + + + /** * This creates entries in hbase:meta with no hdfs data. This should cleanly * remove the table. 
*/ @@ -1166,7 +1328,7 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // make sure data in regions, if in wal only there is no data loss - TEST_UTIL.getHBaseAdmin().flush(table); + admin.flush(table); // Mess it up by deleting hdfs dirs deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes(""), @@ -1193,8 +1355,7 @@ public class TestHBaseFsck { // check that hole fixed assertNoErrors(doFsck(conf,false)); - assertFalse("Table "+ table + " should have been deleted", - TEST_UTIL.getHBaseAdmin().tableExists(table)); + assertFalse("Table " + table + " should have been deleted", admin.tableExists(table)); } public void deleteTableDir(TableName table) throws IOException { @@ -1248,18 +1409,18 @@ public class TestHBaseFsck { // Write the .tableinfo FSTableDescriptors fstd = new FSTableDescriptors(conf); fstd.createTableDescriptor(htdDisabled); - List disabledRegions = TEST_UTIL.createMultiRegionsInMeta( - TEST_UTIL.getConfiguration(), htdDisabled, SPLIT_KEYS); + List disabledRegions = + TEST_UTIL.createMultiRegionsInMeta(conf, htdDisabled, SPLIT_KEYS); // Let's just assign everything to first RS HRegionServer hrs = cluster.getRegionServer(0); // Create region files. - TEST_UTIL.getHBaseAdmin().disableTable(table); - TEST_UTIL.getHBaseAdmin().enableTable(table); + admin.disableTable(table); + admin.enableTable(table); // Disable the table and close its regions - TEST_UTIL.getHBaseAdmin().disableTable(table); + admin.disableTable(table); HRegionInfo region = disabledRegions.remove(0); byte[] regionName = region.getRegionName(); @@ -1283,8 +1444,8 @@ public class TestHBaseFsck { // check result assertNoErrors(doFsck(conf, false)); } finally { - TEST_UTIL.getHBaseAdmin().enableTable(table); - deleteTable(table); + admin.enableTable(table); + cleanupTable(table); } } @@ -1300,14 +1461,14 @@ public class TestHBaseFsck { try { setupTable(table1); // make sure data in regions, if in wal only there is no data loss - TEST_UTIL.getHBaseAdmin().flush(table1); + admin.flush(table1); // Mess them up by leaving a hole in the hdfs data deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("B"), Bytes.toBytes("C"), false, false, true); // don't rm meta setupTable(table2); // make sure data in regions, if in wal only there is no data loss - TEST_UTIL.getHBaseAdmin().flush(table2); + admin.flush(table2); // Mess them up by leaving a hole in the hdfs data deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("B"), Bytes.toBytes("C"), false, false, true); // don't rm meta @@ -1330,8 +1491,8 @@ public class TestHBaseFsck { assertNoErrors(doFsck(conf, false)); assertEquals(ROWKEYS.length - 2, countRows()); } finally { - deleteTable(table1); - deleteTable(table2); + cleanupTable(table1); + cleanupTable(table2); } } /** @@ -1347,7 +1508,7 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // make sure data in regions, if in wal only there is no data loss - TEST_UTIL.getHBaseAdmin().flush(table); + admin.flush(table); HRegionLocation location = tbl.getRegionLocation("B"); // Delete one region from meta, but not hdfs, unassign it. @@ -1355,8 +1516,7 @@ public class TestHBaseFsck { Bytes.toBytes("C"), true, true, false); // Create a new meta entry to fake it as a split parent. 
- meta = new HTable(conf, TableName.META_TABLE_NAME, - executorService); + meta = connection.getTable(TableName.META_TABLE_NAME, tableExecutorService); HRegionInfo hri = location.getRegionInfo(); HRegionInfo a = new HRegionInfo(tbl.getName(), @@ -1369,7 +1529,8 @@ public class TestHBaseFsck { MetaTableAccessor.addRegionToMeta(meta, hri, a, b); meta.flushCommits(); - TEST_UTIL.getHBaseAdmin().flush(TableName.META_TABLE_NAME); + meta.close(); + admin.flush(TableName.META_TABLE_NAME); HBaseFsck hbck = doFsck(conf, false); assertErrors(hbck, new ERROR_CODE[] { @@ -1378,20 +1539,21 @@ public class TestHBaseFsck { // regular repair cannot fix lingering split parent hbck = doFsck(conf, true); assertErrors(hbck, new ERROR_CODE[] { - ERROR_CODE.LINGERING_SPLIT_PARENT, ERROR_CODE.HOLE_IN_REGION_CHAIN}); + ERROR_CODE.LINGERING_SPLIT_PARENT, ERROR_CODE.HOLE_IN_REGION_CHAIN }); assertFalse(hbck.shouldRerun()); hbck = doFsck(conf, false); assertErrors(hbck, new ERROR_CODE[] { ERROR_CODE.LINGERING_SPLIT_PARENT, ERROR_CODE.HOLE_IN_REGION_CHAIN}); // fix lingering split parent - hbck = new HBaseFsck(conf); + hbck = new HBaseFsck(conf, hbfsckExecutorService); hbck.connect(); hbck.setDisplayFullReport(); // i.e. -details hbck.setTimeLag(0); hbck.setFixSplitParents(true); hbck.onlineHbck(); assertTrue(hbck.shouldRerun()); + hbck.close(); Get get = new Get(hri.getRegionName()); Result result = meta.get(get); @@ -1399,7 +1561,7 @@ public class TestHBaseFsck { HConstants.SPLITA_QUALIFIER).isEmpty()); assertTrue(result.getColumnCells(HConstants.CATALOG_FAMILY, HConstants.SPLITB_QUALIFIER).isEmpty()); - TEST_UTIL.getHBaseAdmin().flush(TableName.META_TABLE_NAME); + admin.flush(TableName.META_TABLE_NAME); // fix other issues doFsck(conf, true); @@ -1408,7 +1570,7 @@ public class TestHBaseFsck { assertNoErrors(doFsck(conf, false)); assertEquals(ROWKEYS.length, countRows()); } finally { - deleteTable(table); + cleanupTable(table); IOUtils.closeQuietly(meta); } } @@ -1427,18 +1589,16 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // make sure data in regions, if in wal only there is no data loss - TEST_UTIL.getHBaseAdmin().flush(table); - HRegionLocation location = tbl.getRegionLocation("B"); + admin.flush(table); + HRegionLocation location = tbl.getRegionLocation(Bytes.toBytes("B")); - meta = new HTable(conf, TableName.META_TABLE_NAME); + meta = connection.getTable(TableName.META_TABLE_NAME, tableExecutorService); HRegionInfo hri = location.getRegionInfo(); // do a regular split - Admin admin = TEST_UTIL.getHBaseAdmin(); byte[] regionName = location.getRegionInfo().getRegionName(); admin.splitRegion(location.getRegionInfo().getRegionName(), Bytes.toBytes("BM")); - TestEndToEndSplitTransaction.blockUntilRegionSplit( - TEST_UTIL.getConfiguration(), 60000, regionName, true); + TestEndToEndSplitTransaction.blockUntilRegionSplit(conf, 60000, regionName, true); // TODO: fixHdfsHoles does not work against splits, since the parent dir lingers on // for some time until children references are deleted. HBCK erroneously sees this as @@ -1450,7 +1610,7 @@ public class TestHBaseFsck { Get get = new Get(hri.getRegionName()); Result result = meta.get(get); assertNotNull(result); - assertNotNull(HRegionInfo.getHRegionInfo(result)); + assertNotNull(MetaTableAccessor.getHRegionInfo(result)); assertEquals(ROWKEYS.length, countRows()); @@ -1458,7 +1618,7 @@ public class TestHBaseFsck { assertEquals(tbl.getStartKeys().length, SPLITS.length + 1 + 1); //SPLITS + 1 is # regions pre-split. 
assertNoErrors(doFsck(conf, false)); } finally { - deleteTable(table); + cleanupTable(table); IOUtils.closeQuietly(meta); } } @@ -1469,33 +1629,30 @@ public class TestHBaseFsck { */ @Test(timeout=75000) public void testSplitDaughtersNotInMeta() throws Exception { - TableName table = - TableName.valueOf("testSplitdaughtersNotInMeta"); - Table meta = null; + TableName table = TableName.valueOf("testSplitdaughtersNotInMeta"); + Table meta = connection.getTable(TableName.META_TABLE_NAME, tableExecutorService); try { setupTable(table); assertEquals(ROWKEYS.length, countRows()); // make sure data in regions, if in wal only there is no data loss - TEST_UTIL.getHBaseAdmin().flush(table); - HRegionLocation location = tbl.getRegionLocation("B"); + admin.flush(table); + HRegionLocation location = tbl.getRegionLocation(Bytes.toBytes("B")); - meta = new HTable(conf, TableName.META_TABLE_NAME); HRegionInfo hri = location.getRegionInfo(); // do a regular split - HBaseAdmin admin = TEST_UTIL.getHBaseAdmin(); byte[] regionName = location.getRegionInfo().getRegionName(); admin.splitRegion(location.getRegionInfo().getRegionName(), Bytes.toBytes("BM")); - TestEndToEndSplitTransaction.blockUntilRegionSplit( - TEST_UTIL.getConfiguration(), 60000, regionName, true); + TestEndToEndSplitTransaction.blockUntilRegionSplit(conf, 60000, regionName, true); - PairOfSameType daughters = HRegionInfo.getDaughterRegions(meta.get(new Get(regionName))); + PairOfSameType daughters = + MetaTableAccessor.getDaughterRegions(meta.get(new Get(regionName))); // Delete daughter regions from meta, but not hdfs, unassign it. Map hris = tbl.getRegionLocations(); - undeployRegion(admin, hris.get(daughters.getFirst()), daughters.getFirst()); - undeployRegion(admin, hris.get(daughters.getSecond()), daughters.getSecond()); + undeployRegion(connection, hris.get(daughters.getFirst()), daughters.getFirst()); + undeployRegion(connection, hris.get(daughters.getSecond()), daughters.getSecond()); meta.delete(new Delete(daughters.getFirst().getRegionName())); meta.delete(new Delete(daughters.getSecond().getRegionName())); @@ -1503,24 +1660,26 @@ public class TestHBaseFsck { // Remove daughters from regionStates RegionStates regionStates = TEST_UTIL.getMiniHBaseCluster().getMaster(). - getAssignmentManager().getRegionStates(); + getAssignmentManager().getRegionStates(); regionStates.deleteRegion(daughters.getFirst()); regionStates.deleteRegion(daughters.getSecond()); HBaseFsck hbck = doFsck(conf, false); - assertErrors(hbck, new ERROR_CODE[] {ERROR_CODE.NOT_IN_META_OR_DEPLOYED, - ERROR_CODE.NOT_IN_META_OR_DEPLOYED, ERROR_CODE.HOLE_IN_REGION_CHAIN}); //no LINGERING_SPLIT_PARENT + assertErrors(hbck, + new ERROR_CODE[] { ERROR_CODE.NOT_IN_META_OR_DEPLOYED, ERROR_CODE.NOT_IN_META_OR_DEPLOYED, + ERROR_CODE.HOLE_IN_REGION_CHAIN }); //no LINGERING_SPLIT_PARENT // now fix it. The fix should not revert the region split, but add daughters to META hbck = doFsck(conf, true, true, false, false, false, false, false, false, false, false, null); - assertErrors(hbck, new ERROR_CODE[] {ERROR_CODE.NOT_IN_META_OR_DEPLOYED, - ERROR_CODE.NOT_IN_META_OR_DEPLOYED, ERROR_CODE.HOLE_IN_REGION_CHAIN}); + assertErrors(hbck, + new ERROR_CODE[] { ERROR_CODE.NOT_IN_META_OR_DEPLOYED, ERROR_CODE.NOT_IN_META_OR_DEPLOYED, + ERROR_CODE.HOLE_IN_REGION_CHAIN }); // assert that the split hbase:meta entry is still there. 
Get get = new Get(hri.getRegionName()); Result result = meta.get(get); assertNotNull(result); - assertNotNull(HRegionInfo.getHRegionInfo(result)); + assertNotNull(MetaTableAccessor.getHRegionInfo(result)); assertEquals(ROWKEYS.length, countRows()); @@ -1528,8 +1687,8 @@ public class TestHBaseFsck { assertEquals(tbl.getStartKeys().length, SPLITS.length + 1 + 1); //SPLITS + 1 is # regions pre-split. assertNoErrors(doFsck(conf, false)); //should be fixed by now } finally { - deleteTable(table); - IOUtils.closeQuietly(meta); + meta.close(); + cleanupTable(table); } } @@ -1545,10 +1704,10 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // Mess it up by leaving a hole in the assignment, meta, and hdfs data - TEST_UTIL.getHBaseAdmin().disableTable(table); + admin.disableTable(table); deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes(""), Bytes.toBytes("A"), true, true, true); - TEST_UTIL.getHBaseAdmin().enableTable(table); + admin.enableTable(table); HBaseFsck hbck = doFsck(conf, false); assertErrors(hbck, new ERROR_CODE[] { ERROR_CODE.FIRST_REGION_STARTKEY_NOT_EMPTY }); @@ -1557,7 +1716,7 @@ public class TestHBaseFsck { // check that hole fixed assertNoErrors(doFsck(conf, false)); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -1571,7 +1730,7 @@ public class TestHBaseFsck { TableName.valueOf("testSingleRegionDeployedNotInHdfs"); try { setupTable(table); - TEST_UTIL.getHBaseAdmin().flush(table); + admin.flush(table); // Mess it up by deleting region dir deleteRegion(conf, tbl.getTableDescriptor(), @@ -1585,7 +1744,7 @@ public class TestHBaseFsck { // check that hole fixed assertNoErrors(doFsck(conf, false)); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -1602,10 +1761,10 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // Mess it up by leaving a hole in the assignment, meta, and hdfs data - TEST_UTIL.getHBaseAdmin().disableTable(table); + admin.disableTable(table); deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("C"), Bytes.toBytes(""), true, true, true); - TEST_UTIL.getHBaseAdmin().enableTable(table); + admin.enableTable(table); HBaseFsck hbck = doFsck(conf, false); assertErrors(hbck, new ERROR_CODE[] { ERROR_CODE.LAST_REGION_ENDKEY_NOT_EMPTY }); @@ -1614,7 +1773,7 @@ public class TestHBaseFsck { // check that hole fixed assertNoErrors(doFsck(conf, false)); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -1631,7 +1790,7 @@ public class TestHBaseFsck { // Mess it up by closing a region deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("A"), - Bytes.toBytes("B"), true, false, false, false); + Bytes.toBytes("B"), true, false, false, false, HRegionInfo.DEFAULT_REPLICA_ID); // verify there is no other errors HBaseFsck hbck = doFsck(conf, false); @@ -1639,7 +1798,7 @@ public class TestHBaseFsck { ERROR_CODE.NOT_DEPLOYED, ERROR_CODE.HOLE_IN_REGION_CHAIN}); // verify that noHdfsChecking report the same errors - HBaseFsck fsck = new HBaseFsck(conf); + HBaseFsck fsck = new HBaseFsck(conf, hbfsckExecutorService); fsck.connect(); fsck.setDisplayFullReport(); // i.e. -details fsck.setTimeLag(0); @@ -1647,9 +1806,10 @@ public class TestHBaseFsck { fsck.onlineHbck(); assertErrors(fsck, new ERROR_CODE[] { ERROR_CODE.NOT_DEPLOYED, ERROR_CODE.HOLE_IN_REGION_CHAIN}); + fsck.close(); // verify that fixAssignments works fine with noHdfsChecking - fsck = new HBaseFsck(conf); + fsck = new HBaseFsck(conf, hbfsckExecutorService); fsck.connect(); fsck.setDisplayFullReport(); // i.e. 
-details fsck.setTimeLag(0); @@ -1661,8 +1821,10 @@ public class TestHBaseFsck { assertNoErrors(fsck); assertEquals(ROWKEYS.length, countRows()); + + fsck.close(); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -1681,25 +1843,26 @@ public class TestHBaseFsck { // Mess it up by deleting a region from the metadata deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("A"), - Bytes.toBytes("B"), false, true, false, false); + Bytes.toBytes("B"), false, true, false, false, HRegionInfo.DEFAULT_REPLICA_ID); // verify there is no other errors HBaseFsck hbck = doFsck(conf, false); - assertErrors(hbck, new ERROR_CODE[] { - ERROR_CODE.NOT_IN_META, ERROR_CODE.HOLE_IN_REGION_CHAIN}); + assertErrors(hbck, + new ERROR_CODE[] { ERROR_CODE.NOT_IN_META, ERROR_CODE.HOLE_IN_REGION_CHAIN }); // verify that noHdfsChecking report the same errors - HBaseFsck fsck = new HBaseFsck(conf); + HBaseFsck fsck = new HBaseFsck(conf, hbfsckExecutorService); fsck.connect(); fsck.setDisplayFullReport(); // i.e. -details fsck.setTimeLag(0); fsck.setCheckHdfs(false); fsck.onlineHbck(); - assertErrors(fsck, new ERROR_CODE[] { - ERROR_CODE.NOT_IN_META, ERROR_CODE.HOLE_IN_REGION_CHAIN}); + assertErrors(fsck, + new ERROR_CODE[] { ERROR_CODE.NOT_IN_META, ERROR_CODE.HOLE_IN_REGION_CHAIN }); + fsck.close(); // verify that fixMeta doesn't work with noHdfsChecking - fsck = new HBaseFsck(conf); + fsck = new HBaseFsck(conf, hbfsckExecutorService); fsck.connect(); fsck.setDisplayFullReport(); // i.e. -details fsck.setTimeLag(0); @@ -1708,8 +1871,9 @@ public class TestHBaseFsck { fsck.setFixMeta(true); fsck.onlineHbck(); assertFalse(fsck.shouldRerun()); - assertErrors(fsck, new ERROR_CODE[] { - ERROR_CODE.NOT_IN_META, ERROR_CODE.HOLE_IN_REGION_CHAIN}); + assertErrors(fsck, + new ERROR_CODE[] { ERROR_CODE.NOT_IN_META, ERROR_CODE.HOLE_IN_REGION_CHAIN }); + fsck.close(); // fix the cluster so other tests won't be impacted fsck = doFsck(conf, true); @@ -1717,7 +1881,7 @@ public class TestHBaseFsck { fsck = doFsck(conf, true); assertNoErrors(fsck); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -1734,13 +1898,13 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // Mess it up by creating an overlap in the metadata - TEST_UTIL.getHBaseAdmin().disableTable(table); + admin.disableTable(table); deleteRegion(conf, tbl.getTableDescriptor(), Bytes.toBytes("A"), - Bytes.toBytes("B"), true, true, false, true); - TEST_UTIL.getHBaseAdmin().enableTable(table); + Bytes.toBytes("B"), true, true, false, true, HRegionInfo.DEFAULT_REPLICA_ID); + admin.enableTable(table); - HRegionInfo hriOverlap = createRegion(conf, tbl.getTableDescriptor(), - Bytes.toBytes("A2"), Bytes.toBytes("B")); + HRegionInfo hriOverlap = + createRegion(tbl.getTableDescriptor(), Bytes.toBytes("A2"), Bytes.toBytes("B")); TEST_UTIL.getHBaseCluster().getMaster().assignRegion(hriOverlap); TEST_UTIL.getHBaseCluster().getMaster().getAssignmentManager() .waitForAssignment(hriOverlap); @@ -1753,7 +1917,7 @@ public class TestHBaseFsck { ERROR_CODE.HOLE_IN_REGION_CHAIN}); // verify that noHdfsChecking can't detect ORPHAN_HDFS_REGION - HBaseFsck fsck = new HBaseFsck(conf); + HBaseFsck fsck = new HBaseFsck(conf, hbfsckExecutorService); fsck.connect(); fsck.setDisplayFullReport(); // i.e. 
-details fsck.setTimeLag(0); @@ -1761,9 +1925,10 @@ public class TestHBaseFsck { fsck.onlineHbck(); assertErrors(fsck, new ERROR_CODE[] { ERROR_CODE.HOLE_IN_REGION_CHAIN}); + fsck.close(); // verify that fixHdfsHoles doesn't work with noHdfsChecking - fsck = new HBaseFsck(conf); + fsck = new HBaseFsck(conf, hbfsckExecutorService); fsck.connect(); fsck.setDisplayFullReport(); // i.e. -details fsck.setTimeLag(0); @@ -1773,13 +1938,13 @@ public class TestHBaseFsck { fsck.setFixHdfsOrphans(true); fsck.onlineHbck(); assertFalse(fsck.shouldRerun()); - assertErrors(fsck, new ERROR_CODE[] { - ERROR_CODE.HOLE_IN_REGION_CHAIN}); + assertErrors(fsck, new ERROR_CODE[] { ERROR_CODE.HOLE_IN_REGION_CHAIN}); + fsck.close(); } finally { - if (TEST_UTIL.getHBaseAdmin().isTableDisabled(table)) { - TEST_UTIL.getHBaseAdmin().enableTable(table); + if (admin.isTableDisabled(table)) { + admin.enableTable(table); } - deleteTable(table); + cleanupTable(table); } } @@ -1819,13 +1984,13 @@ public class TestHBaseFsck { try { setupTable(table); assertEquals(ROWKEYS.length, countRows()); - TEST_UTIL.getHBaseAdmin().flush(table); // flush is async. + admin.flush(table); // flush is async. FileSystem fs = FileSystem.get(conf); Path hfile = getFlushedHFile(fs, table); // Mess it up by leaving a hole in the assignment, meta, and hdfs data - TEST_UTIL.getHBaseAdmin().disableTable(table); + admin.disableTable(table); // create new corrupt file called deadbeef (valid hfile name) Path corrupt = new Path(hfile.getParent(), "deadbeef"); @@ -1844,29 +2009,28 @@ public class TestHBaseFsck { assertEquals(hfcc.getMissing().size(), 0); // Its been fixed, verify that we can enable. - TEST_UTIL.getHBaseAdmin().enableTable(table); + admin.enableTable(table); } finally { - deleteTable(table); + cleanupTable(table); } } /** - * Test that use this should have a timeout, because this method could potentially wait forever. + * Test that use this should have a timeout, because this method could potentially wait forever. */ private void doQuarantineTest(TableName table, HBaseFsck hbck, int check, int corrupt, int fail, int quar, int missing) throws Exception { try { setupTable(table); assertEquals(ROWKEYS.length, countRows()); - TEST_UTIL.getHBaseAdmin().flush(table); // flush is async. + admin.flush(table); // flush is async. // Mess it up by leaving a hole in the assignment, meta, and hdfs data - TEST_UTIL.getHBaseAdmin().disableTable(table); + admin.disableTable(table); String[] args = {"-sidelineCorruptHFiles", "-repairHoles", "-ignorePreCheckPermission", table.getNameAsString()}; - ExecutorService exec = new ScheduledThreadPoolExecutor(10); - HBaseFsck res = hbck.exec(exec, args); + HBaseFsck res = hbck.exec(hbfsckExecutorService, args); HFileCorruptionChecker hfcc = res.getHFilecorruptionChecker(); assertEquals(hfcc.getHFilesChecked(), check); @@ -1876,7 +2040,6 @@ public class TestHBaseFsck { assertEquals(hfcc.getMissing().size(), missing); // its been fixed, verify that we can enable - Admin admin = TEST_UTIL.getHBaseAdmin(); admin.enableTableAsync(table); while (!admin.isTableEnabled(table)) { try { @@ -1887,7 +2050,7 @@ public class TestHBaseFsck { } } } finally { - deleteTable(table); + cleanupTable(table); } } @@ -1898,10 +2061,10 @@ public class TestHBaseFsck { @Test(timeout=180000) public void testQuarantineMissingHFile() throws Exception { TableName table = TableName.valueOf(name.getMethodName()); - ExecutorService exec = new ScheduledThreadPoolExecutor(10); + // inject a fault in the hfcc created. 
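// Editor's note: an illustrative sketch, not part of the patch itself. The
// doQuarantineTest() helper in the hunk above drives hbck the same way a
// standalone sideline/repair run would: build an HBaseFsck with a caller-owned
// executor, pass the repair flags plus a table name to exec(), then close it.
// Assumptions: a Configuration named conf is in scope, imports are elided, and
// the table name "myTable" is hypothetical.
ExecutorService exec = new ScheduledThreadPoolExecutor(10);
HBaseFsck fsck = new HBaseFsck(conf, exec);
fsck.exec(exec, new String[] {
    "-sidelineCorruptHFiles", "-repairHoles", "-ignorePreCheckPermission", "myTable"});
fsck.close();
exec.shutdown();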
final FileSystem fs = FileSystem.get(conf); - HBaseFsck hbck = new HBaseFsck(conf, exec) { + HBaseFsck hbck = new HBaseFsck(conf, hbfsckExecutorService) { @Override public HFileCorruptionChecker createHFileCorruptionChecker(boolean sidelineCorruptHFiles) throws IOException { return new HFileCorruptionChecker(conf, executor, sidelineCorruptHFiles) { @@ -1917,6 +2080,7 @@ public class TestHBaseFsck { } }; doQuarantineTest(table, hbck, 4, 0, 0, 0, 1); // 4 attempted, but 1 missing. + hbck.close(); } /** @@ -1928,10 +2092,9 @@ public class TestHBaseFsck { @Ignore @Test(timeout=180000) public void testQuarantineMissingFamdir() throws Exception { TableName table = TableName.valueOf(name.getMethodName()); - ExecutorService exec = new ScheduledThreadPoolExecutor(10); // inject a fault in the hfcc created. final FileSystem fs = FileSystem.get(conf); - HBaseFsck hbck = new HBaseFsck(conf, exec) { + HBaseFsck hbck = new HBaseFsck(conf, hbfsckExecutorService) { @Override public HFileCorruptionChecker createHFileCorruptionChecker(boolean sidelineCorruptHFiles) throws IOException { return new HFileCorruptionChecker(conf, executor, sidelineCorruptHFiles) { @@ -1947,6 +2110,7 @@ public class TestHBaseFsck { } }; doQuarantineTest(table, hbck, 3, 0, 0, 0, 1); + hbck.close(); } /** @@ -1956,10 +2120,9 @@ public class TestHBaseFsck { @Test(timeout=180000) public void testQuarantineMissingRegionDir() throws Exception { TableName table = TableName.valueOf(name.getMethodName()); - ExecutorService exec = new ScheduledThreadPoolExecutor(10); // inject a fault in the hfcc created. final FileSystem fs = FileSystem.get(conf); - HBaseFsck hbck = new HBaseFsck(conf, exec) { + HBaseFsck hbck = new HBaseFsck(conf, hbfsckExecutorService) { @Override public HFileCorruptionChecker createHFileCorruptionChecker(boolean sidelineCorruptHFiles) throws IOException { @@ -1976,6 +2139,7 @@ public class TestHBaseFsck { } }; doQuarantineTest(table, hbck, 3, 0, 0, 0, 1); + hbck.close(); } /** @@ -2004,7 +2168,7 @@ public class TestHBaseFsck { // check that reference file fixed assertNoErrors(doFsck(conf, false)); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -2013,22 +2177,22 @@ public class TestHBaseFsck { */ @Test (timeout=180000) public void testMissingRegionInfoQualifier() throws Exception { - TableName table = - TableName.valueOf("testMissingRegionInfoQualifier"); + Connection connection = ConnectionFactory.createConnection(conf); + TableName table = TableName.valueOf("testMissingRegionInfoQualifier"); try { setupTable(table); // Mess it up by removing the RegionInfo for one region. 
final List deletes = new LinkedList(); - Table meta = new HTable(conf, TableName.META_TABLE_NAME); - MetaScanner.metaScan(conf, new MetaScanner.MetaScannerVisitor() { + Table meta = connection.getTable(TableName.META_TABLE_NAME, hbfsckExecutorService); + MetaScanner.metaScan(connection, new MetaScanner.MetaScannerVisitor() { @Override public boolean processRow(Result rowResult) throws IOException { - HRegionInfo hri = MetaScanner.getHRegionInfo(rowResult); + HRegionInfo hri = MetaTableAccessor.getHRegionInfo(rowResult); if (hri != null && !hri.getTable().isSystemTable()) { Delete delete = new Delete(rowResult.getRow()); - delete.deleteColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER); + delete.addColumn(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER); deletes.add(delete); } return true; @@ -2056,11 +2220,11 @@ public class TestHBaseFsck { // check that reference file fixed assertFalse(hbck.getErrors().getErrorList().contains(ERROR_CODE.EMPTY_META_CELL)); } finally { - deleteTable(table); + cleanupTable(table); } + connection.close(); } - /** * Test pluggable error reporter. It can be plugged in * from system property or configuration. @@ -2247,20 +2411,14 @@ public class TestHBaseFsck { private void deleteMetaRegion(Configuration conf, boolean unassign, boolean hdfs, boolean regionInfoOnly) throws IOException, InterruptedException { - HConnection connection = HConnectionManager.getConnection(conf); - HRegionLocation metaLocation = connection.locateRegion(TableName.META_TABLE_NAME, - HConstants.EMPTY_START_ROW); + HRegionLocation metaLocation = connection.getRegionLocator(TableName.META_TABLE_NAME) + .getRegionLocation(HConstants.EMPTY_START_ROW); ServerName hsa = metaLocation.getServerName(); HRegionInfo hri = metaLocation.getRegionInfo(); if (unassign) { LOG.info("Undeploying meta region " + hri + " from server " + hsa); - Connection unmanagedConnection = ConnectionFactory.createConnection(conf); - HBaseAdmin admin = (HBaseAdmin) unmanagedConnection.getAdmin(); - try { - undeployRegion(admin, hsa, hri); - } finally { - admin.close(); - unmanagedConnection.close(); + try (Connection unmanagedConnection = ConnectionFactory.createConnection(conf)) { + undeployRegion(unmanagedConnection, hsa, hri); } } @@ -2298,12 +2456,12 @@ public class TestHBaseFsck { HTableDescriptor desc = new HTableDescriptor(table); HColumnDescriptor hcd = new HColumnDescriptor(Bytes.toString(FAM)); desc.addFamily(hcd); // If a table has no CF's it doesn't get checked - TEST_UTIL.getHBaseAdmin().createTable(desc); - tbl = new HTable(TEST_UTIL.getConfiguration(), table, executorService); + admin.createTable(desc); + tbl = (HTable) connection.getTable(table, tableExecutorService); // Mess it up by leaving a hole in the assignment, meta, and hdfs data - deleteRegion(conf, tbl.getTableDescriptor(), HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW, false, - false, true); + deleteRegion(conf, tbl.getTableDescriptor(), HConstants.EMPTY_START_ROW, + HConstants.EMPTY_END_ROW, false, false, true); HBaseFsck hbck = doFsck(conf, false); assertErrors(hbck, new ERROR_CODE[] { ERROR_CODE.NOT_IN_HDFS }); @@ -2316,7 +2474,7 @@ public class TestHBaseFsck { // check that hole fixed assertNoErrors(doFsck(conf, false)); } finally { - deleteTable(table); + cleanupTable(table); } } @@ -2332,16 +2490,15 @@ public class TestHBaseFsck { assertEquals(ROWKEYS.length, countRows()); // make sure data in regions, if in wal only there is no data loss - TEST_UTIL.getHBaseAdmin().flush(table); - HRegionInfo region1 = 
tbl.getRegionLocation("A").getRegionInfo(); - HRegionInfo region2 = tbl.getRegionLocation("B").getRegionInfo(); + admin.flush(table); + HRegionInfo region1 = tbl.getRegionLocation(Bytes.toBytes("A")).getRegionInfo(); + HRegionInfo region2 = tbl.getRegionLocation(Bytes.toBytes("B")).getRegionInfo(); int regionCountBeforeMerge = tbl.getRegionLocations().size(); assertNotEquals(region1, region2); // do a region merge - Admin admin = TEST_UTIL.getHBaseAdmin(); admin.mergeRegions(region1.getEncodedNameAsBytes(), region2.getEncodedNameAsBytes(), false); @@ -2364,12 +2521,12 @@ public class TestHBaseFsck { } finally { TEST_UTIL.getHBaseCluster().getMaster().setCatalogJanitorEnabled(true); - deleteTable(table); + cleanupTable(table); IOUtils.closeQuietly(meta); } } - @Test (timeout=180000) + @Test (timeout = 180000) public void testRegionBoundariesCheck() throws Exception { HBaseFsck hbck = doFsck(conf, false); assertNoErrors(hbck); // no errors diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckComparator.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckComparator.java index ca8b1c7..acd62b1 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckComparator.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckComparator.java @@ -23,6 +23,7 @@ import static org.junit.Assert.assertTrue; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.HBaseFsck.HbckInfo; import org.apache.hadoop.hbase.util.HBaseFsck.MetaEntry; @@ -32,7 +33,7 @@ import org.junit.experimental.categories.Category; /** * Test the comparator used by Hbck. 
*/ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestHBaseFsckComparator { TableName table = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckEncryption.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckEncryption.java index cd8c885..27de51d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckEncryption.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsckEncryption.java @@ -34,7 +34,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Put; @@ -49,6 +48,8 @@ import org.apache.hadoop.hbase.regionserver.Store; import org.apache.hadoop.hbase.regionserver.StoreFile; import org.apache.hadoop.hbase.security.EncryptionUtil; import org.apache.hadoop.hbase.security.User; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.hbck.HFileCorruptionChecker; import org.apache.hadoop.hbase.util.hbck.HbckTestingUtil; @@ -57,7 +58,7 @@ import org.junit.Before; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({MiscTests.class, LargeTests.class}) public class TestHBaseFsckEncryption { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHFileArchiveUtil.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHFileArchiveUtil.java index efa3c79..ab14c41 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHFileArchiveUtil.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHFileArchiveUtil.java @@ -23,6 +23,7 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.HRegionInfo; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -32,7 +33,7 @@ import java.io.IOException; /** * Test that the utility works as expected */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestHFileArchiveUtil { private Path rootDir = new Path("./"); @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestIdLock.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestIdLock.java index 5ad9a7a..fbfbb47 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestIdLock.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestIdLock.java @@ -34,10 +34,11 @@ import java.util.concurrent.TimeUnit; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) // Medium as it creates 100 threads; seems better to run it 
isolated public class TestIdLock { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestIncrementingEnvironmentEdge.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestIncrementingEnvironmentEdge.java index e85e118..4650ced 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestIncrementingEnvironmentEdge.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestIncrementingEnvironmentEdge.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hbase.util; import static junit.framework.Assert.assertEquals; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -28,7 +29,7 @@ import org.junit.experimental.categories.Category; * Tests that the incrementing environment edge increments time instead of using * the default. */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestIncrementingEnvironmentEdge { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTable.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTable.java index 04fa5bf..89dfbc1 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTable.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTable.java @@ -37,13 +37,14 @@ import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.junit.Test; import org.junit.experimental.categories.Category; /** * Tests merging a normal table's regions */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestMergeTable { private static final Log LOG = LogFactory.getLog(TestMergeTable.class); private final HBaseTestingUtility UTIL = new HBaseTestingUtility(); @@ -116,16 +117,14 @@ public class TestMergeTable { Connection connection = HConnectionManager.getConnection(c); List originalTableRegions = - MetaTableAccessor.getTableRegions(UTIL.getZooKeeperWatcher(), connection, - desc.getTableName()); + MetaTableAccessor.getTableRegions(connection, desc.getTableName()); LOG.info("originalTableRegions size=" + originalTableRegions.size() + "; " + originalTableRegions); Admin admin = new HBaseAdmin(c); admin.disableTable(desc.getTableName()); HMerge.merge(c, FileSystem.get(c), desc.getTableName()); List postMergeTableRegions = - MetaTableAccessor.getTableRegions(UTIL.getZooKeeperWatcher(), connection, - desc.getTableName()); + MetaTableAccessor.getTableRegions(connection, desc.getTableName()); LOG.info("postMergeTableRegions size=" + postMergeTableRegions.size() + "; " + postMergeTableRegions); assertTrue("originalTableRegions=" + originalTableRegions.size() + diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTool.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTool.java index d4c7fa8..0f6d697 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTool.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTool.java @@ -16,7 +16,6 @@ * See the License for the specific language governing permissions and * limitations under the License. 
*/ - package org.apache.hadoop.hbase.util; import java.io.IOException; @@ -34,7 +33,7 @@ import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.TableDescriptor; import org.apache.hadoop.hbase.client.Get; import org.apache.hadoop.hbase.client.Put; import org.apache.hadoop.hbase.client.Result; @@ -43,12 +42,17 @@ import org.apache.hadoop.hbase.regionserver.HRegion; import org.apache.hadoop.hbase.regionserver.InternalScanner; import org.apache.hadoop.hbase.wal.WAL; import org.apache.hadoop.hbase.wal.WALFactory; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hdfs.MiniDFSCluster; import org.apache.hadoop.util.ToolRunner; +import org.junit.After; +import org.junit.Before; +import org.junit.Test; import org.junit.experimental.categories.Category; /** Test stand alone merge tool that can merge arbitrary regions */ -@Category(LargeTests.class) +@Category({MiscTests.class, LargeTests.class}) public class TestMergeTool extends HBaseTestCase { static final Log LOG = LogFactory.getLog(TestMergeTool.class); HBaseTestingUtility TEST_UTIL; @@ -63,7 +67,7 @@ public class TestMergeTool extends HBaseTestCase { private MiniDFSCluster dfsCluster = null; private WALFactory wals; - @Override + @Before public void setUp() throws Exception { // Set the timeout down else this test will take a while to complete. this.conf.setLong("hbase.zookeeper.recoverable.waittime", 10); @@ -145,14 +149,14 @@ public class TestMergeTool extends HBaseTestCase { try { // Create meta region createMetaRegion(); - new FSTableDescriptors(conf, this.fs, this.testDir).createTableDescriptor(this.desc); + new FSTableDescriptors(this.conf, this.fs, testDir).createTableDescriptor( + new TableDescriptor(this.desc)); /* * Create the regions we will merge */ for (int i = 0; i < sourceRegions.length; i++) { regions[i] = - HRegion.createHRegion(this.sourceRegions[i], this.testDir, this.conf, - this.desc); + HRegion.createHRegion(this.sourceRegions[i], testDir, this.conf, this.desc); /* * Insert data */ @@ -173,7 +177,7 @@ public class TestMergeTool extends HBaseTestCase { } } - @Override + @After public void tearDown() throws Exception { super.tearDown(); for (int i = 0; i < sourceRegions.length; i++) { @@ -254,6 +258,7 @@ public class TestMergeTool extends HBaseTestCase { * Test merge tool. * @throws Exception */ + @Test public void testMergeTool() throws Exception { // First verify we can read the rows from the source regions and that they // contain the right data. 
diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadEncoded.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadEncoded.java index 09b996f..0cf4609 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadEncoded.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadEncoded.java @@ -20,8 +20,9 @@ import java.util.ArrayList; import java.util.Collection; import java.util.List; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.junit.experimental.categories.Category; import org.junit.runners.Parameterized.Parameters; @@ -31,7 +32,7 @@ import org.junit.runners.Parameterized.Parameters; * amount of data, but goes through all available data block encoding * algorithms. */ -@Category(LargeTests.class) +@Category({MiscTests.class, LargeTests.class}) public class TestMiniClusterLoadEncoded extends TestMiniClusterLoadParallel { /** We do not alternate the multi-put flag in this test. */ diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadParallel.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadParallel.java index 9b1833f..7b1cd2d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadParallel.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadParallel.java @@ -18,8 +18,9 @@ package org.apache.hadoop.hbase.util; import static org.junit.Assert.assertEquals; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.junit.Test; import org.junit.experimental.categories.Category; import org.junit.runner.RunWith; @@ -29,7 +30,7 @@ import org.junit.runners.Parameterized; * A write/read/verify load test on a mini HBase cluster. Tests reading * and writing at the same time. 
*/ -@Category(LargeTests.class) +@Category({MiscTests.class, LargeTests.class}) @RunWith(Parameterized.class) public class TestMiniClusterLoadParallel extends TestMiniClusterLoadSequential { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadSequential.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadSequential.java index 7d284f9..d10ce1e 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadSequential.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMiniClusterLoadSequential.java @@ -31,12 +31,13 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableNotFoundException; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HBaseAdmin; import org.apache.hadoop.hbase.io.compress.Compression; import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.test.LoadTestDataGenerator; import org.junit.After; import org.junit.Before; @@ -50,7 +51,7 @@ import org.junit.runners.Parameterized.Parameters; * A write/read/verify load test on a mini HBase cluster. Tests reading * and then writing. */ -@Category(LargeTests.class) +@Category({MiscTests.class, LargeTests.class}) @RunWith(Parameterized.class) public class TestMiniClusterLoadSequential { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestPoolMap.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestPoolMap.java index 2bef699..b229e91 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestPoolMap.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestPoolMap.java @@ -26,6 +26,7 @@ import java.util.concurrent.atomic.AtomicBoolean; import junit.framework.TestCase; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.PoolMap.PoolType; import org.junit.experimental.categories.Category; @@ -33,8 +34,9 @@ import org.junit.runner.RunWith; import org.junit.runners.Suite; @RunWith(Suite.class) -@Suite.SuiteClasses({TestPoolMap.TestRoundRobinPoolType.class, TestPoolMap.TestThreadLocalPoolType.class, TestPoolMap.TestReusablePoolType.class}) -@Category(SmallTests.class) +@Suite.SuiteClasses({TestPoolMap.TestRoundRobinPoolType.class, TestPoolMap.TestThreadLocalPoolType.class, + TestPoolMap.TestReusablePoolType.class}) +@Category({MiscTests.class, SmallTests.class}) public class TestPoolMap { public abstract static class TestPoolType extends TestCase { protected PoolMap poolMap; @@ -72,7 +74,7 @@ public class TestPoolMap { } } - @Category(SmallTests.class) + @Category({MiscTests.class, SmallTests.class}) public static class TestRoundRobinPoolType extends TestPoolType { @Override protected PoolType getPoolType() { @@ -134,7 +136,7 @@ public class TestPoolMap { } - @Category(SmallTests.class) + @Category({MiscTests.class, SmallTests.class}) public static class TestThreadLocalPoolType extends TestPoolType { @Override protected PoolType getPoolType() { @@ -179,7 +181,7 @@ public class TestPoolMap { } - @Category(SmallTests.class) + 
@Category({MiscTests.class, SmallTests.class}) public static class TestReusablePoolType extends TestPoolType { @Override protected PoolType getPoolType() { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestProcessBasedCluster.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestProcessBasedCluster.java index 76e04e8..e8d22b8 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestProcessBasedCluster.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestProcessBasedCluster.java @@ -30,15 +30,16 @@ import org.apache.hadoop.hbase.HTestConst; import org.apache.hadoop.hbase.client.HTable; import org.apache.hadoop.hbase.client.Result; import org.apache.hadoop.hbase.client.ResultScanner; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.junit.Test; import org.junit.experimental.categories.Category; /** * A basic unit test that spins up a local HBase cluster. */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestProcessBasedCluster { private static final Log LOG = LogFactory.getLog(TestProcessBasedCluster.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSizeCalculator.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSizeCalculator.java index f3d3359..8b74112 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSizeCalculator.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSizeCalculator.java @@ -24,9 +24,10 @@ import org.apache.hadoop.hbase.HRegionLocation; import org.apache.hadoop.hbase.RegionLoad; import org.apache.hadoop.hbase.ServerLoad; import org.apache.hadoop.hbase.ServerName; -import org.apache.hadoop.hbase.testclassification.SmallTests; -import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.Admin; +import org.apache.hadoop.hbase.testclassification.MiscTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.client.RegionLocator; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -42,7 +43,7 @@ import static org.junit.Assert.assertEquals; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.when; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestRegionSizeCalculator { private Configuration configuration = new Configuration(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSplitCalculator.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSplitCalculator.java index 8101d67..c35491d 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSplitCalculator.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSplitCalculator.java @@ -30,6 +30,7 @@ import java.util.UUID; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; @@ -38,7 +39,7 @@ import com.google.common.collect.Multimap; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class 
TestRegionSplitCalculator { private static final Log LOG = LogFactory.getLog(TestRegionSplitCalculator.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSplitter.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSplitter.java index 475ee19..432c4b3 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSplitter.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRegionSplitter.java @@ -34,10 +34,11 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.RegionSplitter.HexStringSplit; import org.apache.hadoop.hbase.util.RegionSplitter.SplitAlgorithm; import org.apache.hadoop.hbase.util.RegionSplitter.UniformSplit; @@ -50,7 +51,7 @@ import org.junit.experimental.categories.Category; * Tests for {@link RegionSplitter}, which can create a pre-split table or do a * rolling split of an existing table. */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestRegionSplitter { private final static Log LOG = LogFactory.getLog(TestRegionSplitter.class); private final static HBaseTestingUtility UTIL = new HBaseTestingUtility(); @@ -207,6 +208,9 @@ public class TestRegionSplitter { xFF, xFF, xFF}, lastRow); assertArrayEquals(splitPoint, new byte[] {(byte)0xef, xFF, xFF, xFF, xFF, xFF, xFF, xFF}); + + splitPoint = splitter.split(new byte[] {'a', 'a', 'a'}, new byte[] {'a', 'a', 'b'}); + assertArrayEquals(splitPoint, new byte[] {'a', 'a', 'a', (byte)0x80 }); } @Test @@ -227,7 +231,7 @@ public class TestRegionSplitter { assertTrue(splitFailsPrecondition(algo, "\\xAA", "\\xAA")); // range error assertFalse(splitFailsPrecondition(algo, "\\x00", "\\x02", 3)); // should be fine assertFalse(splitFailsPrecondition(algo, "\\x00", "\\x0A", 11)); // should be fine - assertTrue(splitFailsPrecondition(algo, "\\x00", "\\x0A", 12)); // too granular + assertFalse(splitFailsPrecondition(algo, "\\x00", "\\x0A", 12)); // should be fine } private boolean splitFailsPrecondition(SplitAlgorithm algo) { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRootPath.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRootPath.java index 860d840..1ecfa2b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRootPath.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestRootPath.java @@ -27,13 +27,14 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.experimental.categories.Category; /** * Test requirement that root directory must be a URI */ -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestRootPath extends TestCase { private static final Log LOG = LogFactory.getLog(TestRootPath.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestSortedCopyOnWriteSet.java 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestSortedCopyOnWriteSet.java index 9db78eb..839d1cc 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestSortedCopyOnWriteSet.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestSortedCopyOnWriteSet.java @@ -24,11 +24,12 @@ import static org.junit.Assert.*; import java.util.Iterator; import com.google.common.collect.Lists; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestSortedCopyOnWriteSet { @Test diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestTableName.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestTableName.java index 605ee68..94070f3 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestTableName.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestTableName.java @@ -27,6 +27,7 @@ import static org.junit.Assert.assertNotEquals; import static org.junit.Assert.assertSame; import static org.junit.Assert.fail; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.junit.Test; @@ -37,7 +38,7 @@ import org.junit.runner.Description; /** * Returns a {@code byte[]} containing the name of the currently running test method. */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestTableName extends TestWatcher { private TableName tableName; @@ -153,7 +154,7 @@ public class TestTableName extends TestWatcher { @Test public void testValueOf() { - Map inCache = new HashMap(); + Map inCache = new HashMap<>(); // fill cache for (Names name : names) { inCache.put(name.nn, TableName.valueOf(name.ns, name.tn)); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/HbckTestingUtil.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/HbckTestingUtil.java index 1f6ec70..217f60b 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/HbckTestingUtil.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/HbckTestingUtil.java @@ -40,7 +40,7 @@ public class HbckTestingUtil { public static HBaseFsck doFsck( Configuration conf, boolean fix, TableName table) throws Exception { - return doFsck(conf, fix, fix, fix, fix,fix, fix, fix, fix, fix, fix, table); + return doFsck(conf, fix, fix, fix, fix, fix, fix, fix, fix, fix, fix, table); } public static HBaseFsck doFsck(Configuration conf, boolean fixAssignments, @@ -66,6 +66,7 @@ public class HbckTestingUtil { fsck.includeTable(table); } fsck.onlineHbck(); + fsck.close(); return fsck; } diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/OfflineMetaRebuildTestCore.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/OfflineMetaRebuildTestCore.java index 8319625..c5aaf90 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/OfflineMetaRebuildTestCore.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/OfflineMetaRebuildTestCore.java @@ -36,7 +36,6 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; -import 
org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.NamespaceDescriptor; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.MetaTableAccessor; @@ -52,6 +51,8 @@ import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Scan; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.regionserver.HRegionFileSystem; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.zookeeper.KeeperException; @@ -72,7 +73,7 @@ import org.junit.experimental.categories.Category; * since minicluster startup and tear downs seem to leak file handles and * eventually cause out of file handle exceptions. */ -@Category(LargeTests.class) +@Category({MiscTests.class, LargeTests.class}) public class OfflineMetaRebuildTestCore { protected final static Log LOG = LogFactory .getLog(OfflineMetaRebuildTestCore.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildBase.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildBase.java index f4a035f..a3d323c 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildBase.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildBase.java @@ -25,21 +25,23 @@ import static org.junit.Assert.assertTrue; import java.util.Arrays; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Connection; import org.apache.hadoop.hbase.client.ConnectionFactory; +import org.apache.hadoop.hbase.client.HConnectionManager; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.HBaseFsck; import org.apache.hadoop.hbase.util.HBaseFsck.ErrorReporter.ERROR_CODE; import org.junit.Test; import org.junit.experimental.categories.Category; - /** * This builds a table, removes info from meta, and then rebuilds meta. 
*/ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestOfflineMetaRebuildBase extends OfflineMetaRebuildTestCore { + @SuppressWarnings("deprecation") @Test(timeout = 120000) public void testMetaRebuild() throws Exception { wipeOutMeta(); @@ -84,4 +86,4 @@ public class TestOfflineMetaRebuildBase extends OfflineMetaRebuildTestCore { LOG.info("Table " + table + " has " + tableRowCount(conf, table) + " entries."); assertEquals(16, tableRowCount(conf, table)); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildHole.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildHole.java index b8ec604..6320b93 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildHole.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildHole.java @@ -24,13 +24,11 @@ import static org.junit.Assert.assertFalse; import java.util.Arrays; -import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.HBaseFsck; import org.apache.hadoop.hbase.util.HBaseFsck.ErrorReporter.ERROR_CODE; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -38,7 +36,7 @@ import org.junit.experimental.categories.Category; * This builds a table, removes info from meta, and then fails when attempting * to rebuild meta. */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestOfflineMetaRebuildHole extends OfflineMetaRebuildTestCore { @Test(timeout = 120000) @@ -66,15 +64,14 @@ public class TestOfflineMetaRebuildHole extends OfflineMetaRebuildTestCore { // attempt to rebuild meta table from scratch HBaseFsck fsck = new HBaseFsck(conf); assertFalse(fsck.rebuildMeta(false)); + fsck.close(); // bring up the minicluster TEST_UTIL.startMiniZKCluster(); // tables seem enabled by default TEST_UTIL.restartHBaseCluster(3); - ZooKeeperWatcher zkw = HBaseTestingUtility.getZooKeeperWatcher(TEST_UTIL); - LOG.info("Waiting for no more RIT"); - ZKAssign.blockUntilNoRIT(zkw); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); LOG.info("No more RIT in ZK, now doing final test verification"); int tries = 60; while(TEST_UTIL.getHBaseCluster() @@ -98,4 +95,4 @@ public class TestOfflineMetaRebuildHole extends OfflineMetaRebuildTestCore { ERROR_CODE.NOT_IN_META_OR_DEPLOYED, ERROR_CODE.NOT_IN_META_OR_DEPLOYED}); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildOverlap.java hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildOverlap.java index 69a962b..e49b154 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildOverlap.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/TestOfflineMetaRebuildOverlap.java @@ -24,14 +24,12 @@ import static org.junit.Assert.assertFalse; import java.util.Arrays; -import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.testclassification.MediumTests; +import 
org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.HBaseFsck; import org.apache.hadoop.hbase.util.HBaseFsck.ErrorReporter.ERROR_CODE; import org.apache.hadoop.hbase.util.HBaseFsck.HbckInfo; -import org.apache.hadoop.hbase.zookeeper.ZKAssign; -import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; import org.junit.Test; import org.junit.experimental.categories.Category; @@ -41,7 +39,7 @@ import com.google.common.collect.Multimap; * This builds a table, builds an overlap, and then fails when attempting to * rebuild meta. */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestOfflineMetaRebuildOverlap extends OfflineMetaRebuildTestCore { @Test(timeout = 120000) @@ -80,10 +78,8 @@ public class TestOfflineMetaRebuildOverlap extends OfflineMetaRebuildTestCore { TEST_UTIL.startMiniZKCluster(); // tables seem enabled by default TEST_UTIL.restartHBaseCluster(3); - ZooKeeperWatcher zkw = HBaseTestingUtility.getZooKeeperWatcher(TEST_UTIL); - LOG.info("Waiting for no more RIT"); - ZKAssign.blockUntilNoRIT(zkw); + TEST_UTIL.waitUntilNoRegionsInTransition(60000); LOG.info("No more RIT in ZK, now doing final test verification"); int tries = 60; while(TEST_UTIL.getHBaseCluster() @@ -109,4 +105,4 @@ public class TestOfflineMetaRebuildOverlap extends OfflineMetaRebuildTestCore { ERROR_CODE.NOT_IN_META_OR_DEPLOYED, ERROR_CODE.NOT_IN_META_OR_DEPLOYED}); } -} +} \ No newline at end of file diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/wal/IOTestProvider.java hbase-server/src/test/java/org/apache/hadoop/hbase/wal/IOTestProvider.java new file mode 100644 index 0000000..d2581a1 --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/wal/IOTestProvider.java @@ -0,0 +1,233 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.wal; + +import java.io.IOException; +import java.util.Collection; +import java.util.List; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.util.FSUtils; +import org.apache.hadoop.hbase.wal.WAL.Entry; + +import static org.apache.hadoop.hbase.wal.DefaultWALProvider.DEFAULT_PROVIDER_ID; +import static org.apache.hadoop.hbase.wal.DefaultWALProvider.META_WAL_PROVIDER_ID; +import static org.apache.hadoop.hbase.wal.DefaultWALProvider.WAL_FILE_NAME_DELIMITER; + + +// imports for things that haven't moved from regionserver.wal yet. 
+import org.apache.hadoop.hbase.regionserver.wal.FSHLog; +import org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter; +import org.apache.hadoop.hbase.regionserver.wal.WALActionsListener; + +/** + * A WAL Provider that returns a single thread safe WAL that optionally can skip parts of our + * normal interactions with HDFS. + * + * This implementation picks a directory in HDFS based on the same mechanisms as the + * {@link DefaultWALProvider}. Users can configure how much interaction + * we have with HDFS with the configuration property "hbase.wal.iotestprovider.operations". + * The value should be a comma separated list of allowed operations: + *
+ * <ul>
+ *   <li>append   : edits will be written to the underlying filesystem</li>
+ *   <li>sync     : wal syncs will result in hflush calls</li>
+ *   <li>fileroll : roll requests will result in creating a new file on the underlying
+ *                  filesystem.</li>
+ * </ul>
+ * Additionally, the special cases "all" and "none" are recognized.
+ * If omitted, the value defaults to "all."
+ * Behavior is undefined if "all" or "none" are paired with additional values. Behavior is also
+ * undefined if values not listed above are included.
+ *
+ * Only those operations listed will occur between the returned WAL and HDFS. All others
+ * will be no-ops.
+ *
+ * Note that in the case of allowing "append" operations but not allowing "fileroll", the returned
+ * WAL will just keep writing to the same file. This won't avoid all costs associated with file
+ * management over time, because the data set size may result in additional HDFS block allocations.
+ *
+ */
+@InterfaceAudience.Private
+public class IOTestProvider implements WALProvider {
+  private static final Log LOG = LogFactory.getLog(IOTestProvider.class);
+
+  private static final String ALLOWED_OPERATIONS = "hbase.wal.iotestprovider.operations";
+  private enum AllowedOperations {
+    all,
+    append,
+    sync,
+    fileroll,
+    none;
+  }
+
+  private FSHLog log = null;
+
+  /**
+   * @param factory factory that made us, identity used for FS layout. may not be null
+   * @param conf may not be null
+   * @param listeners may be null
+   * @param providerId differentiate between providers from one factory, used for FS layout. may be
+   *                   null
+   */
+  @Override
+  public void init(final WALFactory factory, final Configuration conf,
+      final List<WALActionsListener> listeners, String providerId) throws IOException {
+    if (null != log) {
+      throw new IllegalStateException("WALProvider.init should only be called once.");
+    }
+    if (null == providerId) {
+      providerId = DEFAULT_PROVIDER_ID;
+    }
+    final String logPrefix = factory.factoryId + WAL_FILE_NAME_DELIMITER + providerId;
+    log = new IOTestWAL(FileSystem.get(conf), FSUtils.getRootDir(conf),
+        DefaultWALProvider.getWALDirectoryName(factory.factoryId),
+        HConstants.HREGION_OLDLOGDIR_NAME, conf, listeners,
+        true, logPrefix, META_WAL_PROVIDER_ID.equals(providerId) ? META_WAL_PROVIDER_ID : null);
+  }
+
+  @Override
+  public WAL getWAL(final byte[] identifier) throws IOException {
+    return log;
+  }
+
+  @Override
+  public void close() throws IOException {
+    log.close();
+  }
+
+  @Override
+  public void shutdown() throws IOException {
+    log.shutdown();
+  }
+
+  private static class IOTestWAL extends FSHLog {
+
+    private final boolean doFileRolls;
+
+    // Used to differentiate between roll calls before and after we finish construction.
+    private final boolean initialized;
+
+    /**
+     * Create an edit log at the given dir location.
+     *
+     * You should never have to load an existing log. If there is a log at
+     * startup, it should have already been processed and deleted by the time the
+     * WAL object is started up.
+     *
+     * @param fs filesystem handle
+     * @param rootDir path to where logs and oldlogs
+     * @param logDir dir where wals are stored
+     * @param archiveDir dir where wals are archived
+     * @param conf configuration to use
+     * @param listeners Listeners on WAL events. Listeners passed here will
+     *                  be registered before we do anything else; e.g. the
+     *                  Constructor {@link #rollWriter()}.
+     * @param failIfWALExists If true IOException will be thrown if files related to this wal
+     *                        already exist.
+     * @param prefix should always be hostname and port in distributed env and
+     *               it will be URL encoded before being used.
+     *               If prefix is null, "wal" will be used
+     * @param suffix will be url encoded. null is treated as empty.
non-empty must start with + * {@link DefaultWALProvider#WAL_FILE_NAME_DELIMITER} + * @throws IOException + */ + public IOTestWAL(final FileSystem fs, final Path rootDir, final String logDir, + final String archiveDir, final Configuration conf, + final List listeners, + final boolean failIfWALExists, final String prefix, final String suffix) + throws IOException { + super(fs, rootDir, logDir, archiveDir, conf, listeners, failIfWALExists, prefix, suffix); + Collection operations = conf.getStringCollection(ALLOWED_OPERATIONS); + doFileRolls = operations.isEmpty() || operations.contains(AllowedOperations.all.name()) || + operations.contains(AllowedOperations.fileroll.name()); + initialized = true; + LOG.info("Initialized with file rolling " + (doFileRolls ? "enabled" : "disabled")); + } + + private Writer noRollsWriter; + + // creatWriterInstance is where the new pipeline is set up for doing file rolls + // if we are skipping it, just keep returning the same writer. + @Override + protected Writer createWriterInstance(final Path path) throws IOException { + // we get called from the FSHLog constructor (!); always roll in this case since + // we don't know yet if we're supposed to generally roll and + // we need an initial file in the case of doing appends but no rolls. + if (!initialized || doFileRolls) { + LOG.info("creating new writer instance."); + final ProtobufLogWriter writer = new IOTestWriter(); + writer.init(fs, path, conf, false); + if (!initialized) { + LOG.info("storing initial writer instance in case file rolling isn't allowed."); + noRollsWriter = writer; + } + return writer; + } else { + LOG.info("WAL rolling disabled, returning the first writer."); + // Initial assignment happens during the constructor call, so there ought not be + // a race for first assignment. + return noRollsWriter; + } + } + } + + /** + * Presumes init will be called by a single thread prior to any access of other methods. + */ + private static class IOTestWriter extends ProtobufLogWriter { + private boolean doAppends; + private boolean doSyncs; + + @Override + public void init(FileSystem fs, Path path, Configuration conf, boolean overwritable) throws IOException { + Collection operations = conf.getStringCollection(ALLOWED_OPERATIONS); + if (operations.isEmpty() || operations.contains(AllowedOperations.all.name())) { + doAppends = doSyncs = true; + } else if (operations.contains(AllowedOperations.none.name())) { + doAppends = doSyncs = false; + } else { + doAppends = operations.contains(AllowedOperations.append.name()); + doSyncs = operations.contains(AllowedOperations.sync.name()); + } + LOG.info("IOTestWriter initialized with appends " + (doAppends ? "enabled" : "disabled") + + " and syncs " + (doSyncs ? "enabled" : "disabled")); + super.init(fs, path, conf, overwritable); + } + + @Override + public void append(Entry entry) throws IOException { + if (doAppends) { + super.append(entry); + } + } + + @Override + public void sync() throws IOException { + if (doSyncs) { + super.sync(); + } + } + } +} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestBoundedRegionGroupingProvider.java hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestBoundedRegionGroupingProvider.java new file mode 100644 index 0000000..1c7813b --- /dev/null +++ hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestBoundedRegionGroupingProvider.java @@ -0,0 +1,183 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
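For readers trying out the IOTestProvider added above, here is a minimal sketch of how a test might select it and restrict it to appends and syncs. It only reuses configuration keys and calls that appear elsewhere in this patch (WALFactory.WAL_PROVIDER, the "hbase.wal.iotestprovider.operations" property, the three-argument WALFactory constructor, getWAL and close); the class name IOTestProviderExample and the factory id "iotest-example" are illustrative and not part of the patch.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.wal.IOTestProvider;
import org.apache.hadoop.hbase.wal.WAL;
import org.apache.hadoop.hbase.wal.WALFactory;
import org.apache.hadoop.hbase.wal.WALProvider;

public class IOTestProviderExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Route WAL creation through the provider added in this patch.
    conf.setClass(WALFactory.WAL_PROVIDER, IOTestProvider.class, WALProvider.class);
    // Allow only append and sync against HDFS; roll requests become no-ops.
    conf.set("hbase.wal.iotestprovider.operations", "append,sync");
    WALFactory wals = new WALFactory(conf, null, "iotest-example");
    WAL wal = wals.getWAL(Bytes.toBytes("example-region"));
    // ... append and sync edits here, as the WAL tests in this patch do ...
    wals.close();
  }
}
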
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.wal; + +import java.io.IOException; +import java.util.HashSet; +import java.util.Random; +import java.util.Set; + +import static org.junit.Assert.assertEquals; +import static org.apache.hadoop.hbase.wal.BoundedRegionGroupingProvider.NUM_REGION_GROUPS; +import static org.apache.hadoop.hbase.wal.BoundedRegionGroupingProvider.DEFAULT_NUM_REGION_GROUPS; +import static org.apache.hadoop.hbase.wal.WALFactory.WAL_PROVIDER; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; +import org.apache.hadoop.hbase.util.Bytes; +import org.junit.After; +import org.junit.AfterClass; +import org.junit.Before; +import org.junit.BeforeClass; +import org.junit.Rule; +import org.junit.Test; +import org.junit.experimental.categories.Category; +import org.junit.rules.TestName; + +@Category({RegionServerTests.class, LargeTests.class}) +public class TestBoundedRegionGroupingProvider { + protected static final Log LOG = LogFactory.getLog(TestBoundedRegionGroupingProvider.class); + + @Rule + public TestName currentTest = new TestName(); + protected static Configuration conf; + protected static FileSystem fs; + protected final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); + + @Before + public void setUp() throws Exception { + FileStatus[] entries = fs.listStatus(new Path("/")); + for (FileStatus dir : entries) { + fs.delete(dir.getPath(), true); + } + } + + @After + public void tearDown() throws Exception { + } + + @BeforeClass + public static void setUpBeforeClass() throws Exception { + conf = TEST_UTIL.getConfiguration(); + // Make block sizes small. 
+ conf.setInt("dfs.blocksize", 1024 * 1024); + // quicker heartbeat interval for faster DN death notification + conf.setInt("dfs.namenode.heartbeat.recheck-interval", 5000); + conf.setInt("dfs.heartbeat.interval", 1); + conf.setInt("dfs.client.socket-timeout", 5000); + + // faster failover with cluster.shutdown();fs.close() idiom + conf.setInt("hbase.ipc.client.connect.max.retries", 1); + conf.setInt("dfs.client.block.recovery.retries", 1); + conf.setInt("hbase.ipc.client.connection.maxidletime", 500); + + conf.setClass(WAL_PROVIDER, BoundedRegionGroupingProvider.class, WALProvider.class); + + TEST_UTIL.startMiniDFSCluster(3); + + fs = TEST_UTIL.getDFSCluster().getFileSystem(); + } + + @AfterClass + public static void tearDownAfterClass() throws Exception { + TEST_UTIL.shutdownMiniCluster(); + } + + /** + * Write to a log file with three concurrent threads and verifying all data is written. + */ + @Test + public void testConcurrentWrites() throws Exception { + // Run the WPE tool with three threads writing 3000 edits each concurrently. + // When done, verify that all edits were written. + int errCode = WALPerformanceEvaluation.innerMain(new Configuration(conf), + new String [] {"-threads", "3", "-verify", "-noclosefs", "-iterations", "3000"}); + assertEquals(0, errCode); + } + + /** + * Make sure we can successfully run with more regions then our bound. + */ + @Test + public void testMoreRegionsThanBound() throws Exception { + final String parallelism = Integer.toString(DEFAULT_NUM_REGION_GROUPS * 2); + int errCode = WALPerformanceEvaluation.innerMain(new Configuration(conf), + new String [] {"-threads", parallelism, "-verify", "-noclosefs", "-iterations", "3000", + "-regions", parallelism}); + assertEquals(0, errCode); + } + + @Test + public void testBoundsGreaterThanDefault() throws Exception { + final int temp = conf.getInt(NUM_REGION_GROUPS, DEFAULT_NUM_REGION_GROUPS); + try { + conf.setInt(NUM_REGION_GROUPS, temp*4); + final String parallelism = Integer.toString(temp*4); + int errCode = WALPerformanceEvaluation.innerMain(new Configuration(conf), + new String [] {"-threads", parallelism, "-verify", "-noclosefs", "-iterations", "3000", + "-regions", parallelism}); + assertEquals(0, errCode); + } finally { + conf.setInt(NUM_REGION_GROUPS, temp); + } + } + + @Test + public void testMoreRegionsThanBoundWithBoundsGreaterThanDefault() throws Exception { + final int temp = conf.getInt(NUM_REGION_GROUPS, DEFAULT_NUM_REGION_GROUPS); + try { + conf.setInt(NUM_REGION_GROUPS, temp*4); + final String parallelism = Integer.toString(temp*4*2); + int errCode = WALPerformanceEvaluation.innerMain(new Configuration(conf), + new String [] {"-threads", parallelism, "-verify", "-noclosefs", "-iterations", "3000", + "-regions", parallelism}); + assertEquals(0, errCode); + } finally { + conf.setInt(NUM_REGION_GROUPS, temp); + } + } + + /** + * Ensure that we can use Set.add to deduplicate WALs + */ + @Test + public void setMembershipDedups() throws IOException { + final int temp = conf.getInt(NUM_REGION_GROUPS, DEFAULT_NUM_REGION_GROUPS); + WALFactory wals = null; + try { + conf.setInt(NUM_REGION_GROUPS, temp*4); + wals = new WALFactory(conf, null, currentTest.getMethodName()); + final Set seen = new HashSet(temp*4); + final Random random = new Random(); + int count = 0; + // we know that this should see one of the wals more than once + for (int i = 0; i < temp*8; i++) { + final WAL maybeNewWAL = wals.getWAL(Bytes.toBytes(random.nextInt())); + LOG.info("Iteration " + i + ", checking wal " + maybeNewWAL); + if 
(seen.add(maybeNewWAL)) { + count++; + } + } + assertEquals("received back a different number of WALs that are not equal() to each other " + + "than the bound we placed.", temp*4, count); + } finally { + if (wals != null) { + wals.close(); + } + conf.setInt(NUM_REGION_GROUPS, temp); + } + } +} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestDefaultWALProvider.java hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestDefaultWALProvider.java index 28cd849..df8ceaf 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestDefaultWALProvider.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestDefaultWALProvider.java @@ -41,9 +41,10 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.junit.After; @@ -58,7 +59,7 @@ import org.junit.rules.TestName; // imports for things that haven't moved from regionserver.wal yet. import org.apache.hadoop.hbase.regionserver.wal.WALEdit; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestDefaultWALProvider { protected static final Log LOG = LogFactory.getLog(TestDefaultWALProvider.class); @@ -146,18 +147,15 @@ public class TestDefaultWALProvider { } - protected void addEdits(WAL log, HRegionInfo hri, TableName tableName, + protected void addEdits(WAL log, HRegionInfo hri, HTableDescriptor htd, int times, AtomicLong sequenceId) throws IOException { - HTableDescriptor htd = new HTableDescriptor(); - htd.addFamily(new HColumnDescriptor("row")); - - final byte [] row = Bytes.toBytes("row"); + final byte[] row = Bytes.toBytes("row"); for (int i = 0; i < times; i++) { long timestamp = System.currentTimeMillis(); WALEdit cols = new WALEdit(); cols.add(new KeyValue(row, row, row, timestamp, row)); - log.append(htd, hri, getWalKey(hri.getEncodedNameAsBytes(), tableName, timestamp), cols, - sequenceId, true, null); + log.append(htd, hri, getWalKey(hri.getEncodedNameAsBytes(), htd.getTableName(), timestamp), + cols, sequenceId, true, null); } log.sync(); } @@ -174,8 +172,8 @@ public class TestDefaultWALProvider { * @param wal * @param regionEncodedName */ - protected void flushRegion(WAL wal, byte[] regionEncodedName) { - wal.startCacheFlush(regionEncodedName); + protected void flushRegion(WAL wal, byte[] regionEncodedName, Set flushedFamilyNames) { + wal.startCacheFlush(regionEncodedName, flushedFamilyNames); wal.completeCacheFlush(regionEncodedName); } @@ -184,45 +182,47 @@ public class TestDefaultWALProvider { @Test public void testLogCleaning() throws Exception { LOG.info("testLogCleaning"); - final TableName tableName = - TableName.valueOf("testLogCleaning"); - final TableName tableName2 = - TableName.valueOf("testLogCleaning2"); + final HTableDescriptor htd = + new HTableDescriptor(TableName.valueOf("testLogCleaning")).addFamily(new HColumnDescriptor( + "row")); + final HTableDescriptor htd2 = + new HTableDescriptor(TableName.valueOf("testLogCleaning2")) + .addFamily(new HColumnDescriptor("row")); final Configuration localConf = new Configuration(conf); 
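The property exercised by TestBoundedRegionGroupingProvider above can be summarized with a short sketch: however many region identifiers are requested, at most NUM_REGION_GROUPS distinct WAL instances come back. The snippet reuses only names taken from that test (BoundedRegionGroupingProvider.NUM_REGION_GROUPS, WALFactory.WAL_PROVIDER, WALFactory#getWAL); the factory id and class name are made up for illustration.

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.wal.BoundedRegionGroupingProvider;
import org.apache.hadoop.hbase.wal.WAL;
import org.apache.hadoop.hbase.wal.WALFactory;
import org.apache.hadoop.hbase.wal.WALProvider;

public class BoundedGroupingExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.setClass(WALFactory.WAL_PROVIDER, BoundedRegionGroupingProvider.class, WALProvider.class);
    // Cap the number of underlying WALs at four.
    conf.setInt(BoundedRegionGroupingProvider.NUM_REGION_GROUPS, 4);
    WALFactory wals = new WALFactory(conf, null, "bounded-grouping-example");
    Set<WAL> distinct = new HashSet<WAL>();
    for (int i = 0; i < 32; i++) {
      // Many regions map onto the same small pool of WALs.
      distinct.add(wals.getWAL(Bytes.toBytes("region-" + i)));
    }
    // With 32 regions and a bound of 4, at most 4 distinct WAL instances are returned.
    System.out.println("distinct WALs handed out: " + distinct.size());
    wals.close();
  }
}
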
localConf.set(WALFactory.WAL_PROVIDER, DefaultWALProvider.class.getName()); final WALFactory wals = new WALFactory(localConf, null, currentTest.getMethodName()); final AtomicLong sequenceId = new AtomicLong(1); try { - HRegionInfo hri = new HRegionInfo(tableName, + HRegionInfo hri = new HRegionInfo(htd.getTableName(), HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW); - HRegionInfo hri2 = new HRegionInfo(tableName2, + HRegionInfo hri2 = new HRegionInfo(htd2.getTableName(), HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW); // we want to mix edits from regions, so pick our own identifier. final WAL log = wals.getWAL(UNSPECIFIED_REGION); // Add a single edit and make sure that rolling won't remove the file // Before HBASE-3198 it used to delete it - addEdits(log, hri, tableName, 1, sequenceId); + addEdits(log, hri, htd, 1, sequenceId); log.rollWriter(); assertEquals(1, DefaultWALProvider.getNumRolledLogFiles(log)); // See if there's anything wrong with more than 1 edit - addEdits(log, hri, tableName, 2, sequenceId); + addEdits(log, hri, htd, 2, sequenceId); log.rollWriter(); assertEquals(2, DefaultWALProvider.getNumRolledLogFiles(log)); // Now mix edits from 2 regions, still no flushing - addEdits(log, hri, tableName, 1, sequenceId); - addEdits(log, hri2, tableName2, 1, sequenceId); - addEdits(log, hri, tableName, 1, sequenceId); - addEdits(log, hri2, tableName2, 1, sequenceId); + addEdits(log, hri, htd, 1, sequenceId); + addEdits(log, hri2, htd2, 1, sequenceId); + addEdits(log, hri, htd, 1, sequenceId); + addEdits(log, hri2, htd2, 1, sequenceId); log.rollWriter(); assertEquals(3, DefaultWALProvider.getNumRolledLogFiles(log)); // Flush the first region, we expect to see the first two files getting // archived. We need to append something or writer won't be rolled. - addEdits(log, hri2, tableName2, 1, sequenceId); - log.startCacheFlush(hri.getEncodedNameAsBytes()); + addEdits(log, hri2, htd2, 1, sequenceId); + log.startCacheFlush(hri.getEncodedNameAsBytes(), htd.getFamiliesKeys()); log.completeCacheFlush(hri.getEncodedNameAsBytes()); log.rollWriter(); assertEquals(2, DefaultWALProvider.getNumRolledLogFiles(log)); @@ -230,8 +230,8 @@ public class TestDefaultWALProvider { // Flush the second region, which removes all the remaining output files // since the oldest was completely flushed and the two others only contain // flush information - addEdits(log, hri2, tableName2, 1, sequenceId); - log.startCacheFlush(hri2.getEncodedNameAsBytes()); + addEdits(log, hri2, htd2, 1, sequenceId); + log.startCacheFlush(hri2.getEncodedNameAsBytes(), htd2.getFamiliesKeys()); log.completeCacheFlush(hri2.getEncodedNameAsBytes()); log.rollWriter(); assertEquals(0, DefaultWALProvider.getNumRolledLogFiles(log)); @@ -254,21 +254,25 @@ public class TestDefaultWALProvider { *

    * @throws IOException */ - @Test + @Test public void testWALArchiving() throws IOException { LOG.debug("testWALArchiving"); - TableName table1 = TableName.valueOf("t1"); - TableName table2 = TableName.valueOf("t2"); + HTableDescriptor table1 = + new HTableDescriptor(TableName.valueOf("t1")).addFamily(new HColumnDescriptor("row")); + HTableDescriptor table2 = + new HTableDescriptor(TableName.valueOf("t2")).addFamily(new HColumnDescriptor("row")); final Configuration localConf = new Configuration(conf); localConf.set(WALFactory.WAL_PROVIDER, DefaultWALProvider.class.getName()); final WALFactory wals = new WALFactory(localConf, null, currentTest.getMethodName()); try { final WAL wal = wals.getWAL(UNSPECIFIED_REGION); assertEquals(0, DefaultWALProvider.getNumRolledLogFiles(wal)); - HRegionInfo hri1 = new HRegionInfo(table1, HConstants.EMPTY_START_ROW, - HConstants.EMPTY_END_ROW); - HRegionInfo hri2 = new HRegionInfo(table2, HConstants.EMPTY_START_ROW, - HConstants.EMPTY_END_ROW); + HRegionInfo hri1 = + new HRegionInfo(table1.getTableName(), HConstants.EMPTY_START_ROW, + HConstants.EMPTY_END_ROW); + HRegionInfo hri2 = + new HRegionInfo(table2.getTableName(), HConstants.EMPTY_START_ROW, + HConstants.EMPTY_END_ROW); // ensure that we don't split the regions. hri1.setSplit(false); hri2.setSplit(false); @@ -287,7 +291,7 @@ public class TestDefaultWALProvider { assertEquals(2, DefaultWALProvider.getNumRolledLogFiles(wal)); // add a waledit to table1, and flush the region. addEdits(wal, hri1, table1, 3, sequenceId1); - flushRegion(wal, hri1.getEncodedNameAsBytes()); + flushRegion(wal, hri1.getEncodedNameAsBytes(), table1.getFamiliesKeys()); // roll log; all old logs should be archived. wal.rollWriter(); assertEquals(0, DefaultWALProvider.getNumRolledLogFiles(wal)); @@ -301,7 +305,7 @@ public class TestDefaultWALProvider { assertEquals(2, DefaultWALProvider.getNumRolledLogFiles(wal)); // add edits for table2, and flush hri1. addEdits(wal, hri2, table2, 2, sequenceId2); - flushRegion(wal, hri1.getEncodedNameAsBytes()); + flushRegion(wal, hri1.getEncodedNameAsBytes(), table2.getFamiliesKeys()); // the log : region-sequenceId map is // log1: region2 (unflushed) // log2: region1 (flushed) @@ -311,7 +315,7 @@ public class TestDefaultWALProvider { assertEquals(2, DefaultWALProvider.getNumRolledLogFiles(wal)); // flush region2, and all logs should be archived. 
addEdits(wal, hri2, table2, 2, sequenceId2); - flushRegion(wal, hri2.getEncodedNameAsBytes()); + flushRegion(wal, hri2.getEncodedNameAsBytes(), table2.getFamiliesKeys()); wal.rollWriter(); assertEquals(0, DefaultWALProvider.getNumRolledLogFiles(wal)); } finally { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestDefaultWALProviderWithHLogKey.java hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestDefaultWALProviderWithHLogKey.java index 0bc8c80..c667e94 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestDefaultWALProviderWithHLogKey.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestDefaultWALProviderWithHLogKey.java @@ -17,13 +17,15 @@ */ package org.apache.hadoop.hbase.wal; -import org.apache.hadoop.hbase.testclassification.LargeTests; + import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.junit.experimental.categories.Category; import org.apache.hadoop.hbase.regionserver.wal.HLogKey; -@Category(LargeTests.class) +@Category({RegionServerTests.class, LargeTests.class}) public class TestDefaultWALProviderWithHLogKey extends TestDefaultWALProvider { @Override WALKey getWalKey(final byte[] info, final TableName tableName, final long timestamp) { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestSecureWAL.java hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestSecureWAL.java index 7d30b96..6f05839 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestSecureWAL.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestSecureWAL.java @@ -40,6 +40,7 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.io.crypto.KeyProviderForTesting; import org.apache.hadoop.hbase.util.Bytes; @@ -55,7 +56,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestSecureWAL { static final Log LOG = LogFactory.getLog(TestSecureWAL.class); static { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFactory.java hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFactory.java index b163bd5..bbe4018 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFactory.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFactory.java @@ -48,10 +48,11 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.coprocessor.SampleRegionWALObserver; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.apache.hadoop.hbase.util.Threads; @@ -78,7 +79,7 @@ import 
org.apache.hadoop.hbase.regionserver.wal.WALEdit; /** * WAL tests that can be reused across providers. */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestWALFactory { protected static final Log LOG = LogFactory.getLog(TestWALFactory.class); @@ -479,8 +480,9 @@ public class TestWALFactory { @Test public void testEditAdd() throws IOException { final int COL_COUNT = 10; - final TableName tableName = - TableName.valueOf("tablename"); + final HTableDescriptor htd = + new HTableDescriptor(TableName.valueOf("tablename")).addFamily(new HColumnDescriptor( + "column")); final byte [] row = Bytes.toBytes("row"); WAL.Reader reader = null; try { @@ -495,16 +497,15 @@ public class TestWALFactory { Bytes.toBytes(Integer.toString(i)), timestamp, new byte[] { (byte)(i + '0') })); } - HRegionInfo info = new HRegionInfo(tableName, + HRegionInfo info = new HRegionInfo(htd.getTableName(), row,Bytes.toBytes(Bytes.toString(row) + "1"), false); - HTableDescriptor htd = new HTableDescriptor(); - htd.addFamily(new HColumnDescriptor("column")); final WAL log = wals.getWAL(info.getEncodedNameAsBytes()); - final long txid = log.append(htd, info, new WALKey(info.getEncodedNameAsBytes(), tableName, - System.currentTimeMillis()), cols, sequenceId, true, null); + final long txid = log.append(htd, info, + new WALKey(info.getEncodedNameAsBytes(), htd.getTableName(), System.currentTimeMillis()), + cols, sequenceId, true, null); log.sync(txid); - log.startCacheFlush(info.getEncodedNameAsBytes()); + log.startCacheFlush(info.getEncodedNameAsBytes(), htd.getFamiliesKeys()); log.completeCacheFlush(info.getEncodedNameAsBytes()); log.shutdown(); Path filename = DefaultWALProvider.getCurrentFileName(log); @@ -518,7 +519,7 @@ public class TestWALFactory { WALKey key = entry.getKey(); WALEdit val = entry.getEdit(); assertTrue(Bytes.equals(info.getEncodedNameAsBytes(), key.getEncodedRegionName())); - assertTrue(tableName.equals(key.getTablename())); + assertTrue(htd.getTableName().equals(key.getTablename())); Cell cell = val.getCells().get(0); assertTrue(Bytes.equals(row, cell.getRow())); assertEquals((byte)(i + '0'), cell.getValue()[0]); @@ -537,8 +538,9 @@ public class TestWALFactory { @Test public void testAppend() throws IOException { final int COL_COUNT = 10; - final TableName tableName = - TableName.valueOf("tablename"); + final HTableDescriptor htd = + new HTableDescriptor(TableName.valueOf("tablename")).addFamily(new HColumnDescriptor( + "column")); final byte [] row = Bytes.toBytes("row"); WAL.Reader reader = null; final AtomicLong sequenceId = new AtomicLong(1); @@ -552,15 +554,14 @@ public class TestWALFactory { Bytes.toBytes(Integer.toString(i)), timestamp, new byte[] { (byte)(i + '0') })); } - HRegionInfo hri = new HRegionInfo(tableName, + HRegionInfo hri = new HRegionInfo(htd.getTableName(), HConstants.EMPTY_START_ROW, HConstants.EMPTY_END_ROW); - HTableDescriptor htd = new HTableDescriptor(); - htd.addFamily(new HColumnDescriptor("column")); final WAL log = wals.getWAL(hri.getEncodedNameAsBytes()); - final long txid = log.append(htd, hri, new WALKey(hri.getEncodedNameAsBytes(), tableName, - System.currentTimeMillis()), cols, sequenceId, true, null); + final long txid = log.append(htd, hri, + new WALKey(hri.getEncodedNameAsBytes(), htd.getTableName(), System.currentTimeMillis()), + cols, sequenceId, true, null); log.sync(txid); - log.startCacheFlush(hri.getEncodedNameAsBytes()); + log.startCacheFlush(hri.getEncodedNameAsBytes(), htd.getFamiliesKeys()); 
log.completeCacheFlush(hri.getEncodedNameAsBytes()); log.shutdown(); Path filename = DefaultWALProvider.getCurrentFileName(log); @@ -572,7 +573,7 @@ public class TestWALFactory { for (Cell val : entry.getEdit().getCells()) { assertTrue(Bytes.equals(hri.getEncodedNameAsBytes(), entry.getKey().getEncodedRegionName())); - assertTrue(tableName.equals(entry.getKey().getTablename())); + assertTrue(htd.getTableName().equals(entry.getKey().getTablename())); assertTrue(Bytes.equals(row, val.getRow())); assertEquals((byte)(idx + '0'), val.getValue()[0]); System.out.println(entry.getKey() + " " + val); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFiltering.java hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFiltering.java index 43d7e24..a3e38b3 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFiltering.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALFiltering.java @@ -28,6 +28,7 @@ import java.util.TreeMap; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Put; @@ -47,7 +48,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.Lists; import com.google.protobuf.ServiceException; -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestWALFiltering { private static final int NUM_MASTERS = 1; private static final int NUM_RS = 4; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALMethods.java hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALMethods.java index f6bd2b1..0c03019 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALMethods.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALMethods.java @@ -18,12 +18,7 @@ */ package org.apache.hadoop.hbase.wal; -import static org.junit.Assert.assertEquals; -import static org.junit.Assert.assertNotNull; -import static org.junit.Assert.assertNotSame; -import static org.junit.Assert.assertNull; -import static org.junit.Assert.assertTrue; -import static org.mockito.Mockito.mock; +import static org.junit.Assert.*; import java.io.IOException; import java.util.NavigableSet; @@ -32,14 +27,13 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FSDataOutputStream; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; -import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.KeyValueTestUtil; -import org.apache.hadoop.hbase.testclassification.SmallTests; -import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.SplitLogTask.RecoveryMode; import org.apache.hadoop.hbase.wal.WALSplitter.EntryBuffers; +import org.apache.hadoop.hbase.wal.WALSplitter.PipelineController; import org.apache.hadoop.hbase.wal.WALSplitter.RegionEntryBuffer; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.FSUtils; import org.junit.Test; @@ -51,7 +45,7 @@ import org.apache.hadoop.hbase.regionserver.wal.WALEdit; /** * Simple testing of a few WAL 
methods. */ -@Category(SmallTests.class) +@Category({RegionServerTests.class, SmallTests.class}) public class TestWALMethods { private static final byte[] TEST_REGION = Bytes.toBytes("test_region");; private static final TableName TEST_TABLE = @@ -128,10 +122,8 @@ public class TestWALMethods { Configuration conf = new Configuration(); RecoveryMode mode = (conf.getBoolean(HConstants.DISTRIBUTED_LOG_REPLAY_KEY, false) ? RecoveryMode.LOG_REPLAY : RecoveryMode.LOG_SPLITTING); - WALSplitter splitter = new WALSplitter(WALFactory.getInstance(conf), - conf, mock(Path.class), mock(FileSystem.class), null, null, mode); - EntryBuffers sink = splitter.new EntryBuffers(1*1024*1024); + EntryBuffers sink = new EntryBuffers(new PipelineController(), 1*1024*1024); for (int i = 0; i < 1000; i++) { WAL.Entry entry = createTestLogEntry(i); sink.appendEntry(entry); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALReaderOnSecureWAL.java hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALReaderOnSecureWAL.java index badc609..deaef50 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALReaderOnSecureWAL.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALReaderOnSecureWAL.java @@ -39,6 +39,7 @@ import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.HTableDescriptor; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.io.crypto.KeyProviderForTesting; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.SplitLogTask.RecoveryMode; @@ -64,7 +65,7 @@ import org.apache.hadoop.hbase.regionserver.wal.SecureWALCellCodec; /* * Test that verifies WAL written by SecureProtobufLogWriter is not readable by ProtobufLogReader */ -@Category(MediumTests.class) +@Category({RegionServerTests.class, MediumTests.class}) public class TestWALReaderOnSecureWAL { static final Log LOG = LogFactory.getLog(TestWALReaderOnSecureWAL.class); static { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplit.java hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplit.java index c52a534..e263cdb 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplit.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplit.java @@ -41,6 +41,13 @@ import java.util.concurrent.atomic.AtomicLong; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; +import org.apache.commons.logging.impl.Log4JLogger; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; +import org.apache.hadoop.hbase.TableName; +import org.apache.log4j.Level; +import org.apache.hadoop.hdfs.server.datanode.DataNode; +import org.apache.hadoop.hdfs.server.namenode.FSNamesystem; +import org.apache.hadoop.hdfs.server.namenode.LeaseManager; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FSDataInputStream; import org.apache.hadoop.fs.FSDataOutputStream; @@ -56,7 +63,6 @@ import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.testclassification.LargeTests; -import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.SplitLogTask.RecoveryMode; import org.apache.hadoop.hbase.regionserver.HRegion; import 
org.apache.hadoop.hbase.wal.WAL.Entry; @@ -95,7 +101,7 @@ import org.apache.hadoop.hbase.regionserver.wal.FaultySequenceFileLogReader; /** * Testing {@link WAL} splitting code. */ -@Category(LargeTests.class) +@Category({RegionServerTests.class, LargeTests.class}) public class TestWALSplit { { // Uncomment the following lines if more verbosity is needed for diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplitCompressed.java hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplitCompressed.java index 961c2c1..1a0c883 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplitCompressed.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALSplitCompressed.java @@ -21,10 +21,11 @@ package org.apache.hadoop.hbase.wal; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.testclassification.RegionServerTests; import org.junit.BeforeClass; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({RegionServerTests.class, LargeTests.class}) public class TestWALSplitCompressed extends TestWALSplit { @BeforeClass diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestHQuorumPeer.java hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestHQuorumPeer.java index cb959ca..793cc1f 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestHQuorumPeer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestHQuorumPeer.java @@ -29,6 +29,7 @@ import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.zookeeper.server.quorum.QuorumPeerConfig; import org.apache.zookeeper.server.quorum.QuorumPeer.QuorumServer; import org.junit.Before; @@ -41,7 +42,7 @@ import static org.junit.Assert.*; /** * Test for HQuorumPeer. 
*/ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestHQuorumPeer { private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); private static int PORT_NO = 21818; diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestRecoverableZooKeeper.java hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestRecoverableZooKeeper.java index e790013..e83ac74 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestRecoverableZooKeeper.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestRecoverableZooKeeper.java @@ -28,6 +28,7 @@ import org.apache.hadoop.hbase.Abortable; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.KeeperException; @@ -40,7 +41,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestRecoverableZooKeeper { private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKConfig.java hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKConfig.java index 9363a3f..eae7c2a 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKConfig.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKConfig.java @@ -24,11 +24,12 @@ import junit.framework.Assert; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestZKConfig { @Test public void testZKConfigLoading() throws Exception { diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKLeaderManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKLeaderManager.java index d369102..c830b04 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKLeaderManager.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKLeaderManager.java @@ -27,6 +27,7 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -35,7 +36,7 @@ import org.junit.experimental.categories.Category; /** */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestZKLeaderManager { private static Log LOG = LogFactory.getLog(TestZKLeaderManager.class); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKMulti.java hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKMulti.java index 9b4cc6a..db4c2fa 100644 --- 
hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKMulti.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKMulti.java @@ -32,6 +32,7 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.Abortable; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.ZKUtil.ZKUtilOp; import org.apache.zookeeper.KeeperException; @@ -43,7 +44,7 @@ import org.junit.experimental.categories.Category; /** * Test ZooKeeper multi-update functionality */ -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestZKMulti { private static final Log LOG = LogFactory.getLog(TestZKMulti.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKTableStateManager.java hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKTableStateManager.java deleted file mode 100644 index e81c89f..0000000 --- hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKTableStateManager.java +++ /dev/null @@ -1,114 +0,0 @@ -/** - * - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.apache.hadoop.hbase.zookeeper; - -import java.io.IOException; - -import org.apache.commons.logging.Log; -import org.apache.commons.logging.LogFactory; - -import org.apache.hadoop.hbase.Abortable; -import org.apache.hadoop.hbase.CoordinatedStateException; -import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.MediumTests; -import org.apache.hadoop.hbase.TableName; -import org.apache.hadoop.hbase.TableStateManager; -import org.apache.zookeeper.KeeperException; -import org.junit.AfterClass; -import org.junit.BeforeClass; -import org.junit.Test; -import org.junit.experimental.categories.Category; - -import static org.junit.Assert.assertFalse; -import static org.junit.Assert.assertTrue; -import static org.apache.hadoop.hbase.protobuf.generated.ZooKeeperProtos.Table; - -@Category(MediumTests.class) -public class TestZKTableStateManager { - private static final Log LOG = LogFactory.getLog(TestZKTableStateManager.class); - private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); - - @BeforeClass - public static void setUpBeforeClass() throws Exception { - TEST_UTIL.startMiniZKCluster(); - } - - @AfterClass - public static void tearDownAfterClass() throws Exception { - TEST_UTIL.shutdownMiniZKCluster(); - } - - @Test - public void testTableStates() - throws CoordinatedStateException, IOException, KeeperException, InterruptedException { - final TableName name = - TableName.valueOf("testDisabled"); - Abortable abortable = new Abortable() { - @Override - public void abort(String why, Throwable e) { - LOG.info(why, e); - } - - @Override - public boolean isAborted() { - return false; - } - - }; - ZooKeeperWatcher zkw = new ZooKeeperWatcher(TEST_UTIL.getConfiguration(), - name.getNameAsString(), abortable, true); - TableStateManager zkt = new ZKTableStateManager(zkw); - assertFalse(zkt.isTableState(name, Table.State.ENABLED)); - assertFalse(zkt.isTableState(name, Table.State.DISABLING)); - assertFalse(zkt.isTableState(name, Table.State.DISABLED)); - assertFalse(zkt.isTableState(name, Table.State.ENABLING)); - assertFalse(zkt.isTableState(name, Table.State.DISABLED, Table.State.DISABLING)); - assertFalse(zkt.isTableState(name, Table.State.DISABLED, Table.State.ENABLING)); - assertFalse(zkt.isTablePresent(name)); - zkt.setTableState(name, Table.State.DISABLING); - assertTrue(zkt.isTableState(name, Table.State.DISABLING)); - assertTrue(zkt.isTableState(name, Table.State.DISABLED, Table.State.DISABLING)); - assertFalse(zkt.getTablesInStates(Table.State.DISABLED).contains(name)); - assertTrue(zkt.isTablePresent(name)); - zkt.setTableState(name, Table.State.DISABLED); - assertTrue(zkt.isTableState(name, Table.State.DISABLED)); - assertTrue(zkt.isTableState(name, Table.State.DISABLED, Table.State.DISABLING)); - assertFalse(zkt.isTableState(name, Table.State.DISABLING)); - assertTrue(zkt.getTablesInStates(Table.State.DISABLED).contains(name)); - assertTrue(zkt.isTablePresent(name)); - zkt.setTableState(name, Table.State.ENABLING); - assertTrue(zkt.isTableState(name, Table.State.ENABLING)); - assertTrue(zkt.isTableState(name, Table.State.DISABLED, Table.State.ENABLING)); - assertFalse(zkt.isTableState(name, Table.State.DISABLED)); - assertFalse(zkt.getTablesInStates(Table.State.DISABLED).contains(name)); - assertTrue(zkt.isTablePresent(name)); - zkt.setTableState(name, Table.State.ENABLED); - assertTrue(zkt.isTableState(name, Table.State.ENABLED)); - assertFalse(zkt.isTableState(name, Table.State.ENABLING)); - 
assertTrue(zkt.isTablePresent(name)); - zkt.setDeletedTable(name); - assertFalse(zkt.isTableState(name, Table.State.ENABLED)); - assertFalse(zkt.isTableState(name, Table.State.DISABLING)); - assertFalse(zkt.isTableState(name, Table.State.DISABLED)); - assertFalse(zkt.isTableState(name, Table.State.ENABLING)); - assertFalse(zkt.isTableState(name, Table.State.DISABLED, Table.State.DISABLING)); - assertFalse(zkt.isTableState(name, Table.State.DISABLED, Table.State.ENABLING)); - assertFalse(zkt.isTablePresent(name)); - } -} diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperACL.java hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperACL.java index 26bba14..93a6291 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperACL.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperACL.java @@ -30,6 +30,7 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.zookeeper.ZooDefs; import org.apache.zookeeper.data.ACL; import org.apache.zookeeper.data.Stat; @@ -40,7 +41,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestZooKeeperACL { private final static Log LOG = LogFactory.getLog(TestZooKeeperACL.class); private final static HBaseTestingUtility TEST_UTIL = diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperMainServer.java hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperMainServer.java index b121ff4..1928b18 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperMainServer.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperMainServer.java @@ -25,11 +25,12 @@ import java.security.Permission; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.*; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.testclassification.SmallTests; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(SmallTests.class) +@Category({MiscTests.class, SmallTests.class}) public class TestZooKeeperMainServer { // ZKMS calls System.exit. 
Catch the call and prevent exit using trick described up in // http://stackoverflow.com/questions/309396/java-how-to-test-methods-that-call-system-exit diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperNodeTracker.java hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperNodeTracker.java index 09ae6ff..010c1c9 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperNodeTracker.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZooKeeperNodeTracker.java @@ -34,6 +34,7 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.*; import org.apache.hadoop.hbase.master.TestActiveMasterManager.NodeDeletionListener; import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.util.Threads; import org.apache.zookeeper.CreateMode; @@ -46,7 +47,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestZooKeeperNodeTracker { private static final Log LOG = LogFactory.getLog(TestZooKeeperNodeTracker.class); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/lock/TestZKInterProcessReadWriteLock.java hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/lock/TestZKInterProcessReadWriteLock.java index 171a438..c304842 100644 --- hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/lock/TestZKInterProcessReadWriteLock.java +++ hbase-server/src/test/java/org/apache/hadoop/hbase/zookeeper/lock/TestZKInterProcessReadWriteLock.java @@ -42,8 +42,9 @@ import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.InterProcessLock; import org.apache.hadoop.hbase.InterProcessLock.MetadataHandler; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.MultithreadedTestUtil; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.testclassification.MiscTests; import org.apache.hadoop.hbase.util.Bytes; import org.apache.hadoop.hbase.zookeeper.ZKUtil; import org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher; @@ -55,7 +56,7 @@ import org.junit.experimental.categories.Category; import com.google.common.collect.Lists; -@Category(MediumTests.class) +@Category({MiscTests.class, MediumTests.class}) public class TestZKInterProcessReadWriteLock { private static final Log LOG = diff --git hbase-shell/pom.xml hbase-shell/pom.xml index 7ccb144..5343a26 100644 --- hbase-shell/pom.xml +++ hbase-shell/pom.xml @@ -23,7 +23,7 @@ hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. 
hbase-shell @@ -50,13 +50,34 @@ - - org.apache.maven.plugins - maven-site-plugin - - true - - + + maven-compiler-plugin + + + default-compile + + ${java.default.compiler} + true + false + + + + default-testCompile + + ${java.default.compiler} + true + false + + + + + + org.apache.maven.plugins + maven-site-plugin + + true + + diff --git hbase-shell/src/main/ruby/hbase.rb hbase-shell/src/main/ruby/hbase.rb index 121fd53..5928e7b 100644 --- hbase-shell/src/main/ruby/hbase.rb +++ hbase-shell/src/main/ruby/hbase.rb @@ -65,6 +65,12 @@ module HBaseConstants AUTHORIZATIONS = "AUTHORIZATIONS" SKIP_FLUSH = 'SKIP_FLUSH' CONSISTENCY = "CONSISTENCY" + USER = 'USER' + TABLE = 'TABLE' + NAMESPACE = 'NAMESPACE' + TYPE = 'TYPE' + NONE = 'NONE' + VALUE = 'VALUE' # Load constants from hbase java API def self.promote_constants(constants) @@ -84,6 +90,9 @@ end require 'hbase/hbase' require 'hbase/admin' require 'hbase/table' +require 'hbase/quotas' require 'hbase/replication_admin' require 'hbase/security' require 'hbase/visibility_labels' + +include HBaseQuotasConstants diff --git hbase-shell/src/main/ruby/hbase/admin.rb hbase-shell/src/main/ruby/hbase/admin.rb index 52d9370..09b4181 100644 --- hbase-shell/src/main/ruby/hbase/admin.rb +++ hbase-shell/src/main/ruby/hbase/admin.rb @@ -682,6 +682,7 @@ module Hbase family.setBlockCacheEnabled(JBoolean.valueOf(arg.delete(org.apache.hadoop.hbase.HColumnDescriptor::BLOCKCACHE))) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::BLOCKCACHE) family.setScope(JInteger.valueOf(arg.delete(org.apache.hadoop.hbase.HColumnDescriptor::REPLICATION_SCOPE))) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::REPLICATION_SCOPE) + family.setCacheDataOnWrite(JBoolean.valueOf(arg.delete(org.apache.hadoop.hbase.HColumnDescriptor::CACHE_DATA_ON_WRITE))) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::CACHE_DATA_ON_WRITE) family.setInMemory(JBoolean.valueOf(arg.delete(org.apache.hadoop.hbase.HColumnDescriptor::IN_MEMORY))) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::IN_MEMORY) family.setTimeToLive(JInteger.valueOf(arg.delete(org.apache.hadoop.hbase.HColumnDescriptor::TTL))) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::TTL) family.setDataBlockEncoding(org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.valueOf(arg.delete(org.apache.hadoop.hbase.HColumnDescriptor::DATA_BLOCK_ENCODING))) if arg.include?(org.apache.hadoop.hbase.HColumnDescriptor::DATA_BLOCK_ENCODING) diff --git hbase-shell/src/main/ruby/hbase/hbase.rb hbase-shell/src/main/ruby/hbase/hbase.rb index e75535e..9b4b44d 100644 --- hbase-shell/src/main/ruby/hbase/hbase.rb +++ hbase-shell/src/main/ruby/hbase/hbase.rb @@ -21,6 +21,7 @@ include Java require 'hbase/admin' require 'hbase/table' +require 'hbase/quotas' require 'hbase/security' require 'hbase/visibility_labels' @@ -60,5 +61,9 @@ module Hbase def visibility_labels_admin(formatter) ::Hbase::VisibilityLabelsAdmin.new(configuration, formatter) end + + def quotas_admin(formatter) + ::Hbase::QuotasAdmin.new(configuration, formatter) + end end end diff --git hbase-shell/src/main/ruby/hbase/quotas.rb hbase-shell/src/main/ruby/hbase/quotas.rb new file mode 100644 index 0000000..758e2ec --- /dev/null +++ hbase-shell/src/main/ruby/hbase/quotas.rb @@ -0,0 +1,214 @@ +# +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. 
The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +include Java +java_import java.util.concurrent.TimeUnit +java_import org.apache.hadoop.hbase.TableName +java_import org.apache.hadoop.hbase.quotas.ThrottleType +java_import org.apache.hadoop.hbase.quotas.QuotaFilter +java_import org.apache.hadoop.hbase.quotas.QuotaRetriever +java_import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory + +module HBaseQuotasConstants + GLOBAL_BYPASS = 'GLOBAL_BYPASS' + THROTTLE_TYPE = 'THROTTLE_TYPE' + THROTTLE = 'THROTTLE' + REQUEST = 'REQUEST' +end + +module Hbase + class QuotasAdmin + def initialize(configuration, formatter) + @config = configuration + @connection = org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(configuration) + @admin = @connection.getAdmin() + @formatter = formatter + end + + def throttle(args) + raise(ArgumentError, "Arguments should be a Hash") unless args.kind_of?(Hash) + type = args.fetch(THROTTLE_TYPE, REQUEST) + type, limit, time_unit = _parse_limit(args.delete(LIMIT), ThrottleType, type) + if args.has_key?(USER) + user = args.delete(USER) + if args.has_key?(TABLE) + table = TableName.valueOf(args.delete(TABLE)) + raise(ArgumentError, "Unexpected arguments: " + args.inspect) unless args.empty? + settings = QuotaSettingsFactory.throttleUser(user, table, type, limit, time_unit) + elsif args.has_key?(NAMESPACE) + namespace = args.delete(NAMESPACE) + raise(ArgumentError, "Unexpected arguments: " + args.inspect) unless args.empty? + settings = QuotaSettingsFactory.throttleUser(user, namespace, type, limit, time_unit) + else + raise(ArgumentError, "Unexpected arguments: " + args.inspect) unless args.empty? + settings = QuotaSettingsFactory.throttleUser(user, type, limit, time_unit) + end + elsif args.has_key?(TABLE) + table = TableName.valueOf(args.delete(TABLE)) + raise(ArgumentError, "Unexpected arguments: " + args.inspect) unless args.empty? + settings = QuotaSettingsFactory.throttleTable(table, type, limit, time_unit) + elsif args.has_key?(NAMESPACE) + namespace = args.delete(NAMESPACE) + raise(ArgumentError, "Unexpected arguments: " + args.inspect) unless args.empty? + settings = QuotaSettingsFactory.throttleNamespace(namespace, type, limit, time_unit) + else + raise "One of USER, TABLE or NAMESPACE must be specified" + end + @admin.setQuota(settings) + end + + def unthrottle(args) + raise(ArgumentError, "Arguments should be a Hash") unless args.kind_of?(Hash) + if args.has_key?(USER) + user = args.delete(USER) + if args.has_key?(TABLE) + table = TableName.valueOf(args.delete(TABLE)) + raise(ArgumentError, "Unexpected arguments: " + args.inspect) unless args.empty? + settings = QuotaSettingsFactory.unthrottleUser(user, table) + elsif args.has_key?(NAMESPACE) + namespace = args.delete(NAMESPACE) + raise(ArgumentError, "Unexpected arguments: " + args.inspect) unless args.empty? + settings = QuotaSettingsFactory.unthrottleUser(user, namespace) + else + raise(ArgumentError, "Unexpected arguments: " + args.inspect) unless args.empty? 
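For illustration, the QuotasAdmin helpers in this new quotas.rb are thin wrappers over the Java quota client API that is surfaced through Admin (setQuota, getQuotaRetriever). A minimal Java sketch of the equivalent client-side calls, assuming a hypothetical user "u1" and table "t1", with error handling omitted:

import java.util.Iterator;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaFilter;
import org.apache.hadoop.hbase.quotas.QuotaRetriever;
import org.apache.hadoop.hbase.quotas.QuotaSettings;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class QuotaClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Shell equivalent: set_quota TYPE => THROTTLE, USER => 'u1', TABLE => 't1', LIMIT => '10req/sec'
      admin.setQuota(QuotaSettingsFactory.throttleUser("u1", TableName.valueOf("t1"),
          ThrottleType.REQUEST_NUMBER, 10, TimeUnit.SECONDS));

      // Shell equivalent: list_quotas USER => 'u1'
      QuotaFilter filter = new QuotaFilter();
      filter.setUserFilter("u1");
      QuotaRetriever scanner = admin.getQuotaRetriever(filter);
      try {
        Iterator<QuotaSettings> it = scanner.iterator();
        while (it.hasNext()) {
          System.out.println(it.next());
        }
      } finally {
        scanner.close();
      }

      // Shell equivalent: set_quota TYPE => THROTTLE, USER => 'u1', TABLE => 't1', LIMIT => NONE
      admin.setQuota(QuotaSettingsFactory.unthrottleUser("u1", TableName.valueOf("t1")));
    }
  }
}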
+ settings = QuotaSettingsFactory.unthrottleUser(user) + end + elsif args.has_key?(TABLE) + table = TableName.valueOf(args.delete(TABLE)) + raise(ArgumentError, "Unexpected arguments: " + args.inspect) unless args.empty? + settings = QuotaSettingsFactory.unthrottleTable(table) + elsif args.has_key?(NAMESPACE) + namespace = args.delete(NAMESPACE) + raise(ArgumentError, "Unexpected arguments: " + args.inspect) unless args.empty? + settings = QuotaSettingsFactory.unthrottleNamespace(namespace) + else + raise "One of USER, TABLE or NAMESPACE must be specified" + end + @admin.setQuota(settings) + end + + def set_global_bypass(bypass, args) + raise(ArgumentError, "Arguments should be a Hash") unless args.kind_of?(Hash) + + if args.has_key?(USER) + user = args.delete(USER) + raise(ArgumentError, "Unexpected arguments: " + args.inspect) unless args.empty? + settings = QuotaSettingsFactory.bypassGlobals(user, bypass) + else + raise "Expected USER" + end + @admin.setQuota(settings) + end + + def list_quotas(args = {}) + raise(ArgumentError, "Arguments should be a Hash") unless args.kind_of?(Hash) + + limit = args.delete("LIMIT") || -1 + count = 0 + + filter = QuotaFilter.new() + filter.setUserFilter(args.delete(USER)) if args.has_key?(USER) + filter.setTableFilter(args.delete(TABLE)) if args.has_key?(TABLE) + filter.setNamespaceFilter(args.delete(NAMESPACE)) if args.has_key?(NAMESPACE) + raise(ArgumentError, "Unexpected arguments: " + args.inspect) unless args.empty? + + # Start the scanner + scanner = @admin.getQuotaRetriever(filter) + begin + iter = scanner.iterator + + # Iterate results + while iter.hasNext + if limit > 0 && count >= limit + break + end + + settings = iter.next + owner = { + USER => settings.getUserName(), + TABLE => settings.getTableName(), + NAMESPACE => settings.getNamespace(), + }.delete_if { |k, v| v.nil? 
}.map {|k, v| k.to_s + " => " + v.to_s} * ', ' + + yield owner, settings.to_s + + count += 1 + end + ensure + scanner.close() + end + + return count + end + + def _parse_size(str_limit) + str_limit = str_limit.downcase + match = /(\d+)([bkmgtp%]*)/.match(str_limit) + if match + if match[2] == '%' + return match[1].to_i + else + return _size_from_str(match[1].to_i, match[2]) + end + else + raise "Invalid size limit syntax" + end + end + + def _parse_limit(str_limit, type_cls, type) + str_limit = str_limit.downcase + match = /(\d+)(req|[bkmgtp])\/(sec|min|hour|day)/.match(str_limit) + if match + if match[2] == 'req' + limit = match[1].to_i + type = type_cls.valueOf(type + "_NUMBER") + else + limit = _size_from_str(match[1].to_i, match[2]) + type = type_cls.valueOf(type + "_SIZE") + end + + if limit <= 0 + raise "Invalid throttle limit, must be greater then 0" + end + + case match[3] + when 'sec' then time_unit = TimeUnit::SECONDS + when 'min' then time_unit = TimeUnit::MINUTES + when 'hour' then time_unit = TimeUnit::HOURS + when 'day' then time_unit = TimeUnit::DAYS + end + + return type, limit, time_unit + else + raise "Invalid throttle limit syntax" + end + end + + def _size_from_str(value, suffix) + case suffix + when 'k' then value <<= 10 + when 'm' then value <<= 20 + when 'g' then value <<= 30 + when 't' then value <<= 40 + when 'p' then value <<= 50 + end + return value + end + end +end \ No newline at end of file diff --git hbase-shell/src/main/ruby/hbase/table.rb hbase-shell/src/main/ruby/hbase/table.rb index 83695bd..b408649 100644 --- hbase-shell/src/main/ruby/hbase/table.rb +++ hbase-shell/src/main/ruby/hbase/table.rb @@ -370,12 +370,12 @@ EOF # Print out results. Result can be Cell or RowResult. res = {} - result.list.each do |kv| - family = String.from_java_bytes(kv.getFamily) - qualifier = org.apache.hadoop.hbase.util.Bytes::toStringBinary(kv.getQualifier) + result.listCells.each do |c| + family = String.from_java_bytes(c.getFamily) + qualifier = org.apache.hadoop.hbase.util.Bytes::toStringBinary(c.getQualifier) column = "#{family}:#{qualifier}" - value = to_string(column, kv, maxlength) + value = to_string(column, c, maxlength) if block_given? yield(column, value) @@ -402,7 +402,7 @@ EOF return nil if result.isEmpty # Fetch cell value - cell = result.list[0] + cell = result.listCells[0] org.apache.hadoop.hbase.util.Bytes::toLong(cell.getValue) end @@ -496,12 +496,12 @@ EOF row = iter.next key = org.apache.hadoop.hbase.util.Bytes::toStringBinary(row.getRow) - row.list.each do |kv| - family = String.from_java_bytes(kv.getFamily) - qualifier = org.apache.hadoop.hbase.util.Bytes::toStringBinary(kv.getQualifier) + row.listCells.each do |c| + family = String.from_java_bytes(c.getFamily) + qualifier = org.apache.hadoop.hbase.util.Bytes::toStringBinary(c.getQualifier) column = "#{family}:#{qualifier}" - cell = to_string(column, kv, maxlength) + cell = to_string(column, c, maxlength) if block_given? yield(key, "column=#{column}, #{cell}") diff --git hbase-shell/src/main/ruby/shell.rb hbase-shell/src/main/ruby/shell.rb index 5870d8f..5db2776 100644 --- hbase-shell/src/main/ruby/shell.rb +++ hbase-shell/src/main/ruby/shell.rb @@ -71,13 +71,16 @@ module Shell class Shell attr_accessor :hbase attr_accessor :formatter + attr_accessor :interactive + alias interactive? 
interactive @debug = false attr_accessor :debug - def initialize(hbase, formatter) + def initialize(hbase, formatter, interactive=true) self.hbase = hbase self.formatter = formatter + self.interactive = interactive end def hbase_admin @@ -100,6 +103,10 @@ module Shell @hbase_visibility_labels_admin ||= hbase.visibility_labels_admin(formatter) end + def hbase_quotas_admin + @hbase_quotas_admin ||= hbase.quotas_admin(formatter) + end + def export_commands(where) ::Shell.commands.keys.each do |cmd| # here where is the IRB namespace @@ -368,6 +375,15 @@ Shell.load_command_group( ) Shell.load_command_group( + 'quotas', + :full_name => 'CLUSTER QUOTAS TOOLS', + :commands => %w[ + set_quota + list_quotas + ] +) + +Shell.load_command_group( 'security', :full_name => 'SECURITY TOOLS', :comment => "NOTE: Above commands are only applicable if running with the AccessController coprocessor", diff --git hbase-shell/src/main/ruby/shell/commands.rb hbase-shell/src/main/ruby/shell/commands.rb index 54fa204..2128164 100644 --- hbase-shell/src/main/ruby/shell/commands.rb +++ hbase-shell/src/main/ruby/shell/commands.rb @@ -37,13 +37,17 @@ module Shell while rootCause != nil && rootCause.respond_to?(:cause) && rootCause.cause != nil rootCause = rootCause.cause end - puts - puts "ERROR: #{rootCause}" - puts "Backtrace: #{rootCause.backtrace.join("\n ")}" if debug - puts - puts "Here is some help for this command:" - puts help - puts + if @shell.interactive? + puts + puts "ERROR: #{rootCause}" + puts "Backtrace: #{rootCause.backtrace.join("\n ")}" if debug + puts + puts "Here is some help for this command:" + puts help + puts + else + raise rootCause + end end def admin @@ -66,6 +70,10 @@ module Shell @shell.hbase_visibility_labels_admin end + def quotas_admin + @shell.hbase_quotas_admin + end + #---------------------------------------------------------------------- def formatter @@ -91,7 +99,7 @@ module Shell yield rescue => e raise e unless e.respond_to?(:cause) && e.cause != nil - + # Get the special java exception which will be handled cause = e.cause if cause.kind_of?(org.apache.hadoop.hbase.TableNotFoundException) then @@ -130,7 +138,7 @@ module Shell end end - # Throw the other exception which hasn't been handled above + # Throw the other exception which hasn't been handled above raise e end end diff --git hbase-shell/src/main/ruby/shell/commands/grant.rb hbase-shell/src/main/ruby/shell/commands/grant.rb index 0e8a65c..c9338f4 100644 --- hbase-shell/src/main/ruby/shell/commands/grant.rb +++ hbase-shell/src/main/ruby/shell/commands/grant.rb @@ -27,11 +27,14 @@ Syntax : grant [<@namespace> [

    [ [ grant 'bobsmith', 'RWXCA' + hbase> grant '@admins', 'RWXCA' hbase> grant 'bobsmith', 'RWXCA', '@ns1' hbase> grant 'bobsmith', 'RW', 't1', 'f1', 'col1' hbase> grant 'bobsmith', 'RW', 'ns1:t1', 'f1', 'col1' @@ -95,7 +98,7 @@ EOF iter = scanner.iterator while iter.hasNext row = iter.next - row.list.each do |cell| + row.listCells.each do |cell| put = org.apache.hadoop.hbase.client.Put.new(row.getRow) put.add(cell) t.set_cell_permissions(put, permissions) diff --git hbase-shell/src/main/ruby/shell/commands/list_quotas.rb hbase-shell/src/main/ruby/shell/commands/list_quotas.rb new file mode 100644 index 0000000..682bb71 --- /dev/null +++ hbase-shell/src/main/ruby/shell/commands/list_quotas.rb @@ -0,0 +1,52 @@ +# +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +module Shell + module Commands + class ListQuotas < Command + def help + return <<-EOF +List the quota settings added to the system. +You can filter the result based on USER, TABLE, or NAMESPACE. + +For example: + + hbase> list_quotas + hbase> list_quotas USER => 'bob.*' + hbase> list_quotas USER => 'bob.*', TABLE => 't1' + hbase> list_quotas USER => 'bob.*', NAMESPACE => 'ns.*' + hbase> list_quotas TABLE => 'myTable' + hbase> list_quotas NAMESPACE => 'ns.*' +EOF + end + + def command(args = {}) + now = Time.now + formatter.header(["OWNER", "QUOTAS"]) + + #actually do the scanning + count = quotas_admin.list_quotas(args) do |row, cells| + formatter.row([ row, cells ]) + end + + formatter.footer(now, count) + end + end + end +end diff --git hbase-shell/src/main/ruby/shell/commands/revoke.rb hbase-shell/src/main/ruby/shell/commands/revoke.rb index 57a2530..768989b 100644 --- hbase-shell/src/main/ruby/shell/commands/revoke.rb +++ hbase-shell/src/main/ruby/shell/commands/revoke.rb @@ -22,10 +22,17 @@ module Shell def help return <<-EOF Revoke a user's access rights. -Syntax : revoke [
    [ []] +Syntax : revoke [<@namespace> [
    [ []]]] + +Note: Groups and users access are revoked in the same way, but groups are prefixed with an '@' + character. In the same way, tables and namespaces are specified, but namespaces are + prefixed with an '@' character. + For example: hbase> revoke 'bobsmith' + hbase> revoke '@admins' + hbase> revoke 'bobsmith', '@ns1' hbase> revoke 'bobsmith', 't1', 'f1', 'col1' hbase> revoke 'bobsmith', 'ns1:t1', 'f1', 'col1' EOF diff --git hbase-shell/src/main/ruby/shell/commands/set_quota.rb hbase-shell/src/main/ruby/shell/commands/set_quota.rb new file mode 100644 index 0000000..40e8a10 --- /dev/null +++ hbase-shell/src/main/ruby/shell/commands/set_quota.rb @@ -0,0 +1,70 @@ +# +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +module Shell + module Commands + class SetQuota < Command + def help + return <<-EOF +Set a quota for a user, table, or namespace. +Syntax : set_quota TYPE => , + +TYPE => THROTTLE +The request limit can be expressed using the form 100req/sec, 100req/min +and the size limit can be expressed using the form 100k/sec, 100M/min +with (B, K, M, G, T, P) as valid size unit and (sec, min, hour, day) as valid time unit. +Currently the throttle limit is per machine - a limit of 100req/min +means that each machine can execute 100req/min. + +For example: + + hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => '10req/sec' + hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => '10M/sec' + hbase> set_quota TYPE => THROTTLE, USER => 'u1', TABLE => 't2', LIMIT => '5K/min' + hbase> set_quota TYPE => THROTTLE, USER => 'u1', NAMESPACE => 'ns2', LIMIT => NONE + hbase> set_quota TYPE => THROTTLE, NAMESPACE => 'ns1', LIMIT => '10req/sec' + hbase> set_quota TYPE => THROTTLE, TABLE => 't1', LIMIT => '10M/sec' + hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => NONE + hbase> set_quota USER => 'u1', GLOBAL_BYPASS => true +EOF + end + + def command(args = {}) + if args.has_key?(TYPE) + qtype = args.delete(TYPE) + case qtype + when THROTTLE + if args[LIMIT].eql? NONE + args.delete(LIMIT) + quotas_admin.unthrottle(args) + else + quotas_admin.throttle(args) + end + else + raise "Invalid TYPE argument. 
got " + qtype + end + elsif args.has_key?(GLOBAL_BYPASS) + quotas_admin.set_global_bypass(args.delete(GLOBAL_BYPASS), args) + else + raise "Expected TYPE argument" + end + end + end + end +end diff --git hbase-shell/src/main/ruby/shell/commands/set_visibility.rb hbase-shell/src/main/ruby/shell/commands/set_visibility.rb index b2b57b1..cdb7724 100644 --- hbase-shell/src/main/ruby/shell/commands/set_visibility.rb +++ hbase-shell/src/main/ruby/shell/commands/set_visibility.rb @@ -57,7 +57,7 @@ EOF iter = scanner.iterator while iter.hasNext row = iter.next - row.list.each do |cell| + row.listCells.each do |cell| put = org.apache.hadoop.hbase.client.Put.new(row.getRow) put.add(cell) t.set_cell_visibility(put, visibility) diff --git hbase-shell/src/main/ruby/shell/commands/user_permission.rb hbase-shell/src/main/ruby/shell/commands/user_permission.rb index 5d8bf8a..e4673fc 100644 --- hbase-shell/src/main/ruby/shell/commands/user_permission.rb +++ hbase-shell/src/main/ruby/shell/commands/user_permission.rb @@ -23,9 +23,13 @@ module Shell return <<-EOF Show all permissions for the particular user. Syntax : user_permission
    + +Note: A namespace must always precede with '@' character. + For example: hbase> user_permission + hbase> user_permission '@ns1' hbase> user_permission 'table1' hbase> user_permission 'namespace1:table1' hbase> user_permission '.*' diff --git hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestShell.java hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestShell.java index 146b661..5fbf6a9 100644 --- hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestShell.java +++ hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestShell.java @@ -16,7 +16,6 @@ * See the License for the specific language governing permissions and * limitations under the License. */ - package org.apache.hadoop.hbase.client; import java.io.IOException; @@ -27,10 +26,11 @@ import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.coprocessor.CoprocessorHost; import org.apache.hadoop.hbase.security.access.SecureTestUtil; import org.apache.hadoop.hbase.security.visibility.VisibilityTestUtil; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.jruby.embed.PathType; import org.jruby.embed.ScriptingContainer; import org.junit.AfterClass; @@ -38,7 +38,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(LargeTests.class) +@Category({ClientTests.class, LargeTests.class}) public class TestShell { final Log LOG = LogFactory.getLog(getClass()); private final static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); diff --git hbase-shell/src/test/ruby/shell/noninteractive_test.rb hbase-shell/src/test/ruby/shell/noninteractive_test.rb new file mode 100644 index 0000000..14bdbc7 --- /dev/null +++ hbase-shell/src/test/ruby/shell/noninteractive_test.rb @@ -0,0 +1,42 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# +require 'hbase' +require 'shell' +require 'shell/formatter' + +class NonInteractiveTest < Test::Unit::TestCase + def setup + @formatter = ::Shell::Formatter::Console.new() + @hbase = ::Hbase::Hbase.new($TEST_CLUSTER.getConfiguration) + @shell = Shell::Shell.new(@hbase, @formatter, false) + end + + define_test "Shell::Shell noninteractive mode should throw" do + # XXX Exception instead of StandardError because we throw things + # that aren't StandardError + assert_raise(ArgumentError) do + # incorrect number of arguments + @shell.command('create', 'foo') + end + @shell.command('create', 'foo', 'family_1') + exception = assert_raise(RuntimeError) do + # create a table that exists + @shell.command('create', 'foo', 'family_1') + end + assert_equal("Table already exists: foo!", exception.message) + end +end diff --git hbase-shell/src/test/ruby/shell/shell_test.rb hbase-shell/src/test/ruby/shell/shell_test.rb index 988d09e..56b7dc8 100644 --- hbase-shell/src/test/ruby/shell/shell_test.rb +++ hbase-shell/src/test/ruby/shell/shell_test.rb @@ -66,4 +66,14 @@ class ShellTest < Test::Unit::TestCase define_test "Shell::Shell#command should execute a command" do @shell.command('version') end + + #------------------------------------------------------------------------------- + + define_test "Shell::Shell interactive mode should not throw" do + # incorrect number of arguments + @shell.command('create', 'foo') + @shell.command('create', 'foo', 'family_1') + # create a table that exists + @shell.command('create', 'foo', 'family_1') + end end diff --git hbase-testing-util/pom.xml hbase-testing-util/pom.xml index 6697283..624b205 100644 --- hbase-testing-util/pom.xml +++ hbase-testing-util/pom.xml @@ -23,7 +23,7 @@ hbase org.apache.hbase - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT .. hbase-testing-util diff --git hbase-thrift/pom.xml hbase-thrift/pom.xml index c86e830..c3c9ab8 100644 --- hbase-thrift/pom.xml +++ hbase-thrift/pom.xml @@ -1,5 +1,7 @@ - + - + - + @@ -364,7 +393,9 @@ the same time. --> - hadoop.profile1.1 + + hadoop.profile + 1.1 @@ -407,7 +438,8 @@ the same time. --> - !hadoop.profile + + !hadoop.profile @@ -451,7 +483,8 @@ the same time. --> the required classpath that is required in the env of the launch container in the mini mr/yarn cluster --> - ${project.build.directory}/test-classes/mrapp-generated-classpath + ${project.build.directory}/test-classes/mrapp-generated-classpath + @@ -501,7 +534,8 @@ the same time. --> the required classpath that is required in the env of the launch container in the mini mr/yarn cluster --> - ${project.build.directory}/test-classes/mrapp-generated-classpath + ${project.build.directory}/test-classes/mrapp-generated-classpath + diff --git hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/HttpAuthenticationException.java hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/HttpAuthenticationException.java new file mode 100644 index 0000000..f3c2939 --- /dev/null +++ hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/HttpAuthenticationException.java @@ -0,0 +1,37 @@ +/** + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. See accompanying LICENSE file. + */ +package org.apache.hadoop.hbase.thrift; + +public class HttpAuthenticationException extends Exception { + private static final long serialVersionUID = 0; + /** + * @param cause original exception + */ + public HttpAuthenticationException(Throwable cause) { + super(cause); + } + /** + * @param msg exception message + */ + public HttpAuthenticationException(String msg) { + super(msg); + } + /** + * @param msg exception message + * @param cause original exception + */ + public HttpAuthenticationException(String msg, Throwable cause) { + super(msg, cause); + } +} \ No newline at end of file diff --git hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftHttpServlet.java hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftHttpServlet.java new file mode 100644 index 0000000..d8221a6 --- /dev/null +++ hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftHttpServlet.java @@ -0,0 +1,202 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hbase.thrift; + +import java.io.IOException; +import java.security.PrivilegedExceptionAction; + +import javax.servlet.ServletException; +import javax.servlet.http.HttpServletRequest; +import javax.servlet.http.HttpServletResponse; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.commons.net.util.Base64; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.hbase.security.SecurityUtil; +import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.security.authorize.AuthorizationException; +import org.apache.hadoop.security.authorize.ProxyUsers; +import org.apache.thrift.TProcessor; +import org.apache.thrift.protocol.TProtocolFactory; +import org.apache.thrift.server.TServlet; +import org.ietf.jgss.GSSContext; +import org.ietf.jgss.GSSCredential; +import org.ietf.jgss.GSSException; +import org.ietf.jgss.GSSManager; +import org.ietf.jgss.GSSName; +import org.ietf.jgss.Oid; + +/** + * Thrift Http Servlet is used for performing Kerberos authentication if security is enabled and + * also used for setting the user specified in "doAs" parameter. 
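As a rough sketch of how a client could exercise the "doAs" handling this servlet describes, assuming libthrift's THttpClient plus a hypothetical gateway host and proxy user (the Kerberos/SPNEGO leg is omitted, so this only shows the header plumbing):

import org.apache.hadoop.hbase.thrift.generated.Hbase;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.THttpClient;

public class ThriftHttpDoAsSketch {
  public static void main(String[] args) throws Exception {
    // Assumes the gateway was started with hbase.regionserver.thrift.http=true and
    // hbase.thrift.support.proxyuser=true; the host and user below are placeholders.
    THttpClient transport = new THttpClient("http://thrift-gateway.example.com:9090");
    transport.setCustomHeader("doAs", "alice");  // read by ThriftHttpServlet#doPost
    transport.open();
    try {
      Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));
      System.out.println(client.getTableNames());
    } finally {
      transport.close();
    }
  }
}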
+ */ +@InterfaceAudience.Private +public class ThriftHttpServlet extends TServlet { + private static final long serialVersionUID = 1L; + public static final Log LOG = LogFactory.getLog(ThriftHttpServlet.class.getName()); + private transient final UserGroupInformation realUser; + private transient final Configuration conf; + private final boolean securityEnabled; + private final boolean doAsEnabled; + private transient ThriftServerRunner.HBaseHandler hbaseHandler; + + public ThriftHttpServlet(TProcessor processor, TProtocolFactory protocolFactory, + UserGroupInformation realUser, Configuration conf, ThriftServerRunner.HBaseHandler + hbaseHandler, boolean securityEnabled, boolean doAsEnabled) { + super(processor, protocolFactory); + this.realUser = realUser; + this.conf = conf; + this.hbaseHandler = hbaseHandler; + this.securityEnabled = securityEnabled; + this.doAsEnabled = doAsEnabled; + } + + @Override + protected void doPost(HttpServletRequest request, HttpServletResponse response) + throws ServletException, IOException { + String effectiveUser = realUser.getShortUserName(); + if (securityEnabled) { + try { + // As Thrift HTTP transport doesn't support SPNEGO yet (THRIFT-889), + // Kerberos authentication is being done at servlet level. + effectiveUser = doKerberosAuth(request); + } catch (HttpAuthenticationException e) { + LOG.error("Kerberos Authentication failed", e); + // Send a 401 to the client + response.setStatus(HttpServletResponse.SC_UNAUTHORIZED); + response.getWriter().println("Authentication Error: " + e.getMessage()); + } + } + String doAsUserFromQuery = request.getHeader("doAs"); + if (doAsUserFromQuery != null) { + if (!doAsEnabled) { + throw new ServletException("Support for proxyuser is not configured"); + } + // create and attempt to authorize a proxy user (the client is attempting + // to do proxy user) + UserGroupInformation ugi = UserGroupInformation.createProxyUser(doAsUserFromQuery, realUser); + // validate the proxy user authorization + try { + ProxyUsers.authorize(ugi, request.getRemoteAddr()); + } catch (AuthorizationException e) { + throw new ServletException(e.getMessage()); + } + effectiveUser = doAsUserFromQuery; + } + hbaseHandler.setEffectiveUser(effectiveUser); + super.doPost(request, response); + } + + /** + * Do the GSS-API kerberos authentication. + * We already have a logged in subject in the form of serviceUGI, + * which GSS-API will extract information from. + */ + private String doKerberosAuth(HttpServletRequest request) + throws HttpAuthenticationException { + try { + return realUser.doAs(new HttpKerberosServerAction(request, realUser)); + } catch (Exception e) { + LOG.error("Failed to perform authentication"); + throw new HttpAuthenticationException(e); + } + } + + + private static class HttpKerberosServerAction implements PrivilegedExceptionAction { + HttpServletRequest request; + UserGroupInformation serviceUGI; + HttpKerberosServerAction(HttpServletRequest request, UserGroupInformation serviceUGI) { + this.request = request; + this.serviceUGI = serviceUGI; + } + + @Override + public String run() throws HttpAuthenticationException { + // Get own Kerberos credentials for accepting connection + GSSManager manager = GSSManager.getInstance(); + GSSContext gssContext = null; + String serverPrincipal = SecurityUtil.getPrincipalWithoutRealm(serviceUGI.getUserName()); + try { + // This Oid for Kerberos GSS-API mechanism. + Oid kerberosMechOid = new Oid("1.2.840.113554.1.2.2"); + // Oid for SPNego GSS-API mechanism. 
+ Oid spnegoMechOid = new Oid("1.3.6.1.5.5.2"); + // Oid for kerberos principal name + Oid krb5PrincipalOid = new Oid("1.2.840.113554.1.2.2.1"); + // GSS name for server + GSSName serverName = manager.createName(serverPrincipal, krb5PrincipalOid); + // GSS credentials for server + GSSCredential serverCreds = manager.createCredential(serverName, + GSSCredential.DEFAULT_LIFETIME, + new Oid[]{kerberosMechOid, spnegoMechOid}, + GSSCredential.ACCEPT_ONLY); + // Create a GSS context + gssContext = manager.createContext(serverCreds); + // Get service ticket from the authorization header + String serviceTicketBase64 = getAuthHeader(request); + byte[] inToken = Base64.decodeBase64(serviceTicketBase64.getBytes()); + gssContext.acceptSecContext(inToken, 0, inToken.length); + // Authenticate or deny based on its context completion + if (!gssContext.isEstablished()) { + throw new HttpAuthenticationException("Kerberos authentication failed: " + + "unable to establish context with the service ticket " + + "provided by the client."); + } + return SecurityUtil.getUserFromPrincipal(gssContext.getSrcName().toString()); + } catch (GSSException e) { + throw new HttpAuthenticationException("Kerberos authentication failed: ", e); + } finally { + if (gssContext != null) { + try { + gssContext.dispose(); + } catch (GSSException e) { + LOG.warn("Error while disposing GSS Context", e); + } + } + } + } + + /** + * Returns the base64 encoded auth header payload + * + * @throws HttpAuthenticationException if a remote or network exception occurs + */ + private String getAuthHeader(HttpServletRequest request) + throws HttpAuthenticationException { + String authHeader = request.getHeader("Authorization"); + // Each http request must have an Authorization header + if (authHeader == null || authHeader.isEmpty()) { + throw new HttpAuthenticationException("Authorization header received " + + "from the client is empty."); + } + String authHeaderBase64String; + int beginIndex = ("Negotiate ").length(); + authHeaderBase64String = authHeader.substring(beginIndex); + // Authorization header must have a payload + if (authHeaderBase64String == null || authHeaderBase64String.isEmpty()) { + throw new HttpAuthenticationException("Authorization header received " + + "from the client does not contain any data."); + } + return authHeaderBase64String; + } + } +} diff --git hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java index 052c9e1..59c7e2d 100644 --- hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java +++ hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java @@ -28,10 +28,10 @@ import org.apache.commons.cli.Options; import org.apache.commons.cli.PosixParser; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HBaseInterfaceAudience; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.http.InfoServer; import org.apache.hadoop.hbase.thrift.ThriftServerRunner.ImplType; import org.apache.hadoop.hbase.util.VersionInfo; @@ -168,7 +168,7 @@ public class ThriftServer { try { if (cmd.hasOption("infoport")) { String val = cmd.getOptionValue("infoport"); - conf.setInt("hbase.thrift.info.port", Integer.valueOf(val)); + 
conf.setInt("hbase.thrift.info.port", Integer.parseInt(val)); LOG.debug("Web UI port set to " + val); } } catch (NumberFormatException e) { diff --git hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java index f1539ef..9f23c09 100644 --- hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java +++ hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java @@ -50,7 +50,6 @@ import org.apache.commons.cli.Option; import org.apache.commons.cli.OptionGroup; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; -import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseConfiguration; import org.apache.hadoop.hbase.HColumnDescriptor; @@ -61,6 +60,7 @@ import org.apache.hadoop.hbase.KeyValue; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.TableNotFoundException; +import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.hbase.client.Append; import org.apache.hadoop.hbase.client.Delete; import org.apache.hadoop.hbase.client.Durability; @@ -99,6 +99,7 @@ import org.apache.hadoop.hbase.util.Strings; import org.apache.hadoop.net.DNS; import org.apache.hadoop.security.SaslRpcServer.SaslGssCallbackHandler; import org.apache.hadoop.security.UserGroupInformation; +import org.apache.hadoop.security.authorize.ProxyUsers; import org.apache.thrift.TException; import org.apache.thrift.TProcessor; import org.apache.thrift.protocol.TBinaryProtocol; @@ -108,6 +109,7 @@ import org.apache.thrift.protocol.TProtocolFactory; import org.apache.thrift.server.THsHaServer; import org.apache.thrift.server.TNonblockingServer; import org.apache.thrift.server.TServer; +import org.apache.thrift.server.TServlet; import org.apache.thrift.server.TThreadedSelectorServer; import org.apache.thrift.transport.TFramedTransport; import org.apache.thrift.transport.TNonblockingServerSocket; @@ -116,6 +118,13 @@ import org.apache.thrift.transport.TSaslServerTransport; import org.apache.thrift.transport.TServerSocket; import org.apache.thrift.transport.TServerTransport; import org.apache.thrift.transport.TTransportFactory; +import org.mortbay.jetty.Connector; +import org.mortbay.jetty.Server; +import org.mortbay.jetty.nio.SelectChannelConnector; +import org.mortbay.jetty.security.SslSelectChannelConnector; +import org.mortbay.jetty.servlet.Context; +import org.mortbay.jetty.servlet.ServletHolder; +import org.mortbay.thread.QueuedThreadPool; import com.google.common.base.Joiner; import com.google.common.util.concurrent.ThreadFactoryBuilder; @@ -139,6 +148,15 @@ public class ThriftServerRunner implements Runnable { static final String MAX_FRAME_SIZE_CONF_KEY = "hbase.regionserver.thrift.framed.max_frame_size_in_mb"; static final String PORT_CONF_KEY = "hbase.regionserver.thrift.port"; static final String COALESCE_INC_KEY = "hbase.regionserver.thrift.coalesceIncrement"; + static final String USE_HTTP_CONF_KEY = "hbase.regionserver.thrift.http"; + static final String HTTP_MIN_THREADS = "hbase.thrift.http_threads.min"; + static final String HTTP_MAX_THREADS = "hbase.thrift.http_threads.max"; + + static final String THRIFT_SSL_ENABLED = "hbase.thrift.ssl.enabled"; + static final String THRIFT_SSL_KEYSTORE_STORE = "hbase.thrift.ssl.keystore.store"; + static final String 
THRIFT_SSL_KEYSTORE_PASSWORD = "hbase.thrift.ssl.keystore.password"; + static final String THRIFT_SSL_KEYSTORE_KEYPASSWORD = "hbase.thrift.ssl.keystore.keypassword"; + /** * Thrift quality of protection configuration key. Valid values can be: @@ -153,10 +171,13 @@ public class ThriftServerRunner implements Runnable { private static final String DEFAULT_BIND_ADDR = "0.0.0.0"; public static final int DEFAULT_LISTEN_PORT = 9090; + public static final int HREGION_VERSION = 1; + static final String THRIFT_SUPPORT_PROXYUSER = "hbase.thrift.support.proxyuser"; private final int listenPort; private Configuration conf; volatile TServer tserver; + volatile Server httpServer; private final Hbase.Iface handler; private final ThriftMetrics metrics; private final HBaseHandler hbaseHandler; @@ -165,6 +186,9 @@ public class ThriftServerRunner implements Runnable { private final String qop; private String host; + private final boolean securityEnabled; + private final boolean doAsEnabled; + /** An enum of server implementation selections */ enum ImplType { HS_HA("hsha", true, THsHaServer.class, true), @@ -266,7 +290,7 @@ public class ThriftServerRunner implements Runnable { public ThriftServerRunner(Configuration conf) throws IOException { UserProvider userProvider = UserProvider.instantiate(conf); // login the server principal (if using secure Hadoop) - boolean securityEnabled = userProvider.isHadoopSecurityEnabled() + securityEnabled = userProvider.isHadoopSecurityEnabled() && userProvider.isHBaseSecurityEnabled(); if (securityEnabled) { host = Strings.domainNamePointerToHostName(DNS.getDefaultHost( @@ -284,6 +308,7 @@ public class ThriftServerRunner implements Runnable { hbaseHandler, metrics, conf); this.realUser = userProvider.getCurrent().getUGI(); qop = conf.get(THRIFT_QOP_KEY); + doAsEnabled = conf.getBoolean(THRIFT_SUPPORT_PROXYUSER, false); if (qop != null) { if (!qop.equals("auth") && !qop.equals("auth-int") && !qop.equals("auth-conf")) { @@ -302,21 +327,27 @@ public class ThriftServerRunner implements Runnable { */ @Override public void run() { - realUser.doAs( - new PrivilegedAction() { - @Override - public Object run() { - try { + realUser.doAs(new PrivilegedAction() { + @Override + public Object run() { + try { + if (conf.getBoolean(USE_HTTP_CONF_KEY, false)) { + setupHTTPServer(); + httpServer.start(); + httpServer.join(); + } else { setupServer(); tserver.serve(); - } catch (Exception e) { - LOG.fatal("Cannot run ThriftServer", e); - // Crash the process if the ThriftServer is not running - System.exit(-1); } - return null; + } catch (Exception e) { + LOG.fatal("Cannot run ThriftServer", e); + // Crash the process if the ThriftServer is not running + System.exit(-1); } - }); + return null; + } + }); + } public void shutdown() { @@ -324,6 +355,70 @@ public class ThriftServerRunner implements Runnable { tserver.stop(); tserver = null; } + if (httpServer != null) { + try { + httpServer.stop(); + httpServer = null; + } catch (Exception e) { + LOG.error("Problem encountered in shutting down HTTP server " + e.getCause()); + } + httpServer = null; + } + } + + private void setupHTTPServer() throws IOException { + TProtocolFactory protocolFactory = new TBinaryProtocol.Factory(); + TProcessor processor = new Hbase.Processor(handler); + TServlet thriftHttpServlet = new ThriftHttpServlet(processor, protocolFactory, realUser, + conf, hbaseHandler, securityEnabled, doAsEnabled); + + httpServer = new Server(); + // Context handler + Context context = new Context(httpServer, "/", Context.SESSIONS); + 
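The HTTP and SSL paths in setupHTTPServer() are selected purely through configuration. A minimal sketch of the relevant settings, using the keys declared above (the keystore path and password are placeholders; in a real deployment these values would normally be set in hbase-site.xml for the Thrift gateway):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ThriftHttpConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hbase.regionserver.thrift.http", true);              // USE_HTTP_CONF_KEY
    conf.setBoolean("hbase.thrift.ssl.enabled", true);                    // THRIFT_SSL_ENABLED
    conf.set("hbase.thrift.ssl.keystore.store", "/path/to/keystore.jks"); // placeholder path
    conf.set("hbase.thrift.ssl.keystore.password", "changeit");           // placeholder password
    conf.setInt("hbase.thrift.http_threads.min", 2);                      // HTTP_MIN_THREADS default
    conf.setInt("hbase.thrift.http_threads.max", 100);                    // HTTP_MAX_THREADS default
    // A ThriftServerRunner built from this Configuration would take the HTTP (and SSL) branch.
  }
}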
context.setContextPath("/"); + String httpPath = "/*"; + httpServer.setHandler(context); + context.addServlet(new ServletHolder(thriftHttpServlet), httpPath); + + // set up Jetty and run the embedded server + Connector connector = new SelectChannelConnector(); + if(conf.getBoolean(THRIFT_SSL_ENABLED, false)) { + SslSelectChannelConnector sslConnector = new SslSelectChannelConnector(); + String keystore = conf.get(THRIFT_SSL_KEYSTORE_STORE); + String password = HBaseConfiguration.getPassword(conf, + THRIFT_SSL_KEYSTORE_PASSWORD, null); + String keyPassword = HBaseConfiguration.getPassword(conf, + THRIFT_SSL_KEYSTORE_KEYPASSWORD, password); + sslConnector.setKeystore(keystore); + sslConnector.setPassword(password); + sslConnector.setKeyPassword(keyPassword); + connector = sslConnector; + } + String host = getBindAddress(conf).getHostAddress(); + connector.setPort(listenPort); + connector.setHost(host); + httpServer.addConnector(connector); + + if (doAsEnabled) { + ProxyUsers.refreshSuperUserGroupsConfiguration(conf); + } + + // Set the default max thread number to 100 to limit + // the number of concurrent requests so that Thrfit HTTP server doesn't OOM easily. + // Jetty set the default max thread number to 250, if we don't set it. + // + // Our default min thread number 2 is the same as that used by Jetty. + int minThreads = conf.getInt(HTTP_MIN_THREADS, 2); + int maxThreads = conf.getInt(HTTP_MAX_THREADS, 100); + QueuedThreadPool threadPool = new QueuedThreadPool(maxThreads); + threadPool.setMinThreads(minThreads); + httpServer.setThreadPool(threadPool); + + httpServer.setSendServerVersion(false); + httpServer.setSendDateHeader(false); + httpServer.setStopAtShutdown(true); + + LOG.info("Starting Thrift HTTP Server on " + Integer.toString(listenPort)); } /** @@ -744,7 +839,7 @@ public class ThriftServerRunner implements Runnable { region.endKey = ByteBuffer.wrap(info.getEndKey()); region.id = info.getRegionId(); region.name = ByteBuffer.wrap(info.getRegionName()); - region.version = info.getVersion(); + region.version = HREGION_VERSION; // HRegion now not versioned, PB encoding used results.add(region); } return results; @@ -1549,7 +1644,7 @@ public class ThriftServerRunner implements Runnable { region.setEndKey(regionInfo.getEndKey()); region.id = regionInfo.getRegionId(); region.setName(regionInfo.getRegionName()); - region.version = regionInfo.getVersion(); + region.version = HREGION_VERSION; // version not used anymore, PB encoding used. // find region assignment to server ServerName serverName = HRegionInfo.getServerName(startRowResult); diff --git hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/HTablePool.java hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/HTablePool.java new file mode 100644 index 0000000..400f10f --- /dev/null +++ hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/HTablePool.java @@ -0,0 +1,684 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.thrift2; + +import java.io.Closeable; +import java.io.IOException; +import java.util.Collection; +import java.util.List; +import java.util.Map; + +import org.apache.hadoop.hbase.classification.InterfaceAudience; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.HBaseConfiguration; +import org.apache.hadoop.hbase.HTableDescriptor; +import org.apache.hadoop.hbase.TableName; +import org.apache.hadoop.hbase.client.*; +import org.apache.hadoop.hbase.client.coprocessor.Batch; +import org.apache.hadoop.hbase.client.coprocessor.Batch.Callback; +import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp; +import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.PoolMap; +import org.apache.hadoop.hbase.util.PoolMap.PoolType; + +import com.google.protobuf.Descriptors; +import com.google.protobuf.Message; +import com.google.protobuf.Service; +import com.google.protobuf.ServiceException; + +/** + * A simple pool of HTable instances. + * + * Each HTablePool acts as a pool for all tables. To use, instantiate an + * HTablePool and use {@link #getTable(String)} to get an HTable from the pool. + * + * This method is not needed anymore, clients should call + * HTableInterface.close() rather than returning the tables to the pool + * + * Once you are done with it, close your instance of + * {@link org.apache.hadoop.hbase.client.HTableInterface} + * by calling {@link org.apache.hadoop.hbase.client.HTableInterface#close()} rather than returning + * the tablesto the pool with (deprecated) + * {@link #putTable(org.apache.hadoop.hbase.client.HTableInterface)}. + * + *
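Given the close()-based contract this javadoc describes, a minimal usage sketch of the relocated pool (the table, family, and qualifier names are hypothetical):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.thrift2.HTablePool;
import org.apache.hadoop.hbase.util.Bytes;

public class HTablePoolSketch {
  public static void main(String[] args) throws Exception {
    // Retain at most 10 cached references per table.
    HTablePool pool = new HTablePool(HBaseConfiguration.create(), 10);
    HTableInterface table = pool.getTable("t1");
    try {
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      table.put(put);
    } finally {
      table.close();  // returns the wrapped table to the pool (instead of the deprecated putTable())
    }
    pool.close();     // shuts the pool down and releases all cached tables
  }
}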
<p>
    + * A pool can be created with a maxSize which defines the most HTable + * references that will ever be retained for each table. Otherwise the default + * is {@link Integer#MAX_VALUE}. + * + *
<p>
    + * Pool will manage its own connections to the cluster. See + * {@link org.apache.hadoop.hbase.client.HConnectionManager}. + * Was @deprecated made @InterfaceAudience.private as of 0.98.1. + * See {@link org.apache.hadoop.hbase.client.HConnection#getTable(String)}, + * Moved to thrift2 module for 2.0 + */ +@InterfaceAudience.Private +public class HTablePool implements Closeable { + private final PoolMap tables; + private final int maxSize; + private final PoolType poolType; + private final Configuration config; + private final HTableInterfaceFactory tableFactory; + + /** + * Default Constructor. Default HBaseConfiguration and no limit on pool size. + */ + public HTablePool() { + this(HBaseConfiguration.create(), Integer.MAX_VALUE); + } + + /** + * Constructor to set maximum versions and use the specified configuration. + * + * @param config + * configuration + * @param maxSize + * maximum number of references to keep for each table + */ + public HTablePool(final Configuration config, final int maxSize) { + this(config, maxSize, null, null); + } + + /** + * Constructor to set maximum versions and use the specified configuration and + * table factory. + * + * @param config + * configuration + * @param maxSize + * maximum number of references to keep for each table + * @param tableFactory + * table factory + */ + public HTablePool(final Configuration config, final int maxSize, + final HTableInterfaceFactory tableFactory) { + this(config, maxSize, tableFactory, PoolType.Reusable); + } + + /** + * Constructor to set maximum versions and use the specified configuration and + * pool type. + * + * @param config + * configuration + * @param maxSize + * maximum number of references to keep for each table + * @param poolType + * pool type which is one of {@link PoolType#Reusable} or + * {@link PoolType#ThreadLocal} + */ + public HTablePool(final Configuration config, final int maxSize, + final PoolType poolType) { + this(config, maxSize, null, poolType); + } + + /** + * Constructor to set maximum versions and use the specified configuration, + * table factory and pool type. The HTablePool supports the + * {@link PoolType#Reusable} and {@link PoolType#ThreadLocal}. If the pool + * type is null or not one of those two values, then it will default to + * {@link PoolType#Reusable}. + * + * @param config + * configuration + * @param maxSize + * maximum number of references to keep for each table + * @param tableFactory + * table factory + * @param poolType + * pool type which is one of {@link PoolType#Reusable} or + * {@link PoolType#ThreadLocal} + */ + public HTablePool(final Configuration config, final int maxSize, + final HTableInterfaceFactory tableFactory, PoolType poolType) { + // Make a new configuration instance so I can safely cleanup when + // done with the pool. + this.config = config == null ? HBaseConfiguration.create() : config; + this.maxSize = maxSize; + this.tableFactory = tableFactory == null ? new HTableFactory() + : tableFactory; + if (poolType == null) { + this.poolType = PoolType.Reusable; + } else { + switch (poolType) { + case Reusable: + case ThreadLocal: + this.poolType = poolType; + break; + default: + this.poolType = PoolType.Reusable; + break; + } + } + this.tables = new PoolMap(this.poolType, + this.maxSize); + } + + /** + * Get a reference to the specified table from the pool. + *
<p>
    + *
<p>
    + * + * @param tableName + * table name + * @return a reference to the specified table + * @throws RuntimeException + * if there is a problem instantiating the HTable + */ + public HTableInterface getTable(String tableName) { + // call the old getTable implementation renamed to findOrCreateTable + HTableInterface table = findOrCreateTable(tableName); + // return a proxy table so when user closes the proxy, the actual table + // will be returned to the pool + return new PooledHTable(table); + } + + /** + * Get a reference to the specified table from the pool. + *
<p>
    + * + * Create a new one if one is not available. + * + * @param tableName + * table name + * @return a reference to the specified table + * @throws RuntimeException + * if there is a problem instantiating the HTable + */ + private HTableInterface findOrCreateTable(String tableName) { + HTableInterface table = tables.get(tableName); + if (table == null) { + table = createHTable(tableName); + } + return table; + } + + /** + * Get a reference to the specified table from the pool. + *
<p>
    + * + * Create a new one if one is not available. + * + * @param tableName + * table name + * @return a reference to the specified table + * @throws RuntimeException + * if there is a problem instantiating the HTable + */ + public HTableInterface getTable(byte[] tableName) { + return getTable(Bytes.toString(tableName)); + } + + /** + * This method is not needed anymore, clients should call + * HTableInterface.close() rather than returning the tables to the pool + * + * @param table + * the proxy table user got from pool + * @deprecated + */ + @Deprecated + public void putTable(HTableInterface table) throws IOException { + // we need to be sure nobody puts a proxy implementation in the pool + // but if the client code is not updated + // and it will continue to call putTable() instead of calling close() + // then we need to return the wrapped table to the pool instead of the + // proxy + // table + if (table instanceof PooledHTable) { + returnTable(((PooledHTable) table).getWrappedTable()); + } else { + // normally this should not happen if clients pass back the same + // table + // object they got from the pool + // but if it happens then it's better to reject it + throw new IllegalArgumentException("not a pooled table: " + table); + } + } + + /** + * Puts the specified HTable back into the pool. + *
<p>
    + * + * If the pool already contains maxSize references to the table, then + * the table instance gets closed after flushing buffered edits. + * + * @param table + * table + */ + private void returnTable(HTableInterface table) throws IOException { + // this is the old putTable method renamed and made private + String tableName = Bytes.toString(table.getTableName()); + if (tables.size(tableName) >= maxSize) { + // release table instance since we're not reusing it + this.tables.removeValue(tableName, table); + this.tableFactory.releaseHTableInterface(table); + return; + } + tables.put(tableName, table); + } + + protected HTableInterface createHTable(String tableName) { + return this.tableFactory.createHTableInterface(config, + Bytes.toBytes(tableName)); + } + + /** + * Closes all the HTable instances , belonging to the given table, in the + * table pool. + *

    + * Note: this is a 'shutdown' of the given table pool and different from + * {@link #putTable(HTableInterface)}, that is used to return the table + * instance to the pool for future re-use. + * + * @param tableName + */ + public void closeTablePool(final String tableName) throws IOException { + Collection tables = this.tables.values(tableName); + if (tables != null) { + for (HTableInterface table : tables) { + this.tableFactory.releaseHTableInterface(table); + } + } + this.tables.remove(tableName); + } + + /** + * See {@link #closeTablePool(String)}. + * + * @param tableName + */ + public void closeTablePool(final byte[] tableName) throws IOException { + closeTablePool(Bytes.toString(tableName)); + } + + /** + * Closes all the HTable instances , belonging to all tables in the table + * pool. + *

    + * Note: this is a 'shutdown' of all the table pools. + */ + public void close() throws IOException { + for (String tableName : tables.keySet()) { + closeTablePool(tableName); + } + this.tables.clear(); + } + + public int getCurrentPoolSize(String tableName) { + return tables.size(tableName); + } + + /** + * A proxy class that implements HTableInterface.close method to return the + * wrapped table back to the table pool + * + */ + class PooledHTable implements HTableInterface { + + private boolean open = false; + + private HTableInterface table; // actual table implementation + + public PooledHTable(HTableInterface table) { + this.table = table; + this.open = true; + } + + @Override + public byte[] getTableName() { + checkState(); + return table.getTableName(); + } + + @Override + public TableName getName() { + return table.getName(); + } + + @Override + public Configuration getConfiguration() { + checkState(); + return table.getConfiguration(); + } + + @Override + public HTableDescriptor getTableDescriptor() throws IOException { + checkState(); + return table.getTableDescriptor(); + } + + @Override + public boolean exists(Get get) throws IOException { + checkState(); + return table.exists(get); + } + + @Override + public boolean[] existsAll(List gets) throws IOException { + checkState(); + return table.existsAll(gets); + } + + @Override + public Boolean[] exists(List gets) throws IOException { + checkState(); + return table.exists(gets); + } + + @Override + public void batch(List actions, Object[] results) throws IOException, + InterruptedException { + checkState(); + table.batch(actions, results); + } + + /** + * {@inheritDoc} + * @deprecated If any exception is thrown by one of the actions, there is no way to + * retrieve the partially executed results. Use {@link #batch(List, Object[])} instead. 
+ */ + @Deprecated + @Override + public Object[] batch(List actions) throws IOException, + InterruptedException { + checkState(); + return table.batch(actions); + } + + @Override + public Result get(Get get) throws IOException { + checkState(); + return table.get(get); + } + + @Override + public Result[] get(List gets) throws IOException { + checkState(); + return table.get(gets); + } + + @Override + @SuppressWarnings("deprecation") + @Deprecated + public Result getRowOrBefore(byte[] row, byte[] family) throws IOException { + checkState(); + return table.getRowOrBefore(row, family); + } + + @Override + public ResultScanner getScanner(Scan scan) throws IOException { + checkState(); + return table.getScanner(scan); + } + + @Override + public ResultScanner getScanner(byte[] family) throws IOException { + checkState(); + return table.getScanner(family); + } + + @Override + public ResultScanner getScanner(byte[] family, byte[] qualifier) + throws IOException { + checkState(); + return table.getScanner(family, qualifier); + } + + @Override + public void put(Put put) throws IOException { + checkState(); + table.put(put); + } + + @Override + public void put(List puts) throws IOException { + checkState(); + table.put(puts); + } + + @Override + public boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier, + byte[] value, Put put) throws IOException { + checkState(); + return table.checkAndPut(row, family, qualifier, value, put); + } + + @Override + public boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier, + CompareOp compareOp, byte[] value, Put put) throws IOException { + checkState(); + return table.checkAndPut(row, family, qualifier, compareOp, value, put); + } + + @Override + public void delete(Delete delete) throws IOException { + checkState(); + table.delete(delete); + } + + @Override + public void delete(List deletes) throws IOException { + checkState(); + table.delete(deletes); + } + + @Override + public boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier, + byte[] value, Delete delete) throws IOException { + checkState(); + return table.checkAndDelete(row, family, qualifier, value, delete); + } + + @Override + public boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier, + CompareOp compareOp, byte[] value, Delete delete) throws IOException { + checkState(); + return table.checkAndDelete(row, family, qualifier, compareOp, value, delete); + } + + @Override + public Result increment(Increment increment) throws IOException { + checkState(); + return table.increment(increment); + } + + @Override + public long incrementColumnValue(byte[] row, byte[] family, + byte[] qualifier, long amount) throws IOException { + checkState(); + return table.incrementColumnValue(row, family, qualifier, amount); + } + + @Override + public long incrementColumnValue(byte[] row, byte[] family, + byte[] qualifier, long amount, Durability durability) throws IOException { + checkState(); + return table.incrementColumnValue(row, family, qualifier, amount, + durability); + } + + @Override + public boolean isAutoFlush() { + checkState(); + return table.isAutoFlush(); + } + + @Override + public void flushCommits() throws IOException { + checkState(); + table.flushCommits(); + } + + /** + * Returns the actual table back to the pool + * + * @throws IOException + */ + public void close() throws IOException { + checkState(); + open = false; + returnTable(table); + } + + @Override + public CoprocessorRpcChannel coprocessorService(byte[] row) { + checkState(); + return 
table.coprocessorService(row); + } + + @Override + public Map coprocessorService(Class service, + byte[] startKey, byte[] endKey, Batch.Call callable) + throws ServiceException, Throwable { + checkState(); + return table.coprocessorService(service, startKey, endKey, callable); + } + + @Override + public void coprocessorService(Class service, + byte[] startKey, byte[] endKey, Batch.Call callable, Callback callback) + throws ServiceException, Throwable { + checkState(); + table.coprocessorService(service, startKey, endKey, callable, callback); + } + + @Override + public String toString() { + return "PooledHTable{" + ", table=" + table + '}'; + } + + /** + * Expose the wrapped HTable to tests in the same package + * + * @return wrapped htable + */ + HTableInterface getWrappedTable() { + return table; + } + + @Override + public void batchCallback(List actions, + Object[] results, Callback callback) throws IOException, + InterruptedException { + checkState(); + table.batchCallback(actions, results, callback); + } + + /** + * {@inheritDoc} + * @deprecated If any exception is thrown by one of the actions, there is no way to + * retrieve the partially executed results. Use + * {@link #batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)} + * instead. + */ + @Deprecated + @Override + public Object[] batchCallback(List actions, + Callback callback) throws IOException, InterruptedException { + checkState(); + return table.batchCallback(actions, callback); + } + + @Override + public void mutateRow(RowMutations rm) throws IOException { + checkState(); + table.mutateRow(rm); + } + + @Override + public Result append(Append append) throws IOException { + checkState(); + return table.append(append); + } + + @Override + public void setAutoFlush(boolean autoFlush) { + checkState(); + table.setAutoFlush(autoFlush, autoFlush); + } + + @Override + public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail) { + checkState(); + table.setAutoFlush(autoFlush, clearBufferOnFail); + } + + @Override + public void setAutoFlushTo(boolean autoFlush) { + table.setAutoFlushTo(autoFlush); + } + + @Override + public long getWriteBufferSize() { + checkState(); + return table.getWriteBufferSize(); + } + + @Override + public void setWriteBufferSize(long writeBufferSize) throws IOException { + checkState(); + table.setWriteBufferSize(writeBufferSize); + } + + boolean isOpen() { + return open; + } + + private void checkState() { + if (!isOpen()) { + throw new IllegalStateException("Table=" + new String(table.getTableName()) + + " already closed"); + } + } + + @Override + public long incrementColumnValue(byte[] row, byte[] family, + byte[] qualifier, long amount, boolean writeToWAL) throws IOException { + return table.incrementColumnValue(row, family, qualifier, amount, writeToWAL); + } + + @Override + public Map batchCoprocessorService( + Descriptors.MethodDescriptor method, Message request, + byte[] startKey, byte[] endKey, R responsePrototype) throws ServiceException, Throwable { + checkState(); + return table.batchCoprocessorService(method, request, startKey, endKey, + responsePrototype); + } + + @Override + public void batchCoprocessorService( + Descriptors.MethodDescriptor method, Message request, + byte[] startKey, byte[] endKey, R responsePrototype, Callback callback) + throws ServiceException, Throwable { + checkState(); + table.batchCoprocessorService(method, request, startKey, endKey, responsePrototype, callback); + } + + @Override + public boolean checkAndMutate(byte[] 
row, byte[] family, byte[] qualifier, CompareOp compareOp, + byte[] value, RowMutations mutation) throws IOException { + checkState(); + return table.checkAndMutate(row, family, qualifier, compareOp, value, mutation); + } + } +} diff --git hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftHBaseServiceHandler.java hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftHBaseServiceHandler.java index b055918..41305a6 100644 --- hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftHBaseServiceHandler.java +++ hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftHBaseServiceHandler.java @@ -42,7 +42,6 @@ import org.apache.hadoop.hbase.classification.InterfaceAudience; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.client.HTableFactory; import org.apache.hadoop.hbase.client.HTableInterface; -import org.apache.hadoop.hbase.client.HTablePool; import org.apache.hadoop.hbase.client.ResultScanner; import org.apache.hadoop.hbase.client.Table; import org.apache.hadoop.hbase.security.UserProvider; diff --git hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java index f79276c..72e9117 100644 --- hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java +++ hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java @@ -309,6 +309,15 @@ public class ThriftServer { System.exit(1); } + // Get address to bind + String bindAddress; + if (cmd.hasOption("bind")) { + bindAddress = cmd.getOptionValue("bind"); + conf.set("hbase.thrift.info.bindAddress", bindAddress); + } else { + bindAddress = conf.get("hbase.thrift.info.bindAddress"); + } + // Get port to bind to int listenPort = 0; try { @@ -387,7 +396,7 @@ public class ThriftServer { conf.getBoolean("hbase.regionserver.thrift.framed", false) || nonblocking || hsha; TTransportFactory transportFactory = getTTransportFactory(qop, name, host, framed, conf.getInt("hbase.regionserver.thrift.framed.max_frame_size_in_mb", 2) * 1024 * 1024); - InetSocketAddress inetSocketAddress = bindToPort(cmd.getOptionValue("bind"), listenPort); + InetSocketAddress inetSocketAddress = bindToPort(bindAddress, listenPort); conf.setBoolean("hbase.regionserver.thrift.framed", framed); if (qop != null) { // Create a processor wrapper, to get the caller @@ -409,7 +418,7 @@ public class ThriftServer { try { if (cmd.hasOption("infoport")) { String val = cmd.getOptionValue("infoport"); - conf.setInt("hbase.thrift.info.port", Integer.valueOf(val)); + conf.setInt("hbase.thrift.info.port", Integer.parseInt(val)); log.debug("Web UI port set to " + val); } } catch (NumberFormatException e) { diff --git hbase-thrift/src/main/resources/hbase-webapps/static/hbase_logo_small.png hbase-thrift/src/main/resources/hbase-webapps/static/hbase_logo_small.png index 60452b6..8c6353a 100644 Binary files hbase-thrift/src/main/resources/hbase-webapps/static/hbase_logo_small.png and hbase-thrift/src/main/resources/hbase-webapps/static/hbase_logo_small.png differ diff --git hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java index d365471..189f17e 100644 --- hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java +++ hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java @@ -18,9 +18,6 @@ */ package org.apache.hadoop.hbase.thrift; -import static 
org.junit.Assert.assertEquals; -import static org.junit.Assert.assertTrue; - import java.util.ArrayList; import java.util.Collection; import java.util.concurrent.LinkedBlockingQueue; @@ -30,8 +27,9 @@ import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.CompatibilitySingletonFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; -import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.test.MetricsAssertHelper; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.SmallTests; import org.apache.hadoop.hbase.thrift.CallQueue.Call; import org.junit.experimental.categories.Category; import org.junit.runner.RunWith; @@ -43,7 +41,7 @@ import org.junit.Test; * Unit testing for CallQueue, a part of the * org.apache.hadoop.hbase.thrift package. */ -@Category(SmallTests.class) +@Category({ClientTests.class, SmallTests.class}) @RunWith(Parameterized.class) public class TestCallQueue { diff --git hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftHttpServer.java hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftHttpServer.java new file mode 100644 index 0000000..9341ffa --- /dev/null +++ hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftHttpServer.java @@ -0,0 +1,163 @@ +/* + * Copyright The Apache Software Foundation + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with this + * work for additional information regarding copyright ownership. The ASF + * licenses this file to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT + * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the + * License for the specific language governing permissions and limitations + * under the License. + */ +package org.apache.hadoop.hbase.thrift; + +import java.util.ArrayList; +import java.util.List; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.HConstants; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; +import org.apache.hadoop.hbase.thrift.generated.Hbase; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManagerTestHelper; +import org.apache.hadoop.hbase.util.IncrementingEnvironmentEdge; +import org.apache.thrift.protocol.TBinaryProtocol; +import org.apache.thrift.protocol.TProtocol; +import org.apache.thrift.transport.THttpClient; +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.Test; +import org.junit.experimental.categories.Category; + +import com.google.common.base.Joiner; + +/** + * Start the HBase Thrift HTTP server on a random port through the command-line + * interface and talk to it from client side. 
+ */ +@Category({ClientTests.class, LargeTests.class}) + +public class TestThriftHttpServer { + + public static final Log LOG = + LogFactory.getLog(TestThriftHttpServer.class); + + private static final HBaseTestingUtility TEST_UTIL = + new HBaseTestingUtility(); + + private Thread httpServerThread; + private volatile Exception httpServerException; + + private Exception clientSideException; + + private ThriftServer thriftServer; + private int port; + + @BeforeClass + public static void setUpBeforeClass() throws Exception { + TEST_UTIL.getConfiguration().setBoolean("hbase.regionserver.thrift.http", true); + TEST_UTIL.getConfiguration().setBoolean("hbase.table.sanity.checks", false); + TEST_UTIL.startMiniCluster(); + //ensure that server time increments every time we do an operation, otherwise + //successive puts having the same timestamp will override each other + EnvironmentEdgeManagerTestHelper.injectEdge(new IncrementingEnvironmentEdge()); + } + + @AfterClass + public static void tearDownAfterClass() throws Exception { + TEST_UTIL.shutdownMiniCluster(); + EnvironmentEdgeManager.reset(); + } + + private void startHttpServerThread(final String[] args) { + LOG.info("Starting HBase Thrift server with HTTP server: " + Joiner.on(" ").join(args)); + + httpServerException = null; + httpServerThread = new Thread(new Runnable() { + @Override + public void run() { + try { + thriftServer.doMain(args); + } catch (Exception e) { + httpServerException = e; + } + } + }); + httpServerThread.setName(ThriftServer.class.getSimpleName() + + "-httpServer"); + httpServerThread.start(); + } + + @Test(timeout=600000) + public void testRunThriftServer() throws Exception { + List args = new ArrayList(); + port = HBaseTestingUtility.randomFreePort(); + args.add("-" + ThriftServer.PORT_OPTION); + args.add(String.valueOf(port)); + args.add("start"); + + thriftServer = new ThriftServer(TEST_UTIL.getConfiguration()); + startHttpServerThread(args.toArray(new String[args.size()])); + + // wait up to 10s for the server to start + for (int i = 0; i < 100 + && ( thriftServer.serverRunner == null || thriftServer.serverRunner.httpServer == + null); i++) { + Thread.sleep(100); + } + + try { + talkToThriftServer(); + } catch (Exception ex) { + clientSideException = ex; + } finally { + stopHttpServerThread(); + } + + if (clientSideException != null) { + LOG.error("Thrift client threw an exception " + clientSideException); + throw new Exception(clientSideException); + } + } + + private static volatile boolean tableCreated = false; + + private void talkToThriftServer() throws Exception { + THttpClient httpClient = new THttpClient( + "http://"+ HConstants.LOCALHOST + ":" + port); + httpClient.open(); + try { + TProtocol prot; + prot = new TBinaryProtocol(httpClient); + Hbase.Client client = new Hbase.Client(prot); + if (!tableCreated){ + TestThriftServer.createTestTables(client); + tableCreated = true; + } + TestThriftServer.checkTableList(client); + } finally { + httpClient.close(); + } + } + + private void stopHttpServerThread() throws Exception { + LOG.debug("Stopping " + " Thrift HTTP server"); + thriftServer.stop(); + httpServerThread.join(); + if (httpServerException != null) { + LOG.error("Command-line invocation of HBase Thrift server threw an " + + "exception", httpServerException); + throw new Exception(httpServerException); + } + } +} diff --git hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java index 
71e88d5..d5a020e 100644 --- hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java +++ hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java @@ -36,11 +36,12 @@ import org.apache.hadoop.hbase.CompatibilityFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.HRegionInfo; -import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.filter.ParseFilter; import org.apache.hadoop.hbase.security.UserProvider; import org.apache.hadoop.hbase.test.MetricsAssertHelper; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.thrift.ThriftServerRunner.HBaseHandler; import org.apache.hadoop.hbase.thrift.generated.BatchMutation; import org.apache.hadoop.hbase.thrift.generated.ColumnDescriptor; @@ -64,7 +65,7 @@ import org.junit.experimental.categories.Category; * Unit testing for ThriftServerRunner.HBaseHandler, a part of the * org.apache.hadoop.hbase.thrift package. */ -@Category(LargeTests.class) +@Category({ClientTests.class, LargeTests.class}) public class TestThriftServer { private static final HBaseTestingUtility UTIL = new HBaseTestingUtility(); private static final Log LOG = LogFactory.getLog(TestThriftServer.class); diff --git hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServerCmdLine.java hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServerCmdLine.java index d45cb16..9446d2f 100644 --- hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServerCmdLine.java +++ hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServerCmdLine.java @@ -29,6 +29,7 @@ import java.util.List; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; +import org.apache.hadoop.hbase.testclassification.ClientTests; import org.apache.hadoop.hbase.testclassification.LargeTests; import org.apache.hadoop.hbase.thrift.ThriftServerRunner.ImplType; import org.apache.hadoop.hbase.thrift.generated.Hbase; @@ -56,7 +57,7 @@ import com.google.common.base.Joiner; * Start the HBase Thrift server on a random port through the command-line * interface and talk to it from client side. */ -@Category(LargeTests.class) +@Category({ClientTests.class, LargeTests.class}) @RunWith(Parameterized.class) public class TestThriftServerCmdLine { @@ -240,7 +241,5 @@ public class TestThriftServerCmdLine { throw new Exception(cmdLineException); } } - - } diff --git hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestHTablePool.java hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestHTablePool.java new file mode 100644 index 0000000..2826b05 --- /dev/null +++ hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestHTablePool.java @@ -0,0 +1,366 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.thrift2; + +import java.io.IOException; + +import org.apache.hadoop.hbase.*; +import org.apache.hadoop.hbase.client.HBaseAdmin; +import org.apache.hadoop.hbase.client.HTable; +import org.apache.hadoop.hbase.client.HTableInterface; +import org.apache.hadoop.hbase.client.Table; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.PoolMap.PoolType; +import org.junit.*; +import org.junit.experimental.categories.Category; +import org.junit.runner.RunWith; +import org.junit.runners.Suite; + +/** + * Tests HTablePool. + */ +@RunWith(Suite.class) +@Suite.SuiteClasses({TestHTablePool.TestHTableReusablePool.class, TestHTablePool.TestHTableThreadLocalPool.class}) +@Category({ClientTests.class, MediumTests.class}) +public class TestHTablePool { + private static HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility(); + private final static String TABLENAME = "TestHTablePool"; + + public abstract static class TestHTablePoolType { + + @BeforeClass + public static void setUpBeforeClass() throws Exception { + TEST_UTIL.startMiniCluster(1); + TEST_UTIL.createTable(TableName.valueOf(TABLENAME), HConstants.CATALOG_FAMILY); + } + + @AfterClass + public static void tearDownAfterClass() throws Exception { + TEST_UTIL.shutdownMiniCluster(); + } + + protected abstract PoolType getPoolType(); + + @Test + public void testTableWithStringName() throws Exception { + HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), + Integer.MAX_VALUE, getPoolType()); + String tableName = TABLENAME; + + // Request a table from an empty pool + Table table = pool.getTable(tableName); + Assert.assertNotNull(table); + + // Close table (returns table to the pool) + table.close(); + + // Request a table of the same name + Table sameTable = pool.getTable(tableName); + Assert.assertSame( + ((HTablePool.PooledHTable) table).getWrappedTable(), + ((HTablePool.PooledHTable) sameTable).getWrappedTable()); + } + + @Test + public void testTableWithByteArrayName() throws IOException { + HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), + Integer.MAX_VALUE, getPoolType()); + + // Request a table from an empty pool + Table table = pool.getTable(TABLENAME); + Assert.assertNotNull(table); + + // Close table (returns table to the pool) + table.close(); + + // Request a table of the same name + Table sameTable = pool.getTable(TABLENAME); + Assert.assertSame( + ((HTablePool.PooledHTable) table).getWrappedTable(), + ((HTablePool.PooledHTable) sameTable).getWrappedTable()); + } + + @Test + public void testTablesWithDifferentNames() throws IOException { + HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), + Integer.MAX_VALUE, getPoolType()); + // We add the class to the table name as the HBase cluster is reused + // during the tests: this gives naming unicity. 
+ byte[] otherTable = Bytes.toBytes( + "OtherTable_" + getClass().getSimpleName() + ); + TEST_UTIL.createTable(otherTable, HConstants.CATALOG_FAMILY); + + // Request a table from an empty pool + Table table1 = pool.getTable(TABLENAME); + Table table2 = pool.getTable(otherTable); + Assert.assertNotNull(table2); + + // Close tables (returns tables to the pool) + table1.close(); + table2.close(); + + // Request tables of the same names + Table sameTable1 = pool.getTable(TABLENAME); + Table sameTable2 = pool.getTable(otherTable); + Assert.assertSame( + ((HTablePool.PooledHTable) table1).getWrappedTable(), + ((HTablePool.PooledHTable) sameTable1).getWrappedTable()); + Assert.assertSame( + ((HTablePool.PooledHTable) table2).getWrappedTable(), + ((HTablePool.PooledHTable) sameTable2).getWrappedTable()); + } + @Test + public void testProxyImplementationReturned() { + HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), + Integer.MAX_VALUE); + String tableName = TABLENAME;// Request a table from + // an + // empty pool + Table table = pool.getTable(tableName); + + // Test if proxy implementation is returned + Assert.assertTrue(table instanceof HTablePool.PooledHTable); + } + + @Test + public void testDeprecatedUsagePattern() throws IOException { + HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), + Integer.MAX_VALUE); + String tableName = TABLENAME;// Request a table from + // an + // empty pool + + // get table will return proxy implementation + HTableInterface table = pool.getTable(tableName); + + // put back the proxy implementation instead of closing it + pool.putTable(table); + + // Request a table of the same name + Table sameTable = pool.getTable(tableName); + + // test no proxy over proxy created + Assert.assertSame(((HTablePool.PooledHTable) table).getWrappedTable(), + ((HTablePool.PooledHTable) sameTable).getWrappedTable()); + } + + @Test + public void testReturnDifferentTable() throws IOException { + HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), + Integer.MAX_VALUE); + String tableName = TABLENAME;// Request a table from + // an + // empty pool + + // get table will return proxy implementation + final Table table = pool.getTable(tableName); + HTableInterface alienTable = new HTable(TEST_UTIL.getConfiguration(), + TableName.valueOf(TABLENAME)) { + // implementation doesn't matter as long the table is not from + // pool + }; + try { + // put the wrong table in pool + pool.putTable(alienTable); + Assert.fail("alien table accepted in pool"); + } catch (IllegalArgumentException e) { + Assert.assertTrue("alien table rejected", true); + } + } + + @Test + public void testHTablePoolCloseTwice() throws Exception { + HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), + Integer.MAX_VALUE, getPoolType()); + String tableName = TABLENAME; + + // Request a table from an empty pool + Table table = pool.getTable(tableName); + Assert.assertNotNull(table); + Assert.assertTrue(((HTablePool.PooledHTable) table).isOpen()); + // Close table (returns table to the pool) + table.close(); + // check if the table is closed + Assert.assertFalse(((HTablePool.PooledHTable) table).isOpen()); + try { + table.close(); + Assert.fail("Should not allow table to be closed twice"); + } catch (IllegalStateException ex) { + Assert.assertTrue("table cannot be closed twice", true); + } finally { + pool.close(); + } + } + } + + @Category({ClientTests.class, MediumTests.class}) + public static class TestHTableReusablePool extends TestHTablePoolType { + @Override + protected 
PoolType getPoolType() { + return PoolType.Reusable; + } + + @Test + public void testTableWithMaxSize() throws Exception { + HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), 2, + getPoolType()); + + // Request tables from an empty pool + Table table1 = pool.getTable(TABLENAME); + Table table2 = pool.getTable(TABLENAME); + Table table3 = pool.getTable(TABLENAME); + + // Close tables (returns tables to the pool) + table1.close(); + table2.close(); + // The pool should reject this one since it is already full + table3.close(); + + // Request tables of the same name + Table sameTable1 = pool.getTable(TABLENAME); + Table sameTable2 = pool.getTable(TABLENAME); + Table sameTable3 = pool.getTable(TABLENAME); + Assert.assertSame( + ((HTablePool.PooledHTable) table1).getWrappedTable(), + ((HTablePool.PooledHTable) sameTable1).getWrappedTable()); + Assert.assertSame( + ((HTablePool.PooledHTable) table2).getWrappedTable(), + ((HTablePool.PooledHTable) sameTable2).getWrappedTable()); + Assert.assertNotSame( + ((HTablePool.PooledHTable) table3).getWrappedTable(), + ((HTablePool.PooledHTable) sameTable3).getWrappedTable()); + } + + @Test + public void testCloseTablePool() throws IOException { + HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), 4, + getPoolType()); + HBaseAdmin admin = new HBaseAdmin(TEST_UTIL.getConfiguration()); + + if (admin.tableExists(TABLENAME)) { + admin.disableTable(TABLENAME); + admin.deleteTable(TABLENAME); + } + + HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf(TABLENAME)); + tableDescriptor.addFamily(new HColumnDescriptor("randomFamily")); + admin.createTable(tableDescriptor); + + // Request tables from an empty pool + Table[] tables = new Table[4]; + for (int i = 0; i < 4; ++i) { + tables[i] = pool.getTable(TABLENAME); + } + + pool.closeTablePool(TABLENAME); + + for (int i = 0; i < 4; ++i) { + tables[i].close(); + } + + Assert.assertEquals(4, + pool.getCurrentPoolSize(TABLENAME)); + + pool.closeTablePool(TABLENAME); + + Assert.assertEquals(0, + pool.getCurrentPoolSize(TABLENAME)); + } + } + + @Category({ClientTests.class, MediumTests.class}) + public static class TestHTableThreadLocalPool extends TestHTablePoolType { + @Override + protected PoolType getPoolType() { + return PoolType.ThreadLocal; + } + + @Test + public void testTableWithMaxSize() throws Exception { + HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), 2, + getPoolType()); + + // Request tables from an empty pool + Table table1 = pool.getTable(TABLENAME); + Table table2 = pool.getTable(TABLENAME); + Table table3 = pool.getTable(TABLENAME); + + // Close tables (returns tables to the pool) + table1.close(); + table2.close(); + // The pool should not reject this one since the number of threads + // <= 2 + table3.close(); + + // Request tables of the same name + Table sameTable1 = pool.getTable(TABLENAME); + Table sameTable2 = pool.getTable(TABLENAME); + Table sameTable3 = pool.getTable(TABLENAME); + Assert.assertSame( + ((HTablePool.PooledHTable) table3).getWrappedTable(), + ((HTablePool.PooledHTable) sameTable1).getWrappedTable()); + Assert.assertSame( + ((HTablePool.PooledHTable) table3).getWrappedTable(), + ((HTablePool.PooledHTable) sameTable2).getWrappedTable()); + Assert.assertSame( + ((HTablePool.PooledHTable) table3).getWrappedTable(), + ((HTablePool.PooledHTable) sameTable3).getWrappedTable()); + } + + @Test + public void testCloseTablePool() throws IOException { + HTablePool pool = new HTablePool(TEST_UTIL.getConfiguration(), 4, + 
getPoolType()); + HBaseAdmin admin = new HBaseAdmin(TEST_UTIL.getConfiguration()); + + if (admin.tableExists(TABLENAME)) { + admin.disableTable(TABLENAME); + admin.deleteTable(TABLENAME); + } + + HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf(TABLENAME)); + tableDescriptor.addFamily(new HColumnDescriptor("randomFamily")); + admin.createTable(tableDescriptor); + + // Request tables from an empty pool + Table[] tables = new Table[4]; + for (int i = 0; i < 4; ++i) { + tables[i] = pool.getTable(TABLENAME); + } + + pool.closeTablePool(TABLENAME); + + for (int i = 0; i < 4; ++i) { + tables[i].close(); + } + + Assert.assertEquals(1, + pool.getCurrentPoolSize(TABLENAME)); + + pool.closeTablePool(TABLENAME); + + Assert.assertEquals(0, + pool.getCurrentPoolSize(TABLENAME)); + } + } + +} diff --git hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java index 9c2e718..d3de6dd 100644 --- hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java +++ hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java @@ -25,7 +25,6 @@ import org.apache.hadoop.hbase.CompatibilityFactory; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.Get; @@ -38,6 +37,8 @@ import org.apache.hadoop.hbase.client.Durability; import org.apache.hadoop.hbase.filter.ParseFilter; import org.apache.hadoop.hbase.security.UserProvider; import org.apache.hadoop.hbase.test.MetricsAssertHelper; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.thrift.ThriftMetrics; import org.apache.hadoop.hbase.thrift2.generated.TAppend; import org.apache.hadoop.hbase.thrift2.generated.TColumn; @@ -85,7 +86,7 @@ import static java.nio.ByteBuffer.wrap; * Unit testing for ThriftServer.HBaseHandler, a part of the org.apache.hadoop.hbase.thrift2 * package. 
*/ -@Category(MediumTests.class) +@Category({ClientTests.class, MediumTests.class}) public class TestThriftHBaseServiceHandler { public static final Log LOG = LogFactory.getLog(TestThriftHBaseServiceHandler.class); diff --git hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandlerWithLabels.java hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandlerWithLabels.java index 946dbba..80c54df 100644 --- hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandlerWithLabels.java +++ hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandlerWithLabels.java @@ -37,7 +37,6 @@ import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hbase.HBaseTestingUtility; import org.apache.hadoop.hbase.HColumnDescriptor; import org.apache.hadoop.hbase.HTableDescriptor; -import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.TableName; import org.apache.hadoop.hbase.client.Admin; import org.apache.hadoop.hbase.client.HBaseAdmin; @@ -50,6 +49,8 @@ import org.apache.hadoop.hbase.security.visibility.VisibilityClient; import org.apache.hadoop.hbase.security.visibility.VisibilityConstants; import org.apache.hadoop.hbase.security.visibility.VisibilityController; import org.apache.hadoop.hbase.security.visibility.VisibilityUtils; +import org.apache.hadoop.hbase.testclassification.ClientTests; +import org.apache.hadoop.hbase.testclassification.MediumTests; import org.apache.hadoop.hbase.thrift2.generated.TAppend; import org.apache.hadoop.hbase.thrift2.generated.TAuthorization; import org.apache.hadoop.hbase.thrift2.generated.TCellVisibility; @@ -70,7 +71,7 @@ import org.junit.BeforeClass; import org.junit.Test; import org.junit.experimental.categories.Category; -@Category(MediumTests.class) +@Category({ClientTests.class, MediumTests.class}) public class TestThriftHBaseServiceHandlerWithLabels { public static final Log LOG = LogFactory diff --git pom.xml pom.xml index 45a2b5f..5c9d834 100644 --- pom.xml +++ pom.xml @@ -39,7 +39,7 @@ org.apache.hbase hbase pom - 1.0.0-SNAPSHOT + 2.0.0-SNAPSHOT HBase Apache HBase™ is the Hadoop database. 
Use it when you need @@ -51,7 +51,6 @@ hbase-server hbase-thrift - hbase-rest hbase-shell hbase-protocol hbase-client @@ -63,6 +62,7 @@ hbase-assembly hbase-testing-util hbase-annotations + hbase-rest hbase-checkstyle @@ -517,6 +513,13 @@ target/jacoco.exec + + + + ${test.exclude.pattern} + @@ -838,13 +841,22 @@ 4.4 runtime + + net.sf.xslthl + xslthl + 2.1.0 + runtime + + 1 + images/ ${basedir}/src/main/docbkx true true 100 true + css/freebsd_docbook.css true ${basedir}/src/main/docbkx/customization.xsl 2 @@ -861,9 +873,15 @@ true true - ../images/ - ../css/freebsd_docbook.css ${basedir}/target/docbkx/book + + + + + + + + @@ -873,9 +891,16 @@ pre-site - images/ - css/freebsd_docbook.css ${basedir}/target/docbkx/ + book.xml + + + + + + + + @@ -939,12 +964,23 @@ wagon-ssh 2.2 + + + lt.velykis.maven.skins + reflow-velocity-tools + 1.1.1 + + + + org.apache.velocity + velocity + 1.7 + ${basedir}/src/main/site UTF-8 UTF-8 - ${basedir}/src/main/site/site.vm @@ -985,6 +1021,7 @@ 3.0.3 ${compileSource} + javac-with-errorprone 2.5.1 3.0.0-SNAPSHOT @@ -1054,6 +1091,7 @@ false true 900 + -enableassertions -XX:MaxDirectMemorySize=1G -Xmx1900m -XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true @@ -1166,24 +1204,12 @@ test - hbase-rest - org.apache.hbase - ${project.version} - - - hbase-rest org.apache.hbase + hbase-testing-util ${project.version} - test-jar test - org.apache.hbase - hbase-testing-util - ${project.version} - test - - org.apache.hbase hbase-prefix-tree ${project.version} @@ -1635,8 +1661,8 @@ hadoop-2.0 - - !hadoop.profile + + !hadoop.profile @@ -2057,6 +2083,254 @@ + runMiscTests + + false + + + 1 + 1 + false + true + org.apache.hadoop.hbase.testclassification.MiscTests + + + + + + runCoprocessorTests + + false + + + 1 + 1 + false + true + + org.apache.hadoop.hbase.testclassification.CoprocessorTests + + + + + + runClientTests + + false + + + 1 + 1 + false + true + org.apache.hadoop.hbase.testclassification.ClientTests + + + + + + runMasterTests + + false + + + 1 + 1 + false + true + org.apache.hadoop.hbase.testclassification.MasterTests + + + + + + runMapredTests + + false + + + 1 + 1 + false + true + org.apache.hadoop.hbase.testclassification.MapredTests + + + + + + runMapreduceTests + + false + + + 1 + 1 + false + true + org.apache.hadoop.hbase.testclassification.MapReduceTests + + + + + + runRegionServerTests + + false + + + 1 + 1 + false + true + + org.apache.hadoop.hbase.testclassification.RegionServerTests + + + + + + runVerySlowMapReduceTests + + false + + + 2 + 1 + false + true + + org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests + + + + + + + runVerySlowRegionServerTests + + false + + + 2 + 1 + false + true + + org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests + + + + + + + runFilterTests + + false + + + 1 + 1 + false + true + org.apache.hadoop.hbase.testclassification.FilterTests + + + + + + runIOTests + + false + + + 1 + 1 + false + true + org.apache.hadoop.hbase.testclassification.IOTests + + + + + + runRestTests + + false + + + 1 + 1 + false + true + org.apache.hadoop.hbase.testclassification.RestTests + + + + + + runRPCTests + + false + + + 1 + 1 + false + true + org.apache.hadoop.hbase.testclassification.RPCTests + + + + + + runReplicationTests + + false + + + 1 + 1 + false + true + + org.apache.hadoop.hbase.testclassification.ReplicationTests + + + + + + runSecurityTests + + false + + + 1 + 1 + false + true + org.apache.hadoop.hbase.testclassification.SecurityTests + + 
+ + + + runFlakeyTests + + false + + + 1 + 1 + false + true + org.apache.hadoop.hbase.testclassification.FlakeyTests + + + + + + localTests @@ -2125,6 +2399,15 @@ + + javac + + false + + + javac + + @@ -2149,6 +2432,7 @@ false + org.apache.maven.plugins maven-jxr-plugin @@ -2176,7 +2460,6 @@ devapi - aggregate @@ -2210,9 +2493,8 @@ - org.apache.hbase:hbase-annotations - - + org.apache.hbase:hbase-annotations + diff --git src/main/docbkx/architecture.xml src/main/docbkx/architecture.xml new file mode 100644 index 0000000..16b298a --- /dev/null +++ src/main/docbkx/architecture.xml @@ -0,0 +1,3489 @@ + + + + + Architecture +

    + Overview +
+ NoSQL? + HBase is a type of "NoSQL" database. "NoSQL" is a general term meaning that the database isn't an RDBMS which + supports SQL as its primary access language, but there are many types of NoSQL databases: BerkeleyDB is an + example of a local NoSQL database, whereas HBase is very much a distributed database. Technically speaking, + HBase is really more a "Data Store" than "Data Base" because it lacks many of the features you find in an RDBMS, + such as typed columns, secondary indexes, triggers, and advanced query languages. + + However, HBase has many features which support both linear and modular scaling. HBase clusters expand + by adding RegionServers that are hosted on commodity class servers. If a cluster expands from 10 to 20 + RegionServers, for example, it doubles both in terms of storage and processing capacity. + An RDBMS can scale well, but only up to a point - specifically, the size of a single database server - and for the best + performance requires specialized hardware and storage devices. HBase features of note are: + + Strongly consistent reads/writes: HBase is not an "eventually consistent" DataStore. This + makes it very suitable for tasks such as high-speed counter aggregation. + Automatic sharding: HBase tables are distributed on the cluster via regions, and regions are + automatically split and re-distributed as your data grows. + Automatic RegionServer failover + Hadoop/HDFS Integration: HBase supports HDFS out of the box as its distributed file system. + MapReduce: HBase supports massively parallelized processing via MapReduce for using HBase as both + source and sink. + Java Client API: HBase supports an easy-to-use Java API for programmatic access. + Thrift/REST API: HBase also supports Thrift and REST for non-Java front-ends. + Block Cache and Bloom Filters: HBase supports a Block Cache and Bloom Filters for high volume query optimization. + Operational Management: HBase provides built-in web pages for operational insight as well as JMX metrics. + + +
    + +
+ When Should I Use HBase? + HBase isn't suitable for every problem. + First, make sure you have enough data. If you have hundreds of millions or billions of rows, then + HBase is a good candidate. If you only have a few thousand/million rows, then using a traditional RDBMS + might be a better choice because all of your data might wind up on a single node (or two) and + the rest of the cluster may be sitting idle. + + Second, make sure you can live without all the extra features that an RDBMS provides (e.g., typed columns, + secondary indexes, transactions, advanced query languages, etc.). An application built against an RDBMS cannot be + "ported" to HBase by simply changing a JDBC driver, for example. Consider moving from an RDBMS to HBase as a + complete redesign as opposed to a port. + + Third, make sure you have enough hardware. Even HDFS doesn't do well with fewer than + 5 DataNodes (due to things such as HDFS block replication, which has a default of 3), plus a NameNode. + + HBase can run quite well stand-alone on a laptop - but this should be considered a development + configuration only. + +
    +
+ What Is The Difference Between HBase and Hadoop/HDFS? + HDFS is a distributed file system that is well suited for the storage of large files. + Its documentation states that it is not, however, a general-purpose file system, and does not provide fast individual record lookups in files. + HBase, on the other hand, is built on top of HDFS and provides fast record lookups (and updates) for large tables. + This can sometimes be a point of conceptual confusion. HBase internally puts your data in indexed "StoreFiles" that exist + on HDFS for high-speed lookups. See the rest of this chapter for more information on how HBase achieves its goals. +
    +
    + +
    + Catalog Tables + The catalog table hbase:meta exists as an HBase table and is filtered out of the HBase + shell's list command, but is in fact a table just like any other. +
    + -ROOT- + + The -ROOT- table was removed in HBase 0.96.0. Information here should + be considered historical. + + The -ROOT- table kept track of the location of the + .META table (the previous name for the table now called hbase:meta) prior to HBase + 0.96. The -ROOT- table structure was as follows: + + Key + + .META. region key (.META.,,1) + + + + + Values + + info:regioninfo (serialized HRegionInfo + instance of hbase:meta) + + + info:server (server:port of the RegionServer holding + hbase:meta) + + + info:serverstartcode (start-time of the RegionServer process holding + hbase:meta) + + +
    +
    + hbase:meta + The hbase:meta table (previously called .META.) keeps a list + of all regions in the system. The location of hbase:meta was previously + tracked within the -ROOT- table, but is now stored in Zookeeper. + The hbase:meta table structure is as follows: + + Key + + Region key of the format ([table],[region start key],[region + id]) + + + + Values + + info:regioninfo (serialized + HRegionInfo instance for this region) + + + info:server (server:port of the RegionServer containing this + region) + + + info:serverstartcode (start-time of the RegionServer process + containing this region) + + + When a table is in the process of splitting, two other columns will be created, called + info:splitA and info:splitB. These columns represent the two + daughter regions. The values for these columns are also serialized HRegionInfo instances. + After the region has been split, eventually this row will be deleted. + + Note on HRegionInfo + The empty key is used to denote table start and table end. A region with an empty + start key is the first region in a table. If a region has both an empty start and an + empty end key, it is the only region in the table + + In the (hopefully unlikely) event that programmatic processing of catalog metadata is + required, see the Writables + utility. +
    +
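As a minimal sketch (assuming the HBase 1.0 client API discussed later in this chapter), the same layout can be inspected by scanning hbase:meta like any other table; the family and qualifier names below come from the structure described above:

Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Table meta = connection.getTable(TableName.META_TABLE_NAME);
Scan scan = new Scan();
scan.addFamily(Bytes.toBytes("info"));
ResultScanner scanner = meta.getScanner(scan);
for (Result r : scanner) {
  // Row key format: [table],[region start key],[region id]
  String regionRow = Bytes.toString(r.getRow());
  byte[] server = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("server"));
  System.out.println(regionRow + " -> " + (server == null ? "(unassigned)" : Bytes.toString(server)));
}
scanner.close();
meta.close();
connection.close();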
+ Startup Sequencing + First, the location of hbase:meta is looked up in ZooKeeper. Next, + hbase:meta is updated with server and startcode values. + Region-RegionServer assignment is covered later in this chapter. +
    +
    + +
+ Client + The HBase client finds the RegionServers that are serving the particular row range of + interest. It does this by querying the hbase:meta table. See the Catalog Tables section above for details. After locating the required region(s), the + client contacts the RegionServer serving that region, rather than going through the master, + and issues the read or write request. This information is cached in the client so that + subsequent requests need not go through the lookup process. Should a region be reassigned + either by the master load balancer or because a RegionServer has died, the client will + requery the catalog tables to determine the new location of the user region. + + See the Master section below for more information about the impact of the Master on HBase + client communication. + Administrative functions are done via an instance of Admin. + +
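As a minimal sketch of this lookup (assuming the HBase 1.0 API described below, and a table named "myTable" that already exists), the RegionLocator interface exposes the region locations the client caches:

Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
RegionLocator locator = connection.getRegionLocator(TableName.valueOf("myTable"));
// Which RegionServer currently serves this row? Answered from the client-side cache when possible.
HRegionLocation location = locator.getRegionLocation(Bytes.toBytes("somerow"));
System.out.println(location.getServerName());
locator.close();
connection.close();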
+ Cluster Connections + The API changed in HBase 1.0. It has been cleaned up, and users are returned + Interfaces to work against rather than particular types. In HBase 1.0, + obtain a cluster Connection from ConnectionFactory and thereafter get from it + instances of Table, Admin, and RegionLocator on an as-needed basis. When done, close + obtained instances. Finally, be sure to clean up your Connection instance before + exiting. Connections are heavyweight objects. Create once and keep an instance around. + Table, Admin and RegionLocator instances are lightweight. Create as you go and then + let go as soon as you are done by closing them. See the + Client Package Javadoc Description for example usage of the new HBase 1.0 API. + + For connection configuration information, see the configuration chapter of this guide. + + Table + instances are not thread-safe. Only one thread can use an instance of Table at + any given time. When creating Table instances, it is advisable to use the same HBaseConfiguration + instance. This will ensure sharing of ZooKeeper and socket instances to the RegionServers, + which is usually what you want. For example, this is preferred: + HBaseConfiguration conf = HBaseConfiguration.create(); +HTable table1 = new HTable(conf, "myTable"); +HTable table2 = new HTable(conf, "myTable"); + as opposed to this: + HBaseConfiguration conf1 = HBaseConfiguration.create(); +HTable table1 = new HTable(conf1, "myTable"); +HBaseConfiguration conf2 = HBaseConfiguration.create(); +HTable table2 = new HTable(conf2, "myTable"); + + For more information about how connections are handled in the HBase client, + see HConnectionManager. +
Connection Pooling + For applications which require high-end multithreaded access (e.g., web servers or application servers that may serve many application threads + in a single JVM), you can pre-create an HConnection, as shown in + the following example: + + Pre-Creating an <code>HConnection</code> + // Create a connection to the cluster. +HConnection connection = HConnectionManager.createConnection(Configuration); +HTableInterface table = connection.getTable("myTable"); +// use table as needed, the table returned is lightweight +table.close(); +// use the connection for other access to the cluster +connection.close(); + + Constructing an HTableInterface implementation is very lightweight, and resources are + controlled. + + <code>HTablePool</code> is Deprecated + Previous versions of this guide discussed HTablePool, which was + deprecated in HBase 0.94, 0.95, and 0.96, and removed in 0.98.1, by HBASE-6500. + Please use HConnection instead. + +
    +
    +
WriteBuffer and Batch Methods + If autoflush is turned off on + HTable, + Puts are sent to RegionServers when the writebuffer + is filled. The writebuffer is 2MB by default. Before an HTable instance is + discarded, either close() or + flushCommits() should be invoked so Puts + will not be lost. + + Note: htable.delete(Delete); does not go in the writebuffer! This only applies to Puts. + + For additional information on write durability, review the ACID semantics page. + + For fine-grained control of batching of + Puts or Deletes, + see the batch methods on HTable. +
    +
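As a minimal sketch of the buffering behavior described above (the table, family, and qualifier names are illustrative, and conf is an HBaseConfiguration as in the earlier examples):

HTable table = new HTable(conf, "myTable");
table.setAutoFlush(false, true);            // buffer Puts client-side rather than sending each one
table.setWriteBufferSize(4 * 1024 * 1024);  // optional: raise the 2MB default writebuffer
for (int i = 0; i < 1000; i++) {
  Put put = new Put(Bytes.toBytes("row-" + i));
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value-" + i));
  table.put(put);                           // queued in the writebuffer, not yet sent
}
table.flushCommits();                       // push any buffered Puts to the RegionServers
table.close();                              // close() also flushes before releasing resources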
External Clients + Information on non-Java clients and custom protocols is covered in + the External APIs chapter of this guide. +
    +
    + +
    Client Request Filters + Get and Scan instances can be + optionally configured with filters which are applied on the RegionServer. + + Filters can be confusing because there are many different types, and it is best to approach them by understanding the groups + of Filter functionality. + +
    Structural + Structural Filters contain other Filters. +
    FilterList + FilterList + represents a list of Filters with a relationship of FilterList.Operator.MUST_PASS_ALL or + FilterList.Operator.MUST_PASS_ONE between the Filters. The following example shows an 'or' between two + Filters (checking for either 'my value' or 'my other value' on the same attribute). + +FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ONE); +SingleColumnValueFilter filter1 = new SingleColumnValueFilter( + cf, + column, + CompareOp.EQUAL, + Bytes.toBytes("my value") + ); +list.add(filter1); +SingleColumnValueFilter filter2 = new SingleColumnValueFilter( + cf, + column, + CompareOp.EQUAL, + Bytes.toBytes("my other value") + ); +list.add(filter2); +scan.setFilter(list); + +
    +
    +
    + Column Value +
+ SingleColumnValueFilter + SingleColumnValueFilter + can be used to test column values for equivalence (CompareOp.EQUAL + ), inequality (CompareOp.NOT_EQUAL), or ranges (e.g., + CompareOp.GREATER). The following is an example of testing a + column for equivalence to the String value "my value": + +SingleColumnValueFilter filter = new SingleColumnValueFilter( + cf, + column, + CompareOp.EQUAL, + Bytes.toBytes("my value") + ); +scan.setFilter(filter); +
    +
    +
+ Column Value Comparators + There are several Comparator classes in the Filter package that deserve special + mention. These Comparators are used in concert with other Filters, such as SingleColumnValueFilter. +
    + RegexStringComparator + RegexStringComparator + supports regular expressions for value comparisons. + +RegexStringComparator comp = new RegexStringComparator("my."); // any value that starts with 'my' +SingleColumnValueFilter filter = new SingleColumnValueFilter( + cf, + column, + CompareOp.EQUAL, + comp + ); +scan.setFilter(filter); + + See the Oracle JavaDoc for supported + RegEx patterns in Java. +
    +
    + SubstringComparator + SubstringComparator + can be used to determine if a given substring exists in a value. The comparison is + case-insensitive. + +SubstringComparator comp = new SubstringComparator("y val"); // looking for 'my value' +SingleColumnValueFilter filter = new SingleColumnValueFilter( + cf, + column, + CompareOp.EQUAL, + comp + ); +scan.setFilter(filter); + +
    +
    + BinaryPrefixComparator + See BinaryPrefixComparator. +
    +
    + BinaryComparator + See BinaryComparator. +
    +
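As a minimal sketch (the prefix bytes are illustrative; cf, column, and scan are as in the earlier examples), a BinaryPrefixComparator can be plugged into the same SingleColumnValueFilter shape shown above to match values by their leading bytes:

SingleColumnValueFilter filter = new SingleColumnValueFilter(
  cf,
  column,
  CompareOp.EQUAL,
  new BinaryPrefixComparator(Bytes.toBytes("my "))   // matches "my value", "my other value", ...
  );
scan.setFilter(filter);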
    +
+ KeyValue Metadata + As HBase stores data internally as KeyValue pairs, KeyValue Metadata Filters evaluate + the existence of keys (i.e., ColumnFamily:Column qualifiers) for a row, as opposed to + the values covered in the previous section. +
    + FamilyFilter + FamilyFilter + can be used to filter on the ColumnFamily. It is generally a better idea to select + ColumnFamilies in the Scan than to do it with a Filter. +
    +
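A minimal sketch of the two forms (cf is a column family byte array, as in the earlier examples); restricting the Scan itself is the preferred form:

Scan scan = new Scan();
scan.addFamily(cf);   // preferred: other column families are not read at all
// The same selection expressed as a Filter, when a Filter is required:
// scan.setFilter(new FamilyFilter(CompareOp.EQUAL, new BinaryComparator(cf)));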
    + QualifierFilter + QualifierFilter + can be used to filter based on Column (aka Qualifier) name. +
    +
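A minimal sketch (the qualifier name is illustrative):

Scan scan = new Scan();
scan.setFilter(new QualifierFilter(
  CompareOp.EQUAL,
  new BinaryComparator(Bytes.toBytes("pageviews"))));   // keep only cells whose qualifier is "pageviews"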
    + ColumnPrefixFilter + ColumnPrefixFilter + can be used to filter based on the lead portion of Column (aka Qualifier) names. + A ColumnPrefixFilter seeks ahead to the first column matching the prefix in each row + and for each involved column family. It can be used to efficiently get a subset of the + columns in very wide rows. + Note: The same column qualifier can be used in different column families. This + filter returns all matching columns. + Example: Find all columns in a row and family that start with "abc" + +HTableInterface t = ...; +byte[] row = ...; +byte[] family = ...; +byte[] prefix = Bytes.toBytes("abc"); +Scan scan = new Scan(row, row); // (optional) limit to one row +scan.addFamily(family); // (optional) limit to one family +Filter f = new ColumnPrefixFilter(prefix); +scan.setFilter(f); +scan.setBatch(10); // set this if there could be many columns returned +ResultScanner rs = t.getScanner(scan); +for (Result r = rs.next(); r != null; r = rs.next()) { + for (KeyValue kv : r.raw()) { + // each kv represents a column + } +} +rs.close(); + +
    +
    + MultipleColumnPrefixFilter + MultipleColumnPrefixFilter + behaves like ColumnPrefixFilter but allows specifying multiple prefixes. + Like ColumnPrefixFilter, MultipleColumnPrefixFilter efficiently seeks ahead to the + first column matching the lowest prefix and also seeks past ranges of columns between + prefixes. It can be used to efficiently get discontinuous sets of columns from very wide + rows. + Example: Find all columns in a row and family that start with "abc" or "xyz" + +HTableInterface t = ...; +byte[] row = ...; +byte[] family = ...; +byte[][] prefixes = new byte[][] {Bytes.toBytes("abc"), Bytes.toBytes("xyz")}; +Scan scan = new Scan(row, row); // (optional) limit to one row +scan.addFamily(family); // (optional) limit to one family +Filter f = new MultipleColumnPrefixFilter(prefixes); +scan.setFilter(f); +scan.setBatch(10); // set this if there could be many columns returned +ResultScanner rs = t.getScanner(scan); +for (Result r = rs.next(); r != null; r = rs.next()) { + for (KeyValue kv : r.raw()) { + // each kv represents a column + } +} +rs.close(); + +
    +
    + ColumnRangeFilter + A ColumnRangeFilter + allows efficient intra row scanning. + A ColumnRangeFilter can seek ahead to the first matching column for each involved + column family. It can be used to efficiently get a 'slice' of the columns of a very wide + row. i.e. you have a million columns in a row but you only want to look at columns + bbbb-bbdd. + Note: The same column qualifier can be used in different column families. This + filter returns all matching columns. + Example: Find all columns in a row and family between "bbbb" (inclusive) and "bbdd" + (inclusive) + +HTableInterface t = ...; +byte[] row = ...; +byte[] family = ...; +byte[] startColumn = Bytes.toBytes("bbbb"); +byte[] endColumn = Bytes.toBytes("bbdd"); +Scan scan = new Scan(row, row); // (optional) limit to one row +scan.addFamily(family); // (optional) limit to one family +Filter f = new ColumnRangeFilter(startColumn, true, endColumn, true); +scan.setFilter(f); +scan.setBatch(10); // set this if there could be many columns returned +ResultScanner rs = t.getScanner(scan); +for (Result r = rs.next(); r != null; r = rs.next()) { + for (KeyValue kv : r.raw()) { + // each kv represents a column + } +} +rs.close(); + + Note: Introduced in HBase 0.92 +
    +
    +
    RowKey +
    RowFilter + It is generally a better idea to use the startRow/stopRow methods on Scan for row selection, however + RowFilter can also be used. +
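If a RowFilter is needed anyway (for example, to match row keys against a pattern that startRow/stopRow cannot express), it can be combined with a comparator. A minimal sketch, with the regular expression and scan object as placeholders:

Filter f = new RowFilter(CompareOp.EQUAL, new RegexStringComparator("^row-.*-v2$"));
scan.setFilter(f);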
    +
    +
    Utility +
    FirstKeyOnlyFilter + This is primarily used for rowcount jobs. + See FirstKeyOnlyFilter. +
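A minimal sketch of a row-count style scan; because only the first KeyValue of each row is returned, the amount of data transferred to the client stays small.

Scan scan = new Scan();
scan.setFilter(new FirstKeyOnlyFilter());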
    +
    +
    + +
Master
HMaster is the implementation of the Master Server. The Master server is responsible for monitoring all RegionServer instances in the cluster, and is the interface for all metadata changes. In a distributed cluster, the Master typically runs on the NameNode. J Mohamed Zahoor goes into some more detail on the Master Architecture in his blog posting, HBase HMaster Architecture.
Startup Behavior
If run in a multi-Master environment, all Masters compete to run the cluster. If the active Master loses its lease in ZooKeeper (or the Master shuts down), then the remaining Masters jostle to take over the Master role.
    +
Runtime Impact
A common dist-list question involves what happens to an HBase cluster when the Master goes down. Because the HBase client talks directly to the RegionServers, the cluster can still function in a "steady state." Additionally, hbase:meta exists as an HBase table and is not resident in the Master. However, the Master controls critical functions such as RegionServer failover and completing region splits. So while the cluster can still run for a short time without the Master, the Master should be restarted as soon as possible.
    +
    Interface + The methods exposed by HMasterInterface are primarily metadata-oriented methods: + + Table (createTable, modifyTable, removeTable, enable, disable) + + ColumnFamily (addColumn, modifyColumn, removeColumn) + + Region (move, assign, unassign) + + + For example, when the HBaseAdmin method disableTable is invoked, it is serviced by the Master server. + +
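For example, a disable/alter/enable cycle driven through the client ends up being serviced by the Master. The following is a minimal sketch using the era-appropriate HBaseAdmin API; the table name "myTable" and column family "new_cf" are placeholders.

Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
admin.disableTable("myTable");                               // serviced by the Master
admin.addColumn("myTable", new HColumnDescriptor("new_cf")); // ColumnFamily change
admin.enableTable("myTable");
admin.close();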
    +
    Processes + The Master runs several background threads: + +
    LoadBalancer + Periodically, and when there are no regions in transition, + a load balancer will run and move regions around to balance the cluster's load. + See for configuring this property. + See for more information on region assignment. + +
    +
    CatalogJanitor + Periodically checks and cleans up the hbase:meta table. See for more information on META. +
    +
    + +
    +
RegionServer
HRegionServer is the RegionServer implementation. It is responsible for serving and managing regions. In a distributed cluster, a RegionServer runs on a DataNode.
Interface
The methods exposed by HRegionInterface contain both data-oriented and region-maintenance methods:

Data (get, put, delete, next, etc.)

Region (splitRegion, compactRegion, etc.)

For example, when the HBaseAdmin method majorCompact is invoked on a table, the client actually iterates through all regions for the specified table and requests a major compaction directly from each hosting RegionServer.
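A minimal sketch of the call described above; "myTable" is a placeholder and conf is an assumed HBase configuration.

Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
admin.majorCompact("myTable"); // under the hood, a compactRegion request is sent for each region of the table
admin.close();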
    +
    + Processes + The RegionServer runs a variety of background threads: +
CompactSplitThread
Checks for splits and handles minor compactions.
    +
    + MajorCompactionChecker + Checks for major compactions. +
    +
    + MemStoreFlusher + Periodically flushes in-memory writes in the MemStore to StoreFiles. +
    +
    + LogRoller + Periodically checks the RegionServer's WAL. +
    +
    + +
    + Coprocessors + Coprocessors were added in 0.92. There is a thorough Blog Overview + of CoProcessors posted. Documentation will eventually move to this reference + guide, but the blog is the most current information available at this time. +
    + +
    + Block Cache + + HBase provides two different BlockCache implementations: the default onheap + LruBlockCache and BucketCache, which is (usually) offheap. This section + discusses benefits and drawbacks of each implementation, how to choose the appropriate + option, and configuration options for each. + + Block Cache Reporting: UI + See the RegionServer UI for detail on caching deploy. Since HBase-0.98.4, the + Block Cache detail has been significantly extended showing configurations, + sizings, current usage, time-in-the-cache, and even detail on block counts and types. + + +
Cache Choices
LruBlockCache is the original implementation, and is entirely within the Java heap. BucketCache is mainly intended for keeping blockcache data offheap, although BucketCache can also keep data onheap and serve from a file-backed cache.

BucketCache is production ready as of hbase-0.98.6. To run with BucketCache, you need HBASE-11678, which was included in hbase-0.98.6.

Fetching from BucketCache will always be slower than fetching from the native onheap LruBlockCache. However, latencies tend to be less erratic over time, because there is less garbage collection when you use BucketCache: it manages BlockCache allocations itself rather than leaving them to the GC, and if the BucketCache is deployed in offheap mode, that memory is not managed by the GC at all. This is why you would use BucketCache: to make latencies less erratic and to mitigate GC pauses and heap fragmentation. See Nick Dimiduk's BlockCache 101 for comparisons of onheap vs offheap tests. Also see Comparing BlockCache Deploys, which finds that if your dataset fits inside your LruBlockCache deploy, use it; otherwise, if you are experiencing cache churn (or you want your cache to exist beyond the vagaries of Java GC), use BucketCache.

When you enable BucketCache, you are enabling a two-tier caching system: an L1 cache implemented by an instance of LruBlockCache, and an offheap L2 cache implemented by BucketCache. Management of these two tiers, and the policy that dictates how blocks move between them, is done by CombinedBlockCache. It keeps all DATA blocks in the L2 BucketCache and meta blocks -- INDEX and BLOOM blocks -- onheap in the L1 LruBlockCache. See the Offheap Block Cache section below for more detail on going offheap.
    + +
    + General Cache Configurations + Apart from the cache implementation itself, you can set some general configuration + options to control how the cache performs. See . After setting any of these options, restart or rolling restart your cluster for the + configuration to take effect. Check logs for errors or unexpected behavior. + See also , which discusses a new option + introduced in HBASE-9857. +
    + +
LruBlockCache Design
The LruBlockCache is an LRU cache that contains three levels of block priority to allow for scan-resistance and in-memory ColumnFamilies:

Single access priority: The first time a block is loaded from HDFS it normally has this priority and it will be part of the first group to be considered during evictions. The advantage is that scanned blocks are more likely to get evicted than blocks that are getting more usage.

Multi access priority: If a block in the previous priority group is accessed again, it upgrades to this priority. It is thus part of the second group considered during evictions.

In-memory access priority: If the block's family was configured to be "in-memory", it will be part of this priority regardless of the number of times it was accessed. Catalog tables are configured like this. This group is the last one considered during evictions. To mark a column family as in-memory, call HColumnDescriptor.setInMemory(true); if creating a table from java (see the sketch after this section), or set IN_MEMORY => true when creating or altering a table in the shell: e.g. hbase(main):003:0> create 't', {NAME => 'f', IN_MEMORY => 'true'}

For more information, see the LruBlockCache source.
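The following is a minimal Java sketch of creating a table whose single column family gets the in-memory priority; the table name, family name, and configuration are assumptions.

Configuration conf = HBaseConfiguration.create();
HBaseAdmin admin = new HBaseAdmin(conf);
HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("t"));
HColumnDescriptor cf = new HColumnDescriptor("f");
cf.setInMemory(true);   // blocks of this family are cached with in-memory priority
desc.addFamily(cf);
admin.createTable(desc);
admin.close();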
    +
LruBlockCache Usage
Block caching is enabled by default for all the user tables, which means that any read operation will load the LRU cache. This might be good for a large number of use cases, but further tunings are usually required in order to achieve better performance. An important concept is the working set size, or WSS, which is: "the amount of memory needed to compute the answer to a problem". For a website, this would be the data that's needed to answer the queries over a short amount of time.

The way to calculate how much memory is available in HBase for caching is:

number of region servers * heap size * hfile.block.cache.size * 0.99

The default value for the block cache is 0.25, which represents 25% of the available heap. The last value (99%) is the default acceptable loading factor in the LRU cache after which eviction is started. The reason it is included in this equation is that it would be unrealistic to say that it is possible to use 100% of the available memory, since this would cause the process to block at the point where it loads new blocks. Here are some examples:

One region server with the default heap size (1 GB) and the default block cache size will have 253 MB of block cache available.

20 region servers with the heap size set to 8 GB and a default block cache size will have 39.6 GB of block cache.

100 region servers with the heap size set to 24 GB and a block cache size of 0.5 will have about 1.16 TB of block cache.

Your data is not the only resident of the block cache. Here are others that you may have to take into account:

Catalog Tables

The -ROOT- (prior to HBase 0.96) and hbase:meta tables are forced into the block cache and have the in-memory priority, which means that they are harder to evict. The former never uses more than a few hundred bytes, while the latter can occupy a few MBs (depending on the number of regions).

HFile Indexes

An HFile is the file format that HBase uses to store data in HDFS. It contains a multi-layered index which allows HBase to seek to the data without having to read the whole file. The size of those indexes is a factor of the block size (64KB by default), the size of your keys, and the amount of data you are storing. For big data sets it's not unusual to see numbers around 1GB per region server, although not all of it will be in cache, because the LRU will evict indexes that aren't used.

Keys

The values that are stored are only half the picture, since each value is stored along with its keys (row key, family, qualifier, and timestamp).

Bloom Filters

Just like the HFile indexes, those data structures (when enabled) are stored in the LRU.

Currently the recommended way to measure HFile index and bloom filter sizes is to look at the region server web UI and check out the relevant metrics. For keys, sampling can be done by using the HFile command line tool and looking at the average key size metric. Since HBase 0.98.3, you can view details on BlockCache stats and metrics in a special Block Cache section in the UI.

It's generally bad to use block caching when the WSS doesn't fit in memory. This is the case when you have, for example, 40GB available across all your region servers' block caches but you need to process 1TB of data. One of the reasons is that the churn generated by the evictions will trigger more garbage collections unnecessarily.
Here are two use cases:

Fully random reading pattern: This is a case where you almost never access the same row twice within a short amount of time, such that the chance of hitting a cached block is close to 0. Setting block caching on such a table is a waste of memory and CPU cycles, all the more so because it will generate more garbage for the JVM to collect. For more information on monitoring GC, see the garbage collection troubleshooting section of this guide.

Mapping a table: In a typical MapReduce job that takes a table as input, every row will be read only once, so there's no need to put them into the block cache. The Scan object has the option of turning this off via the setCacheBlocks method (set it to false); see the sketch after this list. You can still keep block caching turned on for this table if you need fast random read access. An example would be counting the number of rows in a table that serves live traffic: caching every block of that table would create massive churn and would surely evict data that's currently in use.
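A minimal sketch of a scan configured for a full-table MapReduce job; setCaching here controls the number of rows fetched per RPC and the value shown is only an assumed tuning, while setCacheBlocks(false) is what keeps the scan from churning the BlockCache.

Scan scan = new Scan();
scan.setCaching(500);        // rows per RPC; tune for your job
scan.setCacheBlocks(false);  // do not load scanned blocks into the BlockCache
// hand the scan to TableMapReduceUtil.initTableMapperJob(...) as usual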
Caching META blocks only (DATA blocks in fscache)
An interesting setup is one where we cache META blocks only and read DATA blocks in on each access. If the DATA blocks fit inside fscache, this alternative may make sense when access is completely random across a very large dataset. To enable this setup, alter your table and for each column family set BLOCKCACHE => 'false'. You are 'disabling' the BlockCache for this column family only; you can never disable the caching of META blocks. Since HBASE-4683 Always cache index and bloom blocks, we cache META blocks even if the BlockCache is disabled.
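The same effect can be had from the Java API when creating or altering the column family descriptor. A minimal sketch, with the family name as a placeholder, equivalent to the shell's BLOCKCACHE => 'false':

HColumnDescriptor cf = new HColumnDescriptor("f");
cf.setBlockCacheEnabled(false); // DATA blocks of this family bypass the BlockCache; INDEX and BLOOM blocks are still cached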
    +
    +
    + Offheap Block Cache +
    + How to Enable BucketCache + The usual deploy of BucketCache is via a managing class that sets up two caching tiers: an L1 onheap cache + implemented by LruBlockCache and a second L2 cache implemented with BucketCache. The managing class is CombinedBlockCache by default. + The just-previous link describes the caching 'policy' implemented by CombinedBlockCache. In short, it works + by keeping meta blocks -- INDEX and BLOOM in the L1, onheap LruBlockCache tier -- and DATA + blocks are kept in the L2, BucketCache tier. It is possible to amend this behavior in + HBase since version 1.0 and ask that a column family have both its meta and DATA blocks hosted onheap in the L1 tier by + setting cacheDataInL1 via + (HColumnDescriptor.setCacheDataInL1(true) + or in the shell, creating or amending column families setting CACHE_DATA_IN_L1 + to true: e.g. hbase(main):003:0> create 't', {NAME => 't', CONFIGURATION => {CACHE_DATA_IN_L1 => 'true'}} + + The BucketCache Block Cache can be deployed onheap, offheap, or file based. + You set which via the + hbase.bucketcache.ioengine setting. Setting it to + heap will have BucketCache deployed inside the + allocated java heap. Setting it to offheap will have + BucketCache make its allocations offheap, + and an ioengine setting of file:PATH_TO_FILE will direct + BucketCache to use a file caching (Useful in particular if you have some fast i/o attached to the box such + as SSDs). + + It is possible to deploy an L1+L2 setup where we bypass the CombinedBlockCache + policy and have BucketCache working as a strict L2 cache to the L1 + LruBlockCache. For such a setup, set CacheConfig.BUCKET_CACHE_COMBINED_KEY to + false. In this mode, on eviction from L1, blocks go to L2. + When a block is cached, it is cached first in L1. When we go to look for a cached block, + we look first in L1 and if none found, then search L2. Let us call this deploy format, + Raw L1+L2. + Other BucketCache configs include: specifying a location to persist cache to across + restarts, how many threads to use writing the cache, etc. See the + CacheConfig.html + class for configuration options and descriptions. + + + BucketCache Example Configuration + This sample provides a configuration for a 4 GB offheap BucketCache with a 1 GB + onheap cache. Configuration is performed on the RegionServer. Setting + hbase.bucketcache.ioengine and + hbase.bucketcache.size > 0 enables CombinedBlockCache. + Let us presume that the RegionServer has been set to run with a 5G heap: + i.e. HBASE_HEAPSIZE=5g. + + + First, edit the RegionServer's hbase-env.sh and set + HBASE_OFFHEAPSIZE to a value greater than the offheap size wanted, in + this case, 4 GB (expressed as 4G). Lets set it to 5G. That'll be 4G + for our offheap cache and 1G for any other uses of offheap memory (there are + other users of offheap memory other than BlockCache; e.g. DFSClient + in RegionServer can make use of offheap memory). See . + HBASE_OFFHEAPSIZE=5G + + + Next, add the following configuration to the RegionServer's + hbase-site.xml. + + + hbase.bucketcache.ioengine + offheap + + + hfile.block.cache.size + 0.2 + + + hbase.bucketcache.size + 4196 +]]> + + + + Restart or rolling restart your cluster, and check the logs for any + issues. + + + In the above, we set bucketcache to be 4G. The onheap lrublockcache we + configured to have 0.2 of the RegionServer's heap size (0.2 * 5G = 1G). + In other words, you configure the L1 LruBlockCache as you would normally, + as you would when there is no L2 BucketCache present. 
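For the per-family cacheDataInL1 option mentioned above, a minimal Java sketch (HBase 1.0+; the family name is a placeholder):

HColumnDescriptor cf = new HColumnDescriptor("f");
cf.setCacheDataInL1(true); // keep this family's DATA blocks in the onheap L1 LruBlockCache tier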
HBASE-10641 introduced the ability to configure multiple sizes for the buckets of the bucketcache, in HBase 0.98 and newer. To configure multiple bucket sizes, configure the new property (instead of ) to a comma-separated list of block sizes, ordered from smallest to largest, with no spaces. The goal is to optimize the bucket sizes based on your data access patterns. The following example configures buckets of size 4096 and 8192.

hbase.bucketcache.bucket.sizes
4096,8192

Direct Memory Usage In HBase
The default maximum direct memory varies by JVM. Traditionally it is 64M, or some relation to allocated heap size (-Xmx), or no limit at all (JDK7 apparently). HBase servers use direct memory; in particular, with short-circuit reading, the hosted DFSClient will allocate direct memory buffers. If you do offheap block caching, you'll be making use of direct memory. When starting your JVM, make sure the -XX:MaxDirectMemorySize setting in conf/hbase-env.sh is set to some value that is higher than what you have allocated to your offheap blockcache (hbase.bucketcache.size). It should be larger than your offheap block cache, plus some headroom for DFSClient usage (how much the DFSClient uses is not easy to quantify; it is the number of open hfiles * hbase.dfs.client.read.shortcircuit.buffer.size, where hbase.dfs.client.read.shortcircuit.buffer.size is set to 128k in HBase -- see hbase-default.xml default configurations). Direct memory is part of the Java process footprint but is separate from the object heap allocated by -Xmx. The value allocated by MaxDirectMemorySize must not exceed physical RAM, and is likely to be less than the total available RAM due to other memory requirements and system constraints.

You can see how much memory -- onheap and offheap/direct -- a RegionServer is configured to use, and how much it is using at any one time, by looking at the Server Metrics: Memory tab in the UI. It can also be gotten via JMX. In particular, the direct memory currently used by the server can be found on the java.nio.type=BufferPool,name=direct bean. Terracotta has a good write up on using offheap memory in Java. It is for their product BigMemory, but a lot of the issues noted apply in general to any attempt at going offheap. Check it out.

hbase.bucketcache.percentage.in.combinedcache
This is a pre-HBase 1.0 configuration removed because it was confusing. It was a float that you would set to some value between 0.0 and 1.0. Its default was 0.9. If the deploy was using CombinedBlockCache, then the LruBlockCache L1 size was calculated to be (1 - hbase.bucketcache.percentage.in.combinedcache) * size-of-bucketcache and the BucketCache size was hbase.bucketcache.percentage.in.combinedcache * size-of-bucket-cache, where size-of-bucket-cache itself is EITHER the value of the configuration hbase.bucketcache.size IF it was specified as megabytes, OR hbase.bucketcache.size * -XX:MaxDirectMemorySize if hbase.bucketcache.size is between 0 and 1.0.

In 1.0, it should be more straightforward. The L1 LruBlockCache size is set as a fraction of the java heap using the hfile.block.cache.size setting (not the best name), and L2 is set as above, either in absolute megabytes or as a fraction of allocated maximum direct memory.
    +
    +
Compressed Blockcache
HBASE-11331 introduced lazy blockcache decompression, more simply referred to as compressed blockcache. When compressed blockcache is enabled, data and encoded data blocks are cached in the blockcache in their on-disk format, rather than being decompressed and decrypted before caching.

For a RegionServer hosting more data than can fit into cache, enabling this feature with SNAPPY compression has been shown to result in a 50% increase in throughput and a 30% improvement in mean latency, while increasing garbage collection by 80% and overall CPU load by 2%. See HBASE-11331 for more details about how performance was measured and achieved. For a RegionServer hosting data that can comfortably fit into cache, or if your workload is sensitive to extra CPU or garbage-collection load, you may receive less benefit.

Compressed blockcache is disabled by default. To enable it, set hbase.block.data.cachecompressed to true in hbase-site.xml on all RegionServers.
    +
    + +
    + Write Ahead Log (WAL) + +
Purpose
The Write Ahead Log (WAL) records all changes to data in HBase, to file-based storage. Under normal operations, the WAL is not needed because data changes move from the MemStore to StoreFiles. However, if a RegionServer crashes or becomes unavailable before the MemStore is flushed, the WAL ensures that the changes to the data can be replayed. If writing to the WAL fails, the entire operation to modify the data fails.

HBase uses an implementation of the WAL interface. Usually, there is only one instance of a WAL per RegionServer. The RegionServer records Puts and Deletes to it before recording them to the MemStore for the affected Store.

The HLog

Prior to 2.0, the interface for WALs in HBase was named HLog. In 0.94, HLog was the name of the implementation of the WAL. You will likely find references to the HLog in documentation tailored to these older versions.

The WAL resides in HDFS in the /hbase/WALs/ directory (prior to HBase 0.94, it was stored in /hbase/.logs/), with a subdirectory per RegionServer.

For more general information about the concept of write ahead logs, see the Wikipedia Write-Ahead Log article.
    +
    + WAL Flushing + TODO (describe). +
    + +
    + WAL Splitting + + A RegionServer serves many regions. All of the regions in a region server share the + same active WAL file. Each edit in the WAL file includes information about which region + it belongs to. When a region is opened, the edits in the WAL file which belong to that + region need to be replayed. Therefore, edits in the WAL file must be grouped by region + so that particular sets can be replayed to regenerate the data in a particular region. + The process of grouping the WAL edits by region is called log + splitting. It is a critical process for recovering data if a region server + fails. + Log splitting is done by the HMaster during cluster start-up or by the ServerShutdownHandler + as a region server shuts down. So that consistency is guaranteed, affected regions + are unavailable until data is restored. All WAL edits need to be recovered and replayed + before a given region can become available again. As a result, regions affected by + log splitting are unavailable until the process completes. + + Log Splitting, Step by Step + + The <filename>/hbase/WALs/<host>,<port>,<startcode></filename> directory is renamed. + Renaming the directory is important because a RegionServer may still be up and + accepting requests even if the HMaster thinks it is down. If the RegionServer does + not respond immediately and does not heartbeat its ZooKeeper session, the HMaster + may interpret this as a RegionServer failure. Renaming the logs directory ensures + that existing, valid WAL files which are still in use by an active but busy + RegionServer are not written to by accident. + The new directory is named according to the following pattern: + ,,-splitting]]> + An example of such a renamed directory might look like the following: + /hbase/WALs/srv.example.com,60020,1254173957298-splitting + + + Each log file is split, one at a time. + The log splitter reads the log file one edit entry at a time and puts each edit + entry into the buffer corresponding to the edit’s region. At the same time, the + splitter starts several writer threads. Writer threads pick up a corresponding + buffer and write the edit entries in the buffer to a temporary recovered edit + file. The temporary edit file is stored to disk with the following naming pattern: + //recovered.edits/.temp]]> + This file is used to store all the edits in the WAL log for this region. After + log splitting completes, the .temp file is renamed to the + sequence ID of the first log written to the file. + To determine whether all edits have been written, the sequence ID is compared to + the sequence of the last edit that was written to the HFile. If the sequence of the + last edit is greater than or equal to the sequence ID included in the file name, it + is clear that all writes from the edit file have been completed. + + + After log splitting is complete, each affected region is assigned to a + RegionServer. + When the region is opened, the recovered.edits folder is checked for recovered + edits files. If any such files are present, they are replayed by reading the edits + and saving them to the MemStore. After all edit files are replayed, the contents of + the MemStore are written to disk (HFile) and the edit files are deleted. + + + +
Handling of Errors During Log Splitting
If you set the hbase.hlog.split.skip.errors option to true, errors are treated as follows:

Any error encountered during splitting will be logged.

The problematic WAL log will be moved into the .corrupt directory under the hbase rootdir.

Processing of the WAL will continue.

If the hbase.hlog.split.skip.errors option is set to false, the default, the exception will be propagated and the split will be logged as failed. See HBASE-2958 When hbase.hlog.split.skip.errors is set to false, we fail the split but that's it. We need to do more than just fail split if this flag is set.
How EOFExceptions are treated when splitting a crashed RegionServer's WALs
If an EOFException occurs while splitting logs, the split proceeds even when hbase.hlog.split.skip.errors is set to false. An EOFException while reading the last log in the set of files to split is likely, because the RegionServer was probably in the process of writing a record at the time of the crash. For background, see HBASE-2643 Figure how to deal with eof splitting logs.
    +
    + +
Performance Improvements during Log Splitting
WAL log splitting and recovery can be resource intensive and take a long time, depending on the number of RegionServers involved in the crash and the size of the regions. Distributed log splitting and distributed log replay, described in the following sections, were developed to improve performance during log splitting.
    + Distributed Log Splitting + Distributed Log Splitting was added in HBase version 0.92 + (HBASE-1364) + by Prakash Khemani from Facebook. It reduces the time to complete log splitting + dramatically, improving the availability of regions and tables. For + example, recovering a crashed cluster took around 9 hours with single-threaded log + splitting, but only about six minutes with distributed log splitting. + The information in this section is sourced from Jimmy Xiang's blog post at . + + + Enabling or Disabling Distributed Log Splitting + Distributed log processing is enabled by default since HBase 0.92. The setting + is controlled by the hbase.master.distributed.log.splitting + property, which can be set to true or false, + but defaults to true. + + + Distributed Log Splitting, Step by Step + After configuring distributed log splitting, the HMaster controls the process. + The HMaster enrolls each RegionServer in the log splitting process, and the actual + work of splitting the logs is done by the RegionServers. The general process for + log splitting, as described in still applies here. + + If distributed log processing is enabled, the HMaster creates a + split log manager instance when the cluster is started. + The split log manager manages all log files which need + to be scanned and split. The split log manager places all the logs into the + ZooKeeper splitlog node (/hbase/splitlog) as tasks. You can + view the contents of the splitlog by issuing the following + zkcli command. Example output is shown. + ls /hbase/splitlog +[hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost8.sample.com%2C57020%2C1340474893275-splitting%2Fhost8.sample.com%253A57020.1340474893900, +hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost3.sample.com%2C57020%2C1340474893299-splitting%2Fhost3.sample.com%253A57020.1340474893931, +hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost4.sample.com%2C57020%2C1340474893287-splitting%2Fhost4.sample.com%253A57020.1340474893946] + + The output contains some non-ASCII characters. When decoded, it looks much + more simple: + +[hdfs://host2.sample.com:56020/hbase/.logs +/host8.sample.com,57020,1340474893275-splitting +/host8.sample.com%3A57020.1340474893900, +hdfs://host2.sample.com:56020/hbase/.logs +/host3.sample.com,57020,1340474893299-splitting +/host3.sample.com%3A57020.1340474893931, +hdfs://host2.sample.com:56020/hbase/.logs +/host4.sample.com,57020,1340474893287-splitting +/host4.sample.com%3A57020.1340474893946] + + The listing represents WAL file names to be scanned and split, which is a + list of log splitting tasks. + + + The split log manager monitors the log-splitting tasks and workers. + The split log manager is responsible for the following ongoing tasks: + + + Once the split log manager publishes all the tasks to the splitlog + znode, it monitors these task nodes and waits for them to be + processed. + + + Checks to see if there are any dead split log + workers queued up. If it finds tasks claimed by unresponsive workers, it + will resubmit those tasks. If the resubmit fails due to some ZooKeeper + exception, the dead worker is queued up again for retry. + + + Checks to see if there are any unassigned + tasks. If it finds any, it create an ephemeral rescan node so that each + split log worker is notified to re-scan unassigned tasks via the + nodeChildrenChanged ZooKeeper event. + + + Checks for tasks which are assigned but expired. 
If any are found, they + are moved back to TASK_UNASSIGNED state again so that they can + be retried. It is possible that these tasks are assigned to slow workers, or + they may already be finished. This is not a problem, because log splitting + tasks have the property of idempotence. In other words, the same log + splitting task can be processed many times without causing any + problem. + + + The split log manager watches the HBase split log znodes constantly. If + any split log task node data is changed, the split log manager retrieves the + node data. The + node data contains the current state of the task. You can use the + zkcli get command to retrieve the + current state of a task. In the example output below, the first line of the + output shows that the task is currently unassigned. + +get /hbase/splitlog/hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost6.sample.com%2C57020%2C1340474893287-splitting%2Fhost6.sample.com%253A57020.1340474893945 + +unassigned host2.sample.com:57000 +cZxid = 0×7115 +ctime = Sat Jun 23 11:13:40 PDT 2012 +... + + Based on the state of the task whose data is changed, the split log + manager does one of the following: + + + + Resubmit the task if it is unassigned + + + Heartbeat the task if it is assigned + + + Resubmit or fail the task if it is resigned (see ) + + + Resubmit or fail the task if it is completed with errors (see ) + + + Resubmit or fail the task if it could not complete due to + errors (see ) + + + Delete the task if it is successfully completed or failed + + + + Reasons a Task Will Fail + The task has been deleted. + The node no longer exists. + The log status manager failed to move the state of the task + to TASK_UNASSIGNED. + The number of resubmits is over the resubmit + threshold. + + + + + + Each RegionServer's split log worker performs the log-splitting tasks. + Each RegionServer runs a daemon thread called the split log + worker, which does the work to split the logs. The daemon thread + starts when the RegionServer starts, and registers itself to watch HBase znodes. + If any splitlog znode children change, it notifies a sleeping worker thread to + wake up and grab more tasks. If if a worker's current task’s node data is + changed, the worker checks to see if the task has been taken by another worker. + If so, the worker thread stops work on the current task. + The worker monitors + the splitlog znode constantly. When a new task appears, the split log worker + retrieves the task paths and checks each one until it finds an unclaimed task, + which it attempts to claim. If the claim was successful, it attempts to perform + the task and updates the task's state property based on the + splitting outcome. At this point, the split log worker scans for another + unclaimed task. + + How the Split Log Worker Approaches a Task + + + It queries the task state and only takes action if the task is in + TASK_UNASSIGNED state. + + + If the task is is in TASK_UNASSIGNED state, the + worker attempts to set the state to TASK_OWNED by itself. + If it fails to set the state, another worker will try to grab it. The split + log manager will also ask all workers to rescan later if the task remains + unassigned. + + + If the worker succeeds in taking ownership of the task, it tries to get + the task state again to make sure it really gets it asynchronously. In the + meantime, it starts a split task executor to do the actual work: + + + Get the HBase root folder, create a temp folder under the root, and + split the log file to the temp folder. 
If the split was successful, the task executor sets the task to state TASK_DONE.

If the worker catches an unexpected IOException, the task is set to state TASK_ERR.

If the worker is shutting down, it sets the task to state TASK_RESIGNED.

If the task is taken by another worker, the worker simply logs it.

The split log manager monitors for uncompleted tasks.
The split log manager returns when all tasks are completed successfully. If all tasks are completed with some failures, the split log manager throws an exception so that the log splitting can be retried. Due to an asynchronous implementation, in very rare cases, the split log manager loses track of some completed tasks. For that reason, it periodically checks for remaining uncompleted tasks in its task map or ZooKeeper. If none are found, it throws an exception so that the log splitting can be retried right away instead of hanging there waiting for something that won't happen.
    +
Distributed Log Replay
After a RegionServer fails, its regions are assigned to other RegionServers, where they are marked as "recovering" in ZooKeeper. A split log worker directly replays edits from the WAL of the failed RegionServer to the regions at their new locations. When a region is in "recovering" state, it can accept writes, but no reads (including Append and Increment), region splits, or merges are allowed.

Distributed Log Replay extends the distributed log splitting framework. It works by directly replaying WAL edits to another RegionServer instead of creating recovered.edits files. It provides the following advantages over distributed log splitting alone:

It eliminates the overhead of writing and reading a large number of recovered.edits files. It is not unusual for thousands of recovered.edits files to be created and written concurrently during a RegionServer recovery. Many small random writes can degrade overall system performance.

It allows writes even when a region is in recovering state. It only takes seconds for a recovering region to accept writes again.

Enabling Distributed Log Replay
To enable distributed log replay, set hbase.master.distributed.log.replay to true. This will be the default for HBase 0.99 (HBASE-10888).

You must also enable HFile version 3 (which is the default HFile format starting in HBase 0.99. See HBASE-10855). Distributed log replay is unsafe for rolling upgrades.
    +
    +
    +
Disabling the WAL
It is possible to disable the WAL, to improve performance in certain specific situations. However, disabling the WAL puts your data at risk. The only situation where this is recommended is during a bulk load, because, in the event of a problem, the bulk load can be re-run with no risk of data loss.

The WAL is disabled on a per-mutation basis, via the durability field of the Mutation (older clients used the now-deprecated setWriteToWAL(false)). Use the Mutation.setDurability(Durability.SKIP_WAL) and Mutation.getDurability() methods to set and get the field's value. There is no way to disable the WAL for only a specific table.

If you disable the WAL for anything other than bulk loads, your data is at risk.
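A minimal sketch of skipping the WAL for a single write; the row, family, and qualifier names are placeholders, and table is assumed to be an open HTableInterface.

Put p = new Put(Bytes.toBytes("row1"));
p.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
p.setDurability(Durability.SKIP_WAL); // this edit is not written to the WAL; it is lost if the RegionServer crashes before a flush
table.put(p);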
    +
    + +
    + +
Regions
Regions are the basic element of availability and distribution for tables, and consist of a Store per Column Family. The hierarchy of objects is as follows:

Table (HBase table)
  Region (Regions for the table)
    Store (Store per ColumnFamily for each Region for the table)
      MemStore (MemStore for each Store for each Region for the table)
      StoreFile (StoreFiles for each Store for each Region for the table)
        Block (Blocks within a StoreFile within a Store for each Region for the table)

For a description of what HBase files look like when written to HDFS, see the discussion of HBase objects in HDFS elsewhere in this guide.
    + Considerations for Number of Regions + In general, HBase is designed to run with a small (20-200) number of relatively large (5-20Gb) regions per server. The considerations for this are as follows: +
Why can't I have too many regions?
Typically you want to keep your region count low on HBase, for numerous reasons. Usually right around 100 regions per RegionServer has yielded the best results. Here are some of the reasons for keeping the region count low:

MSLAB requires 2MB per memstore (that's 2MB per family per region). 1000 regions that have 2 families each use 3.9GB of heap, and it's not even storing data yet. NB: the 2MB value is configurable.

If you fill all the regions at somewhat the same rate, the global memory usage forces tiny flushes when you have too many regions, which in turn generates compactions. Rewriting the same data tens of times is the last thing you want. For example, consider filling 1000 regions (with one family) equally, with a lower bound for global memstore usage of 5GB (the region server would have a big heap). Once usage reaches 5GB, it will force flush the biggest region; at that point almost all regions have about 5MB of data, so it flushes that amount. With 5MB inserted later, it flushes another region that now has a bit over 5MB of data, and so on. This is currently the main limiting factor for the number of regions; see the region count formula elsewhere in this guide for details.

The master, as it is, is allergic to tons of regions, and will take a lot of time assigning them and moving them around in batches. The reason is that it's heavy on ZK usage, and it's not very async at the moment (could really be improved -- and has been improved a bunch in 0.96 HBase).

In older versions of HBase (pre-v2 hfile, 0.90 and previous), tons of regions on a few RSes can cause the store file index to rise, increasing heap usage and potentially creating memory pressure or OOME on the RSes.

Another issue is the effect of the number of regions on MapReduce jobs; it is typical to have one mapper per HBase region. Thus, hosting only 5 regions per RS may not be enough to get a sufficient number of tasks for a MapReduce job, while 1000 regions will generate far too many tasks.

See elsewhere in this guide for configuration guidelines.
    + +
    + +
    + Region-RegionServer Assignment + This section describes how Regions are assigned to RegionServers. + + +
    + Startup + When HBase starts regions are assigned as follows (short version): + + The Master invokes the AssignmentManager upon startup. + + The AssignmentManager looks at the existing region assignments in META. + + If the region assignment is still valid (i.e., if the RegionServer is still online) + then the assignment is kept. + + If the assignment is invalid, then the LoadBalancerFactory is invoked to assign the + region. The DefaultLoadBalancer will randomly assign the region to a RegionServer. + + META is updated with the RegionServer assignment (if needed) and the RegionServer start codes + (start time of the RegionServer process) upon region opening by the RegionServer. + + + +
    + +
    + Failover + When a RegionServer fails: + + The regions immediately become unavailable because the RegionServer is + down. + + + The Master will detect that the RegionServer has failed. + + + The region assignments will be considered invalid and will be re-assigned just + like the startup sequence. + + + In-flight queries are re-tried, and not lost. + + + Operations are switched to a new RegionServer within the following amount of + time: + ZooKeeper session timeout + split time + assignment/replay time + + + +
    + +
    + Region Load Balancing + + Regions can be periodically moved by the . + +
    + +
    + Region State Transition + HBase maintains a state for each region and persists the state in META. The state + of the META region itself is persisted in ZooKeeper. You can see the states of regions + in transition in the Master web UI. Following is the list of possible region + states. + + + Possible Region States + + OFFLINE: the region is offline and not opening + + + OPENING: the region is in the process of being opened + + + OPEN: the region is open and the region server has notified the master + + + FAILED_OPEN: the region server failed to open the region + + + CLOSING: the region is in the process of being closed + + + CLOSED: the region server has closed the region and notified the master + + + FAILED_CLOSE: the region server failed to close the region + + + SPLITTING: the region server notified the master that the region is + splitting + + + SPLIT: the region server notified the master that the region has finished + splitting + + + SPLITTING_NEW: this region is being created by a split which is in + progress + + + MERGING: the region server notified the master that this region is being merged + with another region + + + MERGED: the region server notified the master that this region has been + merged + + + MERGING_NEW: this region is being created by a merge of two regions + + + +
    + Region State Transitions + + + + +
    + + + + + Graph Legend + + Brown: Offline state, a special state that can be transient (after closed before + opening), terminal (regions of disabled tables), or initial (regions of newly + created tables) + + Palegreen: Online state that regions can serve requests + + Lightblue: Transient states + + Red: Failure states that need OPS attention + + Gold: Terminal states of regions split/merged + + Grey: Initial states of regions created through split/merge + + + + Region State Transitions Explained + + The master moves a region from OFFLINE to + OPENING state and tries to assign the region to a region + server. The region server may or may not have received the open region request. The + master retries sending the open region request to the region server until the RPC + goes through or the master runs out of retries. After the region server receives the + open region request, the region server begins opening the region. + + + If the master is running out of retries, the master prevents the region server + from opening the region by moving the region to CLOSING state and + trying to close it, even if the region server is starting to open the region. + + + After the region server opens the region, it continues to try to notify the + master until the master moves the region to OPEN state and + notifies the region server. The region is now open. + + + If the region server cannot open the region, it notifies the master. The master + moves the region to CLOSED state and tries to open the region on + a different region server. + + + If the master cannot open the region on any of a certain number of regions, it + moves the region to FAILED_OPEN state, and takes no further + action until an operator intervenes from the HBase shell, or the server is + dead. + + + The master moves a region from OPEN to + CLOSING state. The region server holding the region may or may + not have received the close region request. The master retries sending the close + request to the server until the RPC goes through or the master runs out of + retries. + + + If the region server is not online, or throws + NotServingRegionException, the master moves the region to + OFFLINE state and re-assigns it to a different region + server. + + + If the region server is online, but not reachable after the master runs out of + retries, the master moves the region to FAILED_CLOSE state and + takes no further action until an operator intervenes from the HBase shell, or the + server is dead. + + + If the region server gets the close region request, it closes the region and + notifies the master. The master moves the region to CLOSED state + and re-assigns it to a different region server. + + + Before assigning a region, the master moves the region to + OFFLINE state automatically if it is in + CLOSED state. + + + When a region server is about to split a region, it notifies the master. The + master moves the region to be split from OPEN to + SPLITTING state and add the two new regions to be created to + the region server. These two regions are in SPLITING_NEW state + initially. + + + After notifying the master, the region server starts to split the region. Once + past the point of no return, the region server notifies the master again so the + master can update the META. However, the master does not update the region states + until it is notified by the server that the split is done. 
If the split is successful, the splitting region is moved from SPLITTING to SPLIT state and the two new regions are moved from SPLITTING_NEW to OPEN state.

If the split fails, the splitting region is moved from SPLITTING back to OPEN state, and the two new regions which were created are moved from SPLITTING_NEW to OFFLINE state.

When a region server is about to merge two regions, it notifies the master first. The master moves the two regions to be merged from OPEN to MERGING state, and adds the new region which will hold the contents of the merged regions to the region server. The new region is in MERGING_NEW state initially.

After notifying the master, the region server starts to merge the two regions. Once past the point of no return, the region server notifies the master again so the master can update the META. However, the master does not update the region states until it is notified by the region server that the merge has completed. If the merge is successful, the two merging regions are moved from MERGING to MERGED state and the new region is moved from MERGING_NEW to OPEN state.

If the merge fails, the two merging regions are moved from MERGING back to OPEN state, and the new region which was created to hold the contents of the merged regions is moved from MERGING_NEW to OFFLINE state.

For regions in FAILED_OPEN or FAILED_CLOSE states, the master tries to close them again when they are reassigned by an operator via HBase Shell.
    + Region-RegionServer Locality + Over time, Region-RegionServer locality is achieved via HDFS block replication. + The HDFS client does the following by default when choosing locations to write replicas: + + First replica is written to local node + + Second replica is written to a random node on another rack + + Third replica is written on the same rack as the second, but on a different node chosen randomly + + Subsequent replicas are written on random nodes on the cluster. See Replica Placement: The First Baby Steps on this page: HDFS Architecture + + + Thus, HBase eventually achieves locality for a region after a flush or a compaction. + In a RegionServer failover situation a RegionServer may be assigned regions with non-local + StoreFiles (because none of the replicas are local), however as new data is written + in the region, or the table is compacted and StoreFiles are re-written, they will become "local" + to the RegionServer. + + For more information, see Replica Placement: The First Baby Steps on this page: HDFS Architecture + and also Lars George's blog on HBase and HDFS locality. + +
    + +
    + Region Splits + Regions split when they reach a configured threshold. + Below we treat the topic in short. For a longer exposition, + see Apache HBase Region Splitting and Merging + by our Enis Soztutar. + + + Splits run unaided on the RegionServer; i.e. the Master does not + participate. The RegionServer splits a region, offlines the split + region and then adds the daughter regions to META, opens daughters on + the parent's hosting RegionServer and then reports the split to the + Master. See for how to manually manage + splits (and for why you might do this) +
Custom Split Policies
The default split policy can be overridden using a custom RegionSplitPolicy (HBase 0.94+). Typically a custom split policy should extend HBase's default split policy: ConstantSizeRegionSplitPolicy.

The policy can be set globally through the HBaseConfiguration used, or on a per-table basis (a sketch of a custom policy follows this example):

HTableDescriptor myHtd = ...;
myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName());
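A minimal, hypothetical sketch of such a policy; the class name MyCustomSplitPolicy and the exempted table name are assumptions, and a real policy would apply its own criteria before delegating to the size-based check.

import org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy;

public class MyCustomSplitPolicy extends ConstantSizeRegionSplitPolicy {
  @Override
  protected boolean shouldSplit() {
    // never split regions of this (hypothetical) small lookup table
    if ("lookup_table".equals(region.getTableDesc().getNameAsString())) {
      return false;
    }
    // otherwise fall back to the default size-based decision
    return super.shouldSplit();
  }
}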
    +
    + +
    + Manual Region Splitting + It is possible to manually split your table, either at table creation (pre-splitting), + or at a later time as an administrative action. You might choose to split your region for + one or more of the following reasons. There may be other valid reasons, but the need to + manually split your table might also point to problems with your schema design. + + Reasons to Manually Split Your Table + + Your data is sorted by timeseries or another similar algorithm that sorts new data + at the end of the table. This means that the Region Server holding the last region is + always under load, and the other Region Servers are idle, or mostly idle. See also + . + + + You have developed an unexpected hotspot in one region of your table. For + instance, an application which tracks web searches might be inundated by a lot of + searches for a celebrity in the event of news about that celebrity. See for more discussion about this particular + scenario. + + + After a big increase to the number of Region Servers in your cluster, to get the + load spread out quickly. + + + Before a bulk-load which is likely to cause unusual and uneven load across + regions. + + + See for a discussion about the dangers and + possible benefits of managing splitting completely manually. +
    + Determining Split Points + The goal of splitting your table manually is to improve the chances of balancing the + load across the cluster in situations where good rowkey design alone won't get you + there. Keeping that in mind, the way you split your regions is very dependent upon the + characteristics of your data. It may be that you already know the best way to split your + table. If not, the way you split your table depends on what your keys are like. + + + Alphanumeric Rowkeys + + If your rowkeys start with a letter or number, you can split your table at + letter or number boundaries. For instance, the following command creates a table + with regions that split at each vowel, so the first region has A-D, the second + region has E-H, the third region has I-N, the fourth region has O-V, and the fifth + region has U-Z. + hbase> create 'test_table', 'f1', SPLITS=> ['a', 'e', 'i', 'o', 'u'] + The following command splits an existing table at split point '2'. + hbase> split 'test_table', '2' + You can also split a specific region by referring to its ID. You can find the + region ID by looking at either the table or region in the Web UI. It will be a + long number such as + t2,1,1410227759524.829850c6eaba1acc689480acd8f081bd.. The + format is table_name,start_key,region_idTo split that + region into two, as close to equally as possible (at the nearest row boundary), + issue the following command. + hbase> split 't2,1,1410227759524.829850c6eaba1acc689480acd8f081bd.' + The split key is optional. If it is omitted, the table or region is split in + half. + The following example shows how to use the RegionSplitter to create 10 + regions, split at hexadecimal values. + hbase org.apache.hadoop.hbase.util.RegionSplitter test_table HexStringSplit -c 10 -f f1 + + + + Using a Custom Algorithm + + The RegionSplitter tool is provided with HBase, and uses a SplitAlgorithm to determine split points for you. As + parameters, you give it the algorithm, desired number of regions, and column + families. It includes two split algorithms. The first is the HexStringSplit algorithm, which assumes the row keys are + hexadecimal strings. The second, UniformSplit, assumes the row keys are random byte arrays. You will + probably need to develop your own SplitAlgorithm, using the provided ones as + models. + + + +
    +
    +
Online Region Merges
Both the Master and the RegionServer participate in online region merges. The client sends a merge RPC to the Master. The Master moves the regions to be merged to the RegionServer where the more heavily loaded region resides, then sends the merge request to that RegionServer, which runs the merge. Similar to the region split process, the region merge runs as a local transaction on the RegionServer: it offlines the regions, merges the two regions on the file system, atomically deletes the merging regions from META and adds the merged region to META, opens the merged region on the RegionServer, and finally reports the merge to the Master.

An example of region merges in the hbase shell
$ hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME'
hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME', true

This is an asynchronous operation; the call returns immediately without waiting for the merge to complete. Passing 'true' as the optional third parameter will force the merge (a 'force' merge proceeds regardless of adjacency; otherwise the merge will fail unless the regions are adjacent. 'force' is for expert use only).
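The same operation can be requested through the Java admin API. A minimal sketch; the encoded region names are placeholders and conf is an assumed HBase configuration.

HBaseAdmin admin = new HBaseAdmin(conf);
admin.mergeRegions(
  Bytes.toBytes("ENCODED_REGIONNAME_A"),
  Bytes.toBytes("ENCODED_REGIONNAME_B"),
  false);   // pass true to force a merge of non-adjacent regions (expert use only)
admin.close();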
    + +
+ Store
+ A Store hosts a MemStore and zero or more StoreFiles (HFiles). A Store corresponds to a column family of a table, for a given region.
+
    + MemStore + The MemStore holds in-memory modifications to the Store. Modifications are + Cells/KeyValues. When a flush is requested, the current memstore is moved to a snapshot and is + cleared. HBase continues to serve edits from the new memstore and backing snapshot until + the flusher reports that the flush succeeded. At this point, the snapshot is discarded. + Note that when the flush happens, Memstores that belong to the same region will all be + flushed. +
    +
+ MemStoreFlush
+ A MemStore flush can be triggered under any of the conditions listed below. The
+ minimum flush unit is per region, not at individual MemStore level.
+
+ When a MemStore reaches the value specified by
+ hbase.hregion.memstore.flush.size, all MemStores that belong to
+ its region will be flushed out to disk. (A per-table override of this size is sketched
+ after this list.)
+
+ When overall memstore usage reaches the value specified by
+ hbase.regionserver.global.memstore.upperLimit, MemStores from
+ various regions will be flushed out to disk to reduce overall MemStore usage in a
+ Region Server. The flush order is based on the descending order of a region's
+ MemStore usage. Regions will have their MemStores flushed until the overall MemStore
+ usage drops to or slightly below
+ hbase.regionserver.global.memstore.lowerLimit.
+
+ When the number of WAL files per region server reaches the value specified in
+ hbase.regionserver.max.logs, MemStores from various regions
+ will be flushed out to disk to reduce the WAL count. The flush order is based on time.
+ Regions with the oldest MemStores are flushed first until the WAL count drops below
+ hbase.regionserver.max.logs.
+
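+ The cluster-wide values above are normally set in hbase-site.xml. The flush size can also be
+ overridden per table on the table descriptor; the following is a minimal sketch, and the 256 MB
+ value and the Admin setup are assumptions for illustration only.
+ import org.apache.hadoop.hbase.HTableDescriptor;
+ import org.apache.hadoop.hbase.TableName;
+ import org.apache.hadoop.hbase.client.Admin;
+
+ public class FlushSizeExample {
+   // Raise the per-region MemStore flush threshold for one table to 256 MB.
+   static void setFlushSize(Admin admin, TableName table) throws Exception {
+     HTableDescriptor htd = admin.getTableDescriptor(table);
+     htd.setMemStoreFlushSize(256L * 1024 * 1024);
+     admin.disableTable(table);   // take the table offline before modifying its descriptor
+     admin.modifyTable(table, htd);
+     admin.enableTable(table);
+   }
+ }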
    +
+ Scans
+
+ When a client issues a scan against a table, HBase generates
+ RegionScanner objects, one per region, to serve the scan request.
+
+ The RegionScanner object contains a list of
+ StoreScanner objects, one per column family.
+
+ Each StoreScanner object further contains a list of
+ StoreFileScanner objects, corresponding to each StoreFile and
+ HFile of the corresponding column family, and a list of
+ KeyValueScanner objects for the MemStore.
+
+ The two lists are merged into one, which is sorted in ascending order with the
+ scan object for the MemStore at the end of the list.
+
+ When a StoreFileScanner object is constructed, it is associated
+ with a MultiVersionConsistencyControl read point, which is the
+ current memstoreTS, filtering out any new updates beyond the read
+ point.
+
    +
    + StoreFile (HFile) + StoreFiles are where your data lives. + +
    HFile Format + The hfile file format is based on + the SSTable file described in the BigTable [2006] paper and on + Hadoop's tfile + (The unit test suite and the compression harness were taken directly from tfile). + Schubert Zhang's blog post on HFile: A Block-Indexed File Format to Store Sorted Key-Value Pairs makes for a thorough introduction to HBase's hfile. Matteo Bertozzi has also put up a + helpful description, HBase I/O: HFile. + + For more information, see the HFile source code. + Also see for information about the HFile v2 format that was included in 0.92. + +
    +
+ HFile Tool
+
+ To view a textualized version of hfile content, you can use
+ the org.apache.hadoop.hbase.io.hfile.HFile
+ tool. Type the following to see usage: $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile For
+ example, to view the content of the file
+ hdfs://10.81.47.41:8020/hbase/TEST/1418428042/DSMP/4759508618286845475,
+ type the following: $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f hdfs://10.81.47.41:8020/hbase/TEST/1418428042/DSMP/4759508618286845475 If
+ you leave off the -v option, you see just a summary of the hfile. See
+ usage for other things to do with the HFile
+ tool.
    +
    + StoreFile Directory Structure on HDFS + For more information of what StoreFiles look like on HDFS with respect to the directory structure, see . + +
    +
    + +
    + Blocks + StoreFiles are composed of blocks. The blocksize is configured on a per-ColumnFamily basis. + + Compression happens at the block level within StoreFiles. For more information on compression, see . + + For more information on blocks, see the HFileBlock source code. + +
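+ Because the block size is a per-ColumnFamily setting, it is configured on the column family
+ descriptor. The following is a minimal sketch; the family name and the 64 KB value are
+ assumptions for illustration, not recommendations.
+ import org.apache.hadoop.hbase.HColumnDescriptor;
+
+ public class BlockSizeExample {
+   static HColumnDescriptor familyWithBlockSize() {
+     HColumnDescriptor family = new HColumnDescriptor("f1");
+     family.setBlocksize(64 * 1024);   // HFile block size in bytes; 64 KB is the default
+     return family;
+   }
+ }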
    +
+ KeyValue
+ The KeyValue class is the heart of data storage in HBase. KeyValue wraps a byte array, along with offsets and lengths into the passed array,
+ which tell it where to start interpreting the content as a KeyValue.
+
+ The KeyValue format inside a byte array is:
+
+ keylength
+ valuelength
+ key
+ value
+
+ The Key is further decomposed as:
+
+ rowlength
+ row (i.e., the rowkey)
+ columnfamilylength
+ columnfamily
+ columnqualifier
+ timestamp
+ keytype (e.g., Put, Delete, DeleteColumn, DeleteFamily)
+
+ KeyValue instances are not split across blocks.
+ For example, if there is an 8 MB KeyValue, even if the block-size is 64 KB this KeyValue will be read
+ in as a coherent block. For more information, see the KeyValue source code.
+
    Example + To emphasize the points above, examine what happens with two Puts for two different columns for the same row: + + Put #1: rowkey=row1, cf:attr1=value1 + Put #2: rowkey=row1, cf:attr2=value2 + + Even though these are for the same row, a KeyValue is created for each column: + Key portion for Put #1: + + rowlength ------------> 4 + row -----------------> row1 + columnfamilylength ---> 2 + columnfamily --------> cf + columnqualifier ------> attr1 + timestamp -----------> server time of Put + keytype -------------> Put + + + Key portion for Put #2: + + rowlength ------------> 4 + row -----------------> row1 + columnfamilylength ---> 2 + columnfamily --------> cf + columnqualifier ------> attr2 + timestamp -----------> server time of Put + keytype -------------> Put + + + + It is critical to understand that the rowkey, ColumnFamily, and column (aka columnqualifier) are embedded within + the KeyValue instance. The longer these identifiers are, the bigger the KeyValue is. +
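+ The two Puts in the example above would look as follows in client code. This is a minimal
+ sketch; the table name and the open Connection are assumptions for illustration.
+ import org.apache.hadoop.hbase.TableName;
+ import org.apache.hadoop.hbase.client.Connection;
+ import org.apache.hadoop.hbase.client.Put;
+ import org.apache.hadoop.hbase.client.Table;
+ import org.apache.hadoop.hbase.util.Bytes;
+
+ public class TwoPutsExample {
+   static void writeTwoColumns(Connection connection) throws Exception {
+     try (Table table = connection.getTable(TableName.valueOf("mytable"))) {
+       // Put #1: rowkey=row1, cf:attr1=value1
+       Put put1 = new Put(Bytes.toBytes("row1"));
+       put1.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr1"), Bytes.toBytes("value1"));
+       // Put #2: rowkey=row1, cf:attr2=value2 -- a separate KeyValue is created for this column
+       Put put2 = new Put(Bytes.toBytes("row1"));
+       put2.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr2"), Bytes.toBytes("value2"));
+       table.put(put1);
+       table.put(put2);
+     }
+   }
+ }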
    + +
    +
    + Compaction + + Ambiguous Terminology + A StoreFile is a facade of HFile. In terms of compaction, use of + StoreFile seems to have prevailed in the past. + A Store is the same thing as a ColumnFamily. + StoreFiles are related to a Store, or ColumnFamily. + + If you want to read more about StoreFiles versus HFiles and Stores versus + ColumnFamilies, see HBASE-11316. + + + When the MemStore reaches a given size + (hbase.hregion.memstore.flush.size), it flushes its contents to a + StoreFile. The number of StoreFiles in a Store increases over time. + Compaction is an operation which reduces the number of + StoreFiles in a Store, by merging them together, in order to increase performance on + read operations. Compactions can be resource-intensive to perform, and can either help + or hinder performance depending on many factors. + Compactions fall into two categories: minor and major. Minor and major compactions + differ in the following ways. + Minor compactions usually select a small number of small, + adjacent StoreFiles and rewrite them as a single StoreFile. Minor compactions do not + drop (filter out) deletes or expired versions, because of potential side effects. See and for information on how deletes and versions are + handled in relation to compactions. The end result of a minor compaction is fewer, + larger StoreFiles for a given Store. + The end result of a major compaction is a single StoreFile + per Store. Major compactions also process delete markers and max versions. See and for information on how deletes and versions are + handled in relation to compactions. + + + Compaction and Deletions + When an explicit deletion occurs in HBase, the data is not actually deleted. + Instead, a tombstone marker is written. The tombstone marker + prevents the data from being returned with queries. During a major compaction, the + data is actually deleted, and the tombstone marker is removed from the StoreFile. If + the deletion happens because of an expired TTL, no tombstone is created. Instead, the + expired data is filtered out and is not written back to the compacted + StoreFile. + + + + Compaction and Versions + When you create a Column Family, you can specify the maximum number of versions + to keep, by specifying HColumnDescriptor.setMaxVersions(int + versions). The default value is 3. If more versions + than the specified maximum exist, the excess versions are filtered out and not written + back to the compacted StoreFile. + + + + Major Compactions Can Impact Query Results + In some situations, older versions can be inadvertently resurrected if a newer + version is explicitly deleted. See for a more in-depth explanation. + This situation is only possible before the compaction finishes. + + + In theory, major compactions improve performance. However, on a highly loaded + system, major compactions can require an inappropriate number of resources and adversely + affect performance. In a default configuration, major compactions are scheduled + automatically to run once in a 7-day period. This is sometimes inappropriate for systems + in production. You can manage major compactions manually. See . + Compactions do not perform region merges. See for more information on region merging. +
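+ Major compactions can be requested manually through the HBase shell (major_compact) or through
+ the Admin API. The following Java sketch is illustrative only; the table name and connection
+ setup are assumptions.
+ import org.apache.hadoop.hbase.HBaseConfiguration;
+ import org.apache.hadoop.hbase.TableName;
+ import org.apache.hadoop.hbase.client.Admin;
+ import org.apache.hadoop.hbase.client.Connection;
+ import org.apache.hadoop.hbase.client.ConnectionFactory;
+
+ public class MajorCompactExample {
+   public static void main(String[] args) throws Exception {
+     try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
+          Admin admin = connection.getAdmin()) {
+       // Queues a major compaction for every region of the table; the call is asynchronous.
+       admin.majorCompact(TableName.valueOf("mytable"));
+     }
+   }
+ }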
+ Compaction Policy - HBase 0.96.x and newer
+ Compacting large StoreFiles, or too many StoreFiles at once, can cause more IO
+ load than your cluster is able to handle without causing performance problems. The
+ method by which HBase selects which StoreFiles to include in a compaction (and whether
+ the compaction is a minor or major compaction) is called the compaction
+ policy.
+ Prior to HBase 0.96.x, there was only one compaction policy. That original
+ compaction policy is still available as
+ RatioBasedCompactionPolicy. The new default compaction
+ policy, called ExploringCompactionPolicy, was subsequently
+ backported to HBase 0.94 and HBase 0.95, and is the default in HBase 0.96 and newer.
+ It was implemented in HBASE-7842. In
+ short, ExploringCompactionPolicy attempts to select the best
+ possible set of StoreFiles to compact with the least amount of work, while the
+ RatioBasedCompactionPolicy selects the first set that meets
+ the criteria.
+ Regardless of the compaction policy used, file selection is controlled by several
+ configurable parameters and happens in a multi-step approach. These parameters will be
+ explained in context, and then will be given in a table which shows their
+ descriptions, defaults, and implications of changing them.
+
+ Being Stuck
+ When the MemStore gets too large, it needs to flush its contents to a StoreFile.
+ However, a Store can only have hbase.hstore.blockingStoreFiles
+ files, so the MemStore needs to wait for the number of StoreFiles to be reduced by
+ one or more compactions. If the MemStore has grown larger than
+ hbase.hregion.memstore.flush.size but cannot flush because the number of StoreFiles
+ is also too high, the algorithm is said to be "stuck". The compaction algorithm
+ checks for this "stuck" situation and provides mechanisms to alleviate it.
    + +
    + The ExploringCompactionPolicy Algorithm + The ExploringCompactionPolicy algorithm considers each possible set of + adjacent StoreFiles before choosing the set where compaction will have the most + benefit. + One situation where the ExploringCompactionPolicy works especially well is when + you are bulk-loading data and the bulk loads create larger StoreFiles than the + StoreFiles which are holding data older than the bulk-loaded data. This can "trick" + HBase into choosing to perform a major compaction each time a compaction is needed, + and cause a lot of extra overhead. With the ExploringCompactionPolicy, major + compactions happen much less frequently because minor compactions are more + efficient. + In general, ExploringCompactionPolicy is the right choice for most situations, + and thus is the default compaction policy. You can also use + ExploringCompactionPolicy along with . + The logic of this policy can be examined in + hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/ExploringCompactionPolicy.java. + The following is a walk-through of the logic of the + ExploringCompactionPolicy. + + + Make a list of all existing StoreFiles in the Store. The rest of the + algorithm filters this list to come up with the subset of HFiles which will be + chosen for compaction. + + + If this was a user-requested compaction, attempt to perform the requested + compaction type, regardless of what would normally be chosen. Note that even if + the user requests a major compaction, it may not be possible to perform a major + compaction. This may be because not all StoreFiles in the Column Family are + available to compact or because there are too many Stores in the Column + Family. + + + Some StoreFiles are automatically excluded from consideration. These + include: + + + StoreFiles that are larger than + hbase.hstore.compaction.max.size + + + StoreFiles that were created by a bulk-load operation which explicitly + excluded compaction. You may decide to exclude StoreFiles resulting from + bulk loads, from compaction. To do this, specify the + hbase.mapreduce.hfileoutputformat.compaction.exclude + parameter during the bulk load operation. + + + + + Iterate through the list from step 1, and make a list of all potential sets + of StoreFiles to compact together. A potential set is a grouping of + hbase.hstore.compaction.min contiguous StoreFiles in the + list. For each set, perform some sanity-checking and figure out whether this is + the best compaction that could be done: + + + If the number of StoreFiles in this set (not the size of the StoreFiles) + is fewer than hbase.hstore.compaction.min or more than + hbase.hstore.compaction.max, take it out of + consideration. + + + Compare the size of this set of StoreFiles with the size of the smallest + possible compaction that has been found in the list so far. If the size of + this set of StoreFiles represents the smallest compaction that could be + done, store it to be used as a fall-back if the algorithm is "stuck" and no + StoreFiles would otherwise be chosen. See . + + + Do size-based sanity checks against each StoreFile in this set of + StoreFiles. + + + If the size of this StoreFile is larger than + hbase.hstore.compaction.max.size, take it out of + consideration. + + + If the size is greater than or equal to + hbase.hstore.compaction.min.size, sanity-check it + against the file-based ratio to see whether it is too large to be + considered. 
The sanity-checking is successful if: + + + There is only one StoreFile in this set, or + + + For each StoreFile, its size multiplied by + hbase.hstore.compaction.ratio (or + hbase.hstore.compaction.ratio.offpeak if + off-peak hours are configured and it is during off-peak hours) is + less than the sum of the sizes of the other HFiles in the + set. + + + + + + + + + If this set of StoreFiles is still in consideration, compare it to the + previously-selected best compaction. If it is better, replace the + previously-selected best compaction with this one. + + + When the entire list of potential compactions has been processed, perform + the best compaction that was found. If no StoreFiles were selected for + compaction, but there are multiple StoreFiles, assume the algorithm is stuck + (see ) and if so, perform the smallest + compaction that was found in step 3. + + +
    + +
+ RatioBasedCompactionPolicy Algorithm
+ The RatioBasedCompactionPolicy was the only compaction policy prior to HBase
+ 0.96, though ExploringCompactionPolicy has now been backported to HBase 0.94 and
+ 0.95. To use the RatioBasedCompactionPolicy rather than the
+ ExploringCompactionPolicy, set
+ hbase.hstore.defaultengine.compactionpolicy.class to
+ RatioBasedCompactionPolicy in the
+ hbase-site.xml file. To switch back to the
+ ExploringCompactionPolicy, remove the setting from the
+ hbase-site.xml.
+ The following section walks you through the algorithm used to select StoreFiles
+ for compaction in the RatioBasedCompactionPolicy.
+
+ The first phase is to create a list of all candidates for compaction. A list
+ is created of all StoreFiles not already in the compaction queue, and all
+ StoreFiles newer than the newest file that is currently being compacted. This
+ list of StoreFiles is ordered by the sequence ID. The sequence ID is generated
+ when a Put is appended to the write-ahead log (WAL), and is stored in the
+ metadata of the HFile.
+
+ Check to see if the algorithm is stuck (see ), and if so, force a major compaction.
+ This is a key area where the ExploringCompactionPolicy is often a better choice than the
+ RatioBasedCompactionPolicy.
+
+ If the compaction was user-requested, try to perform the type of compaction
+ that was requested. Note that a major compaction may not be possible if all
+ HFiles are not available for compaction or if too many StoreFiles exist (more
+ than hbase.hstore.compaction.max).
+
+ Some StoreFiles are automatically excluded from consideration. These
+ include:
+
+ StoreFiles that are larger than
+ hbase.hstore.compaction.max.size
+
+ StoreFiles that were created by a bulk-load operation which explicitly
+ excluded compaction. You may decide to exclude StoreFiles resulting from
+ bulk loads, from compaction. To do this, specify the
+ hbase.mapreduce.hfileoutputformat.compaction.exclude
+ parameter during the bulk load operation.
+
+ The maximum number of StoreFiles allowed in a major compaction is controlled
+ by the hbase.hstore.compaction.max parameter. If the list
+ contains more than this number of StoreFiles, a minor compaction is performed
+ even if a major compaction would otherwise have been done. However, a
+ user-requested major compaction still occurs even if there are more than
+ hbase.hstore.compaction.max StoreFiles to compact.
+
+ If the list contains fewer than
+ hbase.hstore.compaction.min StoreFiles to compact, a minor
+ compaction is aborted. Note that a major compaction can be performed on a single
+ HFile. Its function is to remove deletes and expired versions, and reset
+ locality on the StoreFile.
+
+ The value of the hbase.hstore.compaction.ratio parameter
+ is multiplied by the sum of StoreFiles smaller than a given file, to determine
+ whether that StoreFile is selected for compaction during a minor compaction. For
+ instance, if hbase.hstore.compaction.ratio is 1.2, FileX is 5 MB, FileY is 2 MB,
+ and FileZ is 3 MB:
+ 5 <= 1.2 x (2 + 3), that is, 5 <= 6
+ In this scenario, FileX is eligible for minor compaction. If FileX were 7
+ MB, it would not be eligible for minor compaction. This ratio favors smaller
+ StoreFiles. You can configure a different ratio for use in off-peak hours, using
+ the parameter hbase.hstore.compaction.ratio.offpeak, if you
+ also configure hbase.offpeak.start.hour and
+ hbase.offpeak.end.hour.
+ + + + If the last major compaction was too long ago and there is more than one + StoreFile to be compacted, a major compaction is run, even if it would otherwise + have been minor. By default, the maximum time between major compactions is 7 + days, plus or minus a 4.8 hour period, and determined randomly within those + parameters. Prior to HBase 0.96, the major compaction period was 24 hours. See + hbase.hregion.majorcompaction in the table below to tune or + disable time-based major compactions. + + +
    + +
+
+ Parameters Used by Compaction Algorithm
+ This table contains the main configuration parameters for compaction. This list
+ is not exhaustive. To tune these parameters from the defaults, set them in the
+ hbase-site.xml file (hbase-default.xml documents the defaults). For a full list of all configuration
+ parameters available, see
+
+
+ Parameter
+ Description
+ Default
+
+ hbase.hstore.compaction.min
+ The minimum number of StoreFiles which must be eligible for
+ compaction before compaction can run.
+ The goal of tuning hbase.hstore.compaction.min
+ is to avoid ending up with too many tiny StoreFiles to compact. Setting
+ this value to 2 would cause a minor compaction each
+ time you have two StoreFiles in a Store, and this is probably not
+ appropriate. If you set this value too high, all the other values will
+ need to be adjusted accordingly. For most cases, the default value is
+ appropriate.
+ In previous versions of HBase, the parameter
+ hbase.hstore.compaction.min was called
+ hbase.hstore.compactionThreshold.
+
+ 3
+
+ hbase.hstore.compaction.max
+ The maximum number of StoreFiles which will be selected for a
+ single minor compaction, regardless of the number of eligible
+ StoreFiles.
+ Effectively, the value of
+ hbase.hstore.compaction.max controls the length of
+ time it takes a single compaction to complete. Setting it larger means
+ that more StoreFiles are included in a compaction. For most cases, the
+ default value is appropriate.
+
+ 10
+
+ hbase.hstore.compaction.min.size
+ A StoreFile smaller than this size will always be eligible for
+ minor compaction. StoreFiles this size or larger are evaluated by
+ hbase.hstore.compaction.ratio to determine if they are
+ eligible.
+ Because this limit represents the "automatic include" limit for
+ all StoreFiles smaller than this value, this value may need to be reduced
+ in write-heavy environments where many files in the 1-2 MB range are being
+ flushed, because every StoreFile will be targeted for compaction and the
+ resulting StoreFiles may still be under the minimum size and require
+ further compaction.
+ If this parameter is lowered, the ratio check is triggered more
+ quickly. This addressed some issues seen in earlier versions of HBase but
+ changing this parameter is no longer necessary in most situations.
+
+ 128 MB
+
+ hbase.hstore.compaction.max.size
+ A StoreFile larger than this size will be excluded from
+ compaction. The effect of raising
+ hbase.hstore.compaction.max.size is fewer, larger
+ StoreFiles that do not get compacted often. If you feel that compaction is
+ happening too often without much benefit, you can try raising this
+ value.
+ Long.MAX_VALUE
+
+ hbase.hstore.compaction.ratio
+ For minor compaction, this ratio is used to determine whether a
+ given StoreFile which is larger than
+ hbase.hstore.compaction.min.size is eligible for
+ compaction. Its effect is to limit compaction of large StoreFiles. The
+ value of hbase.hstore.compaction.ratio is expressed as
+ a floating-point decimal.
+ A large ratio, such as 10, will produce a
+ single giant StoreFile. Conversely, a value of .25,
+ will produce behavior similar to the BigTable compaction algorithm,
+ producing four StoreFiles.
+ A moderate value of between 1.0 and 1.4 is recommended. When
+ tuning this value, you are balancing write costs with read costs. Raising
+ the value (to something like 1.4) will have more write costs, because you
+ will compact larger StoreFiles. However, during reads, HBase will need to seek
+ through fewer StoreFiles to accomplish the read. Consider this approach if you
+ cannot take advantage of .
+ Alternatively, you can lower this value to something like 1.0 to
+ reduce the background cost of writes, and use to limit the number of StoreFiles touched
+ during reads.
+ For most cases, the default value is appropriate.
+
+ 1.2F
+
+ hbase.hstore.compaction.ratio.offpeak
+ The compaction ratio used during off-peak compactions, if off-peak
+ hours are also configured (see below). Expressed as a floating-point
+ decimal. This allows for more aggressive (or less aggressive, if you set it
+ lower than hbase.hstore.compaction.ratio) compaction
+ during a set time period. Ignored if off-peak is disabled (default). This
+ works the same as hbase.hstore.compaction.ratio.
+ 5.0F
+
+ hbase.offpeak.start.hour
+ The start of off-peak hours, expressed as an integer between 0 and 23,
+ inclusive. Set to -1 to disable off-peak.
+ -1 (disabled)
+
+ hbase.offpeak.end.hour
+ The end of off-peak hours, expressed as an integer between 0 and 23,
+ inclusive. Set to -1 to disable off-peak.
+ -1 (disabled)
+
+ hbase.regionserver.thread.compaction.throttle
+ There are two different thread pools for compactions, one for
+ large compactions and the other for small compactions. This helps to keep
+ compaction of lean tables (such as hbase:meta)
+ fast. If a compaction is larger than this threshold, it goes into the
+ large compaction pool. In most cases, the default value is
+ appropriate.
+ 2 x hbase.hstore.compaction.max x hbase.hregion.memstore.flush.size
+ (which defaults to 128 MB)
+
+ hbase.hregion.majorcompaction
+ Time between major compactions, expressed in milliseconds. Set to
+ 0 to disable time-based automatic major compactions. User-requested and
+ size-based major compactions will still run. This value is multiplied by
+ hbase.hregion.majorcompaction.jitter to cause
+ compaction to start at a somewhat-random time during a given window of
+ time.
+ 7 days (604800000 milliseconds)
+
+ hbase.hregion.majorcompaction.jitter
+ A multiplier applied to
+ hbase.hregion.majorcompaction to cause compaction to
+ occur a given amount of time on either side of
+ hbase.hregion.majorcompaction. The smaller the
+ number, the closer the compactions will happen to the
+ hbase.hregion.majorcompaction interval. Expressed as
+ a floating-point decimal.
+ .50F
+
+ Compaction File Selection
+
+ Legacy Information
+ This section has been preserved for historical reasons and refers to the way
+ compaction worked prior to HBase 0.96.x. You can still use this behavior if you
+ enable the RatioBasedCompactionPolicy. For information on
+ the way that compactions work in HBase 0.96.x and later, see .
+
+ To understand the core algorithm for StoreFile selection, there is some ASCII-art
+ in the Store
+ source code that will serve as useful reference. It has been copied below:
+
+ /* normal skew:
+  *
+  *         older ----> newer
+  *     _
+  *    | |   _
+  *    | |  | |   _
+  *  --|-|- |-|- |-|---_-------_-------  minCompactSize
+  *    | |  | |  | |  | |  _  | |
+  *    | |  | |  | |  | | | | | |
+  *    | |  | |  | |  | | | | | |
+  */
+
+ Important knobs:
+
+ hbase.hstore.compaction.ratio Ratio used in compaction file
+ selection algorithm (default 1.2f).
+
+ hbase.hstore.compaction.min (.90
+ hbase.hstore.compactionThreshold) (files) Minimum number of StoreFiles per Store
+ to be selected for a compaction to occur (default 2).
+
+ hbase.hstore.compaction.max (files) Maximum number of
+ StoreFiles to compact per minor compaction (default 10).
+
+ hbase.hstore.compaction.min.size (bytes) Any StoreFile smaller
+ than this setting will automatically be a candidate for compaction. Defaults to
+ hbase.hregion.memstore.flush.size (128 mb).
+
+ hbase.hstore.compaction.max.size (.92) (bytes) Any StoreFile
+ larger than this setting will automatically be excluded from compaction (default
+ Long.MAX_VALUE).
+
+ The minor compaction StoreFile selection logic is size based, and selects a file
+ for compaction when the file <= sum(smaller_files) *
+ hbase.hstore.compaction.ratio.
+
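+ The following is a simplified, hypothetical sketch of that size-based rule in plain Java. It is
+ not the actual HBase implementation: it ignores the min.size "automatic include" and max.size
+ exclusion rules described above, but it reproduces the three worked examples that follow.
+ import java.util.ArrayList;
+ import java.util.List;
+
+ public class LegacySelectionSketch {
+   // sizes are ordered oldest -> newest. A file qualifies when its size is
+   // <= ratio * (sum of the sizes of all newer files); once the first file
+   // qualifies, the newer files are included too, up to maxFiles.
+   static List<Long> select(long[] sizes, double ratio, int minFiles, int maxFiles) {
+     List<Long> selected = new ArrayList<Long>();
+     for (int i = 0; i < sizes.length && selected.size() < maxFiles; i++) {
+       long sumNewer = 0;
+       for (int j = i + 1; j < sizes.length; j++) {
+         sumNewer += sizes[j];
+       }
+       if (!selected.isEmpty() || sizes[i] <= ratio * sumNewer) {
+         selected.add(sizes[i]);
+       }
+     }
+     if (selected.size() < minFiles) {
+       selected.clear();   // not enough files to compact
+     }
+     return selected;
+   }
+
+   public static void main(String[] args) {
+     // Example #1 below: selects [23, 12, 12].
+     System.out.println(select(new long[] { 100, 50, 23, 12, 12 }, 1.0, 3, 5));
+     // Example #2 below: selects nothing.
+     System.out.println(select(new long[] { 100, 25, 12, 12 }, 1.0, 3, 5));
+     // Example #3 below: selects [7, 6, 5, 4, 3].
+     System.out.println(select(new long[] { 7, 6, 5, 4, 3, 2, 1 }, 1.0, 3, 5));
+   }
+ }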
+ Minor Compaction File Selection - Example #1 (Basic Example)
+ This example mirrors an example from the unit test
+ TestCompactSelection.
+
+ hbase.hstore.compaction.ratio = 1.0f
+
+ hbase.hstore.compaction.min = 3 (files)
+
+ hbase.hstore.compaction.max = 5 (files)
+
+ hbase.hstore.compaction.min.size = 10 (bytes)
+
+ hbase.hstore.compaction.max.size = 1000 (bytes)
+
+ The following StoreFiles exist: 100, 50, 23, 12, and 12 bytes apiece (oldest to
+ newest). With the above parameters, the files that would be selected for minor
+ compaction are 23, 12, and 12.
+ Why?
+
+ 100 --> No, because sum(50, 23, 12, 12) * 1.0 = 97.
+
+ 50 --> No, because sum(23, 12, 12) * 1.0 = 47.
+
+ 23 --> Yes, because sum(12, 12) * 1.0 = 24.
+
+ 12 --> Yes, because the previous file has been included, and because this
+ does not exceed the max-file limit of 5.
+
+ 12 --> Yes, because the previous file had been included, and because this
+ does not exceed the max-file limit of 5.
+
    +
    + Minor Compaction File Selection - Example #2 (Not Enough Files To + Compact) + This example mirrors an example from the unit test + TestCompactSelection. + + hbase.hstore.compaction.ratio = 1.0f + + + hbase.hstore.compaction.min = 3 (files) + + + hbase.hstore.compaction.max = 5 (files) + + + hbase.hstore.compaction.min.size = 10 (bytes) + + + hbase.hstore.compaction.max.size = 1000 (bytes) + + + + The following StoreFiles exist: 100, 25, 12, and 12 bytes apiece (oldest to + newest). With the above parameters, no compaction will be started. + Why? + + 100 --> No, because sum(25, 12, 12) * 1.0 = 47 + + + 25 --> No, because sum(12, 12) * 1.0 = 24 + + + 12 --> No. Candidate because sum(12) * 1.0 = 12, there are only 2 files + to compact and that is less than the threshold of 3 + + + 12 --> No. Candidate because the previous StoreFile was, but there are + not enough files to compact + + + +
    +
    + Minor Compaction File Selection - Example #3 (Limiting Files To Compact) + This example mirrors an example from the unit test + TestCompactSelection. + + hbase.hstore.compaction.ratio = 1.0f + + + hbase.hstore.compaction.min = 3 (files) + + + hbase.hstore.compaction.max = 5 (files) + + + hbase.hstore.compaction.min.size = 10 (bytes) + + + hbase.hstore.compaction.max.size = 1000 (bytes) + + The following StoreFiles exist: 7, 6, 5, 4, 3, 2, and 1 bytes apiece + (oldest to newest). With the above parameters, the files that would be selected for + minor compaction are 7, 6, 5, 4, 3. + Why? + + 7 --> Yes, because sum(6, 5, 4, 3, 2, 1) * 1.0 = 21. Also, 7 is less than + the min-size + + + 6 --> Yes, because sum(5, 4, 3, 2, 1) * 1.0 = 15. Also, 6 is less than + the min-size. + + + 5 --> Yes, because sum(4, 3, 2, 1) * 1.0 = 10. Also, 5 is less than the + min-size. + + + 4 --> Yes, because sum(3, 2, 1) * 1.0 = 6. Also, 4 is less than the + min-size. + + + 3 --> Yes, because sum(2, 1) * 1.0 = 3. Also, 3 is less than the + min-size. + + + 2 --> No. Candidate because previous file was selected and 2 is less than + the min-size, but the max-number of files to compact has been reached. + + + 1 --> No. Candidate because previous file was selected and 1 is less than + the min-size, but max-number of files to compact has been reached. + + + +
    + Impact of Key Configuration Options + + This information is now included in the configuration parameter table in . + +
    +
    +
    +
+ Experimental: Stripe Compactions
+ Stripe compaction is an experimental feature, added in HBase 0.98, which aims to
+ improve compactions for large regions or non-uniformly distributed row keys. In order
+ to achieve smaller and/or more granular compactions, the StoreFiles within a region
+ are maintained separately for several row-key sub-ranges, or "stripes", of the region.
+ The stripes are transparent to the rest of HBase, so other operations on the HFiles or
+ data work without modification.
+ Stripe compactions change the HFile layout, creating sub-regions within regions.
+ These sub-regions are easier to compact, and should result in fewer major compactions.
+ This approach alleviates some of the challenges of larger regions.
+ Stripe compaction is fully compatible with and works in conjunction with either the
+ ExploringCompactionPolicy or RatioBasedCompactionPolicy. It can be enabled for
+ existing tables, and the table will continue to operate normally if it is disabled
+ later.
    +
+ When To Use Stripe Compactions
+ Consider using stripe compaction if you have either of the following:
+
+ Large regions. You get the positive effects of smaller regions without the
+ additional MemStore and region management overhead.
+
+ Non-uniform keys, such as a time dimension in a key. Only the stripes receiving
+ the new keys will need to compact. Old data will not compact as often, if at
+ all.
+
+ Performance Improvements
+ Performance testing has shown that the performance of reads improves somewhat,
+ and variability of performance of reads and writes is greatly reduced. An overall
+ long-term performance improvement is seen on large non-uniform-row key regions, such
+ as a hash-prefixed timestamp key. These performance gains are the most dramatic on a
+ table which is already large. It is possible that the performance improvement might
+ extend to region splits.
+
    + Enabling Stripe Compaction + You can enable stripe compaction for a table or a column family, by setting its + hbase.hstore.engine.class to + org.apache.hadoop.hbase.regionserver.StripeStoreEngine. You + also need to set the hbase.hstore.blockingStoreFiles to a high + number, such as 100 (rather than the default value of 10). + + Enable Stripe Compaction + + If the table already exists, disable the table. + + + Run one of following commands in the HBase shell. Replace the table name + orders_table with the name of your table. + +alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', 'hbase.hstore.blockingStoreFiles' => '100'} +alter 'orders_table', {NAME => 'blobs_cf', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', 'hbase.hstore.blockingStoreFiles' => '100'}} +create 'orders_table', 'blobs_cf', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', 'hbase.hstore.blockingStoreFiles' => '100'} + + + + Configure other options if needed. See for more information. + + + Enable the table. + + + + + Disable Stripe Compaction + + Disable the table. + + + Set the hbase.hstore.engine.class option to either nil or + org.apache.hadoop.hbase.regionserver.DefaultStoreEngine. + Either option has the same effect. + +alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => ''} + + + + Enable the table. + + + When you enable a large table after changing the store engine either way, a + major compaction will likely be performed on most regions. This is not necessary on + new tables. +
    +
+ Configuring Stripe Compaction
+ Each of the settings for stripe compaction should be configured at the table or
+ column family level, after disabling the table. If you use the HBase shell, the general
+ command pattern is as follows:
+
+ alter 'orders_table', CONFIGURATION => {'key' => 'value', ..., 'key' => 'value'}
+
+ Region and stripe sizing
+ You can configure your stripe sizing based upon your region sizing. By
+ default, your new regions will start with one stripe. On the next compaction after
+ the stripe has grown too large (16 x MemStore flush size), it is split into two
+ stripes. Stripe splitting continues as the region grows, until the region is large
+ enough to split.
+ You can improve this pattern for your own data. A good rule is to aim for a
+ stripe size of at least 1 GB, and about 8-12 stripes for uniform row keys. For
+ example, if your regions are 30 GB, 12 x 2.5 GB stripes might be a good starting
+ point.
+
    + Stripe Sizing Settings + + + + + + Setting + Notes + + + + + + hbase.store.stripe.initialStripeCount + + + The number of stripes to create when stripe compaction is enabled. + You can use it as follows: + + For relatively uniform row keys, if you know the approximate + target number of stripes from the above, you can avoid some + splitting overhead by starting with several stripes (2, 5, 10...). + If the early data is not representative of overall row key + distribution, this will not be as efficient. + + + For existing tables with a large amount of data, this setting + will effectively pre-split your stripes. + + + For keys such as hash-prefixed sequential keys, with more than + one hash prefix per region, pre-splitting may make sense. + + + + + + + hbase.store.stripe.sizeToSplit + + The maximum size a stripe grows before splitting. Use this in + conjunction with hbase.store.stripe.splitPartCount to + control the target stripe size (sizeToSplit = splitPartsCount * target + stripe size), according to the above sizing considerations. + + + + hbase.store.stripe.splitPartCount + + The number of new stripes to create when splitting a stripe. The + default is 2, which is appropriate for most cases. For non-uniform row + keys, you can experiment with increasing the number to 3 or 4, to isolate + the arriving updates into narrower slice of the region without additional + splits being required. + + + +
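+ These settings can be applied through the shell CONFIGURATION map shown earlier, or on a table
+ descriptor from Java. The following is a minimal sketch; the table name, family name, and the
+ specific sizing values (for a hypothetical ~30 GB region split into ~2.5 GB stripes) are
+ assumptions for illustration.
+ import org.apache.hadoop.hbase.HColumnDescriptor;
+ import org.apache.hadoop.hbase.HTableDescriptor;
+ import org.apache.hadoop.hbase.TableName;
+
+ public class StripeConfigExample {
+   static HTableDescriptor stripedTable() {
+     HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("orders_table"));
+     htd.addFamily(new HColumnDescriptor("blobs_cf"));
+     htd.setConfiguration("hbase.hstore.engine.class",
+         "org.apache.hadoop.hbase.regionserver.StripeStoreEngine");
+     htd.setConfiguration("hbase.hstore.blockingStoreFiles", "100");
+     // Hypothetical sizing: 12 initial stripes, splitting at 5 GB so that the two
+     // resulting stripes land near the ~2.5 GB target (sizeToSplit = splitPartCount x target).
+     htd.setConfiguration("hbase.store.stripe.initialStripeCount", "12");
+     htd.setConfiguration("hbase.store.stripe.sizeToSplit", String.valueOf(5L * 1024 * 1024 * 1024));
+     htd.setConfiguration("hbase.store.stripe.splitPartCount", "2");
+     return htd;
+   }
+ }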
    + +
    + MemStore Size Settings + By default, the flush creates several files from one MemStore, according to + existing stripe boundaries and row keys to flush. This approach minimizes write + amplification, but can be undesirable if the MemStore is small and there are many + stripes, because the files will be too small. + In this type of situation, you can set + hbase.store.stripe.compaction.flushToL0 to + true. This will cause a MemStore flush to create a single + file instead. When at least + hbase.store.stripe.compaction.minFilesL0 such files (by + default, 4) accumulate, they will be compacted into striped files. +
    +
    + Normal Compaction Configuration and Stripe Compaction + All the settings that apply to normal compactions (see ) apply to stripe compactions. + The exceptions are the minimum and maximum number of files, which are set to + higher values by default because the files in stripes are smaller. To control + these for stripe compactions, use + hbase.store.stripe.compaction.minFiles and + hbase.store.stripe.compaction.maxFiles, rather than + hbase.hstore.compaction.min and + hbase.hstore.compaction.max. +
    + + + + + + + + +
    Bulk Loading +
    Overview + + HBase includes several methods of loading data into tables. + The most straightforward method is to either use the TableOutputFormat + class from a MapReduce job, or use the normal client APIs; however, + these are not always the most efficient methods. + + + The bulk load feature uses a MapReduce job to output table data in HBase's internal + data format, and then directly loads the generated StoreFiles into a running + cluster. Using bulk load will use less CPU and network resources than + simply using the HBase API. + +
    +
Bulk Load Limitations
+ As bulk loading bypasses the write path, the WAL does not get written to as part of the process.
+ Replication works by reading the WAL files, so it will not see the bulk-loaded data, and the same goes for the edits that use Put.setWriteToWAL(false).
+ One way to handle that is to ship the raw files or the HFiles to the other cluster and do the other processing there.
    +
    Bulk Load Architecture + + The HBase bulk load process consists of two main steps. + +
    Preparing data via a MapReduce job + + The first step of a bulk load is to generate HBase data files (StoreFiles) from + a MapReduce job using HFileOutputFormat. This output format writes + out data in HBase's internal storage format so that they can be + later loaded very efficiently into the cluster. + + + In order to function efficiently, HFileOutputFormat must be + configured such that each output HFile fits within a single region. + In order to do this, jobs whose output will be bulk loaded into HBase + use Hadoop's TotalOrderPartitioner class to partition the map output + into disjoint ranges of the key space, corresponding to the key + ranges of the regions in the table. + + + HFileOutputFormat includes a convenience function, + configureIncrementalLoad(), which automatically sets up + a TotalOrderPartitioner based on the current region boundaries of a + table. + +
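+ The following is a minimal sketch of a driver that wires HFileOutputFormat into a MapReduce job.
+ The mapper, table name, and input/output paths are assumptions for illustration; the key call is
+ configureIncrementalLoad(), which sets up the TotalOrderPartitioner, the sort reducer, and the
+ output format from the table's current region boundaries.
+ import java.io.IOException;
+ import org.apache.hadoop.conf.Configuration;
+ import org.apache.hadoop.fs.Path;
+ import org.apache.hadoop.hbase.HBaseConfiguration;
+ import org.apache.hadoop.hbase.client.HTable;
+ import org.apache.hadoop.hbase.client.Put;
+ import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+ import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
+ import org.apache.hadoop.hbase.util.Bytes;
+ import org.apache.hadoop.io.LongWritable;
+ import org.apache.hadoop.io.Text;
+ import org.apache.hadoop.mapreduce.Job;
+ import org.apache.hadoop.mapreduce.Mapper;
+ import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+ import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+
+ public class BulkLoadPrepareDriver {
+   // Hypothetical mapper: parses "rowkey,value" text lines into Puts for family 'f1'.
+   public static class MyMapper extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
+     @Override
+     protected void map(LongWritable key, Text line, Context context)
+         throws IOException, InterruptedException {
+       String[] parts = line.toString().split(",", 2);
+       if (parts.length < 2) return;
+       byte[] row = Bytes.toBytes(parts[0]);
+       Put put = new Put(row);
+       put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("c1"), Bytes.toBytes(parts[1]));
+       context.write(new ImmutableBytesWritable(row), put);
+     }
+   }
+
+   public static void main(String[] args) throws Exception {
+     Configuration conf = HBaseConfiguration.create();
+     Job job = Job.getInstance(conf, "bulk-load-prepare");
+     job.setJarByClass(BulkLoadPrepareDriver.class);
+     job.setMapperClass(MyMapper.class);
+     job.setMapOutputKeyClass(ImmutableBytesWritable.class);
+     job.setMapOutputValueClass(Put.class);
+     FileInputFormat.addInputPath(job, new Path(args[0]));
+     FileOutputFormat.setOutputPath(job, new Path(args[1]));
+     HTable table = new HTable(conf, "mytable");   // hypothetical target table
+     HFileOutputFormat.configureIncrementalLoad(job, table);
+     System.exit(job.waitForCompletion(true) ? 0 : 1);
+   }
+ }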
    +
Completing the data load
+
+ After the data has been prepared using
+ HFileOutputFormat, it is loaded into the cluster using
+ completebulkload. This command line tool iterates
+ through the prepared data files, and for each one determines the
+ region the file belongs to. It then contacts the appropriate Region
+ Server which adopts the HFile, moving it into its storage directory
+ and making the data available to clients.
+
+ If the region boundaries have changed during the course of bulk load
+ preparation, or between the preparation and completion steps, the
+ completebulkload utility will automatically split the
+ data files into pieces corresponding to the new boundaries. This
+ process is not optimally efficient, so users should take care to
+ minimize the delay between preparing a bulk load and importing it
+ into the cluster, especially if other clients are simultaneously
+ loading data through other means.
+
    +
    +
    Importing the prepared data using the completebulkload tool + + After a data import has been prepared, either by using the + importtsv tool with the + "importtsv.bulk.output" option or by some other MapReduce + job using the HFileOutputFormat, the + completebulkload tool is used to import the data into the + running cluster. + + + The completebulkload tool simply takes the output path + where importtsv or your MapReduce job put its results, and + the table name to import into. For example: + + $ hadoop jar hbase-server-VERSION.jar completebulkload [-c /path/to/hbase/config/hbase-site.xml] /user/todd/myoutput mytable + + The -c config-file option can be used to specify a file + containing the appropriate hbase parameters (e.g., hbase-site.xml) if + not supplied already on the CLASSPATH (In addition, the CLASSPATH must + contain the directory that has the zookeeper configuration file if + zookeeper is NOT managed by HBase). + + + Note: If the target table does not already exist in HBase, this + tool will create the table automatically. + + This tool will run quickly, after which point the new data will be visible in + the cluster. + +
    +
    See Also + For more information about the referenced utilities, see and . + + + See How-to: Use HBase Bulk Loading, and Why + for a recent blog on current state of bulk loading. + +
    +
Advanced Usage
+
+ Although the importtsv tool is useful in many cases, advanced users may
+ want to generate data programmatically, or import data from other formats. To get
+ started doing so, dig into ImportTsv.java and check the JavaDoc for
+ HFileOutputFormat.
+
+ The import step of the bulk load can also be done programmatically. See the
+ LoadIncrementalHFiles class for more information.
+
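+ As a rough illustration, the programmatic equivalent of the completebulkload tool looks
+ something like the following sketch. The output path matches the earlier command-line example,
+ and the table name is an assumption.
+ import org.apache.hadoop.conf.Configuration;
+ import org.apache.hadoop.fs.Path;
+ import org.apache.hadoop.hbase.HBaseConfiguration;
+ import org.apache.hadoop.hbase.client.HTable;
+ import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
+
+ public class BulkLoadCompleteExample {
+   public static void main(String[] args) throws Exception {
+     Configuration conf = HBaseConfiguration.create();
+     HTable table = new HTable(conf, "mytable");   // hypothetical target table
+     LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
+     // Directory produced by HFileOutputFormat in the preparation step.
+     loader.doBulkLoad(new Path("/user/todd/myoutput"), table);
+     table.close();
+   }
+ }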
    +
    + +
    HDFS + As HBase runs on HDFS (and each StoreFile is written as a file on HDFS), + it is important to have an understanding of the HDFS Architecture + especially in terms of how it stores files, handles failovers, and replicates blocks. + + See the Hadoop documentation on HDFS Architecture + for more information. + +
    NameNode + The NameNode is responsible for maintaining the filesystem metadata. See the above HDFS Architecture link + for more information. + +
    +
    DataNode + The DataNodes are responsible for storing HDFS blocks. See the above HDFS Architecture link + for more information. + +
    +
    + +
    + Timeline-consistent High Available Reads +
+ Introduction
+
+ HBase has always had a strong consistency guarantee: all reads and writes for a region are routed through a single region server, which guarantees that all writes happen in order, and all reads see the most recently committed data.
+
+ However, because of this single homing of reads to a single location, if the server becomes unavailable, the regions of the table that were hosted on that region server become unavailable for some time. There are three phases in the region recovery process: detection, assignment, and recovery. Of these, detection is usually the longest, and is presently on the order of 20-30 seconds depending on the ZooKeeper session timeout. During this time, and before the recovery is complete, clients will not be able to read the region data.
+
+ However, for some use cases, either the data may be read-only, or doing reads against some stale data is acceptable. With timeline-consistent highly available reads, HBase can be used for this kind of latency-sensitive use case, where the application can expect to have a time bound on read completion.
+
+ For achieving high availability for reads, HBase provides a feature called "region replication". In this model, for each region of a table, there will be multiple replicas that are opened in different region servers. By default, the region replication is set to 1, so only a single region replica is deployed and there will not be any changes from the original model. If region replication is set to 2 or more, then the master will assign replicas of the regions of the table. The Load Balancer ensures that the region replicas are not co-hosted in the same region servers and also not in the same rack (if possible).
+
+ All of the replicas for a single region will have a unique replica_id, starting from 0. The region replica having replica_id==0 is called the primary region, and the others "secondary regions" or secondaries. Only the primary can accept writes from the client, and the primary will always contain the latest changes. Since all writes still have to go through the primary region, the writes are not highly-available (meaning they might block for some time if the region becomes unavailable).
+
+ The writes are asynchronously sent to the secondary region replicas using an "Async WAL replication" feature. This works similarly to HBase's multi-datacenter replication, except that the data from a region is replicated to its secondary regions. Each secondary replica always receives and observes the writes in the same order that the primary region committed them. This ensures that the secondaries won't diverge from the primary region's data, but since the log replication is async, the data might be stale in secondary regions. In some sense, this design can be thought of as "in-cluster replication", where instead of replicating to a different datacenter, the data goes to a secondary region to keep the secondary region's in-memory state up to date. The data files are shared between the primary region and the other replicas, so there is no extra storage overhead. However, the secondary regions will have recent non-flushed data in their memstores, which increases the memory overhead.
+
+ The Async WAL replication feature is being implemented in Phase 2 of issue HBASE-10070. Before this, region replicas will only be updated with flushed data files from the primary (see hbase.regionserver.storefile.refresh.period below). It is also possible to use this without setting storefile.refresh.period for read-only tables.
+
    +
+ Timeline Consistency
+
+ With this feature, HBase introduces a Consistency definition, which can be provided per read operation (get or scan).
+ public enum Consistency {
+   STRONG,
+   TIMELINE
+ }
+ Consistency.STRONG is the default consistency model provided by HBase. If the table has region replication = 1, or if reads in a table with region replicas are done with this consistency, the read is always performed by the primary regions, so there will not be any change from the previous behaviour, and the client always observes the latest data.
+
+ If a read is performed with Consistency.TIMELINE, then the read RPC is sent to the primary region server first. After a short interval (hbase.client.primaryCallTimeout.get, 10ms by default), parallel RPCs to the secondary region replicas are also sent if the primary does not respond back. After this, the result is returned from whichever RPC finished first. If the response came back from the primary region replica, we always know that the data is the latest. For this, the Result.isStale() API has been added to inspect the staleness. If the result is from a secondary region, Result.isStale() will be set to true. The user can then inspect this field to possibly reason about the data.
+
+ In terms of semantics, TIMELINE consistency as implemented by HBase differs from pure eventual
+ consistency in these respects:
+
+ Single homed and ordered updates: Region replication or not, on the write side,
+ there is still only 1 defined replica (primary) which can accept writes. This
+ replica is responsible for ordering the edits and preventing conflicts. This
+ guarantees that two different writes are not committed at the same time by different
+ replicas, so the data does not diverge. With this, there is no need to do read-repair or
+ last-timestamp-wins kind of conflict resolution.
+
+ The secondaries also apply the edits in the order that the primary committed
+ them. This way the secondaries will contain a snapshot of the primary's data at any
+ point in time. This is similar to RDBMS replication and even HBase's own
+ multi-datacenter replication, however within a single cluster.
+
+ On the read side, the client can detect whether the read is coming from
+ up-to-date data or is stale data. Also, the client can issue reads with different
+ consistency requirements on a per-operation basis to ensure its own semantic
+ guarantees.
+
+ The client can still observe edits out-of-order, and can go back in time, if it
+ observes reads from one secondary replica first, then another secondary replica.
+ There is no stickiness to region replicas or a transaction-id based guarantee. If
+ required, this can be implemented later though.
+
+ Timeline Consistency (figure)
+ (The diagram referred to below shows writes flowing through the primary region replica and
+ replicating asynchronously to the secondary replicas.)
+
+
+ To better understand the TIMELINE semantics, let's look at the above diagram. Let's say that there are two clients, and the first one writes x=1 at first, then x=2 and x=3 later. As above, all writes are handled by the primary region replica. The writes are saved in the write ahead log (WAL), and replicated to the other replicas asynchronously. In the above diagram, notice that replica_id=1 received 2 updates, and its data shows that x=2, while replica_id=2 only received a single update, and its data shows that x=1.
+
+ If client1 reads with STRONG consistency, it will only talk with replica_id=0, and thus is guaranteed to observe the latest value of x=3. If a client issues TIMELINE consistency reads, the RPC will go to all replicas (after the primary timeout) and the result from the first response will be returned. Thus the client can see either 1, 2 or 3 as the value of x. Let's say that the primary region has failed and log replication cannot continue for some time. If the client does multiple reads with TIMELINE consistency, it can observe x=2 first, then x=1, and so on.
+
    +
    + Tradeoffs + Having secondary regions hosted for read availability comes with some tradeoffs which + should be carefully evaluated per use case. Following are advantages and + disadvantages. + + Advantages + + High availability for read-only tables. + + + High availability for stale reads + + + Ability to do very low latency reads with very high percentile (99.9%+) latencies + for stale reads + + + + + Disadvantages + + Double / Triple memstore usage (depending on region replication count) for tables + with region replication > 1 + + + Increased block cache usage + + + Extra network traffic for log replication + + + Extra backup RPCs for replicas + + + To serve the region data from multiple replicas, HBase opens the regions in secondary + mode in the region servers. The regions opened in secondary mode will share the same data + files with the primary region replica, however each secondary region replica will have its + own memstore to keep the unflushed data (only primary region can do flushes). Also to + serve reads from secondary regions, the blocks of data files may be also cached in the + block caches for the secondary regions. +
    +
+ Configuration properties
+
+ To use highly available reads, you should set the following properties in the hbase-site.xml file. There is no specific configuration to enable or disable region replicas. Instead, you can increase or decrease the number of region replicas per table at table creation, or with alter table.
+
+ Server side properties
+
+ hbase.regionserver.storefile.refresh.period
+ 0
+
+ The period (in milliseconds) for refreshing the store files for the secondary regions. 0 means this feature is disabled. Secondary regions see new files (from flushes and compactions) from the primary once the secondary region refreshes the list of files in the region, but too-frequent refreshes might cause extra NameNode pressure. If the files cannot be refreshed for longer than the HFile TTL (hbase.master.hfilecleaner.ttl), the requests are rejected. Configuring the HFile TTL to a larger value is also recommended with this setting.
+
+ Keep in mind that the region replica placement policy is only
+ enforced by the StochasticLoadBalancer, which is the default balancer. If
+ you are using a custom load balancer property in hbase-site.xml
+ (hbase.master.loadbalancer.class), replicas of regions might end up being
+ hosted in the same server.
    +
+ Client side properties
+ Be sure to set the following for all clients (and servers) that will use region
+ replicas.
+
+ hbase.ipc.client.allowsInterrupt
+ true
+
+ Whether to enable interruption of RPC threads at the client side. This is required for region replicas with fallback RPCs to secondary regions.
+
+ hbase.client.primaryCallTimeout.get
+ 10000
+
+ The timeout (in microseconds) before secondary fallback RPCs are submitted for get requests with Consistency.TIMELINE to the secondary replicas of the regions. Defaults to 10ms. Setting this lower will increase the number of RPCs, but will lower the p99 latencies.
+
+ hbase.client.primaryCallTimeout.multiget
+ 10000
+
+ The timeout (in microseconds) before secondary fallback RPCs are submitted for multi-get requests (HTable.get(List)) with Consistency.TIMELINE to the secondary replicas of the regions. Defaults to 10ms. Setting this lower will increase the number of RPCs, but will lower the p99 latencies.
+
+ hbase.client.replicaCallTimeout.scan
+ 1000000
+
+ The timeout (in microseconds) before secondary fallback RPCs are submitted for scan requests with Consistency.TIMELINE to the secondary replicas of the regions. Defaults to 1 sec. Setting this lower will increase the number of RPCs, but will lower the p99 latencies.
+
    +
    +
    + Creating a table with region replication + + Region replication is a per-table property. All tables have REGION_REPLICATION = 1 by default, which means that there is only one replica per region. You can set and change the number of replicas per region of a table by supplying the REGION_REPLICATION property in the table descriptor. + +
Shell
+ create 't1', 'f1', {REGION_REPLICATION => 2}
+
+ describe 't1'
+ for i in 1..100
+ put 't1', "r#{i}", 'f1:c1', i
+ end
+ flush 't1'
+
    +
Java
+
+ You can also use setRegionReplication() and alter table to increase or decrease the
+ region replication for a table; a sketch is shown below.
+
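+ The following is a minimal sketch of creating a table with two replicas per region from Java;
+ the table and family names, and the Admin setup, are assumptions for illustration.
+ import org.apache.hadoop.hbase.HBaseConfiguration;
+ import org.apache.hadoop.hbase.HColumnDescriptor;
+ import org.apache.hadoop.hbase.HTableDescriptor;
+ import org.apache.hadoop.hbase.TableName;
+ import org.apache.hadoop.hbase.client.Admin;
+ import org.apache.hadoop.hbase.client.Connection;
+ import org.apache.hadoop.hbase.client.ConnectionFactory;
+
+ public class RegionReplicationExample {
+   public static void main(String[] args) throws Exception {
+     try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
+          Admin admin = connection.getAdmin()) {
+       HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t1"));
+       htd.addFamily(new HColumnDescriptor("f1"));
+       htd.setRegionReplication(2);   // two replicas per region: one primary, one secondary
+       admin.createTable(htd);
+     }
+   }
+ }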
    +
    +
+ Region splits and merges
+ Region splits and merges are not yet compatible with region replicas. You
+ have to pre-split the table and disable region splits, and you should not execute
+ region merges on tables with region replicas. To disable region splits you can use
+ DisabledRegionSplitPolicy as the split policy, as sketched below.
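+ As a rough illustration, the split policy can be set on the table descriptor as a table
+ attribute. The table and family names and the pre-split points below are assumptions; choose
+ split points that match your own key space.
+ import org.apache.hadoop.hbase.HColumnDescriptor;
+ import org.apache.hadoop.hbase.HTableDescriptor;
+ import org.apache.hadoop.hbase.TableName;
+ import org.apache.hadoop.hbase.client.Admin;
+ import org.apache.hadoop.hbase.util.Bytes;
+
+ public class ReplicaSplitPolicyExample {
+   static void createPreSplitReplicatedTable(Admin admin) throws Exception {
+     HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t1"));
+     htd.addFamily(new HColumnDescriptor("f1"));
+     htd.setRegionReplication(2);
+     // Disable automatic splits for the replicated table.
+     htd.setValue(HTableDescriptor.SPLIT_POLICY,
+         "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy");
+     // Hypothetical pre-split points; pick boundaries that fit your rowkeys.
+     byte[][] splits = new byte[][] { Bytes.toBytes("g"), Bytes.toBytes("n"), Bytes.toBytes("t") };
+     admin.createTable(htd, splits);
+   }
+ }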
    +
+ User Interface
+ In the Master's user interface, the region replicas of a table are also shown together
+ with the primary regions. Notice that the replicas of a region will share the same
+ start and end keys and the same region name prefix. The only differences are the
+ appended replica_id (which is encoded as hex) and the region encoded name, which will be
+ different. You can also see the replica ids shown explicitly in the UI.
    +
    + API and Usage +
+ Shell
+ You can do reads in the shell using Consistency.TIMELINE semantics as follows
+
+ get 't1','r6', {CONSISTENCY => "TIMELINE"}
+
+ You can simulate a region server pausing or becoming unavailable and do a read from
+ the secondary replica:
+
+ hbase(main):001:0> get 't1','r6', {CONSISTENCY => "TIMELINE"}
+
+ Using scans is also similar:
+ scan 't1', {CONSISTENCY => 'TIMELINE'}
+
    +
+ Java
+ You can set the consistency for Gets and Scans and do requests as follows.
+
+ You can also pass multiple gets:
+ ArrayList<Get> gets = new ArrayList<Get>();
+ gets.add(get1);
+ ...
+ Result[] results = table.get(gets);
+
+ And Scans:
+
+ You can inspect whether the results are coming from the primary region or not by calling
+ the Result.isStale() method. A consolidated sketch is shown below.
+
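+ The following is a minimal end-to-end sketch of the calls described above; the table name,
+ rowkey, and Connection setup are assumptions for illustration.
+ import java.util.ArrayList;
+ import org.apache.hadoop.hbase.HBaseConfiguration;
+ import org.apache.hadoop.hbase.TableName;
+ import org.apache.hadoop.hbase.client.Connection;
+ import org.apache.hadoop.hbase.client.ConnectionFactory;
+ import org.apache.hadoop.hbase.client.Consistency;
+ import org.apache.hadoop.hbase.client.Get;
+ import org.apache.hadoop.hbase.client.Result;
+ import org.apache.hadoop.hbase.client.ResultScanner;
+ import org.apache.hadoop.hbase.client.Scan;
+ import org.apache.hadoop.hbase.client.Table;
+ import org.apache.hadoop.hbase.util.Bytes;
+
+ public class TimelineReadExample {
+   public static void main(String[] args) throws Exception {
+     try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
+          Table table = connection.getTable(TableName.valueOf("t1"))) {
+       // Single get with TIMELINE consistency.
+       Get get = new Get(Bytes.toBytes("r6"));
+       get.setConsistency(Consistency.TIMELINE);
+       Result result = table.get(get);
+       if (result.isStale()) {
+         // Served by a secondary replica; the data may lag the primary.
+       }
+       // Multiple gets; each Result in the array can be checked with isStale().
+       ArrayList<Get> gets = new ArrayList<Get>();
+       gets.add(get);
+       Result[] results = table.get(gets);
+       // Scan with TIMELINE consistency.
+       Scan scan = new Scan();
+       scan.setConsistency(Consistency.TIMELINE);
+       try (ResultScanner scanner = table.getScanner(scan)) {
+         for (Result r : scanner) {
+           // process r; r.isStale() tells you whether it came from a secondary
+         }
+       }
+     }
+   }
+ }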
    +
    + +
Resources

More information about the design and implementation can be found at the JIRA issue HBASE-10070.

The HBaseCon 2014 talk also contains some details and slides.
    +
    + + + diff --git src/main/docbkx/asf.xml src/main/docbkx/asf.xml new file mode 100644 index 0000000..1455b4a --- /dev/null +++ src/main/docbkx/asf.xml @@ -0,0 +1,44 @@ + + + + HBase and the Apache Software Foundation + HBase is a project in the Apache Software Foundation and as such there are responsibilities to the ASF to ensure + a healthy project. +
ASF Development Process
See the Apache Development Process page for information on how the ASF is structured (e.g., PMC, committers, contributors), for tips on contributing and getting involved, and for how open source works at the ASF.
    +
    ASF Board Reporting + Once a quarter, each project in the ASF portfolio submits a report to the ASF board. This is done by the HBase project + lead and the committers. See ASF board reporting for more information. + +
    +
    diff --git src/main/docbkx/book.xml src/main/docbkx/book.xml index f835dc7..3010055 100644 --- src/main/docbkx/book.xml +++ src/main/docbkx/book.xml @@ -1,4 +1,5 @@ + - - - - - - - - Data Model - In HBase, data is stored in tables, which have rows and columns. This is a terminology - overlap with relational databases (RDBMSs), but this is not a helpful analogy. Instead, it can - be helpful to think of an HBase table as a multi-dimensional map. - - HBase Data Model Terminology - - Table - - An HBase table consists of multiple rows. - - - - Row - - A row in HBase consists of a row key and one or more columns with values associated - with them. Rows are sorted alphabetically by the row key as they are stored. For this - reason, the design of the row key is very important. The goal is to store data in such a - way that related rows are near each other. A common row key pattern is a website domain. - If your row keys are domains, you should probably store them in reverse (org.apache.www, - org.apache.mail, org.apache.jira). This way, all of the Apache domains are near each - other in the table, rather than being spread out based on the first letter of the - subdomain. - - - - Column - - A column in HBase consists of a column family and a column qualifier, which are - delimited by a : (colon) character. - - - - Column Family - - Column families physically colocate a set of columns and their values, often for - performance reasons. Each column family has a set of storage properties, such as whether - its values should be cached in memory, how its data is compressed or its row keys are - encoded, and others. Each row in a table has the same column - families, though a given row might not store anything in a given column family. - Column families are specified when you create your table, and influence the way your - data is stored in the underlying filesystem. Therefore, the column families should be - considered carefully during schema design. - - - - Column Qualifier - - A column qualifier is added to a column family to provide the index for a given - piece of data. Given a column family content, a column qualifier - might be content:html, and another might be - content:pdf. Though column families are fixed at table creation, - column qualifiers are mutable and may differ greatly between rows. - - - - Cell - - A cell is a combination of row, column family, and column qualifier, and contains a - value and a timestamp, which represents the value's version. - A cell's value is an uninterpreted array of bytes. - - - - Timestamp - - A timestamp is written alongside each value, and is the identifier for a given - version of a value. By default, the timestamp represents the time on the RegionServer - when the data was written, but you can specify a different timestamp value when you put - data into the cell. - - Direct manipulation of timestamps is an advanced feature which is only exposed for - special cases that are deeply integrated with HBase, and is discouraged in general. - Encoding a timestamp at the application level is the preferred pattern. - - You can specify the maximum number of versions of a value that HBase retains, per column - family. When the maximum number of versions is reached, the oldest versions are - eventually deleted. By default, only the newest version is kept. - - - - -
    - Conceptual View - You can read a very understandable explanation of the HBase data model in the blog post Understanding - HBase and BigTable by Jim R. Wilson. Another good explanation is available in the - PDF Introduction - to Basic Schema Design by Amandeep Khurana. It may help to read different - perspectives to get a solid understanding of HBase schema design. The linked articles cover - the same ground as the information in this section. - The following example is a slightly modified form of the one on page 2 of the BigTable paper. There - is a table called webtable that contains two rows - (com.cnn.www - and com.example.www), three column families named - contents, anchor, and people. In - this example, for the first row (com.cnn.www), - anchor contains two columns (anchor:cssnsi.com, - anchor:my.look.ca) and contents contains one column - (contents:html). This example contains 5 versions of the row with the - row key com.cnn.www, and one version of the row with the row key - com.example.www. The contents:html column qualifier contains the entire - HTML of a given website. Qualifiers of the anchor column family each - contain the external site which links to the site represented by the row, along with the - text it used in the anchor of its link. The people column family represents - people associated with the site. - - - Column Names - By convention, a column name is made of its column family prefix and a - qualifier. For example, the column - contents:html is made up of the column family - contents and the html qualifier. The colon - character (:) delimits the column family from the column family - qualifier. - - - Table <varname>webtable</varname> - - - - - - - - - Row Key - Time Stamp - ColumnFamily contents - ColumnFamily anchor - ColumnFamily people - - - - - "com.cnn.www" - t9 - - anchor:cnnsi.com = "CNN" - - - - "com.cnn.www" - t8 - - anchor:my.look.ca = "CNN.com" - - - - "com.cnn.www" - t6 - contents:html = "<html>..." - - - - - "com.cnn.www" - t5 - contents:html = "<html>..." - - - - - "com.cnn.www" - t3 - contents:html = "<html>..." - - - - - "com.example.www" - t5 - contents:html = "<html>..." - - people:author = "John Doe" - - - -
Cells in this table that appear to be empty do not take space, or in fact exist, in HBase. This is what makes HBase "sparse." A tabular view is not the only possible way to look at data in HBase, or even the most accurate. The following represents the same information as a multi-dimensional map. This is only a mock-up for illustrative purposes and may not be strictly accurate.

{
  "com.cnn.www": {
    contents: {
      t6: contents:html: "<html>..."
      t5: contents:html: "<html>..."
      t3: contents:html: "<html>..."
    }
    anchor: {
      t9: anchor:cnnsi.com = "CNN"
      t8: anchor:my.look.ca = "CNN.com"
    }
    people: {}
  }
  "com.example.www": {
    contents: {
      t5: contents:html: "<html>..."
    }
    anchor: {}
    people: {
      t5: people:author: "John Doe"
    }
  }
}
]]>
    -
    - Physical View - Although at a conceptual level tables may be viewed as a sparse set of rows, they are - physically stored by column family. A new column qualifier (column_family:column_qualifier) - can be added to an existing column family at any time. - - ColumnFamily <varname>anchor</varname> - - - - - - - Row Key - Time Stamp - Column Family anchor - - - - - "com.cnn.www" - t9 - anchor:cnnsi.com = "CNN" - - - "com.cnn.www" - t8 - anchor:my.look.ca = "CNN.com" - - - -
    - - ColumnFamily <varname>contents</varname> - - - - - - - Row Key - Time Stamp - ColumnFamily "contents:" - - - - - "com.cnn.www" - t6 - contents:html = "<html>..." - - - "com.cnn.www" - t5 - contents:html = "<html>..." - - - "com.cnn.www" - t3 - contents:html = "<html>..." - - - -
    - The empty cells shown in the - conceptual view are not stored at all. - Thus a request for the value of the contents:html column at time stamp - t8 would return no value. Similarly, a request for an - anchor:my.look.ca value at time stamp t9 would - return no value. However, if no timestamp is supplied, the most recent value for a - particular column would be returned. Given multiple versions, the most recent is also the - first one found, since timestamps - are stored in descending order. Thus a request for the values of all columns in the row - com.cnn.www if no timestamp is specified would be: the value of - contents:html from timestamp t6, the value of - anchor:cnnsi.com from timestamp t9, the value of - anchor:my.look.ca from timestamp t8. - For more information about the internals of how Apache HBase stores data, see . -
    - -
Namespace
A namespace is a logical grouping of tables analogous to a database in relational database systems. This abstraction lays the groundwork for upcoming multi-tenancy related features:

Quota Management (HBASE-8410) - Restrict the amount of resources (i.e., regions, tables) a namespace can consume.

Namespace Security Administration (HBASE-9206) - Provide another level of security administration for tenants.

Region server groups (HBASE-6721) - A namespace/table can be pinned onto a subset of RegionServers, thus guaranteeing a coarse level of isolation.
Namespace management
A namespace can be created, removed or altered. Namespace membership is determined during table creation by specifying a fully-qualified table name of the form:

<table namespace>:<table qualifier>
]]>

Examples

#Create a namespace
create_namespace 'my_ns'

#create my_table in my_ns namespace
create 'my_ns:my_table', 'fam'

#drop namespace
drop_namespace 'my_ns'

#alter namespace
alter_namespace 'my_ns', {METHOD => 'set', 'PROPERTY_NAME' => 'PROPERTY_VALUE'}
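The same namespace operations can be performed from Java; the following is a hedged sketch using the Admin API (HBase 1.0 client assumed, with an existing Configuration named conf; the namespace, table, and property names mirror the shell examples above).

try (Connection connection = ConnectionFactory.createConnection(conf);
     Admin admin = connection.getAdmin()) {
  // Create a namespace.
  admin.createNamespace(NamespaceDescriptor.create("my_ns").build());

  // Create a table inside the namespace using a fully-qualified name.
  HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("my_ns", "my_table"));
  htd.addFamily(new HColumnDescriptor("fam"));
  admin.createTable(htd);

  // Alter a namespace property.
  NamespaceDescriptor nsd = admin.getNamespaceDescriptor("my_ns");
  nsd.setConfiguration("PROPERTY_NAME", "PROPERTY_VALUE");
  admin.modifyNamespace(nsd);
}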
Predefined namespaces
There are two predefined special namespaces:

hbase - system namespace, used to contain HBase internal tables

default - tables with no explicitly specified namespace automatically fall into this namespace

Examples

#namespace=foo and table qualifier=bar
create 'foo:bar', 'fam'

#namespace=default and table qualifier=bar
create 'bar', 'fam'
    - - -
    - Table - Tables are declared up front at schema definition time. -
    - -
Row
Row keys are uninterpreted bytes. Rows are lexicographically sorted, with the lowest order appearing first in a table. The empty byte array is used to denote both the start and end of a table's namespace.
    - -
Column Family<indexterm><primary>Column Family</primary></indexterm>
Columns in Apache HBase are grouped into column families. All column members of a column family have the same prefix. For example, the columns courses:history and courses:math are both members of the courses column family. The colon character (:) delimits the column family from the column family qualifier. The column family prefix must be composed of printable characters. The qualifying tail, the column family qualifier, can be made of any arbitrary bytes. Column families must be declared up front at schema definition time, whereas columns do not need to be defined at schema time but can be conjured on the fly while the table is up and running.
Physically, all column family members are stored together on the filesystem. Because tunings and storage specifications are done at the column family level, it is advised that all column family members have the same general access pattern and size characteristics.
    -
Cells<indexterm><primary>Cells</primary></indexterm>
A {row, column, version} tuple exactly specifies a cell in HBase. Cell content is uninterpreted bytes.
    -
    - Data Model Operations - The four primary data model operations are Get, Put, Scan, and Delete. Operations are - applied via Table - instances. - -
    - Get - Get - returns attributes for a specified row. Gets are executed via - Table.get. -
    -
    - Put - Put - either adds new rows to a table (if the key is new) or can update existing rows (if the - key already exists). Puts are executed via - Table.put (writeBuffer) or - Table.batch (non-writeBuffer). -
    -
Scans
Scan allows iteration over multiple rows for specified attributes.
The following is an example of a Scan on a Table instance. Assume that a table is populated with rows with keys "row1", "row2", "row3", and then another set of rows with the keys "abc1", "abc2", and "abc3". The following example shows how to set a Scan instance to return the rows beginning with "row".

public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...

Table table = ...      // instantiate a Table instance

Scan scan = new Scan();
scan.addColumn(CF, ATTR);
scan.setRowPrefixFilter(Bytes.toBytes("row"));
ResultScanner rs = table.getScanner(scan);
try {
  for (Result r = rs.next(); r != null; r = rs.next()) {
    // process result...
  }
} finally {
  rs.close();  // always close the ResultScanner!
}

Note that generally the easiest way to specify a specific stop point for a scan is by using the InclusiveStopFilter class.
    -
    - Delete - Delete - removes a row from a table. Deletes are executed via - HTable.delete. - HBase does not modify data in place, and so deletes are handled by creating new - markers called tombstones. These tombstones, along with the dead - values, are cleaned up on major compactions. - See for more information on deleting versions of columns, and - see for more information on compactions. - -
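A minimal hedged example of issuing a Delete (the table variable is assumed to be an open Table instance; the row key is illustrative):

// Deleting a row writes tombstone markers; the data is removed later, at major compaction.
Delete delete = new Delete(Bytes.toBytes("row1"));
table.delete(delete);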
    - -
    - - -
Versions<indexterm><primary>Versions</primary></indexterm>

A {row, column, version} tuple exactly specifies a cell in HBase. It's possible to have an unbounded number of cells where the row and column are the same but the cell address differs only in its version dimension.

While rows and column keys are expressed as bytes, the version is specified using a long integer. Typically this long contains time instances such as those returned by java.util.Date.getTime() or System.currentTimeMillis(), that is: the difference, measured in milliseconds, between the current time and midnight, January 1, 1970 UTC.

The HBase version dimension is stored in decreasing order, so that when reading from a store file, the most recent values are found first.

There is a lot of confusion over the semantics of cell versions in HBase. In particular:

If multiple writes to a cell have the same version, only the last written is fetchable.

It is OK to write cells in a non-increasing version order.

Below we describe how the version dimension in HBase currently works. See HBASE-2406 for discussion of HBase versions. Bending time in HBase makes for a good read on the version, or time, dimension in HBase. It has more detail on versioning than is provided here. As of this writing, the limitation Overwriting values at existing timestamps mentioned in the article no longer holds in HBase. This section is basically a synopsis of this article by Bruno Dumon.
Specifying the Number of Versions to Store
The maximum number of versions to store for a given column is part of the column schema and is specified at table creation, or via an alter command, via HColumnDescriptor.DEFAULT_VERSIONS. Prior to HBase 0.96, the default number of versions kept was 3, but in 0.96 and newer it has been changed to 1.

Modify the Maximum Number of Versions for a Column
This example uses HBase Shell to keep a maximum of 5 versions of column f1. You could also use HColumnDescriptor.
alter 't1', NAME => 'f1', VERSIONS => 5]]>

Modify the Minimum Number of Versions for a Column
You can also specify the minimum number of versions to store. By default, this is set to 0, which means the feature is disabled. The following example sets the minimum number of versions on field f1 to 2, via HBase Shell. You could also use HColumnDescriptor.
alter 't1', NAME => 'f1', MIN_VERSIONS => 2]]>

Starting with HBase 0.98.2, you can specify a global default for the maximum number of versions kept for all newly-created columns, by setting hbase.column.max.version in hbase-site.xml. See .
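The equivalent settings can be made from Java through HColumnDescriptor; a hedged sketch using the same family name and values as the shell examples above:

HColumnDescriptor hcd = new HColumnDescriptor("f1");
hcd.setMaxVersions(5);   // keep at most 5 versions
hcd.setMinVersions(2);   // keep at least 2 versions
// Pass hcd to Admin.createTable(...) via an HTableDescriptor, or to Admin.modifyColumn(...).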
    - -
    - Versions and HBase Operations - - In this section we look at the behavior of the version dimension for each of the core - HBase operations. - -
    - Get/Scan - - Gets are implemented on top of Scans. The below discussion of Get - applies equally to Scans. - - By default, i.e. if you specify no explicit version, when doing a - get, the cell whose version has the largest value is returned - (which may or may not be the latest one written, see later). The default behavior can be - modified in the following ways: - - - - to return more than one version, see Get.setMaxVersions() - - - - to return versions other than the latest, see Get.setTimeRange() - - To retrieve the latest version that is less than or equal to a given value, thus - giving the 'latest' state of the record at a certain point in time, just use a range - from 0 to the desired version and set the max versions to 1. - - - -
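A hedged sketch of the "latest state at a point in time" pattern described in the last bullet (T is an illustrative timestamp, table is an open Table instance, and note that the upper bound of Get.setTimeRange() is exclusive):

long T = 1400000000000L;                  // example point in time, in ms
Get get = new Get(Bytes.toBytes("row1"));
get.setTimeRange(0, T + 1);               // consider only versions <= T
get.setMaxVersions(1);                    // and return just the newest of those
Result r = table.get(get);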
    -
    - Default Get Example - The following Get will only retrieve the current version of the row - -public static final byte[] CF = "cf".getBytes(); -public static final byte[] ATTR = "attr".getBytes(); -... -Get get = new Get(Bytes.toBytes("row1")); -Result r = table.get(get); -byte[] b = r.getValue(CF, ATTR); // returns current version of value - -
    -
    - Versioned Get Example - The following Get will return the last 3 versions of the row. - -public static final byte[] CF = "cf".getBytes(); -public static final byte[] ATTR = "attr".getBytes(); -... -Get get = new Get(Bytes.toBytes("row1")); -get.setMaxVersions(3); // will return last 3 versions of row -Result r = table.get(get); -byte[] b = r.getValue(CF, ATTR); // returns current version of value -List<KeyValue> kv = r.getColumn(CF, ATTR); // returns all versions of this column - -
    - -
    - Put - - Doing a put always creates a new version of a cell, at a certain - timestamp. By default the system uses the server's currentTimeMillis, - but you can specify the version (= the long integer) yourself, on a per-column level. - This means you could assign a time in the past or the future, or use the long value for - non-time purposes. - - To overwrite an existing value, do a put at exactly the same row, column, and - version as that of the cell you would overshadow. -
    - Implicit Version Example - The following Put will be implicitly versioned by HBase with the current - time. - -public static final byte[] CF = "cf".getBytes(); -public static final byte[] ATTR = "attr".getBytes(); -... -Put put = new Put(Bytes.toBytes(row)); -put.add(CF, ATTR, Bytes.toBytes( data)); -table.put(put); - -
    -
Explicit Version Example
The following Put has the version timestamp explicitly set.

public static final byte[] CF = "cf".getBytes();
public static final byte[] ATTR = "attr".getBytes();
...
Put put = new Put(Bytes.toBytes(row));
long explicitTimeInMs = 555;  // just an example
put.add(CF, ATTR, explicitTimeInMs, Bytes.toBytes(data));
table.put(put);

Caution: the version timestamp is used internally by HBase for things like time-to-live calculations. It's usually best to avoid setting this timestamp yourself. Prefer using a separate timestamp attribute of the row, or have the timestamp be a part of the row key, or both.
    - -
    - -
Delete

There are three different types of internal delete markers. See Lars Hofhansl's blog for discussion of his attempt at adding another, Scanning in HBase: Prefix Delete Marker.

Delete: for a specific version of a column.

Delete column: for all versions of a column.

Delete family: for all columns of a particular ColumnFamily.

When deleting an entire row, HBase will internally create a tombstone for each ColumnFamily (i.e., not each individual column).
Deletes work by creating tombstone markers. For example, let's suppose we want to delete a row. For this you can specify a version, or else by default the currentTimeMillis is used. What this means is delete all cells where the version is less than or equal to this version. HBase never modifies data in place, so for example a delete will not immediately delete (or mark as deleted) the entries in the storage file that correspond to the delete condition. Rather, a so-called tombstone is written, which will mask the deleted values. When HBase does a major compaction, the tombstones are processed to actually remove the dead values, together with the tombstones themselves. If the version you specified when deleting a row is larger than the version of any value in the row, then you can consider the complete row to be deleted.
For an informative discussion on how deletes and versioning interact, see the thread Put w/ timestamp -> Deleteall -> Put w/ timestamp fails up on the user mailing list.
Also see for more information on the internal KeyValue format.
Delete markers are purged during the next major compaction of the store, unless the KEEP_DELETED_CELLS option is set in the column family. To keep the deletes for a configurable amount of time, you can set the delete TTL via the hbase.hstore.time.to.purge.deletes property in hbase-site.xml. If hbase.hstore.time.to.purge.deletes is not set, or set to 0, all delete markers, including those with timestamps in the future, are purged during the next major compaction. Otherwise, a delete marker with a timestamp in the future is kept until the major compaction which occurs after the time represented by the marker's timestamp plus the value of hbase.hstore.time.to.purge.deletes, in milliseconds.

This behavior represents a fix for an unexpected change that was introduced in HBase 0.94, and was fixed in HBASE-10118. The change has been backported to HBase 0.94 and newer branches.
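A hedged Java sketch of the three kinds of delete markers described at the top of this section (HBase 1.0 Delete API assumed; the table variable is an open Table instance, and the row, family, qualifier, and timestamp values are illustrative):

byte[] CF = "cf".getBytes();
byte[] ATTR = "attr".getBytes();
Delete d = new Delete(Bytes.toBytes("row1"));
d.addColumn(CF, ATTR, 555L);   // "Delete": one specific version of a column
d.addColumns(CF, ATTR);        // "Delete column": all versions of the column
d.addFamily(CF);               // "Delete family": all columns of the family
table.delete(d);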
    -
    - -
    - Current Limitations - -
Deletes mask Puts

Deletes mask puts, even puts that happened after the delete was entered. See HBASE-2256. Remember that a delete writes a tombstone, which only disappears after the next major compaction has run. Suppose you do a delete of everything <= T. After this you do a new put with a timestamp <= T. This put, even if it happened after the delete, will be masked by the delete tombstone. Performing the put will not fail, but when you do a get you will notice that the put had no effect. It will start working again after the major compaction has run. These issues should not be a problem if you use always-increasing versions for new puts to a row. But they can occur even if you do not care about time: just do a delete and a put immediately after each other, and there is some chance they happen within the same millisecond.
    - -
    - Major compactions change query results - - ...create three cell versions at t1, t2 and t3, with a maximum-versions - setting of 2. So when getting all versions, only the values at t2 and t3 will be - returned. But if you delete the version at t2 or t3, the one at t1 will appear again. - Obviously, once a major compaction has run, such behavior will not be the case - anymore... (See Garbage Collection in Bending time in - HBase.) -
    -
    -
    -
Sort Order
All data model operations in HBase return data in sorted order: first by row, then by ColumnFamily, followed by column qualifier, and finally timestamp (sorted in reverse, so newest records are returned first).
    -
Column Metadata
There is no store of column metadata outside of the internal KeyValue instances for a ColumnFamily. Thus, while HBase can support not only a large number of columns per row, but a heterogeneous set of columns between rows as well, it is your responsibility to keep track of the column names.
The only way to get a complete set of columns that exist for a ColumnFamily is to process all the rows. For more information about how HBase stores data internally, see .
    -
Joins
Whether HBase supports joins is a common question on the dist-list, and there is a simple answer: it doesn't, at least not in the way that RDBMSs support them (e.g., with equi-joins or outer-joins in SQL). As has been illustrated in this chapter, the read data model operations in HBase are Get and Scan.

However, that doesn't mean that equivalent join functionality can't be supported in your application; you just have to do it yourself. The two primary strategies are either to denormalize the data upon writing to HBase, or to have lookup tables and do the join between HBase tables in your application or MapReduce code (and as RDBMSs demonstrate, there are several strategies for this depending on the size of the tables, e.g., nested loops vs. hash-joins). So which is the best approach? It depends on what you are trying to do, and as such there isn't a single answer that works for every use case.
    -
    ACID - See ACID Semantics. - Lars Hofhansl has also written a note on - ACID in HBase. -
    - - - + + + + + + - - - HBase and MapReduce - Apache MapReduce is a software framework used to analyze large amounts of data, and is - the framework used most often with Apache Hadoop. MapReduce itself is out of the - scope of this document. A good place to get started with MapReduce is . MapReduce version - 2 (MR2)is now part of YARN. - - This chapter discusses specific configuration steps you need to take to use MapReduce on - data within HBase. In addition, it discusses other interactions and issues between HBase and - MapReduce jobs. - - mapred and mapreduce - There are two mapreduce packages in HBase as in MapReduce itself: org.apache.hadoop.hbase.mapred - and org.apache.hadoop.hbase.mapreduce. The former does old-style API and the latter - the new style. The latter has more facility though you can usually find an equivalent in the older - package. Pick the package that goes with your mapreduce deploy. When in doubt or starting over, pick the - org.apache.hadoop.hbase.mapreduce. In the notes below, we refer to - o.a.h.h.mapreduce but replace with the o.a.h.h.mapred if that is what you are using. - - - - -
HBase, MapReduce, and the CLASSPATH
By default, MapReduce jobs deployed to a MapReduce cluster do not have access to either the HBase configuration under $HBASE_CONF_DIR or the HBase classes.
To give the MapReduce jobs the access they need, you could add hbase-site.xml to $HADOOP_HOME/conf and add the HBase JARs to $HADOOP_HOME/lib, then copy these changes across your cluster, or you could edit $HADOOP_HOME/conf/hadoop-env.sh and add them to the HADOOP_CLASSPATH variable. However, this approach is not recommended because it will pollute your Hadoop install with HBase references. It also requires you to restart the Hadoop cluster before Hadoop can use the HBase data.
Since HBase 0.90.x, HBase adds its dependency JARs to the job configuration itself. The dependencies only need to be available on the local CLASSPATH. The following example runs the bundled HBase RowCounter MapReduce job against a table named usertable. If you have not set the environment variables expected in the command (the parts prefixed by a $ sign and curly braces), you can use the actual system paths instead. Be sure to use the correct version of the HBase JAR for your system. The backticks (` symbols) cause the shell to execute the sub-commands, setting the CLASSPATH as part of the command. This example assumes you use a BASH-compatible shell.
$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server-VERSION.jar rowcounter usertable
When the command runs, internally, the HBase JAR finds the dependencies it needs for ZooKeeper, Guava, and its other dependencies on the passed HADOOP_CLASSPATH and adds the JARs to the MapReduce job configuration. See the source at TableMapReduceUtil#addDependencyJars(org.apache.hadoop.mapreduce.Job) for how this is done.

The example may not work if you are running HBase from its build directory rather than an installed location. You may see an error like the following:
java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper
If this occurs, try modifying the command as follows, so that it uses the HBase JARs from the target/ directory within the build environment.
$ HADOOP_CLASSPATH=${HBASE_HOME}/hbase-server/target/hbase-server-VERSION-SNAPSHOT.jar:`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server/target/hbase-server-VERSION-SNAPSHOT.jar rowcounter usertable

Notice to MapReduce users of HBase 0.96.1 and above
Some MapReduce jobs that use HBase fail to launch.
The symptom is an exception similar - to the following: - -Exception in thread "main" java.lang.IllegalAccessError: class - com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass - com.google.protobuf.LiteralByteString - at java.lang.ClassLoader.defineClass1(Native Method) - at java.lang.ClassLoader.defineClass(ClassLoader.java:792) - at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) - at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) - at java.net.URLClassLoader.access$100(URLClassLoader.java:71) - at java.net.URLClassLoader$1.run(URLClassLoader.java:361) - at java.net.URLClassLoader$1.run(URLClassLoader.java:355) - at java.security.AccessController.doPrivileged(Native Method) - at java.net.URLClassLoader.findClass(URLClassLoader.java:354) - at java.lang.ClassLoader.loadClass(ClassLoader.java:424) - at java.lang.ClassLoader.loadClass(ClassLoader.java:357) - at - org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:818) - at - org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:433) - at - org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:186) - at - org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:147) - at - org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:270) - at - org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:100) -... - - This is caused by an optimization introduced in HBASE-9867 that - inadvertently introduced a classloader dependency. - This affects both jobs using the -libjars option and "fat jar," those - which package their runtime dependencies in a nested lib folder. - In order to satisfy the new classloader requirements, hbase-protocol.jar must be - included in Hadoop's classpath. See for current recommendations for resolving - classpath errors. The following is included for historical purposes. - This can be resolved system-wide by including a reference to the hbase-protocol.jar in - hadoop's lib directory, via a symlink or by copying the jar into the new location. - This can also be achieved on a per-job launch basis by including it in the - HADOOP_CLASSPATH environment variable at job submission time. When - launching jobs that package their dependencies, all three of the following job launching - commands satisfy this requirement: - -$ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass -$ HADOOP_CLASSPATH=$(hbase mapredcp):/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass -$ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyJob.jar MyJobMainClass - - For jars that do not package their dependencies, the following command structure is - necessary: - -$ HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar MyApp.jar MyJobMainClass -libjars $(hbase mapredcp | tr ':' ',') ... - - See also HBASE-10304 for - further discussion of this issue. - -
    - -
    - MapReduce Scan Caching - TableMapReduceUtil now restores the option to set scanner caching (the number of rows - which are cached before returning the result to the client) on the Scan object that is - passed in. This functionality was lost due to a bug in HBase 0.95 (HBASE-11558), which - is fixed for HBase 0.98.5 and 0.96.3. The priority order for choosing the scanner caching is - as follows: - - - Caching settings which are set on the scan object. - - - Caching settings which are specified via the configuration option - , which can either be set manually in - hbase-site.xml or via the helper method - TableMapReduceUtil.setScannerCaching(). - - - The default value HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING, which is set to - 100. - - - Optimizing the caching settings is a balance between the time the client waits for a - result and the number of sets of results the client needs to receive. If the caching setting - is too large, the client could end up waiting for a long time or the request could even time - out. If the setting is too small, the scan needs to return results in several pieces. - If you think of the scan as a shovel, a bigger cache setting is analogous to a bigger - shovel, and a smaller cache setting is equivalent to more shoveling in order to fill the - bucket. - The list of priorities mentioned above allows you to set a reasonable default, and - override it for specific operations. - See the API documentation for Scan for more details. -
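For illustration, a hedged sketch of the two highest-priority ways to set scanner caching for a TableMapReduceUtil-driven job (the job variable is assumed to be an existing Job; the values are examples only):

// Highest priority: set the caching directly on the Scan that is passed
// to TableMapReduceUtil.initTableMapperJob(...).
Scan scan = new Scan();
scan.setCaching(500);

// Next priority: set the scanner-caching configuration (hbase.client.scanner.caching)
// for the whole job via the helper method mentioned above.
TableMapReduceUtil.setScannerCaching(job, 200);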
    - -
    - Bundled HBase MapReduce Jobs - The HBase JAR also serves as a Driver for some bundled mapreduce jobs. To learn about - the bundled MapReduce jobs, run the following command. - - $ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server-VERSION.jar -An example program must be given as the first argument. -Valid program names are: - copytable: Export a table from local cluster to peer cluster - completebulkload: Complete a bulk data load. - export: Write table data to HDFS. - import: Import data written by Export. - importtsv: Import data in TSV format. - rowcounter: Count rows in HBase table - - Each of the valid program names are bundled MapReduce jobs. To run one of the jobs, - model your command after the following example. - $ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server-VERSION.jar rowcounter myTable -
    - -
HBase as a MapReduce Job Data Source and Data Sink
HBase can be used as a data source, TableInputFormat, and data sink, TableOutputFormat or MultiTableOutputFormat, for MapReduce jobs. When writing MapReduce jobs that read or write HBase, it is advisable to subclass TableMapper and/or TableReducer. See the do-nothing pass-through classes IdentityTableMapper and IdentityTableReducer for basic usage. For a more involved example, see RowCounter or review the org.apache.hadoop.hbase.mapreduce.TestTableMapReduce unit test.
If you run MapReduce jobs that use HBase as source or sink, you need to specify the source and sink table and column names in your configuration.

When you read from HBase, the TableInputFormat requests the list of regions from HBase and makes a map task per region, or mapreduce.job.maps map tasks, whichever is smaller. If your job only has two maps, raise mapreduce.job.maps to a number greater than the number of regions. Maps will run on the adjacent TaskTracker if you are running a TaskTracker and RegionServer per node. When writing to HBase, it may make sense to avoid the Reduce step and write back into HBase from within your map. This approach works when your job does not need the sort and collation that MapReduce does on the map-emitted data. On insert, HBase 'sorts' so there is no point double-sorting (and shuffling data around your MapReduce cluster) unless you need to. If you do not need the Reduce, your map might emit counts of records processed for reporting at the end of the job, or set the number of Reduces to zero and use TableOutputFormat. If running the Reduce step makes sense in your case, you should typically use multiple reducers so that load is spread across the HBase cluster.

A new HBase partitioner, the HRegionPartitioner, can run as many reducers as there are existing regions. The HRegionPartitioner is suitable when your table is large and your upload will not greatly alter the number of existing regions upon completion. Otherwise use the default partitioner.
    - -
    - Writing HFiles Directly During Bulk Import - If you are importing into a new table, you can bypass the HBase API and write your - content directly to the filesystem, formatted into HBase data files (HFiles). Your import - will run faster, perhaps an order of magnitude faster. For more on how this mechanism works, - see . -
    - -
RowCounter Example
The included RowCounter MapReduce job uses TableInputFormat and does a count of all rows in the specified table. To run it, use the following command:
$ ./bin/hadoop jar hbase-X.X.X.jar
This will invoke the HBase MapReduce Driver class. Select rowcounter from the choice of jobs offered. This will print rowcounter usage advice to standard output. Specify the tablename, column to count, and output directory. If you have classpath errors, see .
    - -
    - Map-Task Splitting -
    - The Default HBase MapReduce Splitter - When TableInputFormat - is used to source an HBase table in a MapReduce job, its splitter will make a map task for - each region of the table. Thus, if there are 100 regions in the table, there will be 100 - map-tasks for the job - regardless of how many column families are selected in the - Scan. -
    -
    - Custom Splitters - For those interested in implementing custom splitters, see the method - getSplits in TableInputFormatBase. - That is where the logic for map-task assignment resides. -
    -
    -
    - HBase MapReduce Examples -
HBase MapReduce Read Example
The following is an example of using HBase as a MapReduce source in a read-only manner. Specifically, there is a Mapper instance but no Reducer, and nothing is being emitted from the Mapper. The job would be defined as follows...

Configuration config = HBaseConfiguration.create();
Job job = new Job(config, "ExampleRead");
job.setJarByClass(MyReadJob.class);     // class that contains mapper

Scan scan = new Scan();
scan.setCaching(500);        // 1 is the default in Scan, which will be bad for MapReduce jobs
scan.setCacheBlocks(false);  // don't set to true for MR jobs
// set other scan attrs
...

TableMapReduceUtil.initTableMapperJob(
  tableName,        // input HBase table name
  scan,             // Scan instance to control CF and attribute selection
  MyMapper.class,   // mapper
  null,             // mapper output key
  null,             // mapper output value
  job);
job.setOutputFormatClass(NullOutputFormat.class);   // because we aren't emitting anything from mapper

boolean b = job.waitForCompletion(true);
if (!b) {
  throw new IOException("error with job!");
}

...and the mapper instance would extend TableMapper...

public static class MyMapper extends TableMapper<Text, Text> {

  public void map(ImmutableBytesWritable row, Result value, Context context) throws InterruptedException, IOException {
    // process data for the row from the Result instance.
  }
}
    -
HBase MapReduce Read/Write Example
The following is an example of using HBase both as a source and as a sink with MapReduce. This example will simply copy data from one table to another.

Configuration config = HBaseConfiguration.create();
Job job = new Job(config,"ExampleReadWrite");
job.setJarByClass(MyReadWriteJob.class);    // class that contains mapper

Scan scan = new Scan();
scan.setCaching(500);        // 1 is the default in Scan, which will be bad for MapReduce jobs
scan.setCacheBlocks(false);  // don't set to true for MR jobs
// set other scan attrs

TableMapReduceUtil.initTableMapperJob(
  sourceTable,      // input table
  scan,             // Scan instance to control CF and attribute selection
  MyMapper.class,   // mapper class
  null,             // mapper output key
  null,             // mapper output value
  job);
TableMapReduceUtil.initTableReducerJob(
  targetTable,      // output table
  null,             // reducer class
  job);
job.setNumReduceTasks(0);

boolean b = job.waitForCompletion(true);
if (!b) {
  throw new IOException("error with job!");
}

An explanation is required of what TableMapReduceUtil is doing, especially with the reducer. TableOutputFormat is being used as the outputFormat class, and several parameters are being set on the config (e.g., TableOutputFormat.OUTPUT_TABLE), as well as setting the reducer output key to ImmutableBytesWritable and reducer value to Writable. These could be set by the programmer on the job and conf, but TableMapReduceUtil tries to make things easier.
The following is the example mapper, which will create a Put matching the input Result and emit it. Note: this is what the CopyTable utility does.

public static class MyMapper extends TableMapper<ImmutableBytesWritable, Put> {

  public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException, InterruptedException {
    // this example is just copying the data from the source table...
    context.write(row, resultToPut(row,value));
  }

  private static Put resultToPut(ImmutableBytesWritable key, Result result) throws IOException {
    Put put = new Put(key.get());
    for (KeyValue kv : result.raw()) {
      put.add(kv);
    }
    return put;
  }
}

There isn't actually a reducer step, so TableOutputFormat takes care of sending the Put to the target table.
This is just an example; developers could choose not to use TableOutputFormat and connect to the target table themselves.
    -
    - HBase MapReduce Read/Write Example With Multi-Table Output - TODO: example for MultiTableOutputFormat. -
    -
    - HBase MapReduce Summary to HBase Example - The following example uses HBase as a MapReduce source and sink with a summarization - step. This example will count the number of distinct instances of a value in a table and - write those summarized counts in another table. - -Configuration config = HBaseConfiguration.create(); -Job job = new Job(config,"ExampleSummary"); -job.setJarByClass(MySummaryJob.class); // class that contains mapper and reducer - -Scan scan = new Scan(); -scan.setCaching(500); // 1 is the default in Scan, which will be bad for MapReduce jobs -scan.setCacheBlocks(false); // don't set to true for MR jobs -// set other scan attrs - -TableMapReduceUtil.initTableMapperJob( - sourceTable, // input table - scan, // Scan instance to control CF and attribute selection - MyMapper.class, // mapper class - Text.class, // mapper output key - IntWritable.class, // mapper output value - job); -TableMapReduceUtil.initTableReducerJob( - targetTable, // output table - MyTableReducer.class, // reducer class - job); -job.setNumReduceTasks(1); // at least one, adjust as required - -boolean b = job.waitForCompletion(true); -if (!b) { - throw new IOException("error with job!"); -} - - In this example mapper a column with a String-value is chosen as the value to summarize - upon. This value is used as the key to emit from the mapper, and an - IntWritable represents an instance counter. - -public static class MyMapper extends TableMapper<Text, IntWritable> { - public static final byte[] CF = "cf".getBytes(); - public static final byte[] ATTR1 = "attr1".getBytes(); - - private final IntWritable ONE = new IntWritable(1); - private Text text = new Text(); - - public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException, InterruptedException { - String val = new String(value.getValue(CF, ATTR1)); - text.set(val); // we can only emit Writables... - - context.write(text, ONE); - } -} - - In the reducer, the "ones" are counted (just like any other MR example that does this), - and then emits a Put. - -public static class MyTableReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> { - public static final byte[] CF = "cf".getBytes(); - public static final byte[] COUNT = "count".getBytes(); - - public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { - int i = 0; - for (IntWritable val : values) { - i += val.get(); - } - Put put = new Put(Bytes.toBytes(key.toString())); - put.add(CF, COUNT, Bytes.toBytes(i)); - - context.write(null, put); - } -} - - -
    -
HBase MapReduce Summary to File Example
This is very similar to the summary example above, with the exception that this one uses HBase as a MapReduce source but HDFS as the sink. The differences are in the job setup and in the reducer. The mapper remains the same.

Configuration config = HBaseConfiguration.create();
Job job = new Job(config,"ExampleSummaryToFile");
job.setJarByClass(MySummaryFileJob.class);     // class that contains mapper and reducer

Scan scan = new Scan();
scan.setCaching(500);        // 1 is the default in Scan, which will be bad for MapReduce jobs
scan.setCacheBlocks(false);  // don't set to true for MR jobs
// set other scan attrs

TableMapReduceUtil.initTableMapperJob(
  sourceTable,        // input table
  scan,               // Scan instance to control CF and attribute selection
  MyMapper.class,     // mapper class
  Text.class,         // mapper output key
  IntWritable.class,  // mapper output value
  job);
job.setReducerClass(MyReducer.class);    // reducer class
job.setNumReduceTasks(1);    // at least one, adjust as required
FileOutputFormat.setOutputPath(job, new Path("/tmp/mr/mySummaryFile"));  // adjust directories as required

boolean b = job.waitForCompletion(true);
if (!b) {
  throw new IOException("error with job!");
}

As stated above, the previous Mapper can run unchanged with this example. As for the Reducer, it is a "generic" Reducer instead of extending TableReducer and emitting Puts.

public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
    int i = 0;
    for (IntWritable val : values) {
      i += val.get();
    }
    context.write(key, new IntWritable(i));
  }
}
    -
HBase MapReduce Summary to HBase Without Reducer
It is also possible to perform summaries without a reducer - if you use HBase as the reducer.
An HBase target table would need to exist for the job summary. The Table method incrementColumnValue would be used to atomically increment values. From a performance perspective, it might make sense to keep a Map of values with the counts to be incremented for each map-task, and make one update per key during the cleanup method of the mapper, as sketched below. However, your mileage may vary depending on the number of rows to be processed and unique keys.
In the end, the summary results are in HBase.
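The following is a hedged sketch of that pattern, not a definitive implementation: counts are buffered in the mapper and flushed with one incrementColumnValue call per distinct key in cleanup(). The target table name, column family, qualifier, and attribute names are illustrative assumptions.

public static class MyIncrementingMapper extends TableMapper<NullWritable, NullWritable> {
  public static final byte[] CF = "cf".getBytes();
  public static final byte[] ATTR1 = "attr1".getBytes();
  public static final byte[] COUNT = "count".getBytes();

  private Connection connection;
  private Table summaryTable;
  private Map<String, Long> counts = new HashMap<String, Long>();

  public void setup(Context context) throws IOException {
    connection = ConnectionFactory.createConnection(context.getConfiguration());
    summaryTable = connection.getTable(TableName.valueOf("summaryTable")); // example target table
  }

  public void map(ImmutableBytesWritable row, Result value, Context context) {
    String key = new String(value.getValue(CF, ATTR1));
    Long current = counts.get(key);
    counts.put(key, current == null ? 1L : current + 1L);
  }

  public void cleanup(Context context) throws IOException {
    // One atomic increment per distinct key, instead of one RPC per input row.
    for (Map.Entry<String, Long> e : counts.entrySet()) {
      summaryTable.incrementColumnValue(Bytes.toBytes(e.getKey()), CF, COUNT, e.getValue());
    }
    summaryTable.close();
    connection.close();
  }
}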
    -
HBase MapReduce Summary to RDBMS
Sometimes it is more appropriate to generate summaries to an RDBMS. For these cases, it is possible to generate summaries directly to an RDBMS via a custom reducer. The setup method can connect to an RDBMS (the connection information can be passed via custom parameters in the context) and the cleanup method can close the connection.
It is critical to understand that the number of reducers for the job affects the summarization implementation, and you'll have to design this into your reducer. Specifically, whether it is designed to run as a singleton (one reducer) or as multiple reducers. Neither is right or wrong; it depends on your use case. Recognize that the more reducers that are assigned to the job, the more simultaneous connections to the RDBMS will be created - this will scale, but only to a point.

public static class MyRdbmsReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  private Connection c = null;

  public void setup(Context context) {
    // create DB connection...
  }

  public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
    // do summarization
    // in this example the keys are Text, but this is just an example
  }

  public void cleanup(Context context) {
    // close db connection
  }

}

In the end, the summary results are written to your RDBMS table(s).
    - -
    - -
Accessing Other HBase Tables in a MapReduce Job
Although the framework currently allows one HBase table as input to a MapReduce job, other HBase tables can be accessed as lookup tables, etc., in a MapReduce job by creating a Table instance in the setup method of the Mapper.

public class MyMapper extends TableMapper<Text, LongWritable> {
  private Table myOtherTable;

  public void setup(Context context) throws IOException {
    // In here create a Connection to the cluster and save it or use the Connection
    // from the existing table
    myOtherTable = connection.getTable(TableName.valueOf("myOtherTable"));
  }

  public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException, InterruptedException {
    // process Result...
    // use 'myOtherTable' for lookups
  }

}
    -
Speculative Execution
It is generally advisable to turn off speculative execution for MapReduce jobs that use HBase as a source. This can either be done on a per-Job basis through properties, or on the entire cluster. Especially for longer-running jobs, speculative execution will create duplicate map-tasks which will double-write your data to HBase; this is probably not what you want.
See for more information.
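A hedged per-job example of turning speculative execution off (Hadoop 2 / MRv2 property names assumed; the job variable is an existing Job):

// Disable speculative execution for both map and reduce tasks of this job only.
job.getConfiguration().setBoolean("mapreduce.map.speculative", false);
job.getConfiguration().setBoolean("mapreduce.reduce.speculative", false);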
    -
    - + - - - Architecture -
    - Overview -
NoSQL?
HBase is a type of "NoSQL" database. "NoSQL" is a general term meaning that the database isn't an RDBMS which supports SQL as its primary access language, but there are many types of NoSQL databases: BerkeleyDB is an example of a local NoSQL database, whereas HBase is very much a distributed database. Technically speaking, HBase is really more a "Data Store" than "Data Base" because it lacks many of the features you find in an RDBMS, such as typed columns, secondary indexes, triggers, and advanced query languages.

However, HBase has many features which support both linear and modular scaling. HBase clusters expand by adding RegionServers that are hosted on commodity-class servers. If a cluster expands from 10 to 20 RegionServers, for example, it doubles both in terms of storage and processing capacity. An RDBMS can scale well, but only up to a point - specifically, the size of a single database server - and for the best performance it requires specialized hardware and storage devices. HBase features of note are:

Strongly consistent reads/writes: HBase is not an "eventually consistent" DataStore. This makes it very suitable for tasks such as high-speed counter aggregation.
Automatic sharding: HBase tables are distributed on the cluster via regions, and regions are automatically split and re-distributed as your data grows.
Automatic RegionServer failover
Hadoop/HDFS Integration: HBase supports HDFS out of the box as its distributed file system.
MapReduce: HBase supports massively parallelized processing via MapReduce for using HBase as both source and sink.
Java Client API: HBase supports an easy-to-use Java API for programmatic access.
Thrift/REST API: HBase also supports Thrift and REST for non-Java front-ends.
Block Cache and Bloom Filters: HBase supports a Block Cache and Bloom Filters for high-volume query optimization.
Operational Management: HBase provides built-in web pages for operational insight as well as JMX metrics.
    - -
    - When Should I Use HBase? - HBase isn't suitable for every problem. - First, make sure you have enough data. If you have hundreds of millions or billions of rows, then - HBase is a good candidate. If you only have a few thousand/million rows, then using a traditional RDBMS - might be a better choice due to the fact that all of your data might wind up on a single node (or two) and - the rest of the cluster may be sitting idle. - - Second, make sure you can live without all the extra features that an RDBMS provides (e.g., typed columns, - secondary indexes, transactions, advanced query languages, etc.) An application built against an RDBMS cannot be - "ported" to HBase by simply changing a JDBC driver, for example. Consider moving from an RDBMS to HBase as a - complete redesign as opposed to a port. - - Third, make sure you have enough hardware. Even HDFS doesn't do well with anything less than - 5 DataNodes (due to things such as HDFS block replication which has a default of 3), plus a NameNode. - - HBase can run quite well stand-alone on a laptop - but this should be considered a development - configuration only. - -
    -
    - What Is The Difference Between HBase and Hadoop/HDFS? - HDFS is a distributed file system that is well suited for the storage of large files. - Its documentation states that it is not, however, a general purpose file system, and does not provide fast individual record lookups in files. - HBase, on the other hand, is built on top of HDFS and provides fast record lookups (and updates) for large tables. - This can sometimes be a point of conceptual confusion. HBase internally puts your data in indexed "StoreFiles" that exist - on HDFS for high-speed lookups. See the and the rest of this chapter for more information on how HBase achieves its goals. - -
    -
    - -
    - Catalog Tables - The catalog table hbase:meta exists as an HBase table and is filtered out of the HBase - shell's list command, but is in fact a table just like any other. -
    - -ROOT- - - The -ROOT- table was removed in HBase 0.96.0. Information here should - be considered historical. - - The -ROOT- table kept track of the location of the - .META table (the previous name for the table now called hbase:meta) prior to HBase - 0.96. The -ROOT- table structure was as follows: - - Key - - .META. region key (.META.,,1) - - - - - Values - - info:regioninfo (serialized HRegionInfo - instance of hbase:meta) - - - info:server (server:port of the RegionServer holding - hbase:meta) - - - info:serverstartcode (start-time of the RegionServer process holding - hbase:meta) - - -
    -
    - hbase:meta - The hbase:meta table (previously called .META.) keeps a list - of all regions in the system. The location of hbase:meta was previously - tracked within the -ROOT- table, but is now stored in Zookeeper. - The hbase:meta table structure is as follows: - - Key - - Region key of the format ([table],[region start key],[region - id]) - - - - Values - - info:regioninfo (serialized - HRegionInfo instance for this region) - - - info:server (server:port of the RegionServer containing this - region) - - - info:serverstartcode (start-time of the RegionServer process - containing this region) - - - When a table is in the process of splitting, two other columns will be created, called - info:splitA and info:splitB. These columns represent the two - daughter regions. The values for these columns are also serialized HRegionInfo instances. - After the region has been split, eventually this row will be deleted. - - Note on HRegionInfo - The empty key is used to denote table start and table end. A region with an empty - start key is the first region in a table. If a region has both an empty start and an - empty end key, it is the only region in the table - - In the (hopefully unlikely) event that programmatic processing of catalog metadata is - required, see the Writables - utility. -
    -
    - Startup Sequencing - First, the location of hbase:meta is looked up in Zookeeper. Next, - hbase:meta is updated with server and startcode values. - For information on region-RegionServer assignment, see . -
    -
    - -
    - Client - The HBase client finds the RegionServers that are serving the particular row range of - interest. It does this by querying the hbase:meta table. See for details. After locating the required region(s), the - client contacts the RegionServer serving that region, rather than going through the master, - and issues the read or write request. This information is cached in the client so that - subsequent requests need not go through the lookup process. Should a region be reassigned - either by the master load balancer or because a RegionServer has died, the client will - requery the catalog tables to determine the new location of the user region. - - See for more information about the impact of the Master on HBase - Client communication. - Administrative functions are done via an instance of Admin - - -
    - Cluster Connections - The API changed in HBase 1.0. It has been cleaned up, and users are returned - interfaces to work against rather than particular types. In HBase 1.0, - obtain a cluster Connection from ConnectionFactory and thereafter get from it - instances of Table, Admin, and RegionLocator on an as-needed basis. When done, close the - obtained instances. Finally, be sure to clean up your Connection instance before - exiting. Connections are heavyweight objects. Create once and keep an instance around. - Table, Admin and RegionLocator instances are lightweight. Create as you go and then - let go as soon as you are done by closing them. See the - Client Package Javadoc Description for example usage of the new HBase 1.0 API. - - For connection configuration information, see . - - Table - instances are not thread-safe. Only one thread can use an instance of Table at - any given time. When creating Table instances, it is advisable to use the same Configuration - instance (created via HBaseConfiguration.create()). This will ensure sharing of ZooKeeper and socket instances to the RegionServers, - which is usually what you want. For example, this is preferred: - Configuration conf = HBaseConfiguration.create();
-HTable table1 = new HTable(conf, "myTable");
-HTable table2 = new HTable(conf, "myTable");
 - as opposed to this: - Configuration conf1 = HBaseConfiguration.create();
-HTable table1 = new HTable(conf1, "myTable");
-Configuration conf2 = HBaseConfiguration.create();
-HTable table2 = new HTable(conf2, "myTable");
 - - For more information about how connections are handled in the HBase client, - see HConnectionManager. -
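    - As a rough sketch (not taken from the client javadoc), the HBase 1.0 usage described above looks like the following; the table name "myTable" is illustrative only:
-Configuration conf = HBaseConfiguration.create();
-try (Connection connection = ConnectionFactory.createConnection(conf)) {
-  try (Table table = connection.getTable(TableName.valueOf("myTable"));
-       Admin admin = connection.getAdmin()) {
-    // use the lightweight Table and Admin instances as needed;
-    // try-with-resources closes them when this block exits
-  }
-} // the heavyweight Connection is closed here; create it once and reuse it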
    Connection Pooling - For applications which require high-end multithreaded access (e.g., web-servers or application servers that may serve many application threads - in a single JVM), you can pre-create an HConnection, as shown in - the following example: - - Pre-Creating an HConnection - // Create a connection to the cluster.
-Configuration conf = HBaseConfiguration.create();
-HConnection connection = HConnectionManager.createConnection(conf);
-HTableInterface table = connection.getTable("myTable");
-// use table as needed, the table returned is lightweight
-table.close();
-// use the connection for other access to the cluster
-connection.close();
 - - Constructing an HTableInterface implementation is very lightweight, and resources are - controlled. - - HTablePool is Deprecated - Previous versions of this guide discussed HTablePool, which was - deprecated in HBase 0.94, 0.95, and 0.96, and removed in 0.98.1, by HBASE-6500. - Please use HConnection instead. -
    -
    -
    WriteBuffer and Batch Methods - If autoflush is turned off on - HTable, - Puts are sent to the RegionServers when the writebuffer - is filled. The writebuffer is 2MB by default. Before an HTable instance is - discarded, either close() or - flushCommits() should be invoked so Puts - will not be lost (see the sketch below). - - Note: htable.delete(Delete); does not go in the writebuffer! This only applies to Puts. - - For additional information on write durability, review the ACID semantics page. - - For fine-grained control of batching of - Puts or Deletes, - see the batch methods on HTable. -
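    - For illustration, a minimal sketch of the write-buffer pattern described above (assuming an existing Configuration conf; the table, row, and column names are hypothetical):
-HTable table = new HTable(conf, "myTable");
-table.setAutoFlush(false);                    // buffer Puts client-side instead of one RPC per Put
-Put put = new Put(Bytes.toBytes("row1"));
-put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
-table.put(put);                               // queued in the write buffer (2MB by default)
-table.flushCommits();                         // push any remaining buffered Puts
-table.close();                                // close() also flushes outstanding commits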
    -
    External Clients - Information on non-Java clients and custom protocols is covered in - -
    -
    - -
    Client Request Filters - Get and Scan instances can be - optionally configured with filters which are applied on the RegionServer. - - Filters can be confusing because there are many different types, and it is best to approach them by understanding the groups - of Filter functionality. - -
    Structural - Structural Filters contain other Filters. -
    FilterList - FilterList - represents a list of Filters with a relationship of FilterList.Operator.MUST_PASS_ALL or - FilterList.Operator.MUST_PASS_ONE between the Filters. The following example shows an 'or' between two - Filters (checking for either 'my value' or 'my other value' on the same attribute). - -FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ONE); -SingleColumnValueFilter filter1 = new SingleColumnValueFilter( - cf, - column, - CompareOp.EQUAL, - Bytes.toBytes("my value") - ); -list.add(filter1); -SingleColumnValueFilter filter2 = new SingleColumnValueFilter( - cf, - column, - CompareOp.EQUAL, - Bytes.toBytes("my other value") - ); -list.add(filter2); -scan.setFilter(list); - -
    -
    -
    - Column Value -
    - SingleColumnValueFilter - SingleColumnValueFilter - can be used to test column values for equivalence (CompareOp.EQUAL - ), inequality (CompareOp.NOT_EQUAL), or ranges (e.g., - CompareOp.GREATER). The following is an example of testing a column for - equivalence to the String value "my value"... - -SingleColumnValueFilter filter = new SingleColumnValueFilter( - cf, - column, - CompareOp.EQUAL, - Bytes.toBytes("my value") - ); -scan.setFilter(filter); -
    -
    -
    - Column Value Comparators - There are several Comparator classes in the Filter package that deserve special - mention. These Comparators are used in concert with other Filters, such as . -
    - RegexStringComparator - RegexStringComparator - supports regular expressions for value comparisons. - -RegexStringComparator comp = new RegexStringComparator("my."); // any value that starts with 'my' -SingleColumnValueFilter filter = new SingleColumnValueFilter( - cf, - column, - CompareOp.EQUAL, - comp - ); -scan.setFilter(filter); - - See the Oracle JavaDoc for supported - RegEx patterns in Java. -
    -
    - SubstringComparator - SubstringComparator - can be used to determine if a given substring exists in a value. The comparison is - case-insensitive. - -SubstringComparator comp = new SubstringComparator("y val"); // looking for 'my value' -SingleColumnValueFilter filter = new SingleColumnValueFilter( - cf, - column, - CompareOp.EQUAL, - comp - ); -scan.setFilter(filter); - -
    -
    - BinaryPrefixComparator - See BinaryPrefixComparator. -
    -
    - BinaryComparator - See BinaryComparator. -
    -
    -
    - KeyValue Metadata - As HBase stores data internally as KeyValue pairs, KeyValue Metadata Filters evaluate - the existence of keys (i.e., ColumnFamily:Column qualifiers) for a row, as opposed to - the values covered in the previous section. -
    - FamilyFilter - FamilyFilter - can be used to filter on the ColumnFamily. It is generally a better idea to select - ColumnFamilies in the Scan than to do it with a Filter. -
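    - For illustration, a sketch comparing the two approaches (the family name "cf" is hypothetical):
-Scan scan = new Scan();
-scan.addFamily(Bytes.toBytes("cf"));          // preferred: restrict the family on the Scan itself
-
-// alternative, mainly useful when composing FilterLists:
-Filter f = new FamilyFilter(CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("cf")));
-scan.setFilter(f);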
    -
    - QualifierFilter - QualifierFilter - can be used to filter based on Column (aka Qualifier) name. -
    -
    - ColumnPrefixFilter - ColumnPrefixFilter - can be used to filter based on the lead portion of Column (aka Qualifier) names. - A ColumnPrefixFilter seeks ahead to the first column matching the prefix in each row - and for each involved column family. It can be used to efficiently get a subset of the - columns in very wide rows. - Note: The same column qualifier can be used in different column families. This - filter returns all matching columns. - Example: Find all columns in a row and family that start with "abc" - -HTableInterface t = ...; -byte[] row = ...; -byte[] family = ...; -byte[] prefix = Bytes.toBytes("abc"); -Scan scan = new Scan(row, row); // (optional) limit to one row -scan.addFamily(family); // (optional) limit to one family -Filter f = new ColumnPrefixFilter(prefix); -scan.setFilter(f); -scan.setBatch(10); // set this if there could be many columns returned -ResultScanner rs = t.getScanner(scan); -for (Result r = rs.next(); r != null; r = rs.next()) { - for (KeyValue kv : r.raw()) { - // each kv represents a column - } -} -rs.close(); - -
    -
    - MultipleColumnPrefixFilter - MultipleColumnPrefixFilter - behaves like ColumnPrefixFilter but allows specifying multiple prefixes. - Like ColumnPrefixFilter, MultipleColumnPrefixFilter efficiently seeks ahead to the - first column matching the lowest prefix and also seeks past ranges of columns between - prefixes. It can be used to efficiently get discontinuous sets of columns from very wide - rows. - Example: Find all columns in a row and family that start with "abc" or "xyz" - -HTableInterface t = ...; -byte[] row = ...; -byte[] family = ...; -byte[][] prefixes = new byte[][] {Bytes.toBytes("abc"), Bytes.toBytes("xyz")}; -Scan scan = new Scan(row, row); // (optional) limit to one row -scan.addFamily(family); // (optional) limit to one family -Filter f = new MultipleColumnPrefixFilter(prefixes); -scan.setFilter(f); -scan.setBatch(10); // set this if there could be many columns returned -ResultScanner rs = t.getScanner(scan); -for (Result r = rs.next(); r != null; r = rs.next()) { - for (KeyValue kv : r.raw()) { - // each kv represents a column - } -} -rs.close(); - -
    -
    - ColumnRangeFilter - A ColumnRangeFilter - allows efficient intra row scanning. - A ColumnRangeFilter can seek ahead to the first matching column for each involved - column family. It can be used to efficiently get a 'slice' of the columns of a very wide - row. i.e. you have a million columns in a row but you only want to look at columns - bbbb-bbdd. - Note: The same column qualifier can be used in different column families. This - filter returns all matching columns. - Example: Find all columns in a row and family between "bbbb" (inclusive) and "bbdd" - (inclusive) - -HTableInterface t = ...; -byte[] row = ...; -byte[] family = ...; -byte[] startColumn = Bytes.toBytes("bbbb"); -byte[] endColumn = Bytes.toBytes("bbdd"); -Scan scan = new Scan(row, row); // (optional) limit to one row -scan.addFamily(family); // (optional) limit to one family -Filter f = new ColumnRangeFilter(startColumn, true, endColumn, true); -scan.setFilter(f); -scan.setBatch(10); // set this if there could be many columns returned -ResultScanner rs = t.getScanner(scan); -for (Result r = rs.next(); r != null; r = rs.next()) { - for (KeyValue kv : r.raw()) { - // each kv represents a column - } -} -rs.close(); - - Note: Introduced in HBase 0.92 -
    -
    -
    RowKey -
    RowFilter - It is generally a better idea to use the startRow/stopRow methods on Scan for row selection; however, - RowFilter can also be used, as in the sketch below. -
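    - A sketch of both approaches (the row keys shown are hypothetical):
-// preferred: bound the Scan by row keys
-Scan bounded = new Scan(Bytes.toBytes("row-0100"), Bytes.toBytes("row-0200"));
-
-// alternative: a RowFilter with a comparator
-Scan filtered = new Scan();
-filtered.setFilter(new RowFilter(CompareOp.EQUAL, new BinaryPrefixComparator(Bytes.toBytes("row-01"))));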
    -
    -
    Utility -
    FirstKeyOnlyFilter - This is primarily used for rowcount jobs. - See FirstKeyOnlyFilter. -
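    - A minimal row-counting sketch (assuming an existing HTableInterface t):
-Scan scan = new Scan();
-scan.setFilter(new FirstKeyOnlyFilter());     // only the first KeyValue of each row is returned
-long rowCount = 0;
-ResultScanner rs = t.getScanner(scan);
-for (Result r = rs.next(); r != null; r = rs.next()) {
-  rowCount++;                                 // one Result per row
-}
-rs.close();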
    -
    -
    - -
    Master - HMaster is the implementation of the Master Server. The Master server is - responsible for monitoring all RegionServer instances in the cluster, and is the interface - for all metadata changes. In a distributed cluster, the Master typically runs on the NameNode. J Mohamed Zahoor goes into some more detail on the Master - Architecture in this blog posting, HBase HMaster - Architecture. -
    Startup Behavior - If run in a multi-Master environment, all Masters compete to run the cluster. If the active - Master loses its lease in ZooKeeper (or the Master shuts down), then the remaining Masters jostle to - take over the Master role. - -
    -
    - Runtime Impact - A common dist-list question involves what happens to an HBase cluster when the Master - goes down. Because the HBase client talks directly to the RegionServers, the cluster can - still function in a "steady state." Additionally, per , hbase:meta exists as an HBase table and is not - resident in the Master. However, the Master controls critical functions such as - RegionServer failover and completing region splits. So while the cluster can still run for - a short time without the Master, the Master should be restarted as soon as possible. - -
    -
    Interface - The methods exposed by HMasterInterface are primarily metadata-oriented methods: - - Table (createTable, modifyTable, removeTable, enable, disable) - - ColumnFamily (addColumn, modifyColumn, removeColumn) - - Region (move, assign, unassign) - - - For example, when the HBaseAdmin method disableTable is invoked, it is serviced by the Master server. - -
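    - For illustration, a sketch of metadata operations that are serviced by the Master (assuming an existing Configuration conf; the table and family names are hypothetical):
-HBaseAdmin admin = new HBaseAdmin(conf);
-admin.disableTable("myTable");                // routed to the Master, not to the RegionServers
-admin.addColumn("myTable", new HColumnDescriptor("newFamily"));
-admin.enableTable("myTable");
-admin.close();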
    -
    Processes - The Master runs several background threads: - -
    LoadBalancer - Periodically, and when there are no regions in transition, - a load balancer will run and move regions around to balance the cluster's load. - See for configuring this property. - See for more information on region assignment. - -
    -
    CatalogJanitor - Periodically checks and cleans up the hbase:meta table. See for more information on META. -
    -
    - -
    -
    - RegionServer - HRegionServer is the RegionServer implementation. It is responsible for - serving and managing regions. In a distributed cluster, a RegionServer runs on a DataNode. -
    - Interface - The methods exposed by HRegionInterface contain both data-oriented - and region-maintenance methods: - - Data (get, put, delete, next, etc.) - - - Region (splitRegion, compactRegion, etc.) - - For example, when the HBaseAdmin method - majorCompact is invoked on a table, the client is actually iterating - through all regions for the specified table and requesting a major compaction directly for - each region, as in the sketch below. -
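    - A sketch of such a request (assuming an existing Configuration conf; "myTable" is hypothetical):
-HBaseAdmin admin = new HBaseAdmin(conf);
-admin.majorCompact("myTable");                // each RegionServer hosting one of the table's regions is asked to compact
-admin.close();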
    -
    - Processes - The RegionServer runs a variety of background threads: -
    - CompactSplitThread - Checks for splits and handles minor compactions. -
    -
    - MajorCompactionChecker - Checks for major compactions. -
    -
    - MemStoreFlusher - Periodically flushes in-memory writes in the MemStore to StoreFiles. -
    -
    - LogRoller - Periodically checks the RegionServer's WAL. -
    -
    - -
    - Coprocessors - Coprocessors were added in 0.92. There is a thorough Blog Overview - of CoProcessors posted. Documentation will eventually move to this reference - guide, but the blog is the most current information available at this time. -
    - -
    - Block Cache - - HBase provides two different BlockCache implementations: the default onheap - LruBlockCache and BucketCache, which is (usually) offheap. This section - discusses benefits and drawbacks of each implementation, how to choose the appropriate - option, and configuration options for each. - - Block Cache Reporting: UI - See the RegionServer UI for detail on caching deploy. Since HBase-0.98.4, the - Block Cache detail has been significantly extended showing configurations, - sizings, current usage, time-in-the-cache, and even detail on block counts and types. - - -
    - - Cache Choices - LruBlockCache is the original implementation, and is - entirely within the Java heap. BucketCache is mainly - intended for keeping blockcache data offheap, although BucketCache can also - keep data onheap and serve from a file-backed cache. - BucketCache is production ready as of hbase-0.98.6. - To run with BucketCache, you need HBASE-11678, which was included in - hbase-0.98.6. - - Fetching from BucketCache will always be slower than fetching from - the native onheap LruBlockCache. However, latencies tend to be - less erratic across time, because there is less garbage collection when you use - BucketCache, since it manages BlockCache allocations rather than leaving them to the GC. If the - BucketCache is deployed in offheap mode, this memory is not managed by the - GC at all. This is why you would use BucketCache: to keep latencies less erratic and to mitigate GC pauses - and heap fragmentation. See Nick Dimiduk's BlockCache 101 for - comparisons running onheap vs offheap tests. Also see - Comparing BlockCache Deploys, - which finds that if your dataset fits inside your LruBlockCache deploy, use it; otherwise, - if you are experiencing cache churn (or you want your cache to exist beyond the - vagaries of Java GC), use BucketCache. - - - When you enable BucketCache, you are enabling a two-tier caching - system: an L1 cache implemented by an instance of LruBlockCache and - an offheap L2 cache implemented by BucketCache. Management of these - two tiers and the policy that dictates how blocks move between them is done by - CombinedBlockCache. It keeps all DATA blocks in the L2 - BucketCache and meta blocks -- INDEX and BLOOM blocks -- - onheap in the L1 LruBlockCache. - See for more detail on going offheap.
    - -
    - General Cache Configurations - Apart from the cache implementation itself, you can set some general configuration - options to control how the cache performs. See . After setting any of these options, restart or rolling restart your cluster for the - configuration to take effect. Check logs for errors or unexpected behavior. - See also , which discusses a new option - introduced in HBASE-9857. -
    - -
    - LruBlockCache Design - The LruBlockCache is an LRU cache that contains three levels of block priority to - allow for scan-resistance and in-memory ColumnFamilies: - - - Single access priority: The first time a block is loaded from HDFS it normally - has this priority and it will be part of the first group to be considered during - evictions. The advantage is that scanned blocks are more likely to get evicted than - blocks that are getting more usage. - - - Multi access priority: If a block in the previous priority group is accessed - again, it upgrades to this priority. It is thus part of the second group considered - during evictions. - - - In-memory access priority: If the block's family was configured to be - "in-memory", it will be part of this priority regardless of the number of times it - was accessed. Catalog tables are configured like this. This group is the last one - considered during evictions. - To mark a column family as in-memory, call - HColumnDescriptor.setInMemory(true); if creating a table from java, - or set IN_MEMORY => true when creating or altering a table in - the shell: e.g. hbase(main):003:0> create 't', {NAME => 'f', IN_MEMORY => 'true'} - - - For more information, see the LruBlockCache - source. -
    -
    - LruBlockCache Usage - Block caching is enabled by default for all the user tables, which means that any - read operation will load the LRU cache. This might be good for a large number of use - cases, but further tunings are usually required in order to achieve better performance. - An important concept is the working set size, or - WSS, which is: "the amount of memory needed to compute the answer to a problem". For a - website, this would be the data that's needed to answer the queries over a short amount - of time. - The way to calculate how much memory is available in HBase for caching is: - - number of region servers * heap size * hfile.block.cache.size * 0.99 - - The default value for the block cache is 0.25, which represents 25% of the available - heap. The last value (99%) is the default acceptable loading factor in the LRU cache - after which eviction is started. The reason it is included in this equation is that it - would be unrealistic to say that it is possible to use 100% of the available memory, - since this would make the process block from the point where it loads new blocks. - Here are some examples: - - - One region server with the default heap size (1 GB) and the default block cache - size will have 253 MB of block cache available. - - - 20 region servers with the heap size set to 8 GB and a default block cache size - will have 39.6 GB of block cache. - - - 100 region servers with the heap size set to 24 GB and a block cache size of 0.5 - will have about 1.16 TB of block cache. - - - Your data is not the only resident of the block cache. Here are others that you may have to take into account: - - - - Catalog Tables - - The -ROOT- (prior to HBase 0.96. See ) and hbase:meta tables are forced - into the block cache and have the in-memory priority, which means that they are - harder to evict. The former never uses more than a few hundred bytes, while the - latter can occupy a few MBs (depending on the number of regions). - - - - HFiles Indexes - - An hfile is the file format that HBase uses to store - data in HDFS. It contains a multi-layered index which allows HBase to seek to the - data without having to read the whole file. The size of those indexes is a factor - of the block size (64KB by default), the size of your keys and the amount of data - you are storing. For big data sets it's not unusual to see numbers around 1GB per - region server, although not all of it will be in cache because the LRU will evict - indexes that aren't used. - - - - Keys - - The values that are stored are only half the picture, since each value is - stored along with its keys (row key, family, qualifier, and timestamp). See . - - - - Bloom Filters - - Just like the HFile indexes, those data structures (when enabled) are stored - in the LRU. - - - - Currently the recommended way to measure HFile index and bloom filter sizes is to - look at the region server web UI and check out the relevant metrics. For keys, sampling - can be done by using the HFile command line tool and looking for the average key size - metric. Since HBase 0.98.3, you can view detail on BlockCache stats and metrics - in a special Block Cache section in the UI. - It's generally bad to use block caching when the WSS doesn't fit in memory. This is - the case when, for example, you have 40GB available across all your region servers' block - caches but you need to process 1TB of data. One of the reasons is that the churn - generated by the evictions will trigger more garbage collections unnecessarily. 
Here are - two use cases: - - - Fully random reading pattern: This is a case where you almost never access the - same row twice within a short amount of time, such that the chance of hitting a - cached block is close to 0. Setting block caching on such a table is a waste of - memory and CPU cycles, especially because it will generate more garbage for the - JVM to collect. For more information on monitoring GC, see . - - - Mapping a table: In a typical MapReduce job that takes a table as input, every - row is read only once, so there's no need to put the blocks into the block cache. The - Scan object can turn this off via the setCacheBlocks method (set it to - false), as shown in the sketch below. You can still keep block caching turned on for this table if you need fast - random read access. An example would be counting the number of rows in a table that - serves live traffic: caching every block of that table would create massive churn - and would surely evict data that's currently in use. - - -
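    - A sketch of the MapReduce-style Scan setup described above:
-Scan scan = new Scan();
-scan.setCaching(500);                         // rows returned per RPC; a larger value suits sequential full-table reads
-scan.setCacheBlocks(false);                   // blocks read only once are not cached, so hot data is not evicted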
    - Caching META blocks only (DATA blocks in fscache) - An interesting setup is one where we cache META blocks only and we read DATA - blocks in on each access. If the DATA blocks fit inside fscache, this alternative - may make sense when access is completely random across a very large dataset. - To enable this setup, alter your table and, for each column family, - set BLOCKCACHE => 'false' (see the Java equivalent below). You are 'disabling' the - BlockCache for this column family only; you cannot disable the caching of - META blocks. Since - HBASE-4683 Always cache index and bloom blocks, - we will cache META blocks even if the BlockCache is disabled. - -
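    - A Java equivalent of the BLOCKCACHE => 'false' shell setting (the table and family names are hypothetical, and admin is an existing HBaseAdmin instance):
-HColumnDescriptor family = new HColumnDescriptor("cf");
-family.setBlockCacheEnabled(false);           // DATA blocks are read on each access; INDEX and BLOOM blocks stay cached
-HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("myTable"));
-desc.addFamily(family);
-admin.createTable(desc);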
    -
    -
    - Offheap Block Cache -
    - How to Enable BucketCache - The usual deploy of BucketCache is via a managing class that sets up two caching tiers: an L1 onheap cache - implemented by LruBlockCache and a second L2 cache implemented with BucketCache. The managing class is CombinedBlockCache by default. - The just-previous link describes the caching 'policy' implemented by CombinedBlockCache. In short, it works - by keeping meta blocks -- INDEX and BLOOM blocks -- in the L1, onheap LruBlockCache tier, while DATA - blocks are kept in the L2, BucketCache tier. It is possible to amend this behavior in - HBase since version 1.0 and ask that a column family have both its meta and DATA blocks hosted onheap in the L1 tier by - setting cacheDataInL1 via - HColumnDescriptor.setCacheDataInL1(true), - or in the shell, by creating or amending column families with CACHE_DATA_IN_L1 - set to true: e.g. hbase(main):003:0> create 't', {NAME => 't', CONFIGURATION => {CACHE_DATA_IN_L1 => 'true'}} - - The BucketCache Block Cache can be deployed onheap, offheap, or file based. - You set which via the - hbase.bucketcache.ioengine setting. Setting it to - heap will have BucketCache deployed inside the - allocated Java heap. Setting it to offheap will have - BucketCache make its allocations offheap, - and an ioengine setting of file:PATH_TO_FILE will direct - BucketCache to use file caching (useful in particular if you have some fast I/O attached to the box, such - as SSDs). - - It is possible to deploy an L1+L2 setup where we bypass the CombinedBlockCache - policy and have BucketCache working as a strict L2 cache to the L1 - LruBlockCache. For such a setup, set CacheConfig.BUCKET_CACHE_COMBINED_KEY to - false. In this mode, on eviction from L1, blocks go to L2. - When a block is cached, it is cached first in L1. When we go to look for a cached block, - we look first in L1 and, if none is found, we then search L2. Let us call this deploy format - Raw L1+L2. - Other BucketCache configs include: specifying a location to persist the cache to across - restarts, how many threads to use writing the cache, etc. See the - CacheConfig.html - class for configuration options and descriptions. - - - BucketCache Example Configuration - This sample provides a configuration for a 4 GB offheap BucketCache with a 1 GB - onheap cache. Configuration is performed on the RegionServer. Setting - hbase.bucketcache.ioengine and - hbase.bucketcache.size > 0 enables CombinedBlockCache. - Let us presume that the RegionServer has been set to run with a 5G heap: - i.e. HBASE_HEAPSIZE=5g. - - - First, edit the RegionServer's hbase-env.sh and set - HBASE_OFFHEAPSIZE to a value greater than the offheap size wanted, in - this case, 4 GB (expressed as 4G). Let's set it to 5G. That'll be 4G - for our offheap cache and 1G for any other uses of offheap memory (there are - other users of offheap memory other than BlockCache; e.g. DFSClient - in RegionServer can make use of offheap memory). See . - HBASE_OFFHEAPSIZE=5G - - - Next, add the following configuration to the RegionServer's - hbase-site.xml:
-<property>
-  <name>hbase.bucketcache.ioengine</name>
-  <value>offheap</value>
-</property>
-<property>
-  <name>hfile.block.cache.size</name>
-  <value>0.2</value>
-</property>
-<property>
-  <name>hbase.bucketcache.size</name>
-  <value>4096</value>
-</property>
 - - - Restart or rolling restart your cluster, and check the logs for any - issues. - - - In the above, we set the BucketCache to 4G. We configured the onheap LruBlockCache - to have 0.2 of the RegionServer's heap size (0.2 * 5G = 1G). - In other words, you configure the L1 LruBlockCache as you would normally, - as you would when there is no L2 BucketCache present. 
 - - HBASE-10641 introduced the ability to configure multiple sizes for the - buckets of the bucketcache, in HBase 0.98 and newer. To configure multiple bucket - sizes, set the hfile.block.cache.sizes property (shown below) to a comma-separated list of block sizes, - ordered from smallest to largest, with no spaces. The goal is to optimize the bucket - sizes based on your data access patterns. The following example configures buckets of - size 4096 and 8192:
-<property>
-  <name>hfile.block.cache.sizes</name>
-  <value>4096,8192</value>
-</property>
 - - Direct Memory Usage In HBase - The default maximum direct memory varies by JVM. Traditionally it is 64M - or some relation to allocated heap size (-Xmx), or no limit at all (JDK7 apparently). - HBase servers use direct memory; in particular, with short-circuit reading, the hosted DFSClient will - allocate direct memory buffers. If you do offheap block caching, you'll - be making use of direct memory. When starting your JVM, make sure - the -XX:MaxDirectMemorySize setting in - conf/hbase-env.sh is set to some value that is - higher than what you have allocated to your offheap blockcache - (hbase.bucketcache.size). It should be larger than your offheap block - cache and then some for DFSClient usage (how much the DFSClient uses is not - easy to quantify; it is the number of open hfiles * hbase.dfs.client.read.shortcircuit.buffer.size, - where hbase.dfs.client.read.shortcircuit.buffer.size is set to 128k in HBase -- see hbase-default.xml - default configurations). - Direct memory is part of the Java process memory but separate from the object - heap allocated by -Xmx. The value allocated by MaxDirectMemorySize must not exceed - physical RAM, and is likely to be less than the total available RAM due to other - memory requirements and system constraints. - - You can see how much memory -- onheap and offheap/direct -- a RegionServer is - configured to use, and how much it is using at any one time, by looking at the - Server Metrics: Memory tab in the UI. It can also be obtained - via JMX. In particular, the direct memory currently used by the server can be found - on the java.nio.type=BufferPool,name=direct bean. Terracotta has - a good write up on using offheap memory in Java. It is for their product - BigMemory, but a lot of the issues noted apply in general to any attempt at going - offheap. Check it out. - hbase.bucketcache.percentage.in.combinedcache - This is a pre-HBase 1.0 configuration removed because it - was confusing. It was a float that you would set to some value - between 0.0 and 1.0. Its default was 0.9. If the deploy was using - CombinedBlockCache, then the LruBlockCache L1 size was calculated to - be (1 - hbase.bucketcache.percentage.in.combinedcache) * size-of-bucketcache - and the BucketCache size was hbase.bucketcache.percentage.in.combinedcache * size-of-bucket-cache, - where size-of-bucket-cache itself is EITHER the value of the configuration hbase.bucketcache.size - IF it was specified as megabytes OR hbase.bucketcache.size * -XX:MaxDirectMemorySize if - hbase.bucketcache.size is between 0 and 1.0. - In 1.0, it should be more straightforward. The L1 LruBlockCache size - is set as a fraction of the java heap using the hfile.block.cache.size setting - (not the best name) and L2 is set as above, either in absolute - megabytes or as a fraction of allocated maximum direct memory. - -
    -
    -
    - Compressed BlockCache - HBASE-11331 introduced lazy blockcache decompression, more simply referred to - as compressed blockcache. When compressed blockcache is enabled, data and encoded data - blocks are cached in the blockcache in their on-disk format, rather than being - decompressed and decrypted before caching. - For a RegionServer - hosting more data than can fit into cache, enabling this feature with SNAPPY compression - has been shown to result in a 50% increase in throughput and a 30% improvement in mean - latency, while increasing garbage collection by 80% and overall CPU load by - 2%. See HBASE-11331 for more details about how performance was measured and achieved. - For a RegionServer hosting data that can comfortably fit into cache, or if your workload - is sensitive to extra CPU or garbage-collection load, you may receive less - benefit. - Compressed blockcache is disabled by default. To enable it, set - hbase.block.data.cachecompressed to true in - hbase-site.xml on all RegionServers. -
    -
    - -
    - Write Ahead Log (WAL) - -
    - Purpose - The Write Ahead Log (WAL) records all changes to data in - HBase, to file-based storage. Under normal operations, the WAL is not needed because - data changes move from the MemStore to StoreFiles. However, if a RegionServer crashes or - becomes unavailable before the MemStore is flushed, the WAL ensures that the changes to - the data can be replayed. If writing to the WAL fails, the entire operation to modify the - data fails. - - HBase uses an implementation of the WAL interface. Usually, there is only one instance of a WAL per RegionServer. - The RegionServer records Puts and Deletes to it before recording them to the MemStore for the affected Store. - - - The HLog - - Prior to 2.0, the interface for WALs in HBase was named HLog. - In 0.94, HLog was the name of the implementation of the WAL. You will likely find - references to the HLog in documentation tailored to these older versions. - - - The WAL resides in HDFS in the /hbase/WALs/ directory (prior to - HBase 0.94, they were stored in /hbase/.logs/), with subdirectories per - RegionServer. - For more general information about the concept of write ahead logs, see the - Wikipedia Write-Ahead Log - article. -
    -
    - WAL Flushing - TODO (describe). -
    - -
    - WAL Splitting - - A RegionServer serves many regions. All of the regions in a region server share the - same active WAL file. Each edit in the WAL file includes information about which region - it belongs to. When a region is opened, the edits in the WAL file which belong to that - region need to be replayed. Therefore, edits in the WAL file must be grouped by region - so that particular sets can be replayed to regenerate the data in a particular region. - The process of grouping the WAL edits by region is called log - splitting. It is a critical process for recovering data if a region server - fails. - Log splitting is done by the HMaster during cluster start-up or by the ServerShutdownHandler - as a region server shuts down. So that consistency is guaranteed, affected regions - are unavailable until data is restored. All WAL edits need to be recovered and replayed - before a given region can become available again. As a result, regions affected by - log splitting are unavailable until the process completes. - - Log Splitting, Step by Step - - The /hbase/WALs/<host>,<port>,<startcode> directory is renamed. - Renaming the directory is important because a RegionServer may still be up and - accepting requests even if the HMaster thinks it is down. If the RegionServer does - not respond immediately and does not heartbeat its ZooKeeper session, the HMaster - may interpret this as a RegionServer failure. Renaming the logs directory ensures - that existing, valid WAL files which are still in use by an active but busy - RegionServer are not written to by accident. - The new directory is named according to the following pattern: - /hbase/WALs/<host>,<port>,<startcode>-splitting - An example of such a renamed directory might look like the following: - /hbase/WALs/srv.example.com,60020,1254173957298-splitting - - Each log file is split, one at a time. - The log splitter reads the log file one edit entry at a time and puts each edit - entry into the buffer corresponding to the edit's region. At the same time, the - splitter starts several writer threads. Writer threads pick up a corresponding - buffer and write the edit entries in the buffer to a temporary recovered edit - file. The temporary edit file is stored to disk with the following naming pattern: - /hbase/<table_name>/<region_id>/recovered.edits/.temp - This file is used to store all the edits in the WAL log for this region. After - log splitting completes, the .temp file is renamed to the - sequence ID of the first log written to the file. - To determine whether all edits have been written, the sequence ID is compared to - the sequence of the last edit that was written to the HFile. If the sequence of the - last edit is greater than or equal to the sequence ID included in the file name, it - is clear that all writes from the edit file have been completed. - - After log splitting is complete, each affected region is assigned to a - RegionServer. - When the region is opened, the recovered.edits folder is checked for recovered - edits files. If any such files are present, they are replayed by reading the edits - and saving them to the MemStore. After all edit files are replayed, the contents of - the MemStore are written to disk (HFile) and the edit files are deleted. - -
    - Handling of Errors During Log Splitting - - If you set the hbase.hlog.split.skip.errors option to - true, errors are treated as follows: - - - Any error encountered during splitting will be logged. - - - The problematic WAL log will be moved into the .corrupt - directory under the hbase rootdir. - - - Processing of the WAL will continue. - - - If the hbase.hlog.split.skip.errors option is set to - false, the default, the exception will be propagated and the - split will be logged as failed. See HBASE-2958 When - hbase.hlog.split.skip.errors is set to false, we fail the split but that's - it. We need to do more than just fail split if this flag is set. - -
    - How EOFExceptions are treated when splitting a crashed RegionServer's - WALs - - If an EOFException occurs while splitting logs, the split proceeds even when - hbase.hlog.split.skip.errors is set to - false. An EOFException while reading the last log in the set of - files to split is likely, because the RegionServer was likely in the process of - writing a record at the time of the crash. For background, see HBASE-2643 - Figure how to deal with eof splitting logs. -
    -
    - -
    - Performance Improvements during Log Splitting - - WAL splitting and recovery can be resource intensive and take a long time, - depending on the number of RegionServers involved in the crash and the size of the - regions. Distributed log splitting and distributed log replay, described below, were developed to improve - performance during log splitting. - -
    - Distributed Log Splitting - Distributed Log Splitting was added in HBase version 0.92 - (HBASE-1364) - by Prakash Khemani from Facebook. It reduces the time to complete log splitting - dramatically, improving the availability of regions and tables. For - example, recovering a crashed cluster took around 9 hours with single-threaded log - splitting, but only about six minutes with distributed log splitting. - The information in this section is sourced from Jimmy Xiang's blog post at . - - - Enabling or Disabling Distributed Log Splitting - Distributed log processing is enabled by default since HBase 0.92. The setting - is controlled by the hbase.master.distributed.log.splitting - property, which can be set to true or false, - but defaults to true. - - - Distributed Log Splitting, Step by Step - After configuring distributed log splitting, the HMaster controls the process. - The HMaster enrolls each RegionServer in the log splitting process, and the actual - work of splitting the logs is done by the RegionServers. The general process for - log splitting, as described in still applies here. - - If distributed log processing is enabled, the HMaster creates a - split log manager instance when the cluster is started. - The split log manager manages all log files which need - to be scanned and split. The split log manager places all the logs into the - ZooKeeper splitlog node (/hbase/splitlog) as tasks. You can - view the contents of the splitlog by issuing the following - zkcli command. Example output is shown. - ls /hbase/splitlog -[hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost8.sample.com%2C57020%2C1340474893275-splitting%2Fhost8.sample.com%253A57020.1340474893900, -hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost3.sample.com%2C57020%2C1340474893299-splitting%2Fhost3.sample.com%253A57020.1340474893931, -hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost4.sample.com%2C57020%2C1340474893287-splitting%2Fhost4.sample.com%253A57020.1340474893946] - - The output contains some non-ASCII characters. When decoded, it looks much - more simple: - -[hdfs://host2.sample.com:56020/hbase/.logs -/host8.sample.com,57020,1340474893275-splitting -/host8.sample.com%3A57020.1340474893900, -hdfs://host2.sample.com:56020/hbase/.logs -/host3.sample.com,57020,1340474893299-splitting -/host3.sample.com%3A57020.1340474893931, -hdfs://host2.sample.com:56020/hbase/.logs -/host4.sample.com,57020,1340474893287-splitting -/host4.sample.com%3A57020.1340474893946] - - The listing represents WAL file names to be scanned and split, which is a - list of log splitting tasks. - - - The split log manager monitors the log-splitting tasks and workers. - The split log manager is responsible for the following ongoing tasks: - - - Once the split log manager publishes all the tasks to the splitlog - znode, it monitors these task nodes and waits for them to be - processed. - - - Checks to see if there are any dead split log - workers queued up. If it finds tasks claimed by unresponsive workers, it - will resubmit those tasks. If the resubmit fails due to some ZooKeeper - exception, the dead worker is queued up again for retry. - - - Checks to see if there are any unassigned - tasks. If it finds any, it create an ephemeral rescan node so that each - split log worker is notified to re-scan unassigned tasks via the - nodeChildrenChanged ZooKeeper event. - - - Checks for tasks which are assigned but expired. 
If any are found, they - are moved back to TASK_UNASSIGNED state again so that they can - be retried. It is possible that these tasks are assigned to slow workers, or - they may already be finished. This is not a problem, because log splitting - tasks have the property of idempotence. In other words, the same log - splitting task can be processed many times without causing any - problem. - - The split log manager watches the HBase split log znodes constantly. If - any split log task node data is changed, the split log manager retrieves the - node data. The - node data contains the current state of the task. You can use the - zkcli get command to retrieve the - current state of a task. In the example output below, the first line of the - output shows that the task is currently unassigned. - -get /hbase/splitlog/hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost6.sample.com%2C57020%2C1340474893287-splitting%2Fhost6.sample.com%253A57020.1340474893945 - -unassigned host2.sample.com:57000 -cZxid = 0x7115 -ctime = Sat Jun 23 11:13:40 PDT 2012 -... - - Based on the state of the task whose data is changed, the split log - manager does one of the following: - - - - Resubmit the task if it is unassigned - - - Heartbeat the task if it is assigned - - - Resubmit or fail the task if it is resigned (see ) - - - Resubmit or fail the task if it is completed with errors (see ) - - - Resubmit or fail the task if it could not complete due to - errors (see ) - - - Delete the task if it is successfully completed or failed - - - - Reasons a Task Will Fail - The task has been deleted. - The node no longer exists. - The log status manager failed to move the state of the task - to TASK_UNASSIGNED. - The number of resubmits is over the resubmit - threshold. - - - - - - Each RegionServer's split log worker performs the log-splitting tasks. - Each RegionServer runs a daemon thread called the split log - worker, which does the work to split the logs. The daemon thread - starts when the RegionServer starts, and registers itself to watch HBase znodes. - If any splitlog znode children change, it notifies a sleeping worker thread to - wake up and grab more tasks. If a worker's current task's node data is - changed, the worker checks to see if the task has been taken by another worker. - If so, the worker thread stops work on the current task. - The worker monitors - the splitlog znode constantly. When a new task appears, the split log worker - retrieves the task paths and checks each one until it finds an unclaimed task, - which it attempts to claim. If the claim was successful, it attempts to perform - the task and updates the task's state property based on the - splitting outcome. At this point, the split log worker scans for another - unclaimed task. - - How the Split Log Worker Approaches a Task - - - It queries the task state and only takes action if the task is in - TASK_UNASSIGNED state. - - - If the task is in TASK_UNASSIGNED state, the - worker attempts to set the state to TASK_OWNED by itself. - If it fails to set the state, another worker will try to grab it. The split - log manager will also ask all workers to rescan later if the task remains - unassigned. - - - If the worker succeeds in taking ownership of the task, it tries to get - the task state again to make sure it really gets it asynchronously. In the - meantime, it starts a split task executor to do the actual work: - - - Get the HBase root folder, create a temp folder under the root, and - split the log file to the temp folder. 
 - - - If the split was successful, the task executor sets the task to - state TASK_DONE. - - - If the worker catches an unexpected IOException, the task is set to - state TASK_ERR. - - - If the worker is shutting down, the task is set to state - TASK_RESIGNED. - - - If the task is taken by another worker, just log it. - - - - - - - The split log manager monitors for uncompleted tasks. - The split log manager returns when all tasks are completed successfully. If - all tasks are completed with some failures, the split log manager throws an - exception so that the log splitting can be retried. Due to an asynchronous - implementation, in very rare cases, the split log manager loses track of some - completed tasks. For that reason, it periodically checks for remaining - uncompleted tasks in its task map or ZooKeeper. If none are found, it throws an - exception so that the log splitting can be retried right away instead of hanging - there waiting for something that won't happen. - -
    -
    - Distributed Log Replay - After a RegionServer fails, its failed regions are assigned to another - RegionServer and are marked as "recovering" in ZooKeeper. A split log worker directly - replays edits from the WAL of the failed region server to the regions at their new - location. When a region is in "recovering" state, it can accept writes but no reads - (including Append and Increment), region splits or merges. - Distributed Log Replay extends the distributed log splitting framework. It works by - directly replaying WAL edits to another RegionServer instead of creating - recovered.edits files. It provides the following advantages - over distributed log splitting alone: - It eliminates the overhead of writing and reading a large number of - recovered.edits files. It is not unusual for thousands of - recovered.edits files to be created and written concurrently - during a RegionServer recovery. Many small random writes can degrade overall - system performance. - It allows writes even when a region is in recovering state. It only takes seconds for a recovering region to accept writes again. - - - Enabling Distributed Log Replay - To enable distributed log replay, set hbase.master.distributed.log.replay to - true. This will be the default for HBase 0.99 (HBASE-10888). - - You must also enable HFile version 3 (which is the default HFile format starting - in HBase 0.99. See HBASE-10855). - Distributed log replay is unsafe for rolling upgrades. -
    -
    -
    -
    - Disabling the WAL - It is possible to disable the WAL, to improve performance in certain specific - situations. However, disabling the WAL puts your data at risk. The only situation where - this is recommended is during a bulk load. This is because, in the event of a problem, - the bulk load can be re-run with no risk of data loss. - The WAL is disabled on a per-mutation basis in the HBase client. Use the - Mutation.setDurability(Durability.SKIP_WAL) and Mutation.getDurability() - methods to set and get this value (the older writeToWAL setter is deprecated in favor of - setDurability); see the sketch below. There is no way to disable the WAL for only a - specific table. - - If you disable the WAL for anything other than bulk loads, your data is at - risk. -
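    - A minimal sketch of skipping the WAL for a single Put (the row, family, and qualifier are hypothetical; table is an existing HTable):
-Put put = new Put(Bytes.toBytes("row1"));
-put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
-put.setDurability(Durability.SKIP_WAL);       // this mutation is not written to the WAL and is lost if the RegionServer fails before a flush
-table.put(put);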
    -
    - -
    - -
    - Regions - Regions are the basic element of availability and - distribution for tables, and consist of a Store per Column Family. The hierarchy of objects - is as follows: -
-Table (HBase table)
-    Region (Regions for the table)
-        Store (Store per ColumnFamily for each Region for the table)
-            MemStore (MemStore for each Store for each Region for the table)
-            StoreFile (StoreFiles for each Store for each Region for the table)
-                Block (Blocks within a StoreFile within a Store for each Region for the table)
 - - For a description of what HBase files look like when written to HDFS, see . - -
    - Considerations for Number of Regions - In general, HBase is designed to run with a small (20-200) number of relatively large (5-20GB) regions per server. The considerations for this are as follows: -
    - Why can't I have too many regions? - - Typically you want to keep your region count low on HBase, for numerous reasons. - Usually right around 100 regions per RegionServer has yielded the best results. - Here are some of the reasons for keeping the region count low: - - - MSLAB requires 2MB per memstore (that's 2MB per family per region). - 1000 regions that have 2 families each use 3.9GB of heap, and it's not even storing data yet. NB: the 2MB value is configurable. - - If you fill all the regions at somewhat the same rate, the global memory pressure - forces tiny flushes when you have too many regions, which in turn generates compactions. - Rewriting the same data tens of times is the last thing you want. - An example is filling 1000 regions (with one family) equally; let's consider a lower bound for global memstore - usage of 5GB (the region server would have a big heap). - Once it reaches 5GB it will force flush the biggest region, - at that point they should almost all have about 5MB of data, so - it would flush that amount. 5MB inserted later, it would flush another - region that will now have a bit over 5MB of data, and so on. - This is currently the main limiting factor for the number of regions; see - for the detailed formula. - - The master, as currently implemented, is allergic to tons of regions, and will - take a lot of time assigning them and moving them around in batches. - The reason is that it's heavy on ZK usage, and it's not very async - at the moment (this could really be improved -- and has been improved a bunch - in 0.96 hbase). - - - In older versions of HBase (pre-v2 hfile, 0.90 and previous), tons of regions - on a few RS can cause the store file index to rise, increasing heap usage and potentially - creating memory pressure or OOME on the RSs. - - - Another issue is the effect of the number of regions on MapReduce jobs; it is typical to have one mapper per HBase region. - Thus, hosting only 5 regions per RS may not be enough to get a sufficient number of tasks for a MapReduce job, while 1000 regions will generate far too many tasks. - - See for configuration guidelines. - -
    - -
    - -
    - Region-RegionServer Assignment - This section describes how Regions are assigned to RegionServers. - - -
    - Startup - When HBase starts regions are assigned as follows (short version): - - The Master invokes the AssignmentManager upon startup. - - The AssignmentManager looks at the existing region assignments in META. - - If the region assignment is still valid (i.e., if the RegionServer is still online) - then the assignment is kept. - - If the assignment is invalid, then the LoadBalancerFactory is invoked to assign the - region. The DefaultLoadBalancer will randomly assign the region to a RegionServer. - - META is updated with the RegionServer assignment (if needed) and the RegionServer start codes - (start time of the RegionServer process) upon region opening by the RegionServer. - - - -
    - -
    - Failover - When a RegionServer fails: - - The regions immediately become unavailable because the RegionServer is - down. - - - The Master will detect that the RegionServer has failed. - - - The region assignments will be considered invalid and will be re-assigned just - like the startup sequence. - - - In-flight queries are re-tried, and not lost. - - - Operations are switched to a new RegionServer within the following amount of - time: - ZooKeeper session timeout + split time + assignment/replay time - - - -
    - -
    - Region Load Balancing - - Regions can be periodically moved by the LoadBalancer. - -
    - -
    - Region State Transition - HBase maintains a state for each region and persists the state in META. The state - of the META region itself is persisted in ZooKeeper. You can see the states of regions - in transition in the Master web UI. Following is the list of possible region - states. - - - Possible Region States - - OFFLINE: the region is offline and not opening - - - OPENING: the region is in the process of being opened - - - OPEN: the region is open and the region server has notified the master - - - FAILED_OPEN: the region server failed to open the region - - - CLOSING: the region is in the process of being closed - - - CLOSED: the region server has closed the region and notified the master - - - FAILED_CLOSE: the region server failed to close the region - - - SPLITTING: the region server notified the master that the region is - splitting - - - SPLIT: the region server notified the master that the region has finished - splitting - - - SPLITTING_NEW: this region is being created by a split which is in - progress - - - MERGING: the region server notified the master that this region is being merged - with another region - - - MERGED: the region server notified the master that this region has been - merged - - - MERGING_NEW: this region is being created by a merge of two regions - - - -
    - Region State Transitions - - - - -
    - - - - - Graph Legend - - Brown: Offline state, a special state that can be transient (after closed before - opening), terminal (regions of disabled tables), or initial (regions of newly - created tables) - - Palegreen: Online state in which regions can serve requests - - Lightblue: Transient states - - Red: Failure states that need OPS attention - - Gold: Terminal states of regions split/merged - - Grey: Initial states of regions created through split/merge - - - - Region State Transitions Explained - - The master moves a region from OFFLINE to - OPENING state and tries to assign the region to a region - server. The region server may or may not have received the open region request. The - master retries sending the open region request to the region server until the RPC - goes through or the master runs out of retries. After the region server receives the - open region request, the region server begins opening the region. - - - If the master has run out of retries, the master prevents the region server - from opening the region by moving the region to CLOSING state and - trying to close it, even if the region server is starting to open the region. - - - After the region server opens the region, it continues to try to notify the - master until the master moves the region to OPEN state and - notifies the region server. The region is now open. - - - If the region server cannot open the region, it notifies the master. The master - moves the region to CLOSED state and tries to open the region on - a different region server. - - - If the master cannot open the region after a certain number of attempts, it - moves the region to FAILED_OPEN state, and takes no further - action until an operator intervenes from the HBase shell, or the server is - dead. - - - The master moves a region from OPEN to - CLOSING state. The region server holding the region may or may - not have received the close region request. The master retries sending the close - request to the server until the RPC goes through or the master runs out of - retries. - - - If the region server is not online, or throws - NotServingRegionException, the master moves the region to - OFFLINE state and re-assigns it to a different region - server. - - - If the region server is online, but not reachable after the master runs out of - retries, the master moves the region to FAILED_CLOSE state and - takes no further action until an operator intervenes from the HBase shell, or the - server is dead. - - - If the region server gets the close region request, it closes the region and - notifies the master. The master moves the region to CLOSED state - and re-assigns it to a different region server. - - - Before assigning a region, the master moves the region to - OFFLINE state automatically if it is in - CLOSED state. - - - When a region server is about to split a region, it notifies the master. The - master moves the region to be split from OPEN to - SPLITTING state and adds the two new regions to be created to - the region server. These two regions are in SPLITTING_NEW state - initially. - - - After notifying the master, the region server starts to split the region. Once - past the point of no return, the region server notifies the master again so the - master can update the META. However, the master does not update the region states - until it is notified by the server that the split is done. 
If the split is - successful, the splitting region is moved from SPLITTING to - SPLIT state and the two new regions are moved from - SPLITTING_NEW to OPEN state. - - - If the split fails, the splitting region is moved from - SPLITTING back to OPEN state, and the two - new regions which were created are moved from SPLITTING_NEW to - OFFLINE state. - - - When a region server is about to merge two regions, it notifies the master - first. The master moves the two regions to be merged from OPEN to - MERGING state, and adds the new region, which will hold the - contents of the merged regions, to the region server. The new region is in - MERGING_NEW state initially. - - - After notifying the master, the region server starts to merge the two regions. - Once past the point of no return, the region server notifies the master again so the - master can update the META. However, the master does not update the region states - until it is notified by the region server that the merge has completed. If the merge - is successful, the two merging regions are moved from MERGING to - MERGED state and the new region is moved from - MERGING_NEW to OPEN state. - - - If the merge fails, the two merging regions are moved from - MERGING back to OPEN state, and the new - region which was created to hold the contents of the merged regions is moved from - MERGING_NEW to OFFLINE state. - - - For regions in FAILED_OPEN or FAILED_CLOSE - states, the master tries to close them again when they are reassigned by an - operator via the HBase Shell. - - - - -
    - Region-RegionServer Locality - Over time, Region-RegionServer locality is achieved via HDFS block replication. - The HDFS client does the following by default when choosing locations to write replicas: - - First replica is written to local node - - Second replica is written to a random node on another rack - - Third replica is written on the same rack as the second, but on a different node chosen randomly - - Subsequent replicas are written on random nodes on the cluster. See Replica Placement: The First Baby Steps on this page: HDFS Architecture - - - Thus, HBase eventually achieves locality for a region after a flush or a compaction. - In a RegionServer failover situation a RegionServer may be assigned regions with non-local - StoreFiles (because none of the replicas are local), however as new data is written - in the region, or the table is compacted and StoreFiles are re-written, they will become "local" - to the RegionServer. - - For more information, see Replica Placement: The First Baby Steps on this page: HDFS Architecture - and also Lars George's blog on HBase and HDFS locality. - -
    - -
    - Region Splits - Regions split when they reach a configured threshold. - Below we treat the topic in short. For a longer exposition, - see Apache HBase Region Splitting and Merging - by our Enis Soztutar. - - - Splits run unaided on the RegionServer; i.e. the Master does not - participate. The RegionServer splits a region, offlines the split - region and then adds the daughter regions to META, opens daughters on - the parent's hosting RegionServer and then reports the split to the - Master. See for how to manually manage - splits (and for why you might do this) -
    - Custom Split Policies - The default split policy can be overridden using a custom RegionSplitPolicy (HBase 0.94+). - Typically a custom split policy should extend HBase's default split policy: ConstantSizeRegionSplitPolicy. - - The policy can be set globally through the HBaseConfiguration used or on a per table basis: -
-HTableDescriptor myHtd = ...;
-myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName());
- - -
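- For illustration, here is a minimal sketch of wiring up a custom split policy, assuming a hypothetical com.example.MyCustomSplitPolicy class; the hbase.regionserver.region.split.policy key and the HTableDescriptor.SPLIT_POLICY constant should be checked against the HBase version in use.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class SplitPolicyConfigExample {
  public static void main(String[] args) {
    // Cluster-wide default for newly created regions (hypothetical custom class).
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.regionserver.region.split.policy",
        "com.example.MyCustomSplitPolicy");

    // Per-table override, as in the snippet above.
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("test_table"));
    htd.setValue(HTableDescriptor.SPLIT_POLICY, "com.example.MyCustomSplitPolicy");
  }
}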
    -
    - -
    - Manual Region Splitting - It is possible to manually split your table, either at table creation (pre-splitting), - or at a later time as an administrative action. You might choose to split your region for - one or more of the following reasons. There may be other valid reasons, but the need to - manually split your table might also point to problems with your schema design. - - Reasons to Manually Split Your Table - - Your data is sorted by timeseries or another similar algorithm that sorts new data - at the end of the table. This means that the Region Server holding the last region is - always under load, and the other Region Servers are idle, or mostly idle. See also - . - - - You have developed an unexpected hotspot in one region of your table. For - instance, an application which tracks web searches might be inundated by a lot of - searches for a celebrity in the event of news about that celebrity. See for more discussion about this particular - scenario. - - - After a big increase to the number of Region Servers in your cluster, to get the - load spread out quickly. - - - Before a bulk-load which is likely to cause unusual and uneven load across - regions. - - - See for a discussion about the dangers and - possible benefits of managing splitting completely manually. -
    - Determining Split Points - The goal of splitting your table manually is to improve the chances of balancing the load across the cluster in situations where good rowkey design alone won't get you there. Keeping that in mind, the way you split your regions is very dependent upon the characteristics of your data. It may be that you already know the best way to split your table. If not, the way you split your table depends on what your keys are like. - - - Alphanumeric Rowkeys - - If your rowkeys start with a letter or number, you can split your table at letter or number boundaries. For instance, the following command creates a table with regions that split at each vowel, so the first region has A-D, the second region has E-H, the third region has I-N, the fourth region has O-T, and the fifth region has U-Z. - hbase> create 'test_table', 'f1', SPLITS=> ['a', 'e', 'i', 'o', 'u'] - The following command splits an existing table at split point '2'. - hbase> split 'test_table', '2' - You can also split a specific region by referring to its ID. You can find the region ID by looking at either the table or region in the Web UI. It will be a long number such as t2,1,1410227759524.829850c6eaba1acc689480acd8f081bd.. The format is table_name,start_key,region_id. To split that region into two, as close to equally as possible (at the nearest row boundary), issue the following command. - hbase> split 't2,1,1410227759524.829850c6eaba1acc689480acd8f081bd.' - The split key is optional. If it is omitted, the table or region is split in half. - The following example shows how to use the RegionSplitter to create 10 regions, split at hexadecimal values. - hbase org.apache.hadoop.hbase.util.RegionSplitter test_table HexStringSplit -c 10 -f f1 - - - - Using a Custom Algorithm - - The RegionSplitter tool is provided with HBase, and uses a SplitAlgorithm to determine split points for you. As parameters, you give it the algorithm, desired number of regions, and column families. It includes two split algorithms. The first is the HexStringSplit algorithm, which assumes the row keys are hexadecimal strings. The second, UniformSplit, assumes the row keys are random byte arrays. You will probably need to develop your own SplitAlgorithm, using the provided ones as models. - - -
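- The same pre-splitting can be done from the Java client API. The following is a minimal sketch, assuming the HBase 1.0 Connection/Admin API; the table and family names mirror the shell examples above.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("test_table"));
      desc.addFamily(new HColumnDescriptor("f1"));
      // Pre-split at each vowel, matching the shell example above.
      byte[][] splitKeys = new byte[][] {
          Bytes.toBytes("a"), Bytes.toBytes("e"), Bytes.toBytes("i"),
          Bytes.toBytes("o"), Bytes.toBytes("u") };
      admin.createTable(desc, splitKeys);

      // Split an existing table at an explicit split point.
      admin.split(TableName.valueOf("test_table"), Bytes.toBytes("2"));
    }
  }
}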
    -
    -
    - Online Region Merges - - Both the Master and the RegionServer participate in online region merges. The client sends a merge RPC to the master. The master moves the regions to be merged to the same RegionServer, the one where the more heavily loaded region resides, and then sends the merge request to that RegionServer, which runs the merge. As with region splits, the merge runs as a local transaction on the RegionServer: it offlines the regions, merges the two regions on the file system, atomically deletes the merging regions from META and adds the merged region to META, opens the merged region on the RegionServer, and finally reports the merge to the Master. - - An example of region merges in the hbase shell - $ hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME' - hbase> merge_region 'ENCODED_REGIONNAME', 'ENCODED_REGIONNAME', true - - This is an asynchronous operation; the call returns immediately without waiting for the merge to complete. Passing 'true' as the optional third parameter forces the merge; without it, the merge will fail unless the regions are adjacent. 'force' is for expert use only. -
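- The same merge can be requested programmatically. Below is a minimal sketch assuming the Admin.mergeRegions() call available in the HBase 1.0 client; the encoded region names are placeholders, just as in the shell example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class MergeRegionsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Placeholders for the encoded region names shown in the shell example.
      byte[] regionA = Bytes.toBytes("ENCODED_REGIONNAME_A");
      byte[] regionB = Bytes.toBytes("ENCODED_REGIONNAME_B");
      // false = do not force; the merge fails unless the regions are adjacent.
      admin.mergeRegions(regionA, regionB, false);
    }
  }
}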
    - -
    - Store - A Store hosts a MemStore and 0 or more StoreFiles (HFiles). A Store corresponds to a column family for a table for a given region. - -
    - MemStore - The MemStore holds in-memory modifications to the Store. Modifications are - Cells/KeyValues. When a flush is requested, the current memstore is moved to a snapshot and is - cleared. HBase continues to serve edits from the new memstore and backing snapshot until - the flusher reports that the flush succeeded. At this point, the snapshot is discarded. - Note that when the flush happens, Memstores that belong to the same region will all be - flushed. -
    -
    - MemStore Flush - A MemStore flush can be triggered under any of the conditions listed below. The minimum flush unit is per region, not at individual MemStore level. - - - When a MemStore reaches the value specified by hbase.hregion.memstore.flush.size, all MemStores that belong to its region will be flushed out to disk. - - - When overall memstore usage reaches the value specified by hbase.regionserver.global.memstore.upperLimit, MemStores from various regions will be flushed out to disk to reduce overall MemStore usage in a Region Server. The flush order is based on the descending order of a region's MemStore usage. Regions will have their MemStores flushed until the overall MemStore usage drops to or slightly below hbase.regionserver.global.memstore.lowerLimit. - - - When the number of WAL files per region server reaches the value specified in hbase.regionserver.max.logs, MemStores from various regions will be flushed out to disk to reduce WAL count. The flush order is based on time. Regions with the oldest MemStores are flushed first until WAL count drops below hbase.regionserver.max.logs. - -
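- The thresholds above are ordinary configuration properties. As a rough sketch (the values shown are illustrative, not recommendations), they can be set in the Configuration used by the cluster or, equivalently, in hbase-site.xml:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MemStoreFlushTuningExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Per-region flush threshold in bytes; illustrative value of 128 MB.
    conf.setLong("hbase.hregion.memstore.flush.size", 128L * 1024 * 1024);
    // Global MemStore limits as a fraction of the RegionServer heap (illustrative).
    conf.setFloat("hbase.regionserver.global.memstore.upperLimit", 0.4f);
    conf.setFloat("hbase.regionserver.global.memstore.lowerLimit", 0.38f);
    // Maximum number of WAL files before flushes are forced (illustrative).
    conf.setInt("hbase.regionserver.max.logs", 32);
  }
}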
    -
    - Scans - - - When a client issues a scan against a table, HBase generates RegionScanner objects, one per region, to serve the scan request. - - - - The RegionScanner object contains a list of StoreScanner objects, one per column family. - - - Each StoreScanner object further contains a list of StoreFileScanner objects, corresponding to each StoreFile and HFile of the corresponding column family, and a list of KeyValueScanner objects for the MemStore. - - - The two lists are merged into one, which is sorted in ascending order with the scan object for the MemStore at the end of the list. - - - When a StoreFileScanner object is constructed, it is associated with a MultiVersionConsistencyControl read point, which is the current memstoreTS, filtering out any new updates beyond the read point. - -
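- The scanner objects described above are created on the server in response to an ordinary client scan. A minimal client-side sketch (table and column family names are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("test_table"))) {
      Scan scan = new Scan();
      scan.addFamily(Bytes.toBytes("f1"));   // one StoreScanner per requested family
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          System.out.println(Bytes.toString(result.getRow()));
        }
      }
    }
  }
}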
    -
    - StoreFile (HFile) - StoreFiles are where your data lives. - -
    HFile Format - The hfile file format is based on - the SSTable file described in the BigTable [2006] paper and on - Hadoop's tfile - (The unit test suite and the compression harness were taken directly from tfile). - Schubert Zhang's blog post on HFile: A Block-Indexed File Format to Store Sorted Key-Value Pairs makes for a thorough introduction to HBase's hfile. Matteo Bertozzi has also put up a - helpful description, HBase I/O: HFile. - - For more information, see the HFile source code. - Also see for information about the HFile v2 format that was included in 0.92. - -
    -
    - HFile Tool - - To view a textualized version of hfile content, you can use the org.apache.hadoop.hbase.io.hfile.HFile tool. Type the following to see usage: $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile For example, to view the content of the file hdfs://10.81.47.41:8020/hbase/TEST/1418428042/DSMP/4759508618286845475, type the following: $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f hdfs://10.81.47.41:8020/hbase/TEST/1418428042/DSMP/4759508618286845475 If you leave off the -v option, you will see just a summary of the hfile. See usage for other things to do with the HFile tool. -
    -
    - StoreFile Directory Structure on HDFS - For more information of what StoreFiles look like on HDFS with respect to the directory structure, see . - -
    -
    - -
    - Blocks - StoreFiles are composed of blocks. The blocksize is configured on a per-ColumnFamily basis. - - Compression happens at the block level within StoreFiles. For more information on compression, see . - - For more information on blocks, see the HFileBlock source code. - -
    -
    - KeyValue - The KeyValue class is the heart of data storage in HBase. KeyValue wraps a byte array and takes offsets and lengths into the passed array which specify where to start interpreting the content as KeyValue. - - The KeyValue format inside a byte array is: - - keylength - valuelength - key - value - - - The Key is further decomposed as: - - rowlength - row (i.e., the rowkey) - columnfamilylength - columnfamily - columnqualifier - timestamp - keytype (e.g., Put, Delete, DeleteColumn, DeleteFamily) - - - KeyValue instances are not split across blocks. For example, if there is an 8 MB KeyValue, even if the block-size is 64kb, this KeyValue will be read in as a coherent block. For more information, see the KeyValue source code. -
    Example - To emphasize the points above, examine what happens with two Puts for two different columns for the same row: - - Put #1: rowkey=row1, cf:attr1=value1 - Put #2: rowkey=row1, cf:attr2=value2 - - Even though these are for the same row, a KeyValue is created for each column: - Key portion for Put #1: - - rowlength ------------> 4 - row -----------------> row1 - columnfamilylength ---> 2 - columnfamily --------> cf - columnqualifier ------> attr1 - timestamp -----------> server time of Put - keytype -------------> Put - - - Key portion for Put #2: - - rowlength ------------> 4 - row -----------------> row1 - columnfamilylength ---> 2 - columnfamily --------> cf - columnqualifier ------> attr2 - timestamp -----------> server time of Put - keytype -------------> Put - - - - It is critical to understand that the rowkey, ColumnFamily, and column (aka columnqualifier) are embedded within - the KeyValue instance. The longer these identifiers are, the bigger the KeyValue is. -
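- In client code, the two Puts from the example above look like the following sketch; the KeyValues described here are what HBase builds from these calls (the table name is hypothetical, and Put.add() may be named addColumn() in later releases).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TwoPutsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("mytable"))) {
      // Put #1: rowkey=row1, cf:attr1=value1
      Put put1 = new Put(Bytes.toBytes("row1"));
      put1.add(Bytes.toBytes("cf"), Bytes.toBytes("attr1"), Bytes.toBytes("value1"));
      // Put #2: rowkey=row1, cf:attr2=value2
      Put put2 = new Put(Bytes.toBytes("row1"));
      put2.add(Bytes.toBytes("cf"), Bytes.toBytes("attr2"), Bytes.toBytes("value2"));
      table.put(put1);
      table.put(put2);
    }
  }
}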
    - -
    -
    - Compaction - - Ambiguous Terminology - A StoreFile is a facade of HFile. In terms of compaction, use of - StoreFile seems to have prevailed in the past. - A Store is the same thing as a ColumnFamily. - StoreFiles are related to a Store, or ColumnFamily. - - If you want to read more about StoreFiles versus HFiles and Stores versus - ColumnFamilies, see HBASE-11316. - - - When the MemStore reaches a given size - (hbase.hregion.memstore.flush.size), it flushes its contents to a - StoreFile. The number of StoreFiles in a Store increases over time. - Compaction is an operation which reduces the number of - StoreFiles in a Store, by merging them together, in order to increase performance on - read operations. Compactions can be resource-intensive to perform, and can either help - or hinder performance depending on many factors. - Compactions fall into two categories: minor and major. Minor and major compactions - differ in the following ways. - Minor compactions usually select a small number of small, - adjacent StoreFiles and rewrite them as a single StoreFile. Minor compactions do not - drop (filter out) deletes or expired versions, because of potential side effects. See and for information on how deletes and versions are - handled in relation to compactions. The end result of a minor compaction is fewer, - larger StoreFiles for a given Store. - The end result of a major compaction is a single StoreFile - per Store. Major compactions also process delete markers and max versions. See and for information on how deletes and versions are - handled in relation to compactions. - - - Compaction and Deletions - When an explicit deletion occurs in HBase, the data is not actually deleted. - Instead, a tombstone marker is written. The tombstone marker - prevents the data from being returned with queries. During a major compaction, the - data is actually deleted, and the tombstone marker is removed from the StoreFile. If - the deletion happens because of an expired TTL, no tombstone is created. Instead, the - expired data is filtered out and is not written back to the compacted - StoreFile. - - - - Compaction and Versions - When you create a Column Family, you can specify the maximum number of versions - to keep, by specifying HColumnDescriptor.setMaxVersions(int - versions). The default value is 3. If more versions - than the specified maximum exist, the excess versions are filtered out and not written - back to the compacted StoreFile. - - - - Major Compactions Can Impact Query Results - In some situations, older versions can be inadvertently resurrected if a newer - version is explicitly deleted. See for a more in-depth explanation. - This situation is only possible before the compaction finishes. - - - In theory, major compactions improve performance. However, on a highly loaded - system, major compactions can require an inappropriate number of resources and adversely - affect performance. In a default configuration, major compactions are scheduled - automatically to run once in a 7-day period. This is sometimes inappropriate for systems - in production. You can manage major compactions manually. See . - Compactions do not perform region merges. See for more information on region merging. -
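- Managing major compactions manually usually means disabling the time-based schedule (hbase.hregion.majorcompaction set to 0) and triggering them yourself. Below is a minimal sketch using the Admin API; the table name is hypothetical, and the shell equivalent is major_compact 'test_table'.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class MajorCompactExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Request (asynchronously) a major compaction of every region of the table.
      admin.majorCompact(TableName.valueOf("test_table"));
    }
  }
}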
    - Compaction Policy - HBase 0.96.x and newer - Compacting large StoreFiles, or too many StoreFiles at once, can cause more IO load than your cluster is able to handle without causing performance problems. The method by which HBase selects which StoreFiles to include in a compaction (and whether the compaction is a minor or major compaction) is called the compaction policy. - Prior to HBase 0.96.x, there was only one compaction policy. That original compaction policy is still available as RatioBasedCompactionPolicy. The new default compaction policy, called ExploringCompactionPolicy, was subsequently backported to HBase 0.94 and HBase 0.95, and is the default in HBase 0.96 and newer. It was implemented in HBASE-7842. In short, ExploringCompactionPolicy attempts to select the best possible set of StoreFiles to compact with the least amount of work, while the RatioBasedCompactionPolicy selects the first set that meets the criteria. - Regardless of the compaction policy used, file selection is controlled by several configurable parameters and happens in a multi-step approach. These parameters will be explained in context, and then will be given in a table which shows their descriptions, defaults, and implications of changing them. -
    - Being Stuck - When the MemStore gets too large, it needs to flush its contents to a StoreFile. However, a Store can only have hbase.hstore.blockingStoreFiles files, so the MemStore needs to wait for the number of StoreFiles to be reduced by one or more compactions. Meanwhile, if the MemStore grows larger than hbase.hregion.memstore.flush.size, it is not able to flush its contents to a StoreFile. If the MemStore is too large and the number of StoreFiles is also too high, the algorithm is said to be "stuck". The compaction algorithm checks for this "stuck" situation and provides mechanisms to alleviate it. -
    - -
    - The ExploringCompactionPolicy Algorithm - The ExploringCompactionPolicy algorithm considers each possible set of - adjacent StoreFiles before choosing the set where compaction will have the most - benefit. - One situation where the ExploringCompactionPolicy works especially well is when - you are bulk-loading data and the bulk loads create larger StoreFiles than the - StoreFiles which are holding data older than the bulk-loaded data. This can "trick" - HBase into choosing to perform a major compaction each time a compaction is needed, - and cause a lot of extra overhead. With the ExploringCompactionPolicy, major - compactions happen much less frequently because minor compactions are more - efficient. - In general, ExploringCompactionPolicy is the right choice for most situations, - and thus is the default compaction policy. You can also use - ExploringCompactionPolicy along with . - The logic of this policy can be examined in - hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/ExploringCompactionPolicy.java. - The following is a walk-through of the logic of the - ExploringCompactionPolicy. - - - Make a list of all existing StoreFiles in the Store. The rest of the - algorithm filters this list to come up with the subset of HFiles which will be - chosen for compaction. - - - If this was a user-requested compaction, attempt to perform the requested - compaction type, regardless of what would normally be chosen. Note that even if - the user requests a major compaction, it may not be possible to perform a major - compaction. This may be because not all StoreFiles in the Column Family are - available to compact or because there are too many Stores in the Column - Family. - - - Some StoreFiles are automatically excluded from consideration. These - include: - - - StoreFiles that are larger than - hbase.hstore.compaction.max.size - - - StoreFiles that were created by a bulk-load operation which explicitly - excluded compaction. You may decide to exclude StoreFiles resulting from - bulk loads, from compaction. To do this, specify the - hbase.mapreduce.hfileoutputformat.compaction.exclude - parameter during the bulk load operation. - - - - - Iterate through the list from step 1, and make a list of all potential sets - of StoreFiles to compact together. A potential set is a grouping of - hbase.hstore.compaction.min contiguous StoreFiles in the - list. For each set, perform some sanity-checking and figure out whether this is - the best compaction that could be done: - - - If the number of StoreFiles in this set (not the size of the StoreFiles) - is fewer than hbase.hstore.compaction.min or more than - hbase.hstore.compaction.max, take it out of - consideration. - - - Compare the size of this set of StoreFiles with the size of the smallest - possible compaction that has been found in the list so far. If the size of - this set of StoreFiles represents the smallest compaction that could be - done, store it to be used as a fall-back if the algorithm is "stuck" and no - StoreFiles would otherwise be chosen. See . - - - Do size-based sanity checks against each StoreFile in this set of - StoreFiles. - - - If the size of this StoreFile is larger than - hbase.hstore.compaction.max.size, take it out of - consideration. - - - If the size is greater than or equal to - hbase.hstore.compaction.min.size, sanity-check it - against the file-based ratio to see whether it is too large to be - considered. 
The sanity-checking is successful if: - - - There is only one StoreFile in this set, or - - - For each StoreFile, its size multiplied by - hbase.hstore.compaction.ratio (or - hbase.hstore.compaction.ratio.offpeak if - off-peak hours are configured and it is during off-peak hours) is - less than the sum of the sizes of the other HFiles in the - set. - - - - - - - - - If this set of StoreFiles is still in consideration, compare it to the - previously-selected best compaction. If it is better, replace the - previously-selected best compaction with this one. - - - When the entire list of potential compactions has been processed, perform - the best compaction that was found. If no StoreFiles were selected for - compaction, but there are multiple StoreFiles, assume the algorithm is stuck - (see ) and if so, perform the smallest - compaction that was found in step 3. - - -
    - -
    - RatioBasedCompactionPolicy Algorithm - The RatioBasedCompactionPolicy was the only compaction policy prior to HBase 0.96, though ExploringCompactionPolicy has now been backported to HBase 0.94 and 0.95. To use the RatioBasedCompactionPolicy rather than the ExploringCompactionPolicy, set hbase.hstore.defaultengine.compactionpolicy.class to RatioBasedCompactionPolicy in the hbase-site.xml file. To switch back to the ExploringCompactionPolicy, remove the setting from the hbase-site.xml. - The following section walks you through the algorithm used to select StoreFiles for compaction in the RatioBasedCompactionPolicy. - - - The first phase is to create a list of all candidates for compaction. A list is created of all StoreFiles not already in the compaction queue, and all StoreFiles newer than the newest file that is currently being compacted. This list of StoreFiles is ordered by the sequence ID. The sequence ID is generated when a Put is appended to the write-ahead log (WAL), and is stored in the metadata of the HFile. - - - Check to see if the algorithm is stuck (see ), and if so, a major compaction is forced. This is a key area where the ExploringCompactionPolicy is often a better choice than the RatioBasedCompactionPolicy. - - - If the compaction was user-requested, try to perform the type of compaction that was requested. Note that a major compaction may not be possible if all HFiles are not available for compaction or if too many StoreFiles exist (more than hbase.hstore.compaction.max). - - - Some StoreFiles are automatically excluded from consideration. These include: - - - StoreFiles that are larger than hbase.hstore.compaction.max.size - - - StoreFiles that were created by a bulk-load operation which explicitly excluded compaction. You may decide to exclude StoreFiles resulting from bulk loads, from compaction. To do this, specify the hbase.mapreduce.hfileoutputformat.compaction.exclude parameter during the bulk load operation. - - - - - The maximum number of StoreFiles allowed in a major compaction is controlled by the hbase.hstore.compaction.max parameter. If the list contains more than this number of StoreFiles, a minor compaction is performed even if a major compaction would otherwise have been done. However, a user-requested major compaction still occurs even if there are more than hbase.hstore.compaction.max StoreFiles to compact. - - - If the list contains fewer than hbase.hstore.compaction.min StoreFiles to compact, a minor compaction is aborted. Note that a major compaction can be performed on a single HFile. Its function is to remove deletes and expired versions, and reset locality on the StoreFile. - - - The value of the hbase.hstore.compaction.ratio parameter is multiplied by the sum of StoreFiles smaller than a given file, to determine whether that StoreFile is selected for compaction during a minor compaction. For instance, if hbase.hstore.compaction.ratio is 1.2, FileX is 5 mb, FileY is 2 mb, and FileZ is 3 mb: - 5 <= 1.2 x (2 + 3) or 5 <= 6 - In this scenario, FileX is eligible for minor compaction. If FileX were 7 mb, it would not be eligible for minor compaction. This ratio favors smaller StoreFiles. You can configure a different ratio for use in off-peak hours, using the parameter hbase.hstore.compaction.ratio.offpeak, if you also configure hbase.offpeak.start.hour and hbase.offpeak.end.hour. 
- - - - If the last major compaction was too long ago and there is more than one - StoreFile to be compacted, a major compaction is run, even if it would otherwise - have been minor. By default, the maximum time between major compactions is 7 - days, plus or minus a 4.8 hour period, and determined randomly within those - parameters. Prior to HBase 0.96, the major compaction period was 24 hours. See - hbase.hregion.majorcompaction in the table below to tune or - disable time-based major compactions. - - -
    - -
    - - Parameters Used by Compaction Algorithm - This table contains the main configuration parameters for compaction. This list - is not exhaustive. To tune these parameters from the defaults, edit the - hbase-default.xml file. For a full list of all configuration - parameters available, see - - -
    - - Parameter - Description - Default - - - - - hbase.hstore.compaction.min - The minimum number of StoreFiles which must be eligible for compaction before compaction can run. - The goal of tuning hbase.hstore.compaction.min is to avoid ending up with too many tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction each time you have two StoreFiles in a Store, and this is probably not appropriate. If you set this value too high, all the other values will need to be adjusted accordingly. For most cases, the default value is appropriate. - In previous versions of HBase, the parameter hbase.hstore.compaction.min was called hbase.hstore.compactionThreshold. - - 3 - - - hbase.hstore.compaction.max - The maximum number of StoreFiles which will be selected for a single minor compaction, regardless of the number of eligible StoreFiles. - Effectively, the value of hbase.hstore.compaction.max controls the length of time it takes a single compaction to complete. Setting it larger means that more StoreFiles are included in a compaction. For most cases, the default value is appropriate. - - 10 - - - hbase.hstore.compaction.min.size - A StoreFile smaller than this size will always be eligible for minor compaction. StoreFiles this size or larger are evaluated by hbase.hstore.compaction.ratio to determine if they are eligible. - Because this limit represents the "automatic include" limit for all StoreFiles smaller than this value, this value may need to be reduced in write-heavy environments where many files in the 1-2 MB range are being flushed, because every StoreFile will be targeted for compaction and the resulting StoreFiles may still be under the minimum size and require further compaction. - If this parameter is lowered, the ratio check is triggered more quickly. This addressed some issues seen in earlier versions of HBase but changing this parameter is no longer necessary in most situations. - - 128 MB - - - hbase.hstore.compaction.max.size - A StoreFile larger than this size will be excluded from compaction. The effect of raising hbase.hstore.compaction.max.size is fewer, larger StoreFiles that do not get compacted often. If you feel that compaction is happening too often without much benefit, you can try raising this value. - Long.MAX_VALUE - - - hbase.hstore.compaction.ratio - For minor compaction, this ratio is used to determine whether a given StoreFile which is larger than hbase.hstore.compaction.min.size is eligible for compaction. Its effect is to limit compaction of large StoreFiles. The value of hbase.hstore.compaction.ratio is expressed as a floating-point decimal. - A large ratio, such as 10, will produce a single giant StoreFile. Conversely, a value of .25, will produce behavior similar to the BigTable compaction algorithm, producing four StoreFiles. - A moderate value of between 1.0 and 1.4 is recommended. When tuning this value, you are balancing write costs with read costs. Raising the value (to something like 1.4) will have more write costs, because you will compact larger StoreFiles. However, during reads, HBase will need to seek through fewer StoreFiles to accomplish the read. Consider this approach if you cannot take advantage of . - Alternatively, you can lower this value to something like 1.0 to reduce the background cost of writes, and use to limit the number of StoreFiles touched during reads. - For most cases, the default value is appropriate. 
- - 1.2F - - - hbase.hstore.compaction.ratio.offpeak - The compaction ratio used during off-peak compactions, if off-peak - hours are also configured (see below). Expressed as a floating-point - decimal. This allows for more aggressive (or less aggressive, if you set it - lower than hbase.hstore.compaction.ratio) compaction - during a set time period. Ignored if off-peak is disabled (default). This - works the same as hbase.hstore.compaction.ratio. - 5.0F - - - hbase.offpeak.start.hour - The start of off-peak hours, expressed as an integer between 0 and 23, - inclusive. Set to -1 to disable off-peak. - -1 (disabled) - - - hbase.offpeak.end.hour - The end of off-peak hours, expressed as an integer between 0 and 23, - inclusive. Set to -1 to disable off-peak. - -1 (disabled) - - - hbase.regionserver.thread.compaction.throttle - There are two different thread pools for compactions, one for - large compactions and the other for small compactions. This helps to keep - compaction of lean tables (such as hbase:meta) - fast. If a compaction is larger than this threshold, it goes into the - large compaction pool. In most cases, the default value is - appropriate. - 2 x hbase.hstore.compaction.max x hbase.hregion.memstore.flush.size - (which defaults to 128) - - - hbase.hregion.majorcompaction - Time between major compactions, expressed in milliseconds. Set to - 0 to disable time-based automatic major compactions. User-requested and - size-based major compactions will still run. This value is multiplied by - hbase.hregion.majorcompaction.jitter to cause - compaction to start at a somewhat-random time during a given window of - time. - 7 days (604800000 milliseconds) - - - hbase.hregion.majorcompaction.jitter - A multiplier applied to - hbase.hregion.majorcompaction to cause compaction to - occur a given amount of time either side of - hbase.hregion.majorcompaction. The smaller the - number, the closer the compactions will happen to the - hbase.hregion.majorcompaction interval. Expressed as - a floating-point decimal. - .50F - - - - - - -
    - Compaction File Selection - - Legacy Information - This section has been preserved for historical reasons and refers to the way compaction worked prior to HBase 0.96.x. You can still use this behavior if you enable RatioBasedCompactionPolicy. For information on the way that compactions work in HBase 0.96.x and later, see . - - To understand the core algorithm for StoreFile selection, there is some ASCII-art in the Store source code that will serve as useful reference. It has been copied below: -
-/* normal skew:
- *
- *         older ----> newer
- *     _
- *    | |   _
- *    | |  | |   _
- *  --|-|- |-|- |-|---_-------_-------  minCompactSize
- *    | |  | |  | |  | |  _  | |
- *    | |  | |  | |  | | | | | |
- *    | |  | |  | |  | | | | | |
- */
- Important knobs: - - hbase.hstore.compaction.ratio Ratio used in compaction file selection algorithm (default 1.2f). - - - hbase.hstore.compaction.min (.90 hbase.hstore.compactionThreshold) (files) Minimum number of StoreFiles per Store to be selected for a compaction to occur (default 2). - - - hbase.hstore.compaction.max (files) Maximum number of StoreFiles to compact per minor compaction (default 10). - - - hbase.hstore.compaction.min.size (bytes) Any StoreFile smaller than this setting will automatically be a candidate for compaction. Defaults to hbase.hregion.memstore.flush.size (128 mb). - - - hbase.hstore.compaction.max.size (.92) (bytes) Any StoreFile larger than this setting will automatically be excluded from compaction (default Long.MAX_VALUE). - - - - The minor compaction StoreFile selection logic is size based, and selects a file for compaction when the file <= sum(smaller_files) * hbase.hstore.compaction.ratio. -
    - Minor Compaction File Selection - Example #1 (Basic Example) - This example mirrors an example from the unit test TestCompactSelection. - - - hbase.hstore.compaction.ratio = 1.0f - - - hbase.hstore.compaction.min = 3 (files) - - - hbase.hstore.compaction.max = 5 (files) - - - hbase.hstore.compaction.min.size = 10 (bytes) - - - hbase.hstore.compaction.max.size = 1000 (bytes) - - - The following StoreFiles exist: 100, 50, 23, 12, and 12 bytes apiece (oldest to newest). With the above parameters, the files that would be selected for minor compaction are 23, 12, and 12. - Why? - - 100 --> No, because sum(50, 23, 12, 12) * 1.0 = 97. - - - 50 --> No, because sum(23, 12, 12) * 1.0 = 47. - - - 23 --> Yes, because sum(12, 12) * 1.0 = 24. - - - 12 --> Yes, because the previous file has been included, and because this does not exceed the max-file limit of 5 - - - 12 --> Yes, because the previous file had been included, and because this does not exceed the max-file limit of 5. - - -
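- The selection rule stated above (a file qualifies when it is no larger than the sum of the sizes of the newer files times the ratio, bounded by the min and max file counts) can be sketched in a few lines of stand-alone Java. This is an illustrative re-implementation of the example, not HBase's actual code; run against the sizes from Example #1 it selects the 23, 12 and 12 byte files.

import java.util.ArrayList;
import java.util.List;

public class MinorCompactionSelectionSketch {
  public static void main(String[] args) {
    long[] sizes = {100, 50, 23, 12, 12};   // oldest to newest, from Example #1
    double ratio = 1.0;
    int minFiles = 3, maxFiles = 5;

    List<Long> selected = new ArrayList<>();
    for (int i = 0; i < sizes.length && selected.size() < maxFiles; i++) {
      long sumNewer = 0;
      for (int j = i + 1; j < sizes.length; j++) {
        sumNewer += sizes[j];
      }
      // A file qualifies if it is no larger than ratio * (sum of the newer files),
      // or if an older file has already been selected.
      if (!selected.isEmpty() || sizes[i] <= sumNewer * ratio) {
        selected.add(sizes[i]);
      }
    }
    if (selected.size() < minFiles) {
      selected.clear();               // not enough files to bother compacting
    }
    System.out.println(selected);     // prints [23, 12, 12]
  }
}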
    -
    - Minor Compaction File Selection - Example #2 (Not Enough Files To - Compact) - This example mirrors an example from the unit test - TestCompactSelection. - - hbase.hstore.compaction.ratio = 1.0f - - - hbase.hstore.compaction.min = 3 (files) - - - hbase.hstore.compaction.max = 5 (files) - - - hbase.hstore.compaction.min.size = 10 (bytes) - - - hbase.hstore.compaction.max.size = 1000 (bytes) - - - - The following StoreFiles exist: 100, 25, 12, and 12 bytes apiece (oldest to - newest). With the above parameters, no compaction will be started. - Why? - - 100 --> No, because sum(25, 12, 12) * 1.0 = 47 - - - 25 --> No, because sum(12, 12) * 1.0 = 24 - - - 12 --> No. Candidate because sum(12) * 1.0 = 12, there are only 2 files - to compact and that is less than the threshold of 3 - - - 12 --> No. Candidate because the previous StoreFile was, but there are - not enough files to compact - - - -
    -
    - Minor Compaction File Selection - Example #3 (Limiting Files To Compact) - This example mirrors an example from the unit test - TestCompactSelection. - - hbase.hstore.compaction.ratio = 1.0f - - - hbase.hstore.compaction.min = 3 (files) - - - hbase.hstore.compaction.max = 5 (files) - - - hbase.hstore.compaction.min.size = 10 (bytes) - - - hbase.hstore.compaction.max.size = 1000 (bytes) - - The following StoreFiles exist: 7, 6, 5, 4, 3, 2, and 1 bytes apiece - (oldest to newest). With the above parameters, the files that would be selected for - minor compaction are 7, 6, 5, 4, 3. - Why? - - 7 --> Yes, because sum(6, 5, 4, 3, 2, 1) * 1.0 = 21. Also, 7 is less than - the min-size - - - 6 --> Yes, because sum(5, 4, 3, 2, 1) * 1.0 = 15. Also, 6 is less than - the min-size. - - - 5 --> Yes, because sum(4, 3, 2, 1) * 1.0 = 10. Also, 5 is less than the - min-size. - - - 4 --> Yes, because sum(3, 2, 1) * 1.0 = 6. Also, 4 is less than the - min-size. - - - 3 --> Yes, because sum(2, 1) * 1.0 = 3. Also, 3 is less than the - min-size. - - - 2 --> No. Candidate because previous file was selected and 2 is less than - the min-size, but the max-number of files to compact has been reached. - - - 1 --> No. Candidate because previous file was selected and 1 is less than - the min-size, but max-number of files to compact has been reached. - - - -
    - Impact of Key Configuration Options - - This information is now included in the configuration parameter table in . - -
    -
    -
    -
    - Experimental: Stripe Compactions - Stripe compaction is an experimental feature added in HBase 0.98 which aims to improve compactions for large regions or non-uniformly distributed row keys. In order to achieve smaller and/or more granular compactions, the StoreFiles within a region are maintained separately for several row-key sub-ranges, or "stripes", of the region. The stripes are transparent to the rest of HBase, so other operations on the HFiles or data work without modification. - Stripe compactions change the HFile layout, creating sub-regions within regions. These sub-regions are easier to compact, and should result in fewer major compactions. This approach alleviates some of the challenges of larger regions. - Stripe compaction is fully compatible with and works in conjunction with either the ExploringCompactionPolicy or RatioBasedCompactionPolicy. It can be enabled for existing tables, and the table will continue to operate normally if it is disabled later. -
    -
    - When To Use Stripe Compactions - Consider using stripe compaction if you have either of the following: - - - Large regions. You can get the positive effects of smaller regions without the additional MemStore and region management overhead. - - - Non-uniform keys, such as a time dimension in a key. Only the stripes receiving the new keys will need to compact. Old data will not compact as often, if at all. - - - - Performance Improvements - Performance testing has shown that the performance of reads improves somewhat, and variability of performance of reads and writes is greatly reduced. An overall long-term performance improvement is seen on large non-uniform-row key regions, such as a hash-prefixed timestamp key. These performance gains are the most dramatic on a table which is already large. It is possible that the performance improvement might extend to region splits. -
    - Enabling Stripe Compaction - You can enable stripe compaction for a table or a column family, by setting its - hbase.hstore.engine.class to - org.apache.hadoop.hbase.regionserver.StripeStoreEngine. You - also need to set the hbase.hstore.blockingStoreFiles to a high - number, such as 100 (rather than the default value of 10). - - Enable Stripe Compaction - - If the table already exists, disable the table. - - - Run one of following commands in the HBase shell. Replace the table name - orders_table with the name of your table. - -alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', 'hbase.hstore.blockingStoreFiles' => '100'} -alter 'orders_table', {NAME => 'blobs_cf', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', 'hbase.hstore.blockingStoreFiles' => '100'}} -create 'orders_table', 'blobs_cf', CONFIGURATION => {'hbase.hstore.engine.class' => 'org.apache.hadoop.hbase.regionserver.StripeStoreEngine', 'hbase.hstore.blockingStoreFiles' => '100'} - - - - Configure other options if needed. See for more information. - - - Enable the table. - - - - - Disable Stripe Compaction - - Disable the table. - - - Set the hbase.hstore.engine.class option to either nil or - org.apache.hadoop.hbase.regionserver.DefaultStoreEngine. - Either option has the same effect. - -alter 'orders_table', CONFIGURATION => {'hbase.hstore.engine.class' => ''} - - - - Enable the table. - - - When you enable a large table after changing the store engine either way, a - major compaction will likely be performed on most regions. This is not necessary on - new tables. -
    -
    - Configuring Stripe Compaction - Each of the settings for stripe compaction should be configured at the table or column family level, after disabling the table. If you use HBase shell, the general command pattern is as follows: -
-alter 'orders_table', CONFIGURATION => {'key' => 'value', ..., 'key' => 'value'}
-
    - Region and stripe sizing - You can configure your stripe sizing based upon your region sizing. By default, your new regions will start with one stripe. On the next compaction after the stripe has grown too large (16 x MemStore flush size), it is split into two stripes. Stripe splitting continues as the region grows, until the region is large enough to split. - You can improve this pattern for your own data. A good rule is to aim for a stripe size of at least 1 GB, and about 8-12 stripes for uniform row keys. For example, if your regions are 30 GB, 12 x 2.5 GB stripes might be a good starting point. -
    - Stripe Sizing Settings - - - - - - Setting - Notes - - - - - - hbase.store.stripe.initialStripeCount - - - The number of stripes to create when stripe compaction is enabled. - You can use it as follows: - - For relatively uniform row keys, if you know the approximate - target number of stripes from the above, you can avoid some - splitting overhead by starting with several stripes (2, 5, 10...). - If the early data is not representative of overall row key - distribution, this will not be as efficient. - - - For existing tables with a large amount of data, this setting - will effectively pre-split your stripes. - - - For keys such as hash-prefixed sequential keys, with more than - one hash prefix per region, pre-splitting may make sense. - - - - - - - hbase.store.stripe.sizeToSplit - - The maximum size a stripe grows before splitting. Use this in - conjunction with hbase.store.stripe.splitPartCount to - control the target stripe size (sizeToSplit = splitPartsCount * target - stripe size), according to the above sizing considerations. - - - - hbase.store.stripe.splitPartCount - - The number of new stripes to create when splitting a stripe. The - default is 2, which is appropriate for most cases. For non-uniform row - keys, you can experiment with increasing the number to 3 or 4, to isolate - the arriving updates into narrower slice of the region without additional - splits being required. - - - -
    -
    -
    - MemStore Size Settings - By default, the flush creates several files from one MemStore, according to - existing stripe boundaries and row keys to flush. This approach minimizes write - amplification, but can be undesirable if the MemStore is small and there are many - stripes, because the files will be too small. - In this type of situation, you can set - hbase.store.stripe.compaction.flushToL0 to - true. This will cause a MemStore flush to create a single - file instead. When at least - hbase.store.stripe.compaction.minFilesL0 such files (by - default, 4) accumulate, they will be compacted into striped files. -
    -
    - Normal Compaction Configuration and Stripe Compaction - All the settings that apply to normal compactions (see ) apply to stripe compactions. - The exceptions are the minimum and maximum number of files, which are set to - higher values by default because the files in stripes are smaller. To control - these for stripe compactions, use - hbase.store.stripe.compaction.minFiles and - hbase.store.stripe.compaction.maxFiles, rather than - hbase.hstore.compaction.min and - hbase.hstore.compaction.max. -
    -
    - - - - - - - -
    Bulk Loading -
    Overview - - HBase includes several methods of loading data into tables. - The most straightforward method is to either use the TableOutputFormat - class from a MapReduce job, or use the normal client APIs; however, - these are not always the most efficient methods. - - - The bulk load feature uses a MapReduce job to output table data in HBase's internal - data format, and then directly loads the generated StoreFiles into a running - cluster. Using bulk load will use less CPU and network resources than - simply using the HBase API. - -
    -
    Bulk Load Limitations - As bulk loading bypasses the write path, the WAL doesn’t get written to as part of the process. Replication works by reading the WAL files so it won’t see the bulk loaded data – and the same goes for edits that use Put.setWriteToWAL(false). One way to handle that is to ship the raw files or the HFiles to the other cluster and do the other processing there. -
    -
    Bulk Load Architecture - - The HBase bulk load process consists of two main steps. - -
    Preparing data via a MapReduce job - - The first step of a bulk load is to generate HBase data files (StoreFiles) from - a MapReduce job using HFileOutputFormat. This output format writes - out data in HBase's internal storage format so that they can be - later loaded very efficiently into the cluster. - - - In order to function efficiently, HFileOutputFormat must be - configured such that each output HFile fits within a single region. - In order to do this, jobs whose output will be bulk loaded into HBase - use Hadoop's TotalOrderPartitioner class to partition the map output - into disjoint ranges of the key space, corresponding to the key - ranges of the regions in the table. - - - HFileOutputFormat includes a convenience function, - configureIncrementalLoad(), which automatically sets up - a TotalOrderPartitioner based on the current region boundaries of a - table. - -
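- A rough sketch of the driver code for such a job is shown below, assuming the pre-1.0 HFileOutputFormat.configureIncrementalLoad(Job, HTable) signature (later releases use HFileOutputFormat2 with a Table and RegionLocator); the mapper class, the table name and the paths are hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadPrepareJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "bulk-load-prepare");
    job.setJarByClass(BulkLoadPrepareJob.class);
    // job.setMapperClass(MyBulkLoadMapper.class);  // your mapper, emitting
    //                                              // (ImmutableBytesWritable, Put) pairs
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);
    FileInputFormat.addInputPath(job, new Path("/user/todd/input"));
    FileOutputFormat.setOutputPath(job, new Path("/user/todd/myoutput"));

    // Sets the output format, reducer and TotalOrderPartitioner from the
    // table's current region boundaries.
    HTable table = new HTable(conf, "mytable");
    HFileOutputFormat.configureIncrementalLoad(job, table);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}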
    -
    Completing the data load - - After the data has been prepared using HFileOutputFormat, it is loaded into the cluster using completebulkload. This command line tool iterates through the prepared data files, and for each one determines the region the file belongs to. It then contacts the appropriate Region Server which adopts the HFile, moving it into its storage directory and making the data available to clients. - - If the region boundaries have changed during the course of bulk load preparation, or between the preparation and completion steps, the completebulkload utility will automatically split the data files into pieces corresponding to the new boundaries. This process is not optimally efficient, so users should take care to minimize the delay between preparing a bulk load and importing it into the cluster, especially if other clients are simultaneously loading data through other means. -
    -
    -
    Importing the prepared data using the completebulkload tool - - After a data import has been prepared, either by using the - importtsv tool with the - "importtsv.bulk.output" option or by some other MapReduce - job using the HFileOutputFormat, the - completebulkload tool is used to import the data into the - running cluster. - - - The completebulkload tool simply takes the output path - where importtsv or your MapReduce job put its results, and - the table name to import into. For example: - - $ hadoop jar hbase-server-VERSION.jar completebulkload [-c /path/to/hbase/config/hbase-site.xml] /user/todd/myoutput mytable - - The -c config-file option can be used to specify a file - containing the appropriate hbase parameters (e.g., hbase-site.xml) if - not supplied already on the CLASSPATH (In addition, the CLASSPATH must - contain the directory that has the zookeeper configuration file if - zookeeper is NOT managed by HBase). - - - Note: If the target table does not already exist in HBase, this - tool will create the table automatically. - - This tool will run quickly, after which point the new data will be visible in - the cluster. - -
    -
    See Also - For more information about the referenced utilities, see and . - - - See How-to: Use HBase Bulk Loading, and Why - for a recent blog on current state of bulk loading. - -
    -
    Advanced Usage - - Although the importtsv tool is useful in many cases, advanced users may want to generate data programmatically, or import data from other formats. To get started doing so, dig into ImportTsv.java and check the JavaDoc for HFileOutputFormat. - - The import step of the bulk load can also be done programmatically. See the LoadIncrementalHFiles class for more information. -
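- As a sketch, programmatic completion of the bulk load can look like the following, reusing the output path from the earlier example (the HTable-based doBulkLoad signature is from the 0.98/1.0 client API and may differ in later releases).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadCompleteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
    HTable table = new HTable(conf, "mytable");
    try {
      // Moves the prepared HFiles under /user/todd/myoutput into the table's regions.
      loader.doBulkLoad(new Path("/user/todd/myoutput"), table);
    } finally {
      table.close();
    }
  }
}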
    -
    - -
    HDFS - As HBase runs on HDFS (and each StoreFile is written as a file on HDFS), - it is important to have an understanding of the HDFS Architecture - especially in terms of how it stores files, handles failovers, and replicates blocks. - - See the Hadoop documentation on HDFS Architecture - for more information. - -
    NameNode - The NameNode is responsible for maintaining the filesystem metadata. See the above HDFS Architecture link - for more information. - -
    -
    DataNode - The DataNodes are responsible for storing HDFS blocks. See the above HDFS Architecture link - for more information. - -
    -
    - -
    - Timeline-consistent High Available Reads -
    - Introduction - - HBase, architecturally, has always had a strong consistency guarantee. All reads and writes are routed through a single region server, which guarantees that all writes happen in order, and all reads see the most recent committed data. - - However, because of this single homing of the reads to a single location, if the server becomes unavailable, the regions of the table that were hosted in the region server become unavailable for some time. There are three phases in the region recovery process - detection, assignment, and recovery. Of these, the detection is usually the longest and is presently in the order of 20-30 seconds depending on the zookeeper session timeout. During this time and before the recovery is complete, the clients will not be able to read the region data. - - However, for some use cases, either the data may be read-only, or doing reads against some stale data is acceptable. With timeline-consistent high available reads, HBase can be used for these kinds of latency-sensitive use cases where the application can expect to have a time bound on the read completion. - - For achieving high availability for reads, HBase provides a feature called “region replication”. In this model, for each region of a table, there will be multiple replicas that are opened in different region servers. By default, the region replication is set to 1, so only a single region replica is deployed and there will not be any changes from the original model. If region replication is set to 2 or more, then the master will assign replicas of the regions of the table. The Load Balancer ensures that the region replicas are not co-hosted on the same region server and, if possible, not in the same rack. - - All of the replicas for a single region will have a unique replica_id, starting from 0. The region replica having replica_id==0 is called the primary region, and the others “secondary regions” or secondaries. Only the primary can accept writes from the client, and the primary will always contain the latest changes. Since all writes still have to go through the primary region, the writes are not highly-available (meaning they might block for some time if the region becomes unavailable). - - The writes are asynchronously sent to the secondary region replicas using an “Async WAL replication” feature. This works similarly to HBase’s multi-datacenter replication, but instead the data from a region is replicated to the secondary regions. Each secondary replica always receives and observes the writes in the same order that the primary region committed them. This ensures that the secondaries won’t diverge from the primary region’s data, but since the log replication is async, the data might be stale in secondary regions. In some sense, this design can be thought of as “in-cluster replication”, where instead of replicating to a different datacenter, the data goes to a secondary region to keep the secondary region’s in-memory state up to date. The data files are shared between the primary region and the other replicas, so that there is no extra storage overhead. However, the secondary regions will have recent non-flushed data in their memstores, which increases the memory overhead. - - The Async WAL replication feature is being implemented in Phase 2 of issue HBASE-10070. Before this, region replicas will only be updated with flushed data files from the primary (see hbase.regionserver.storefile.refresh.period below). It is also possible to use this without setting storefile.refresh.period for read-only tables. -
    -
    - Timeline Consistency - - With this feature, HBase introduces a Consistency definition, which can be provided per read operation (get or scan). -
-public enum Consistency {
-    STRONG,
-    TIMELINE
-}
- Consistency.STRONG is the default consistency model provided by HBase. In case the table has region replication = 1, or in a table with region replicas but the reads are done with this consistency, the read is always performed by the primary regions, so that there will not be any change from the previous behaviour, and the client always observes the latest data. - - In case a read is performed with Consistency.TIMELINE, then the read RPC will be sent to the primary region server first. After a short interval (hbase.client.primaryCallTimeout.get, 10ms by default), parallel RPCs to secondary region replicas will also be sent if the primary does not respond back. After this, the result is returned from whichever RPC finishes first. If the response came back from the primary region replica, we can always know that the data is latest. For this, the Result.isStale() API has been added to inspect the staleness. If the result is from a secondary region, then Result.isStale() will be set to true. The user can then inspect this field to possibly reason about the data. - - In terms of semantics, TIMELINE consistency as implemented by HBase differs from pure eventual consistency in these respects: - - - Single homed and ordered updates: Region replication or not, on the write side, there is still only 1 defined replica (primary) which can accept writes. This replica is responsible for ordering the edits and preventing conflicts. This guarantees that two different writes are not committed at the same time by different replicas, so the data does not diverge. With this, there is no need to do read-repair or last-timestamp-wins kind of conflict resolution. - - - The secondaries also apply the edits in the order that the primary committed them. This way the secondaries will contain a snapshot of the primary’s data at any point in time. This is similar to RDBMS replications and even HBase’s own multi-datacenter replication, however in a single cluster. - - - On the read side, the client can detect whether the read is coming from up-to-date data or is stale data. Also, the client can issue reads with different consistency requirements on a per-operation basis to ensure its own semantic guarantees. - - - The client can still observe edits out-of-order, and can go back in time, if it observes reads from one secondary replica first, then another secondary replica. There is no stickiness to region replicas or a transaction-id based guarantee. If required, this can be implemented later though. - - -
    - Timeline Consistency Example - - - - - - Timeline Consistency Example - -
    - - To better understand the TIMELINE semantics, let's look at the above diagram. Let's say that there are two clients, and the first one writes x=1 at first, then x=2 and x=3 later. As above, all writes are handled by the primary region replica. The writes are saved in the write ahead log (WAL), and replicated to the other replicas asynchronously. In the above diagram, notice that replica_id=1 received 2 updates, and its data shows that x=2, while the replica_id=2 only received a single update, and its data shows that x=1. - - If client1 reads with STRONG consistency, it will only talk with the replica_id=0, and thus is guaranteed to observe the latest value of x=3. In case of a client issuing TIMELINE consistency reads, the RPC will go to all replicas (after primary timeout) and the result from the first response will be returned back. Thus the client can see either 1, 2 or 3 as the value of x. Let's say that the primary region has failed and log replication cannot continue for some time. If the client does multiple reads with TIMELINE consistency, it can observe x=2 first, then x=1, and so on. - -
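- In client code, a timeline-consistent read of x looks like the following sketch (the table, family and qualifier names are hypothetical); Result.isStale() reports whether the answer came from a secondary replica.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TimelineGetExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("t1"))) {
      Get get = new Get(Bytes.toBytes("x"));
      get.setConsistency(Consistency.TIMELINE);   // allow a secondary to answer
      Result result = table.get(get);
      byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
      // A stale result came from a secondary replica and may lag the primary.
      System.out.println("stale=" + result.isStale()
          + " value=" + (value == null ? "null" : Bytes.toString(value)));
    }
  }
}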
    -
    - Tradeoffs -
    - Hosting secondary regions for read availability comes with some tradeoffs which should be carefully evaluated per use case. The advantages and disadvantages are as follows. -
    - Advantages -
    - High availability for read-only tables. -
    - High availability for stale reads. -
    - Ability to do very low-latency reads, with good high-percentile (99.9%+) latencies, for stale reads. -
    - Disadvantages -
    - Double or triple memstore usage (depending on the region replication count) for tables with region replication > 1. -
    - Increased block cache usage. -
    - Extra network traffic for log replication. -
    - Extra backup RPCs for replicas. -
    - To serve the region data from multiple replicas, HBase opens the regions in secondary mode on the region servers. Regions opened in secondary mode share the same data files with the primary region replica; however, each secondary region replica has its own memstore to keep the unflushed data (only the primary region can flush). To serve reads from secondary regions, the blocks of the data files may also be cached in the block caches of the secondary regions. -
    -
    - Configuration properties -
    - To use highly available reads, you should set the following properties in the hbase-site.xml file. There is no specific configuration to enable or disable region replicas. Instead, you increase or decrease the number of region replicas per table at table creation time or by altering the table. -
    - Server side properties -
<property>
  <name>hbase.regionserver.storefile.refresh.period</name>
  <value>0</value>
  <description>
    The period (in milliseconds) for refreshing the store files for the secondary regions. 0 means this feature is disabled. Secondary regions see new files (from flushes and compactions) from the primary once the secondary region refreshes the list of files in the region. Too-frequent refreshes might cause extra NameNode pressure. If files cannot be refreshed for longer than the HFile TTL (hbase.master.hfilecleaner.ttl), the requests are rejected. Configuring the HFile TTL to a larger value is also recommended with this setting.
  </description>
</property>
    - One thing to keep in mind is that the region replica placement policy is only enforced by the StochasticLoadBalancer, which is the default balancer. If you are using a custom load balancer (the hbase.master.loadbalancer.class property in hbase-site.xml), replicas of regions might end up being hosted on the same server. -
    -
    - Client side properties -
    - Be sure to set the following for all clients (and servers) that will use region replicas. -
<property>
  <name>hbase.ipc.client.allowsInterrupt</name>
  <value>true</value>
  <description>
    Whether to enable interruption of RPC threads at the client side. This is required for region replicas with fallback RPCs to secondary regions.
  </description>
</property>
<property>
  <name>hbase.client.primaryCallTimeout.get</name>
  <value>10000</value>
  <description>
    The timeout (in microseconds) before secondary fallback RPCs are submitted for get requests with Consistency.TIMELINE to the secondary replicas of the regions. Defaults to 10ms. Setting this lower will increase the number of RPCs, but will lower the p99 latencies.
  </description>
</property>
<property>
  <name>hbase.client.primaryCallTimeout.multiget</name>
  <value>10000</value>
  <description>
    The timeout (in microseconds) before secondary fallback RPCs are submitted for multi-get requests (HTable.get(List<Get>)) with Consistency.TIMELINE to the secondary replicas of the regions. Defaults to 10ms. Setting this lower will increase the number of RPCs, but will lower the p99 latencies.
  </description>
</property>
<property>
  <name>hbase.client.replicaCallTimeout.scan</name>
  <value>1000000</value>
  <description>
    The timeout (in microseconds) before secondary fallback RPCs are submitted for scan requests with Consistency.TIMELINE to the secondary replicas of the regions. Defaults to 1 second. Setting this lower will increase the number of RPCs, but will lower the p99 latencies.
  </description>
</property>
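    - These properties can also be applied programmatically on a per-client basis. The following is a minimal sketch, not taken verbatim from the guide: it simply copies the default values listed above into a client Configuration; how you then create the connection depends on your client version. -
Configuration conf = HBaseConfiguration.create();
// Allow client RPC threads to be interrupted; needed for replica fallback RPCs.
conf.setBoolean("hbase.ipc.client.allowsInterrupt", true);
// Timeouts are in microseconds; these simply mirror the defaults documented above.
conf.setInt("hbase.client.primaryCallTimeout.get", 10000);
conf.setInt("hbase.client.primaryCallTimeout.multiget", 10000);
conf.setInt("hbase.client.replicaCallTimeout.scan", 1000000);
// Pass this conf when creating your connection and table instances.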
    -
    -
    - Creating a table with region replication - - Region replication is a per-table property. All tables have REGION_REPLICATION = 1 by default, which means that there is only one replica per region. You can set and change the number of replicas per region of a table by supplying the REGION_REPLICATION property in the table descriptor. - -
    Shell -
create 't1', 'f1', {REGION_REPLICATION => 2}

describe 't1'
for i in 1..100
put 't1', "r#{i}", 'f1:c1', i
end
flush 't1'
    -
    Java -
    - You can also use setRegionReplication() together with alter table to increase or decrease the region replication for a table, as sketched below. -
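    - A minimal sketch of creating a table with two region replicas through the Java API follows. It is illustrative rather than verbatim from the guide: the table and family names are examples, and admin is assumed to be an existing HBaseAdmin/Admin instance. -
HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t1"));
htd.addFamily(new HColumnDescriptor("f1"));
// REGION_REPLICATION is carried in the table descriptor.
htd.setRegionReplication(2);
admin.createTable(htd);
// To change the count later: disable the table, call setRegionReplication()
// again on the descriptor, then admin.modifyTable() and re-enable the table.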
    -
    -
    - Region splits and merges -
    - Region splits and merges are not yet compatible with regions that have replicas, so you have to pre-split the table and disable region splits. You should also not execute region merges on tables with region replicas. To disable region splits you can use DisabledRegionSplitPolicy as the split policy, as in the sketch below. -
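    - The following is only a sketch of one way to combine pre-splitting with DisabledRegionSplitPolicy from the Java API. The split points and names are examples, admin is an existing admin handle, and it assumes the split policy can be set through the table descriptor's SPLIT_POLICY attribute. -
byte[][] splits = new byte[][] { Bytes.toBytes("g"), Bytes.toBytes("n"), Bytes.toBytes("t") };
HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t2"));
htd.addFamily(new HColumnDescriptor("f1"));
htd.setRegionReplication(2);
// Disable automatic splits for this table (assumes the SPLIT_POLICY table attribute).
htd.setValue("SPLIT_POLICY",
    "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy");
// Pre-split at creation time, since further splits are disabled.
admin.createTable(htd, splits);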
    -
    - User Interface -
    - In the master's user interface, the region replicas of a table are shown together with the primary regions. Notice that the replicas of a region share the same start and end keys and the same region name prefix. The only differences are the appended replica_id (encoded as hex) and the region encoded name. The replica ids are also shown explicitly in the UI. -
    -
    - API and Usage -
    - Shell -
    - You can do reads in the shell with Consistency.TIMELINE semantics as follows: -
get 't1', 'r6', {CONSISTENCY => "TIMELINE"}
    - You can simulate a region server pausing or becoming unavailable and still do a read from a secondary replica: -
hbase(main):001:0> get 't1', 'r6', {CONSISTENCY => "TIMELINE"}
    - Scans work similarly: -
scan 't1', {CONSISTENCY => 'TIMELINE'}
    -
    - Java -
    - You can set the consistency for Gets and Scans and issue requests as follows (see the consolidated sketch below). -
    - You can also pass multiple gets: -
List<Get> gets = new ArrayList<Get>();
gets.add(get1);
...
Result[] results = table.get(gets);
    - And Scans, in the same way. You can inspect whether a result came from the primary region or not by calling the Result.isStale() method. -
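    - Pulling this together, here is a hedged sketch (not verbatim from the guide) of a client issuing TIMELINE reads and checking staleness; table is an existing table handle and the row key is an example. -
Get get = new Get(Bytes.toBytes("r6"));
get.setConsistency(Consistency.TIMELINE);
Result result = table.get(get);
if (result.isStale()) {
  // Served by a secondary replica; the value may lag the primary.
}

Scan scan = new Scan();
scan.setConsistency(Consistency.TIMELINE);
ResultScanner scanner = table.getScanner(scan);
for (Result r : scanner) {
  // r.isStale() can be checked per result as well.
}
scanner.close();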
    -
    - -
    - Resources - - - More information about the design and implementation can be found at the jira - issue: HBASE-10070 - - - - HBaseCon 2014 talk also contains some - details and slides. - - -
    -
    - -
    + @@ -5013,1091 +104,17 @@ if (result.isStale()) { - - - FAQ - - General - - When should I use HBase? - - See the in the Architecture chapter. - - - - - Are there other HBase FAQs? - - - See the FAQ that is up on the wiki, HBase Wiki FAQ. - - - - - Does HBase support SQL? - - - Not really. SQL-ish support for HBase via Hive is in development, however Hive is based on MapReduce which is not generally suitable for low-latency requests. - See the section for examples on the HBase client. - - - - - How can I find examples of NoSQL/HBase? - - See the link to the BigTable paper in in the appendix, as - well as the other papers. - - - - - What is the history of HBase? - - See . - - - - - - Upgrading - - - How do I upgrade Maven-managed projects from HBase 0.94 to HBase 0.96+? - - - In HBase 0.96, the project moved to a modular structure. Adjust your project's - dependencies to rely upon the hbase-client module or another - module as appropriate, rather than a single JAR. You can model your Maven depency - after one of the following, depending on your targeted version of HBase. See or for more - information. - - Maven Dependency for HBase 0.98 - - org.apache.hbase - hbase-client - 0.98.5-hadoop2 - - ]]> - - - Maven Dependency for HBase 0.96 - - org.apache.hbase - hbase-client - 0.96.2-hadoop2 - - ]]> - - - Maven Dependency for HBase 0.94 - - org.apache.hbase - hbase - 0.94.3 - - ]]> - - - - - Architecture - - How does HBase handle Region-RegionServer assignment and locality? - - - See . - - - - - Configuration - - How can I get started with my first cluster? - - - See . - - - - - Where can I learn about the rest of the configuration options? - - - See . - - - - - Schema Design / Data Access - - How should I design my schema in HBase? - - - See and - - - - - - How can I store (fill in the blank) in HBase? - - - - See . - - - - - - How can I handle secondary indexes in HBase? - - - - See - - - - - Can I change a table's rowkeys? - - This is a very common question. You can't. See . - - - - What APIs does HBase support? - - - See , and . - - - - - MapReduce - - How can I use MapReduce with HBase? - - - See - - - - - Performance and Troubleshooting - - - How can I improve HBase cluster performance? - - - - See . - - - - - - How can I troubleshoot my HBase cluster? - - - - See . - - - - - Amazon EC2 - - - I am running HBase on Amazon EC2 and... - - - - EC2 issues are a special case. See Troubleshooting and Performance sections. - - - - - Operations - - - How do I manage my HBase cluster? - - - - See - - - - - - How do I back up my HBase cluster? - - - - See - - - - - HBase in Action - - Where can I find interesting videos and presentations on HBase? - - - See - - - - - - - - - hbck In Depth - HBaseFsck (hbck) is a tool for checking for region consistency and table integrity problems -and repairing a corrupted HBase. It works in two basic modes -- a read-only inconsistency -identifying mode and a multi-phase read-write repair mode. - -
    - Running hbck to identify inconsistencies -
To check whether your HBase cluster has corruptions, run hbck against your HBase cluster:

$ ./bin/hbase hbck

At the end of the command's output it prints OK or tells you the number of INCONSISTENCIES present. You may also want to run hbck a few times because some inconsistencies can be transient (e.g. the cluster is starting up or a region is splitting). Operationally you may want to run hbck regularly and set up an alert (e.g. via Nagios) if it repeatedly reports inconsistencies. A run of hbck will report a list of inconsistencies along with a brief description of the regions and tables affected. Using the -details option will report more details, including a representative listing of all the splits present in all the tables.

$ ./bin/hbase hbck -details

If you just want to know whether some tables are corrupted, you can limit hbck to identify inconsistencies in only specific tables. For example, the following command would only attempt to check tables TableFoo and TableBar. The benefit is that hbck will run in less time.

$ ./bin/hbase hbck TableFoo TableBar
    -
    Inconsistencies -
    - If inconsistencies continue to be reported after several runs, you may have encountered a corruption. These should be rare, but in the event they occur, newer versions of HBase ship the hbck tool with automatic repair options. -
    - There are two invariants that, when violated, create inconsistencies in HBase: -
    - HBase's region consistency invariant is satisfied if every region is assigned and deployed on exactly one region server, and all places where this state is kept are in accordance. -
    - HBase's table integrity invariant is satisfied if, for each table, every possible row key resolves to exactly one region. -
    - Repairs generally work in three phases: a read-only information-gathering phase that identifies inconsistencies, a table integrity repair phase that restores the table integrity invariant, and finally a region consistency repair phase that restores the region consistency invariant. Starting from version 0.90.0, hbck could detect region consistency problems and report on a subset of possible table integrity problems. It also included the ability to automatically fix the most common inconsistency, region assignment and deployment consistency problems. This repair could be done by using the -fix command line option. These fixes close regions if they are open on the wrong server or on multiple region servers, and also assign regions to region servers if they are not open. -
    - Starting from HBase versions 0.90.7, 0.92.2 and 0.94.0, several new command line options were introduced to aid in repairing a corrupted HBase. This hbck sometimes goes by the nickname "uberhbck". Each particular version of uberhbck is compatible with HBase installs of the same major version (the 0.90.7 uberhbck can repair a 0.90.4, for example). However, versions <=0.90.6 and <=0.92.1 may require restarting the master or failing over to a backup master. -
    -
    Localized repairs -
    - When repairing a corrupted HBase, it is best to repair the lowest-risk inconsistencies first. These are generally region consistency repairs: localized single-region repairs that only modify in-memory data, ephemeral ZooKeeper data, or patch holes in the META table. Region consistency requires that the HBase instance has the state of the region's data in HDFS (.regioninfo files), the region's row in the hbase:meta table, and the region's deployment/assignments on region servers and the master all in accordance. Options for repairing region consistency include: -
    - -fixAssignments (equivalent to the 0.90 -fix option) repairs unassigned, incorrectly assigned or multiply assigned regions. -
    - -fixMeta, which removes meta rows when corresponding regions are not present in HDFS and adds new meta rows if the regions are present in HDFS but not in META. -
    - To fix deployment and assignment problems you can run this command:

$ ./bin/hbase hbck -fixAssignments

To fix deployment and assignment problems as well as repairing incorrect meta rows you can run this command:

$ ./bin/hbase hbck -fixAssignments -fixMeta

There are a few classes of table integrity problems that are low-risk repairs. The first two are degenerate regions (startkey == endkey) and backwards regions (startkey > endkey). These are automatically handled by sidelining the data to a temporary directory (/hbck/xxxx). The third low-risk class is HDFS region holes. These can be repaired by using the: -
    - -fixHdfsHoles option for fabricating new empty regions on the file system. If holes are detected you can use -fixHdfsHoles, and should include -fixMeta and -fixAssignments to make the new region consistent. -

$ ./bin/hbase hbck -fixAssignments -fixMeta -fixHdfsHoles

Since this is a common operation, we've added the -repairHoles flag that is equivalent to the previous command:

$ ./bin/hbase hbck -repairHoles

If inconsistencies still remain after these steps, you most likely have table integrity problems related to orphaned or overlapping regions. -
    -
    Region Overlap Repairs -
Table integrity problems can require repairs that deal with overlaps. This is a riskier operation because it requires modifications to the file system, requires some decision making, and may require some manual steps. For these repairs it is best to analyze the output of an hbck -details run so that you isolate repair attempts to only the problems the checks identify. Because this is riskier, there are safeguards that should be used to limit the scope of the repairs. WARNING: These repairs are relatively new and have only been tested on online but idle HBase instances (no reads/writes). Use at your own risk in an active production environment! The options for repairing table integrity violations include: -
    - -fixHdfsOrphans option for "adopting" a region directory that is missing a region metadata file (the .regioninfo file). -
    - -fixHdfsOverlaps ability for fixing overlapping regions. -
    - When repairing overlapping regions, a region's data can be modified on the file system in two ways: 1) by merging regions into a larger region or 2) by sidelining regions, that is, moving data to a "sideline" directory from where it can be restored later. Merging a large number of regions is technically correct but could result in an extremely large region that requires a series of costly compactions and splitting operations. In these cases, it is probably better to sideline the regions that overlap with the most other regions (likely the largest ranges) so that merges can happen on a more reasonable scale. Since these sidelined regions are already laid out in HBase's native directory and HFile format, they can be restored by using HBase's bulk load mechanism. The default safeguard thresholds are conservative. These options let you override the default thresholds and enable the large-region sidelining feature: -
    - -maxMerge <n> maximum number of overlapping regions to merge. -
    - -sidelineBigOverlaps if more than maxMerge regions are overlapping, attempt to sideline the regions overlapping with the most other regions. -
    - -maxOverlapsToSideline <n> if sidelining large overlapping regions, sideline at most n regions. -
    - Since often you just want to get the tables repaired, you can use this option to turn on all repair options: -
    - -repair includes all the region consistency options and only the hole-repairing table integrity options. -
    - Finally, there are safeguards to limit repairs to only specific tables. For example, the following command would only attempt to check and repair tables TableFoo and TableBar:

$ ./bin/hbase hbck -repair TableFoo TableBar
    Special cases: Meta is not properly assigned -There are a few special cases that hbck can handle as well. -Sometimes the meta table’s only region is inconsistently assigned or deployed. In this case -there is a special -fixMetaOnly option that can try to fix meta assignments. - -$ ./bin/hbase hbck -fixMetaOnly -fixAssignments - -
    -
    Special cases: HBase version file is missing -
HBase's data on the file system requires a version file in order to start. If this file is missing, you can use the -fixVersionFile option to fabricate a new HBase version file. This assumes that the version of hbck you are running is the appropriate version for the HBase cluster.
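For example, following the same invocation pattern as the other repair options:

$ ./bin/hbase hbck -fixVersionFile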
    -
    Special case: Root and META are corrupt. -
The most drastic corruption scenario is the case where ROOT or META is corrupted and HBase will not start. In this case you can use the OfflineMetaRepair tool to create new ROOT and META regions and tables. This tool assumes that HBase is offline. It then marches through the existing HBase home directory and loads as much information from the region metadata files (.regioninfo files) as possible from the file system. If the region metadata has proper table integrity, it sidelines the original root and meta table directories, and builds new ones with pointers to the region directories and their data.

$ ./bin/hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair

NOTE: This tool is not as clever as uberhbck, but it can be used to bootstrap repairs that uberhbck can complete. If the tool succeeds you should be able to start HBase and run online repairs if necessary.
    -
    Special cases: Offline split parent -
    - Once a region is split, the offline parent is cleaned up automatically. Sometimes, daughter regions are split again before their parents are cleaned up. HBase can clean up parents in the right order, but occasionally some offline split parents linger: they are in META and in HDFS, are not deployed, and HBase cannot clean them up. In this case, you can use the -fixSplitParents option to reset them in META to be online and not split, so that hbck can merge them with other regions if the fix-overlapping-regions option is used. -
    - This option should not normally be used, and it is not included in -fixAll. -
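    - An example invocation, following the same pattern as the other repair options (combine it with the overlap-fixing options as needed): -

$ ./bin/hbase hbck -fixSplitParents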
    -
    -
    - + + - - - Compression and Data Block Encoding In - HBase<indexterm><primary>Compression</primary><secondary>Data Block - Encoding</secondary><seealso>codecs</seealso></indexterm> - - Codecs mentioned in this section are for encoding and decoding data blocks or row keys. - For information about replication codecs, see . - - Some of the information in this section is pulled from a discussion on the - HBase Development mailing list. - HBase supports several different compression algorithms which can be enabled on a - ColumnFamily. Data block encoding attempts to limit duplication of information in keys, taking - advantage of some of the fundamental designs and patterns of HBase, such as sorted row keys - and the schema of a given table. Compressors reduce the size of large, opaque byte arrays in - cells, and can significantly reduce the storage space needed to store uncompressed - data. - Compressors and data block encoding can be used together on the same ColumnFamily. - - - Changes Take Effect Upon Compaction - If you change compression or encoding for a ColumnFamily, the changes take effect during - compaction. - - - Some codecs take advantage of capabilities built into Java, such as GZip compression. - Others rely on native libraries. Native libraries may be available as part of Hadoop, such as - LZ4. In this case, HBase only needs access to the appropriate shared library. Other codecs, - such as Google Snappy, need to be installed first. Some codecs are licensed in ways that - conflict with HBase's license and cannot be shipped as part of HBase. - - This section discusses common codecs that are used and tested with HBase. No matter what - codec you use, be sure to test that it is installed correctly and is available on all nodes in - your cluster. Extra operational steps may be necessary to be sure that codecs are available on - newly-deployed nodes. You can use the utility to check that a given codec is correctly - installed. - - To configure HBase to use a compressor, see . To enable a compressor for a ColumnFamily, see . To enable data block encoding for a ColumnFamily, see - . - - Block Compressors - - none - - - Snappy - - - LZO - - - LZ4 - - - GZ - - - - - - Data Block Encoding Types - - Prefix - Often, keys are very similar. Specifically, keys often share a common prefix - and only differ near the end. For instance, one key might be - RowKey:Family:Qualifier0 and the next key might be - RowKey:Family:Qualifier1. In Prefix encoding, an extra column is - added which holds the length of the prefix shared between the current key and the previous - key. Assuming the first key here is totally different from the key before, its prefix - length is 0. The second key's prefix length is 23, since they have the - first 23 characters in common. - Obviously if the keys tend to have nothing in common, Prefix will not provide much - benefit. - The following image shows a hypothetical ColumnFamily with no data block encoding. -
    - ColumnFamily with No Encoding - - - - - A ColumnFamily with no encoding> - -
    - Here is the same data with prefix data encoding. -
    - ColumnFamily with Prefix Encoding - - - - - A ColumnFamily with prefix encoding - -
    -
    - - Diff - Diff encoding expands upon Prefix encoding. Instead of considering the key - sequentially as a monolithic series of bytes, each key field is split so that each part of - the key can be compressed more efficiently. Two new fields are added: timestamp and type. - If the ColumnFamily is the same as the previous row, it is omitted from the current row. - If the key length, value length or type are the same as the previous row, the field is - omitted. In addition, for increased compression, the timestamp is stored as a Diff from - the previous row's timestamp, rather than being stored in full. Given the two row keys in - the Prefix example, and given an exact match on timestamp and the same type, neither the - value length, or type needs to be stored for the second row, and the timestamp value for - the second row is just 0, rather than a full timestamp. - Diff encoding is disabled by default because writing and scanning are slower but more - data is cached. - This image shows the same ColumnFamily from the previous images, with Diff encoding. -
    - ColumnFamily with Diff Encoding - - - - - A ColumnFamily with diff encoding - -
    -
    - - Fast Diff - Fast Diff works similar to Diff, but uses a faster implementation. It also - adds another field which stores a single bit to track whether the data itself is the same - as the previous row. If it is, the data is not stored again. Fast Diff is the recommended - codec to use if you have long keys or many columns. The data format is nearly identical to - Diff encoding, so there is not an image to illustrate it. - - - Prefix Tree encoding was introduced as an experimental feature in HBase 0.96. It - provides similar memory savings to the Prefix, Diff, and Fast Diff encoder, but provides - faster random access at a cost of slower encoding speed. Prefix Tree may be appropriate - for applications that have high block cache hit ratios. It introduces new 'tree' fields - for the row and column. The row tree field contains a list of offsets/references - corresponding to the cells in that row. This allows for a good deal of compression. For - more details about Prefix Tree encoding, see HBASE-4676. It is - difficult to graphically illustrate a prefix tree, so no image is included. See the - Wikipedia article for Trie for more general information - about this data structure. - -
    - -
    - Which Compressor or Data Block Encoder To Use - The compression or codec type to use depends on the characteristics of your data. - Choosing the wrong type could cause your data to take more space rather than less, and can - have performance implications. In general, you need to weigh your options between smaller - size and faster compression/decompression. Following are some general guidelines, expanded from a discussion at Documenting Guidance on compression and codecs. - - - If you have long keys (compared to the values) or many columns, use a prefix - encoder. FAST_DIFF is recommended, as more testing is needed for Prefix Tree - encoding. - - - If the values are large (and not precompressed, such as images), use a data block - compressor. - - - Use GZIP for cold data, which is accessed infrequently. GZIP - compression uses more CPU resources than Snappy or LZO, but provides a higher - compression ratio. - - - Use Snappy or LZO for hot data, which is accessed - frequently. Snappy and LZO use fewer CPU resources than GZIP, but do not provide as high - of a compression ratio. - - - In most cases, enabling Snappy or LZO by default is a good choice, because they have - a low performance overhead and provide space savings. - - - Before Snappy became available by Google in 2011, LZO was the default. Snappy has - similar qualities as LZO but has been shown to perform better. - - -
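    - The shell commands for applying these choices appear later in this chapter; as an additional illustration, a hedged Java sketch of setting both a compressor and a data block encoder on a column family might look like the following. The family and table names and the specific choices are only examples of the guidelines above, and admin is assumed to be an existing admin handle. -
HColumnDescriptor hcd = new HColumnDescriptor("cf");
// Hot data with long keys / many columns: Snappy plus FAST_DIFF, per the guidelines above.
hcd.setCompressionType(Compression.Algorithm.SNAPPY);
hcd.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("test"));
htd.addFamily(hcd);
admin.createTable(htd);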
    -
    - Making use of Hadoop Native Libraries in HBase -
    - The Hadoop shared library has a number of facilities, including compression libraries and fast CRC'ing. To make these facilities available to HBase, do the following. HBase/Hadoop will fall back to using alternatives if it cannot find the native library versions -- or fail outright if you ask for an explicit compressor and there is no alternative available. -
    - If you see the following in your HBase logs, you know that HBase was unable to locate the Hadoop native libraries: -
2014-08-07 09:26:20,139 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    - If the libraries loaded successfully, the WARN message does not show. -
    - Let's presume your Hadoop shipped with a native library that suits the platform you are running HBase on. To check if the Hadoop native library is available to HBase, run the following tool (available in Hadoop 2.1 and greater): -
$ ./bin/hbase --config ~/conf_hbase org.apache.hadoop.util.NativeLibraryChecker
2014-08-26 13:15:38,717 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Native library checking:
hadoop: false
zlib: false
snappy: false
lz4: false
bzip2: false
2014-08-26 13:15:38,863 INFO [main] util.ExitUtil: Exiting with status 1
    - The above shows that the native hadoop library is not available in the HBase context. -
    - To fix the above, either copy the Hadoop native libraries locally, or symlink to them if the Hadoop and HBase installs are adjacent in the filesystem. You could also point at their location by setting the LD_LIBRARY_PATH environment variable. -
    - Where the JVM looks to find native libraries is "system dependent" (see java.lang.System#loadLibrary(name)). On Linux, by default, it is going to look in lib/native/PLATFORM where PLATFORM is the label for the platform your HBase is installed on. On a local Linux machine, it seems to be the concatenation of the java properties os.name and os.arch followed by whether 32 or 64 bit. HBase on startup prints out all of the java system properties, so find os.name and os.arch in the log. For example: -
....
2014-08-06 15:27:22,853 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2014-08-06 15:27:22,853 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
...
    - So in this case, the PLATFORM string is Linux-amd64-64. Copying the Hadoop native libraries or symlinking at lib/native/Linux-amd64-64 will ensure they are found. Check with the Hadoop NativeLibraryChecker. -
    - Here is an example of how to point at the Hadoop libs with the LD_LIBRARY_PATH environment variable: -
$ LD_LIBRARY_PATH=~/hadoop-2.5.0-SNAPSHOT/lib/native ./bin/hbase --config ~/conf_hbase org.apache.hadoop.util.NativeLibraryChecker
2014-08-26 13:42:49,332 INFO [main] bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
2014-08-26 13:42:49,337 INFO [main] zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /home/stack/hadoop-2.5.0-SNAPSHOT/lib/native/libhadoop.so.1.0.0
zlib: true /lib64/libz.so.1
snappy: true /usr/lib64/libsnappy.so.1
lz4: true revision:99
bzip2: true /lib64/libbz2.so.1
    - Set the LD_LIBRARY_PATH environment variable in hbase-env.sh when starting your HBase. -
    - -
    - Compressor Configuration, Installation, and Use -
    - Configure HBase For Compressors - Before HBase can use a given compressor, its libraries need to be available. Due to - licensing issues, only GZ compression is available to HBase (via native Java libraries) in - a default installation. Other compression libraries are available via the shared library - bundled with your hadoop. The hadoop native library needs to be findable when HBase - starts. See -
    - Compressor Support On the Master - A new configuration setting was introduced in HBase 0.95, to check the Master to - determine which data block encoders are installed and configured on it, and assume that - the entire cluster is configured the same. This option, - hbase.master.check.compression, defaults to true. This - prevents the situation described in HBASE-6370, where - a table is created or modified to support a codec that a region server does not support, - leading to failures that take a long time to occur and are difficult to debug. - If hbase.master.check.compression is enabled, libraries for all desired - compressors need to be installed and configured on the Master, even if the Master does - not run a region server. -
    -
    - Install GZ Support Via Native Libraries - HBase uses Java's built-in GZip support unless the native Hadoop libraries are - available on the CLASSPATH. The recommended way to add libraries to the CLASSPATH is to - set the environment variable HBASE_LIBRARY_PATH for the user running - HBase. If native libraries are not available and Java's GZIP is used, Got - brand-new compressor reports will be present in the logs. See ). -
    -
    - Install LZO Support - HBase cannot ship with LZO because of incompatibility between HBase, which uses an - Apache Software License (ASL) and LZO, which uses a GPL license. See the Using LZO - Compression wiki page for information on configuring LZO support for HBase. - If you depend upon LZO compression, consider configuring your RegionServers to fail - to start if LZO is not available. See . -
    -
    - Configure LZ4 Support - LZ4 support is bundled with Hadoop. Make sure the hadoop shared library - (libhadoop.so) is accessible when you start - HBase. After configuring your platform (see ), you can make a symbolic link from HBase to the native Hadoop - libraries. This assumes the two software installs are colocated. For example, if my - 'platform' is Linux-amd64-64: - $ cd $HBASE_HOME -$ mkdir lib/native -$ ln -s $HADOOP_HOME/lib/native lib/native/Linux-amd64-64 - Use the compression tool to check that LZ4 is installed on all nodes. Start up (or restart) - HBase. Afterward, you can create and alter tables to enable LZ4 as a - compression codec.: - -hbase(main):003:0> alter 'TestTable', {NAME => 'info', COMPRESSION => 'LZ4'} - - -
    -
    - Install Snappy Support - HBase does not ship with Snappy support because of licensing issues. You can install - Snappy binaries (for instance, by using yum install snappy on CentOS) - or build Snappy from source. After installing Snappy, search for the shared library, - which will be called libsnappy.so.X where X is a number. If you - built from source, copy the shared library to a known location on your system, such as - /opt/snappy/lib/. - In addition to the Snappy library, HBase also needs access to the Hadoop shared - library, which will be called something like libhadoop.so.X.Y, - where X and Y are both numbers. Make note of the location of the Hadoop library, or copy - it to the same location as the Snappy library. - - The Snappy and Hadoop libraries need to be available on each node of your cluster. - See to find out how to test that this is the case. - See to configure your RegionServers to fail to - start if a given compressor is not available. - - Each of these library locations need to be added to the environment variable - HBASE_LIBRARY_PATH for the operating system user that runs HBase. You - need to restart the RegionServer for the changes to take effect. -
    - - -
    - CompressionTest - You can use the CompressionTest tool to verify that your compressor is available to - HBase: - - $ hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://host/path/to/hbase snappy - -
    - - -
    - Enforce Compression Settings On a RegionServer -
    - You can configure a RegionServer so that it will fail to start if compression is configured incorrectly, by adding the option hbase.regionserver.codecs to hbase-site.xml and setting its value to a comma-separated list of codecs that need to be available. For example, if you set this property to lzo,gz, the RegionServer would fail to start if either compressor were unavailable. This prevents a new server from being added to the cluster without having its codecs configured properly. -
    -
    - -
    - Enable Compression On a ColumnFamily - To enable compression for a ColumnFamily, use an alter command. You do - not need to re-create the table or copy data. If you are changing codecs, be sure the old - codec is still available until all the old StoreFiles have been compacted. - - Enabling Compression on a ColumnFamily of an Existing Table using HBase - Shell - disable 'test' -hbase> alter 'test', {NAME => 'cf', COMPRESSION => 'GZ'} -hbase> enable 'test']]> - - - - Creating a New Table with Compression On a ColumnFamily - create 'test2', { NAME => 'cf2', COMPRESSION => 'SNAPPY' } - ]]> - - - Verifying a ColumnFamily's Compression Settings - describe 'test' -DESCRIPTION ENABLED - 'test', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE false - ', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', - VERSIONS => '1', COMPRESSION => 'GZ', MIN_VERSIONS - => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'fa - lse', BLOCKSIZE => '65536', IN_MEMORY => 'false', B - LOCKCACHE => 'true'} -1 row(s) in 0.1070 seconds - ]]> - -
    - -
    - Testing Compression Performance - HBase includes a tool called LoadTestTool which provides mechanisms to test your - compression performance. You must specify either -write or - -update-read as your first parameter, and if you do not specify another - parameter, usage advice is printed for each option. - - <command>LoadTestTool</command> Usage - -Options: - -batchupdate Whether to use batch as opposed to separate - updates for every column in a row - -bloom Bloom filter type, one of [NONE, ROW, ROWCOL] - -compression Compression type, one of [LZO, GZ, NONE, SNAPPY, - LZ4] - -data_block_encoding Encoding algorithm (e.g. prefix compression) to - use for data blocks in the test column family, one - of [NONE, PREFIX, DIFF, FAST_DIFF, PREFIX_TREE]. - -encryption Enables transparent encryption on the test table, - one of [AES] - -generator The class which generates load for the tool. Any - args for this class can be passed as colon - separated after class name - -h,--help Show usage - -in_memory Tries to keep the HFiles of the CF inmemory as far - as possible. Not guaranteed that reads are always - served from inmemory - -init_only Initialize the test table only, don't do any - loading - -key_window The 'key window' to maintain between reads and - writes for concurrent write/read workload. The - default is 0. - -max_read_errors The maximum number of read errors to tolerate - before terminating all reader threads. The default - is 10. - -multiput Whether to use multi-puts as opposed to separate - puts for every column in a row - -num_keys The number of keys to read/write - -num_tables A positive integer number. When a number n is - speicfied, load test tool will load n table - parallely. -tn parameter value becomes table name - prefix. Each table name is in format - _1..._n - -read [:<#threads=20>] - -regions_per_server A positive integer number. When a number n is - specified, load test tool will create the test - table with n regions per server - -skip_init Skip the initialization; assume test table already - exists - -start_key The first key to read/write (a 0-based index). The - default value is 0. - -tn The name of the table to read or write - -update [:<#threads=20>][:<#whether to - ignore nonce collisions=0>] - -write :[:<#threads=20>] - -zk ZK quorum as comma-separated host names without - port numbers - -zk_root name of parent znode in zookeeper - ]]> - - - Example Usage of LoadTestTool - -$ hbase org.apache.hadoop.hbase.util.LoadTestTool -write 1:10:100 -num_keys 1000000 - -read 100:30 -num_tables 1 -data_block_encoding NONE -tn load_test_tool_NONE - - -
    -
    - -
    - Enable Data Block Encoding - Codecs are built into HBase so no extra configuration is needed. Codecs are enabled on a - table by setting the DATA_BLOCK_ENCODING property. Disable the table before - altering its DATA_BLOCK_ENCODING setting. Following is an example using HBase Shell: - - Enable Data Block Encoding On a Table - disable 'test' -hbase> alter 'test', { NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST_DIFF' } -Updating all regions with the new schema... -0/1 regions updated. -1/1 regions updated. -Done. -0 row(s) in 2.2820 seconds -hbase> enable 'test' -0 row(s) in 0.1580 seconds - ]]> - - - Verifying a ColumnFamily's Data Block Encoding - describe 'test' -DESCRIPTION ENABLED - 'test', {NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST true - _DIFF', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => - '0', VERSIONS => '1', COMPRESSION => 'GZ', MIN_VERS - IONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS = - > 'false', BLOCKSIZE => '65536', IN_MEMORY => 'fals - e', BLOCKCACHE => 'true'} -1 row(s) in 0.0650 seconds - ]]> - -
    -
    - - - SQL over HBase -
    - Apache Phoenix - Apache Phoenix -
    -
    - Trafodion - Trafodion: Transactional SQL-on-HBase -
    -
    - - - <link xlink:href="https://github.com/brianfrankcooper/YCSB/">YCSB: The Yahoo! Cloud Serving Benchmark</link> and HBase - TODO: Describe how YCSB is poor for putting up a decent cluster load. - TODO: Describe setup of YCSB for HBase. In particular, presplit your tables before you start - a run. See HBASE-4163 Create Split Strategy for YCSB Benchmark - for why and a little shell command for how to do it. - Ted Dunning redid YCSB so it's mavenized and added facility for verifying workloads. See Ted Dunning's YCSB. - - - - - - - Other Information About HBase -
    HBase Videos - Introduction to HBase - - Introduction to HBase by Todd Lipcon (Chicago Data Summit 2011). - - Introduction to HBase by Todd Lipcon (2010). - - - - Building Real Time Services at Facebook with HBase by Jonathan Gray (Hadoop World 2011). - - HBase and Hadoop, Mixing Real-Time and Batch Processing at StumbleUpon by JD Cryans (Hadoop World 2010). - -
    -
    HBase Presentations (Slides) - Advanced HBase Schema Design by Lars George (Hadoop World 2011). - - Introduction to HBase by Todd Lipcon (Chicago Data Summit 2011). - - Getting The Most From Your HBase Install by Ryan Rawson, Jonathan Gray (Hadoop World 2009). - -
    -
    HBase Papers - BigTable by Google (2006). - - HBase and HDFS Locality by Lars George (2010). - - No Relation: The Mixed Blessings of Non-Relational Databases by Ian Varley (2009). - -
    -
    HBase Sites - Cloudera's HBase Blog has a lot of links to useful HBase information. - - CAP Confusion is a relevant entry for background information on - distributed storage systems. - - - - HBase Wiki has a page with a number of presentations. - - HBase RefCard from DZone. - -
    -
    HBase Books - HBase: The Definitive Guide by Lars George. - -
    -
    Hadoop Books - Hadoop: The Definitive Guide by Tom White. - -
    - -
    - - HBase History - - 2006: BigTable paper published by Google. - - 2006 (end of year): HBase development starts. - - 2008: HBase becomes Hadoop sub-project. - - 2010: HBase becomes Apache top-level project. - - - - - HBase and the Apache Software Foundation - HBase is a project in the Apache Software Foundation and as such there are responsibilities to the ASF to ensure - a healthy project. -
    ASF Development Process - See the Apache Development Process page - for all sorts of information on how the ASF is structured (e.g., PMC, committers, contributors), to tips on contributing - and getting involved, and how open-source works at ASF. - -
    -
    ASF Board Reporting - Once a quarter, each project in the ASF portfolio submits a report to the ASF board. This is done by the HBase project - lead and the committers. See ASF board reporting for more information. - -
    -
    - - Apache HBase Orca - - - - - - An Orca is the Apache HBase mascot. - See NOTICES.txt. Our Orca logo we got here: http://www.vectorfree.com/jumping-orca - It is licensed Creative Commons Attribution 3.0. See https://creativecommons.org/licenses/by/3.0/us/ - We changed the logo by stripping the colored background, inverting - it and then rotating it some. - - - + + + + + + + + diff --git src/main/docbkx/compression.xml src/main/docbkx/compression.xml new file mode 100644 index 0000000..d1971b1 --- /dev/null +++ src/main/docbkx/compression.xml @@ -0,0 +1,535 @@ + + + + + Compression and Data Block Encoding In + HBase<indexterm><primary>Compression</primary><secondary>Data Block + Encoding</secondary><seealso>codecs</seealso></indexterm> + + Codecs mentioned in this section are for encoding and decoding data blocks or row keys. + For information about replication codecs, see . + + Some of the information in this section is pulled from a discussion on the + HBase Development mailing list. + HBase supports several different compression algorithms which can be enabled on a + ColumnFamily. Data block encoding attempts to limit duplication of information in keys, taking + advantage of some of the fundamental designs and patterns of HBase, such as sorted row keys + and the schema of a given table. Compressors reduce the size of large, opaque byte arrays in + cells, and can significantly reduce the storage space needed to store uncompressed + data. + Compressors and data block encoding can be used together on the same ColumnFamily. + + + Changes Take Effect Upon Compaction + If you change compression or encoding for a ColumnFamily, the changes take effect during + compaction. + + + Some codecs take advantage of capabilities built into Java, such as GZip compression. + Others rely on native libraries. Native libraries may be available as part of Hadoop, such as + LZ4. In this case, HBase only needs access to the appropriate shared library. Other codecs, + such as Google Snappy, need to be installed first. Some codecs are licensed in ways that + conflict with HBase's license and cannot be shipped as part of HBase. + + This section discusses common codecs that are used and tested with HBase. No matter what + codec you use, be sure to test that it is installed correctly and is available on all nodes in + your cluster. Extra operational steps may be necessary to be sure that codecs are available on + newly-deployed nodes. You can use the utility to check that a given codec is correctly + installed. + + To configure HBase to use a compressor, see . To enable a compressor for a ColumnFamily, see . To enable data block encoding for a ColumnFamily, see + . + + Block Compressors + + none + + + Snappy + + + LZO + + + LZ4 + + + GZ + + + + + + Data Block Encoding Types + + Prefix - Often, keys are very similar. Specifically, keys often share a common prefix + and only differ near the end. For instance, one key might be + RowKey:Family:Qualifier0 and the next key might be + RowKey:Family:Qualifier1. In Prefix encoding, an extra column is + added which holds the length of the prefix shared between the current key and the previous + key. Assuming the first key here is totally different from the key before, its prefix + length is 0. The second key's prefix length is 23, since they have the + first 23 characters in common. + Obviously if the keys tend to have nothing in common, Prefix will not provide much + benefit. + The following image shows a hypothetical ColumnFamily with no data block encoding. +
    + ColumnFamily with No Encoding + + + + + A ColumnFamily with no encoding> + +
    + Here is the same data with prefix data encoding. +
    + ColumnFamily with Prefix Encoding + + + + + A ColumnFamily with prefix encoding + +
    +
    + + Diff - Diff encoding expands upon Prefix encoding. Instead of considering the key + sequentially as a monolithic series of bytes, each key field is split so that each part of + the key can be compressed more efficiently. Two new fields are added: timestamp and type. + If the ColumnFamily is the same as the previous row, it is omitted from the current row. + If the key length, value length or type are the same as the previous row, the field is + omitted. In addition, for increased compression, the timestamp is stored as a Diff from + the previous row's timestamp, rather than being stored in full. Given the two row keys in + the Prefix example, and given an exact match on timestamp and the same type, neither the + value length, or type needs to be stored for the second row, and the timestamp value for + the second row is just 0, rather than a full timestamp. + Diff encoding is disabled by default because writing and scanning are slower but more + data is cached. + This image shows the same ColumnFamily from the previous images, with Diff encoding. +
    + ColumnFamily with Diff Encoding + + + + + A ColumnFamily with diff encoding + +
    +
    + + Fast Diff - Fast Diff works similar to Diff, but uses a faster implementation. It also + adds another field which stores a single bit to track whether the data itself is the same + as the previous row. If it is, the data is not stored again. Fast Diff is the recommended + codec to use if you have long keys or many columns. The data format is nearly identical to + Diff encoding, so there is not an image to illustrate it. + + + Prefix Tree encoding was introduced as an experimental feature in HBase 0.96. It + provides similar memory savings to the Prefix, Diff, and Fast Diff encoder, but provides + faster random access at a cost of slower encoding speed. Prefix Tree may be appropriate + for applications that have high block cache hit ratios. It introduces new 'tree' fields + for the row and column. The row tree field contains a list of offsets/references + corresponding to the cells in that row. This allows for a good deal of compression. For + more details about Prefix Tree encoding, see HBASE-4676. It is + difficult to graphically illustrate a prefix tree, so no image is included. See the + Wikipedia article for Trie for more general information + about this data structure. + +
    + +
    + Which Compressor or Data Block Encoder To Use + The compression or codec type to use depends on the characteristics of your data. + Choosing the wrong type could cause your data to take more space rather than less, and can + have performance implications. In general, you need to weigh your options between smaller + size and faster compression/decompression. Following are some general guidelines, expanded from a discussion at Documenting Guidance on compression and codecs. + + + If you have long keys (compared to the values) or many columns, use a prefix + encoder. FAST_DIFF is recommended, as more testing is needed for Prefix Tree + encoding. + + + If the values are large (and not precompressed, such as images), use a data block + compressor. + + + Use GZIP for cold data, which is accessed infrequently. GZIP + compression uses more CPU resources than Snappy or LZO, but provides a higher + compression ratio. + + + Use Snappy or LZO for hot data, which is accessed + frequently. Snappy and LZO use fewer CPU resources than GZIP, but do not provide as high + of a compression ratio. + + + In most cases, enabling Snappy or LZO by default is a good choice, because they have + a low performance overhead and provide space savings. + + + Before Snappy became available by Google in 2011, LZO was the default. Snappy has + similar qualities as LZO but has been shown to perform better. + + +
    +
+    Making use of Hadoop Native Libraries in HBase +
+    The Hadoop shared library has a number of facilities, including compression libraries and fast CRC'ing. To make these facilities available to HBase, do the following. HBase/Hadoop will fall back to using alternatives if it cannot find the native library versions -- or fail outright if you ask for an explicit compressor and there is no alternative available. +
+    If you see the following in your HBase logs, you know that HBase was unable to locate the Hadoop native libraries: +
+2014-08-07 09:26:20,139 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
+    If the libraries loaded successfully, the WARN message does not show. +
+    Let's presume your Hadoop shipped with a native library that suits the platform you are running HBase on. To check if the Hadoop native library is available to HBase, run the following tool (available in Hadoop 2.1 and greater): +
+$ ./bin/hbase --config ~/conf_hbase org.apache.hadoop.util.NativeLibraryChecker
+2014-08-26 13:15:38,717 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
+Native library checking:
+hadoop: false
+zlib: false
+snappy: false
+lz4: false
+bzip2: false
+2014-08-26 13:15:38,863 INFO [main] util.ExitUtil: Exiting with status 1
+    The above shows that the native hadoop library is not available in the HBase context. +
+    To fix the above, either copy the Hadoop native libraries locally, or symlink to them if the Hadoop and HBase installs are adjacent in the filesystem. You could also point at their location by setting the LD_LIBRARY_PATH environment variable. +
+    Where the JVM looks to find native libraries is "system dependent" (see java.lang.System#loadLibrary(name)). On Linux, by default, it is going to look in lib/native/PLATFORM where PLATFORM is the label for the platform your HBase is installed on. On a local Linux machine, it seems to be the concatenation of the java properties os.name and os.arch followed by whether 32 or 64 bit. HBase on startup prints out all of the java system properties, so find os.name and os.arch in the log. For example: +
+....
+2014-08-06 15:27:22,853 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
+2014-08-06 15:27:22,853 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
+...
+    So in this case, the PLATFORM string is Linux-amd64-64. Copying the Hadoop native libraries or symlinking at lib/native/Linux-amd64-64 will ensure they are found. Check with the Hadoop NativeLibraryChecker. +
+    Here is an example of how to point at the Hadoop libs with the LD_LIBRARY_PATH environment variable: +
+$ LD_LIBRARY_PATH=~/hadoop-2.5.0-SNAPSHOT/lib/native ./bin/hbase --config ~/conf_hbase org.apache.hadoop.util.NativeLibraryChecker
+2014-08-26 13:42:49,332 INFO [main] bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
+2014-08-26 13:42:49,337 INFO [main] zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
+Native library checking:
+hadoop: true /home/stack/hadoop-2.5.0-SNAPSHOT/lib/native/libhadoop.so.1.0.0
+zlib: true /lib64/libz.so.1
+snappy: true /usr/lib64/libsnappy.so.1
+lz4: true revision:99
+bzip2: true /lib64/libbz2.so.1
+    Set the LD_LIBRARY_PATH environment variable in hbase-env.sh when starting your HBase. +
    + +
    + Compressor Configuration, Installation, and Use +
    + Configure HBase For Compressors + Before HBase can use a given compressor, its libraries need to be available. Due to + licensing issues, only GZ compression is available to HBase (via native Java libraries) in + a default installation. Other compression libraries are available via the shared library + bundled with your hadoop. The hadoop native library needs to be findable when HBase + starts. See +
    + Compressor Support On the Master + A new configuration setting was introduced in HBase 0.95, to check the Master to + determine which data block encoders are installed and configured on it, and assume that + the entire cluster is configured the same. This option, + hbase.master.check.compression, defaults to true. This + prevents the situation described in HBASE-6370, where + a table is created or modified to support a codec that a region server does not support, + leading to failures that take a long time to occur and are difficult to debug. + If hbase.master.check.compression is enabled, libraries for all desired + compressors need to be installed and configured on the Master, even if the Master does + not run a region server. +
    +
    + Install GZ Support Via Native Libraries + HBase uses Java's built-in GZip support unless the native Hadoop libraries are + available on the CLASSPATH. The recommended way to add libraries to the CLASSPATH is to + set the environment variable HBASE_LIBRARY_PATH for the user running + HBase. If native libraries are not available and Java's GZIP is used, Got + brand-new compressor reports will be present in the logs. See ). +
    +
    + Install LZO Support + HBase cannot ship with LZO because of incompatibility between HBase, which uses an + Apache Software License (ASL) and LZO, which uses a GPL license. See the Using LZO + Compression wiki page for information on configuring LZO support for HBase. + If you depend upon LZO compression, consider configuring your RegionServers to fail + to start if LZO is not available. See . +
    +
    + Configure LZ4 Support + LZ4 support is bundled with Hadoop. Make sure the hadoop shared library + (libhadoop.so) is accessible when you start + HBase. After configuring your platform (see ), you can make a symbolic link from HBase to the native Hadoop + libraries. This assumes the two software installs are colocated. For example, if my + 'platform' is Linux-amd64-64: + $ cd $HBASE_HOME +$ mkdir lib/native +$ ln -s $HADOOP_HOME/lib/native lib/native/Linux-amd64-64 + Use the compression tool to check that LZ4 is installed on all nodes. Start up (or restart) + HBase. Afterward, you can create and alter tables to enable LZ4 as a + compression codec.: + +hbase(main):003:0> alter 'TestTable', {NAME => 'info', COMPRESSION => 'LZ4'} + + +
    +
    + Install Snappy Support + HBase does not ship with Snappy support because of licensing issues. You can install + Snappy binaries (for instance, by using yum install snappy on CentOS) + or build Snappy from source. After installing Snappy, search for the shared library, + which will be called libsnappy.so.X where X is a number. If you + built from source, copy the shared library to a known location on your system, such as + /opt/snappy/lib/. + In addition to the Snappy library, HBase also needs access to the Hadoop shared + library, which will be called something like libhadoop.so.X.Y, + where X and Y are both numbers. Make note of the location of the Hadoop library, or copy + it to the same location as the Snappy library. + + The Snappy and Hadoop libraries need to be available on each node of your cluster. + See to find out how to test that this is the case. + See to configure your RegionServers to fail to + start if a given compressor is not available. + + Each of these library locations need to be added to the environment variable + HBASE_LIBRARY_PATH for the operating system user that runs HBase. You + need to restart the RegionServer for the changes to take effect. +
    + + +
    + CompressionTest + You can use the CompressionTest tool to verify that your compressor is available to + HBase: + + $ hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://host/path/to/hbase snappy + +
    + + +
+ Enforce Compression Settings On a RegionServer
+ You can configure a RegionServer so that it will fail to start if compression is
+ configured incorrectly, by adding the option hbase.regionserver.codecs to
+ hbase-site.xml, and setting its value to a comma-separated list
+ of codecs that need to be available. For example, if you set this property to
+ lzo,gz, the RegionServer would fail to start if either compressor
+ were unavailable. This prevents a new server from being added to the cluster
+ without having its codecs configured properly.
+
    +
    + +
+ Enable Compression On a ColumnFamily
+ To enable compression for a ColumnFamily, use an alter command. You do
+ not need to re-create the table or copy data. If you are changing codecs, be sure the old
+ codec is still available until all the old StoreFiles have been compacted.
+
+ Enabling Compression on a ColumnFamily of an Existing Table using HBase
+ Shell
+ hbase> disable 'test'
+hbase> alter 'test', {NAME => 'cf', COMPRESSION => 'GZ'}
+hbase> enable 'test']]>
+
+
+
+ Creating a New Table with Compression On a ColumnFamily
+ hbase> create 'test2', { NAME => 'cf2', COMPRESSION => 'SNAPPY' }
+ ]]>
+
+
+ Verifying a ColumnFamily's Compression Settings
+ hbase> describe 'test'
+DESCRIPTION ENABLED
+ 'test', {NAME => 'cf', DATA_BLOCK_ENCODING => 'NONE false
+ ', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0',
+ VERSIONS => '1', COMPRESSION => 'GZ', MIN_VERSIONS
+ => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'fa
+ lse', BLOCKSIZE => '65536', IN_MEMORY => 'false', B
+ LOCKCACHE => 'true'}
+1 row(s) in 0.1070 seconds
+ ]]>
+
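+ The same change can be made from the Java client. The following is a minimal, illustrative
+ sketch using the Admin API; the table name test and family cf are placeholder names, and
+ depending on your HBase version and settings the table may need to be disabled before the
+ modification is applied.
+
+import java.io.IOException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.io.compress.Compression;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class CompressionAlterExample {
+  public static void main(String[] args) throws IOException {
+    try (Connection connection =
+             ConnectionFactory.createConnection(HBaseConfiguration.create());
+         Admin admin = connection.getAdmin()) {
+      TableName tableName = TableName.valueOf("test");
+      // Fetch the existing family descriptor so its other settings are preserved.
+      HTableDescriptor tableDesc = admin.getTableDescriptor(tableName);
+      HColumnDescriptor cf = tableDesc.getFamily(Bytes.toBytes("cf"));
+      cf.setCompressionType(Compression.Algorithm.GZ);
+      admin.modifyColumn(tableName, cf);
+    }
+  }
+}
+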
    + +
    + Testing Compression Performance + HBase includes a tool called LoadTestTool which provides mechanisms to test your + compression performance. You must specify either -write or + -update-read as your first parameter, and if you do not specify another + parameter, usage advice is printed for each option. + + <command>LoadTestTool</command> Usage + +Options: + -batchupdate Whether to use batch as opposed to separate + updates for every column in a row + -bloom Bloom filter type, one of [NONE, ROW, ROWCOL] + -compression Compression type, one of [LZO, GZ, NONE, SNAPPY, + LZ4] + -data_block_encoding Encoding algorithm (e.g. prefix compression) to + use for data blocks in the test column family, one + of [NONE, PREFIX, DIFF, FAST_DIFF, PREFIX_TREE]. + -encryption Enables transparent encryption on the test table, + one of [AES] + -generator The class which generates load for the tool. Any + args for this class can be passed as colon + separated after class name + -h,--help Show usage + -in_memory Tries to keep the HFiles of the CF inmemory as far + as possible. Not guaranteed that reads are always + served from inmemory + -init_only Initialize the test table only, don't do any + loading + -key_window The 'key window' to maintain between reads and + writes for concurrent write/read workload. The + default is 0. + -max_read_errors The maximum number of read errors to tolerate + before terminating all reader threads. The default + is 10. + -multiput Whether to use multi-puts as opposed to separate + puts for every column in a row + -num_keys The number of keys to read/write + -num_tables A positive integer number. When a number n is + speicfied, load test tool will load n table + parallely. -tn parameter value becomes table name + prefix. Each table name is in format + _1..._n + -read [:<#threads=20>] + -regions_per_server A positive integer number. When a number n is + specified, load test tool will create the test + table with n regions per server + -skip_init Skip the initialization; assume test table already + exists + -start_key The first key to read/write (a 0-based index). The + default value is 0. + -tn The name of the table to read or write + -update [:<#threads=20>][:<#whether to + ignore nonce collisions=0>] + -write :[:<#threads=20>] + -zk ZK quorum as comma-separated host names without + port numbers + -zk_root name of parent znode in zookeeper + ]]> + + + Example Usage of LoadTestTool + +$ hbase org.apache.hadoop.hbase.util.LoadTestTool -write 1:10:100 -num_keys 1000000 + -read 100:30 -num_tables 1 -data_block_encoding NONE -tn load_test_tool_NONE + + +
    +
    + +
+ Enable Data Block Encoding
+ Codecs are built into HBase so no extra configuration is needed. Codecs are enabled on a
+ table by setting the DATA_BLOCK_ENCODING property. Disable the table before
+ altering its DATA_BLOCK_ENCODING setting. Following is an example using HBase Shell:
+
+ Enable Data Block Encoding On a Table
+ hbase> disable 'test'
+hbase> alter 'test', { NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST_DIFF' }
+Updating all regions with the new schema...
+0/1 regions updated.
+1/1 regions updated.
+Done.
+0 row(s) in 2.2820 seconds
+hbase> enable 'test'
+0 row(s) in 0.1580 seconds
+ ]]>
+
+
+ Verifying a ColumnFamily's Data Block Encoding
+ hbase> describe 'test'
+DESCRIPTION ENABLED
+ 'test', {NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST true
+ _DIFF', BLOOMFILTER => 'ROW', REPLICATION_SCOPE =>
+ '0', VERSIONS => '1', COMPRESSION => 'GZ', MIN_VERS
+ IONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS =
+ > 'false', BLOCKSIZE => '65536', IN_MEMORY => 'fals
+ e', BLOCKCACHE => 'true'}
+1 row(s) in 0.0650 seconds
+ ]]>
+
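+ Data block encoding can also be set from the Java client when a table is created. The
+ following is a minimal, illustrative sketch; the table name test3 is a placeholder and is
+ not part of the shell examples above.
+
+import java.io.IOException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
+
+public class DataBlockEncodingExample {
+  public static void main(String[] args) throws IOException {
+    try (Connection connection =
+             ConnectionFactory.createConnection(HBaseConfiguration.create());
+         Admin admin = connection.getAdmin()) {
+      // Create a new table whose 'cf' family uses FAST_DIFF data block encoding.
+      HTableDescriptor table = new HTableDescriptor(TableName.valueOf("test3"));
+      HColumnDescriptor cf = new HColumnDescriptor("cf");
+      cf.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
+      table.addFamily(cf);
+      admin.createTable(table);
+    }
+  }
+}
+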
    + + +
    diff --git src/main/docbkx/configuration.xml src/main/docbkx/configuration.xml index 74b8e52..1d6e160 100644 --- src/main/docbkx/configuration.xml +++ src/main/docbkx/configuration.xml @@ -925,8 +925,8 @@ stopping hbase............... - + href="hbase-default.xml"> +
    @@ -1007,7 +1007,7 @@ stopping hbase...............</screen> </section> </section> </section> - </xi:fallback> + </xi:fallback> </xi:include> </section> diff --git src/main/docbkx/customization-pdf.xsl src/main/docbkx/customization-pdf.xsl new file mode 100644 index 0000000..b21236f --- /dev/null +++ src/main/docbkx/customization-pdf.xsl @@ -0,0 +1,129 @@ +<?xml version="1.0"?> +<xsl:stylesheet + xmlns:xsl="http://www.w3.org/1999/XSL/Transform" + version="1.0"> +<!-- +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +--> + <xsl:import href="urn:docbkx:stylesheet/docbook.xsl"/> + <xsl:import href="urn:docbkx:stylesheet/highlight.xsl"/> + + + <!--################################################### + Paper & Page Size + ################################################### --> + + <!-- Paper type, no headers on blank pages, no double sided printing --> + <xsl:param name="paper.type" select="'USletter'"/> + <xsl:param name="double.sided">0</xsl:param> + <xsl:param name="headers.on.blank.pages">0</xsl:param> + <xsl:param name="footers.on.blank.pages">0</xsl:param> + + <!-- Space between paper border and content (chaotic stuff, don't touch) --> + <xsl:param name="page.margin.top">5mm</xsl:param> + <xsl:param name="region.before.extent">10mm</xsl:param> + <xsl:param name="body.margin.top">10mm</xsl:param> + + <xsl:param name="body.margin.bottom">15mm</xsl:param> + <xsl:param name="region.after.extent">10mm</xsl:param> + <xsl:param name="page.margin.bottom">0mm</xsl:param> + + <xsl:param name="page.margin.outer">18mm</xsl:param> + <xsl:param name="page.margin.inner">18mm</xsl:param> + + <!-- No intendation of Titles --> + <xsl:param name="title.margin.left">0pc</xsl:param> + + <!--################################################### + Fonts & Styles + ################################################### --> + + <!-- Left aligned text and no hyphenation --> + <xsl:param name="alignment">justify</xsl:param> + <xsl:param name="hyphenate">true</xsl:param> + + <!-- Default Font size --> + <xsl:param name="body.font.master">11</xsl:param> + <xsl:param name="body.font.small">8</xsl:param> + + <!-- Line height in body text --> + <xsl:param name="line-height">1.4</xsl:param> + + <!-- Force line break in long URLs --> + <xsl:param name="ulink.hyphenate.chars">/&?</xsl:param> + <xsl:param name="ulink.hyphenate">​</xsl:param> + + <!-- Monospaced fonts are smaller than regular text --> + <xsl:attribute-set name="monospace.properties"> + <xsl:attribute name="font-family"> + <xsl:value-of select="$monospace.font.family"/> + </xsl:attribute> + <xsl:attribute name="font-size">0.8em</xsl:attribute> + <xsl:attribute name="wrap-option">wrap</xsl:attribute> + <xsl:attribute name="hyphenate">true</xsl:attribute> + </xsl:attribute-set> + + + <!-- add page break after 
abstract block --> + <xsl:attribute-set name="abstract.properties"> + <xsl:attribute name="break-after">page</xsl:attribute> + </xsl:attribute-set> + + <!-- add page break after toc --> + <xsl:attribute-set name="toc.margin.properties"> + <xsl:attribute name="break-after">page</xsl:attribute> + </xsl:attribute-set> + + <!-- add page break after first level sections --> + <xsl:attribute-set name="section.level1.properties"> + <xsl:attribute name="break-after">page</xsl:attribute> + </xsl:attribute-set> + + <!-- Show only Sections up to level 3 in the TOCs --> + <xsl:param name="toc.section.depth">2</xsl:param> + + <!-- Dot and Whitespace as separator in TOC between Label and Title--> + <xsl:param name="autotoc.label.separator" select="'. '"/> + + <!-- program listings / examples formatting --> + <xsl:attribute-set name="monospace.verbatim.properties"> + <xsl:attribute name="font-family">Courier</xsl:attribute> + <xsl:attribute name="font-size">8pt</xsl:attribute> + <xsl:attribute name="keep-together.within-column">always</xsl:attribute> + </xsl:attribute-set> + + <xsl:param name="shade.verbatim" select="1" /> + + <xsl:attribute-set name="shade.verbatim.style"> + <xsl:attribute name="background-color">#E8E8E8</xsl:attribute> + <xsl:attribute name="border-width">0.5pt</xsl:attribute> + <xsl:attribute name="border-style">solid</xsl:attribute> + <xsl:attribute name="border-color">#575757</xsl:attribute> + <xsl:attribute name="padding">3pt</xsl:attribute> + </xsl:attribute-set> + + <!-- callouts customization --> + <xsl:param name="callout.unicode" select="1" /> + <xsl:param name="callout.graphics" select="0" /> + <xsl:param name="callout.defaultcolumn">90</xsl:param> + + <!-- Syntax Highlighting --> + + +</xsl:stylesheet> diff --git src/main/docbkx/datamodel.xml src/main/docbkx/datamodel.xml new file mode 100644 index 0000000..bdf697d --- /dev/null +++ src/main/docbkx/datamodel.xml @@ -0,0 +1,865 @@ +<?xml version="1.0" encoding="UTF-8"?> +<chapter + xml:id="datamodel" + version="5.0" + xmlns="http://docbook.org/ns/docbook" + xmlns:xlink="http://www.w3.org/1999/xlink" + xmlns:xi="http://www.w3.org/2001/XInclude" + xmlns:svg="http://www.w3.org/2000/svg" + xmlns:m="http://www.w3.org/1998/Math/MathML" + xmlns:html="http://www.w3.org/1999/xhtml" + xmlns:db="http://docbook.org/ns/docbook"> + <!--/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +--> + + <title>Data Model + In HBase, data is stored in tables, which have rows and columns. This is a terminology + overlap with relational databases (RDBMSs), but this is not a helpful analogy. Instead, it can + be helpful to think of an HBase table as a multi-dimensional map. + + HBase Data Model Terminology + + Table + + An HBase table consists of multiple rows. 
+ + + + Row + + A row in HBase consists of a row key and one or more columns with values associated + with them. Rows are sorted alphabetically by the row key as they are stored. For this + reason, the design of the row key is very important. The goal is to store data in such a + way that related rows are near each other. A common row key pattern is a website domain. + If your row keys are domains, you should probably store them in reverse (org.apache.www, + org.apache.mail, org.apache.jira). This way, all of the Apache domains are near each + other in the table, rather than being spread out based on the first letter of the + subdomain. + + + + Column + + A column in HBase consists of a column family and a column qualifier, which are + delimited by a : (colon) character. + + + + Column Family + + Column families physically colocate a set of columns and their values, often for + performance reasons. Each column family has a set of storage properties, such as whether + its values should be cached in memory, how its data is compressed or its row keys are + encoded, and others. Each row in a table has the same column + families, though a given row might not store anything in a given column family. + Column families are specified when you create your table, and influence the way your + data is stored in the underlying filesystem. Therefore, the column families should be + considered carefully during schema design. + + + + Column Qualifier + + A column qualifier is added to a column family to provide the index for a given + piece of data. Given a column family content, a column qualifier + might be content:html, and another might be + content:pdf. Though column families are fixed at table creation, + column qualifiers are mutable and may differ greatly between rows. + + + + Cell + + A cell is a combination of row, column family, and column qualifier, and contains a + value and a timestamp, which represents the value's version. + A cell's value is an uninterpreted array of bytes. + + + + Timestamp + + A timestamp is written alongside each value, and is the identifier for a given + version of a value. By default, the timestamp represents the time on the RegionServer + when the data was written, but you can specify a different timestamp value when you put + data into the cell. + + Direct manipulation of timestamps is an advanced feature which is only exposed for + special cases that are deeply integrated with HBase, and is discouraged in general. + Encoding a timestamp at the application level is the preferred pattern. + + You can specify the maximum number of versions of a value that HBase retains, per column + family. When the maximum number of versions is reached, the oldest versions are + eventually deleted. By default, only the newest version is kept. + + + + +
    + Conceptual View + You can read a very understandable explanation of the HBase data model in the blog post Understanding + HBase and BigTable by Jim R. Wilson. Another good explanation is available in the + PDF Introduction + to Basic Schema Design by Amandeep Khurana. It may help to read different + perspectives to get a solid understanding of HBase schema design. The linked articles cover + the same ground as the information in this section. + The following example is a slightly modified form of the one on page 2 of the BigTable paper. There + is a table called webtable that contains two rows + (com.cnn.www + and com.example.www), three column families named + contents, anchor, and people. In + this example, for the first row (com.cnn.www), + anchor contains two columns (anchor:cssnsi.com, + anchor:my.look.ca) and contents contains one column + (contents:html). This example contains 5 versions of the row with the + row key com.cnn.www, and one version of the row with the row key + com.example.www. The contents:html column qualifier contains the entire + HTML of a given website. Qualifiers of the anchor column family each + contain the external site which links to the site represented by the row, along with the + text it used in the anchor of its link. The people column family represents + people associated with the site. + + + Column Names + By convention, a column name is made of its column family prefix and a + qualifier. For example, the column + contents:html is made up of the column family + contents and the html qualifier. The colon + character (:) delimits the column family from the column family + qualifier. + + + Table <varname>webtable</varname> + + + + + + + + + Row Key + Time Stamp + ColumnFamily contents + ColumnFamily anchor + ColumnFamily people + + + + + "com.cnn.www" + t9 + + anchor:cnnsi.com = "CNN" + + + + "com.cnn.www" + t8 + + anchor:my.look.ca = "CNN.com" + + + + "com.cnn.www" + t6 + contents:html = "<html>..." + + + + + "com.cnn.www" + t5 + contents:html = "<html>..." + + + + + "com.cnn.www" + t3 + contents:html = "<html>..." + + + + + "com.example.www" + t5 + contents:html = "<html>..." + + people:author = "John Doe" + + + +
+ Cells in this table that appear to be empty do not take space, or in fact exist, in
+ HBase. This is what makes HBase "sparse." A tabular view is not the only possible way to
+ look at data in HBase, or even the most accurate. The following represents the same
+ information as a multi-dimensional map. This is only a mock-up for illustrative
+ purposes and may not be strictly accurate.
+ {
+  "com.cnn.www": {
+    contents: {
+      t6: contents:html: "<html>..."
+      t5: contents:html: "<html>..."
+      t3: contents:html: "<html>..."
+    }
+    anchor: {
+      t9: anchor:cnnsi.com = "CNN"
+      t8: anchor:my.look.ca = "CNN.com"
+    }
+    people: {}
+  }
+  "com.example.www": {
+    contents: {
+      t5: contents:html: "<html>..."
+    }
+    anchor: {}
+    people: {
+      t5: people:author: "John Doe"
+    }
+  }
+}
+ ]]>
+
    +
    + Physical View + Although at a conceptual level tables may be viewed as a sparse set of rows, they are + physically stored by column family. A new column qualifier (column_family:column_qualifier) + can be added to an existing column family at any time. + + ColumnFamily <varname>anchor</varname> + + + + + + + Row Key + Time Stamp + Column Family anchor + + + + + "com.cnn.www" + t9 + anchor:cnnsi.com = "CNN" + + + "com.cnn.www" + t8 + anchor:my.look.ca = "CNN.com" + + + +
    + + ColumnFamily <varname>contents</varname> + + + + + + + Row Key + Time Stamp + ColumnFamily "contents:" + + + + + "com.cnn.www" + t6 + contents:html = "<html>..." + + + "com.cnn.www" + t5 + contents:html = "<html>..." + + + "com.cnn.www" + t3 + contents:html = "<html>..." + + + +
    + The empty cells shown in the + conceptual view are not stored at all. + Thus a request for the value of the contents:html column at time stamp + t8 would return no value. Similarly, a request for an + anchor:my.look.ca value at time stamp t9 would + return no value. However, if no timestamp is supplied, the most recent value for a + particular column would be returned. Given multiple versions, the most recent is also the + first one found, since timestamps + are stored in descending order. Thus a request for the values of all columns in the row + com.cnn.www if no timestamp is specified would be: the value of + contents:html from timestamp t6, the value of + anchor:cnnsi.com from timestamp t9, the value of + anchor:my.look.ca from timestamp t8. + For more information about the internals of how Apache HBase stores data, see . +
    + +
+ Namespace
+ A namespace is a logical grouping of tables analogous to a database in relational
+ database systems. This abstraction lays the groundwork for upcoming multi-tenancy related
+ features:
+
+ Quota Management (HBASE-8410) - Restrict the amount of resources (i.e., regions,
+ tables) a namespace can consume.
+
+
+ Namespace Security Administration (HBASE-9206) - Provide another level of security
+ administration for tenants.
+
+
+ Region server groups (HBASE-6721) - A namespace/table can be pinned onto a subset
+ of regionservers, thus guaranteeing a coarse level of isolation.
+
+
+
+
+ Namespace management
+ A namespace can be created, removed or altered. Namespace membership is determined
+ during table creation by specifying a fully-qualified table name of the form:
+
+ <table namespace>:<table qualifier>]]>
+
+
+
+ Examples
+
+
+#Create a namespace
+create_namespace 'my_ns'
+
+
+#create my_table in my_ns namespace
+create 'my_ns:my_table', 'fam'
+
+
+#drop namespace
+drop_namespace 'my_ns'
+
+
+#alter namespace
+alter_namespace 'my_ns', {METHOD => 'set', 'PROPERTY_NAME' => 'PROPERTY_VALUE'}
+
+
+
+
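+ Namespaces can also be managed from the Java Admin API. The following is a minimal,
+ illustrative sketch; the namespace name mirrors the shell examples above, and note that a
+ namespace must be empty before it can be dropped.
+
+import java.io.IOException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+
+public class NamespaceExample {
+  public static void main(String[] args) throws IOException {
+    try (Connection connection =
+             ConnectionFactory.createConnection(HBaseConfiguration.create());
+         Admin admin = connection.getAdmin()) {
+      // Create the namespace; tables are then created as 'my_ns:my_table'.
+      admin.createNamespace(NamespaceDescriptor.create("my_ns").build());
+      // ... later, once the namespace contains no tables, it can be removed:
+      admin.deleteNamespace("my_ns");
+    }
+  }
+}
+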
    + Predefined namespaces + There are two predefined special namespaces: + + + hbase - system namespace, used to contain hbase internal tables + + + default - tables with no explicit specified namespace will automatically fall into + this namespace. + + + + Examples + + +#namespace=foo and table qualifier=bar +create 'foo:bar', 'fam' + +#namespace=default and table qualifier=bar +create 'bar', 'fam' + + +
    + + +
    + Table + Tables are declared up front at schema definition time. +
    + +
+ Row
+ Row keys are uninterpreted bytes. Rows are lexicographically sorted with the lowest
+ order appearing first in a table. The empty byte array is used to denote both the start and
+ end of a table's namespace.
+
    + +
+ Column Family<indexterm><primary>Column Family</primary></indexterm>
+ Columns in Apache HBase are grouped into column families. All
+ column members of a column family have the same prefix. For example, the columns
+ courses:history and courses:math are both
+ members of the courses column family. The colon character
+ (:) delimits the column family from the column
+ family qualifier.
+ The column family prefix must be composed of printable characters. The
+ qualifying tail, the column family qualifier, can be made of any
+ arbitrary bytes. Column families must be declared up front at schema definition time whereas
+ columns do not need to be defined at schema time but can be conjured on the fly while the
+ table is up and running.
+ Physically, all column family members are stored together on the filesystem. Because
+ tunings and storage specifications are done at the column family level, it is advised that
+ all column family members have the same general access pattern and size
+ characteristics.
+
+
    +
+ Cells<indexterm><primary>Cells</primary></indexterm>
+ A {row, column, version} tuple exactly specifies a
+ cell in HBase. Cell content is uninterpreted bytes.
+
    +
    + Data Model Operations + The four primary data model operations are Get, Put, Scan, and Delete. Operations are + applied via Table + instances. + +
    + Get + Get + returns attributes for a specified row. Gets are executed via + Table.get. +
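+ For example, a point read of a single column might look like the following snippet, which
+ follows the same illustrative conventions (CF, ATTR, and an already-instantiated Table) as
+ the other examples in this chapter.
+
+public static final byte[] CF = "cf".getBytes();
+public static final byte[] ATTR = "attr".getBytes();
+...
+Get get = new Get(Bytes.toBytes("row1"));
+get.addColumn(CF, ATTR);              // restrict the Get to a single column
+Result r = table.get(get);
+if (!r.isEmpty()) {
+  byte[] value = r.getValue(CF, ATTR);
+}
+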
    +
    + Put + Put + either adds new rows to a table (if the key is new) or can update existing rows (if the + key already exists). Puts are executed via + Table.put (writeBuffer) or + Table.batch (non-writeBuffer). +
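+ For example, the following snippet writes two columns of the same row in one Put; all cells
+ in a single Put are applied to the row atomically. It uses the same illustrative conventions
+ as the other snippets in this chapter.
+
+public static final byte[] CF = "cf".getBytes();
+public static final byte[] ATTR1 = "attr1".getBytes();
+public static final byte[] ATTR2 = "attr2".getBytes();
+...
+Put put = new Put(Bytes.toBytes("row1"));
+put.add(CF, ATTR1, Bytes.toBytes("value1"));   // addColumn() in newer client APIs
+put.add(CF, ATTR2, Bytes.toBytes("value2"));
+table.put(put);
+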
    +
+ Scans
+ Scan
+ allows iteration over multiple rows for specified attributes.
+ The following is an example of a Scan on a Table instance. Assume that a table is
+ populated with rows with keys "row1", "row2", "row3", and then another set of rows with
+ the keys "abc1", "abc2", and "abc3". The following example shows how to set a Scan
+ instance to return the rows beginning with "row".
+
+public static final byte[] CF = "cf".getBytes();
+public static final byte[] ATTR = "attr".getBytes();
+...
+
+Table table = ...      // instantiate a Table instance
+
+Scan scan = new Scan();
+scan.addColumn(CF, ATTR);
+scan.setRowPrefixFilter(Bytes.toBytes("row"));
+ResultScanner rs = table.getScanner(scan);
+try {
+  for (Result r = rs.next(); r != null; r = rs.next()) {
+    // process result...
+  }
+} finally {
+  rs.close();  // always close the ResultScanner!
+}
+
+ Note that generally the easiest way to specify a specific stop point for a scan is by
+ using the InclusiveStopFilter
+ class.
    +
+ Delete
+ Delete
+ removes a row from a table. Deletes are executed via
+ Table.delete.
+ HBase does not modify data in place, and so deletes are handled by creating new
+ markers called tombstones. These tombstones, along with the dead
+ values, are cleaned up on major compactions.
+ See for more information on deleting versions of columns, and
+ see for more information on compactions.
+
+
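+ For example, the following snippet shows the scopes a Delete can take. The method names are
+ those of the newer client API (older clients use deleteColumn, deleteColumns, and
+ deleteFamily), and the table and column names are illustrative placeholders.
+
+public static final byte[] CF = "cf".getBytes();
+public static final byte[] ATTR = "attr".getBytes();
+...
+Delete delete = new Delete(Bytes.toBytes("row1"));
+// With no further qualification, the whole row is deleted. To narrow the scope:
+delete.addColumns(CF, ATTR);     // all versions of a single cell
+// delete.addColumn(CF, ATTR);   // or: only the newest version of a single cell
+// delete.addFamily(CF);         // or: every column of the family
+table.delete(delete);
+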
    + +
    + + +
+ Versions<indexterm><primary>Versions</primary></indexterm>
+
+ A {row, column, version} tuple exactly specifies a
+ cell in HBase. It's possible to have an unbounded number of cells where
+ the row and column are the same but the cell address differs only in its version
+ dimension.
+
+ While rows and column keys are expressed as bytes, the version is specified using a long
+ integer. Typically this long contains time instances such as those returned by
+ java.util.Date.getTime() or System.currentTimeMillis(), that is:
+ the difference, measured in milliseconds, between the current time and midnight,
+ January 1, 1970 UTC.
+
+ The HBase version dimension is stored in decreasing order, so that when reading from a
+ store file, the most recent values are found first.
+
+ There is a lot of confusion over the semantics of cell versions in
+ HBase. In particular:
+
+
+ If multiple writes to a cell have the same version, only the last written is
+ fetchable.
+
+
+
+ It is OK to write cells in a non-increasing version order.
+
+
+
+ Below we describe how the version dimension in HBase currently works. See HBASE-2406 for
+ discussion of HBase versions. Bending time in HBase
+ makes for a good read on the version, or time, dimension in HBase. It has more detail on
+ versioning than is provided here. As of this writing, the limitation
+ Overwriting values at existing timestamps mentioned in the
+ article no longer holds in HBase. This section is basically a synopsis of this article
+ by Bruno Dumon.
+
+ Specifying the Number of Versions to Store
+ The maximum number of versions to store for a given column is part of the column
+ schema and is specified at table creation, or via an alter command, via
+ HColumnDescriptor.DEFAULT_VERSIONS. Prior to HBase 0.96, the default number
+ of versions kept was 3, but in 0.96 and newer it has been changed to
+ 1.
+
+ Modify the Maximum Number of Versions for a Column
+ This example uses HBase Shell to keep a maximum of 5 versions of column
+ f1. You could also use HColumnDescriptor.
+ hbase> alter 't1', NAME => 'f1', VERSIONS => 5]]>
+
+
+ Modify the Minimum Number of Versions for a Column
+ You can also specify the minimum number of versions to store. By default, this is
+ set to 0, which means the feature is disabled. The following example sets the minimum
+ number of versions on field f1 to 2, via HBase Shell.
+ You could also use HColumnDescriptor.
+ hbase> alter 't1', NAME => 'f1', MIN_VERSIONS => 2]]>
+
+ Starting with HBase 0.98.2, you can specify a global default for the maximum number of
+ versions kept for all newly-created columns, by setting
+ in hbase-site.xml. See
+ .
+
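+ The same schema settings can be applied from the Java client. The following is a minimal,
+ illustrative sketch that creates a table with explicit maximum and minimum version counts;
+ the table and family names mirror the shell examples above.
+
+import java.io.IOException;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+
+public class VersionsExample {
+  public static void main(String[] args) throws IOException {
+    try (Connection connection =
+             ConnectionFactory.createConnection(HBaseConfiguration.create());
+         Admin admin = connection.getAdmin()) {
+      HTableDescriptor table = new HTableDescriptor(TableName.valueOf("t1"));
+      HColumnDescriptor f1 = new HColumnDescriptor("f1");
+      f1.setMaxVersions(5);   // keep at most 5 versions of each cell
+      f1.setMinVersions(2);   // keep at least 2 versions (only meaningful with a TTL)
+      table.addFamily(f1);
+      admin.createTable(table);
+    }
+  }
+}
+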
    + +
    + Versions and HBase Operations + + In this section we look at the behavior of the version dimension for each of the core + HBase operations. + +
    + Get/Scan + + Gets are implemented on top of Scans. The below discussion of Get + applies equally to Scans. + + By default, i.e. if you specify no explicit version, when doing a + get, the cell whose version has the largest value is returned + (which may or may not be the latest one written, see later). The default behavior can be + modified in the following ways: + + + + to return more than one version, see Get.setMaxVersions() + + + + to return versions other than the latest, see Get.setTimeRange() + + To retrieve the latest version that is less than or equal to a given value, thus + giving the 'latest' state of the record at a certain point in time, just use a range + from 0 to the desired version and set the max versions to 1. + + + +
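+ As a concrete illustration of the last point above, the following snippet retrieves the
+ newest version written at or before a given timestamp. The timestamp value is a placeholder,
+ and the snippet follows the same conventions as the other examples in this chapter.
+
+public static final byte[] CF = "cf".getBytes();
+public static final byte[] ATTR = "attr".getBytes();
+...
+long pointInTime = 1400000000000L;      // illustrative timestamp
+Get get = new Get(Bytes.toBytes("row1"));
+get.setTimeRange(0, pointInTime + 1);   // the maximum timestamp is exclusive
+get.setMaxVersions(1);                  // newest version within the range
+Result r = table.get(get);
+byte[] b = r.getValue(CF, ATTR);
+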
    +
    + Default Get Example + The following Get will only retrieve the current version of the row + +public static final byte[] CF = "cf".getBytes(); +public static final byte[] ATTR = "attr".getBytes(); +... +Get get = new Get(Bytes.toBytes("row1")); +Result r = table.get(get); +byte[] b = r.getValue(CF, ATTR); // returns current version of value + +
    +
    + Versioned Get Example + The following Get will return the last 3 versions of the row. + +public static final byte[] CF = "cf".getBytes(); +public static final byte[] ATTR = "attr".getBytes(); +... +Get get = new Get(Bytes.toBytes("row1")); +get.setMaxVersions(3); // will return last 3 versions of row +Result r = table.get(get); +byte[] b = r.getValue(CF, ATTR); // returns current version of value +List<KeyValue> kv = r.getColumn(CF, ATTR); // returns all versions of this column + +
    + +
    + Put + + Doing a put always creates a new version of a cell, at a certain + timestamp. By default the system uses the server's currentTimeMillis, + but you can specify the version (= the long integer) yourself, on a per-column level. + This means you could assign a time in the past or the future, or use the long value for + non-time purposes. + + To overwrite an existing value, do a put at exactly the same row, column, and + version as that of the cell you would overshadow. +
    + Implicit Version Example + The following Put will be implicitly versioned by HBase with the current + time. + +public static final byte[] CF = "cf".getBytes(); +public static final byte[] ATTR = "attr".getBytes(); +... +Put put = new Put(Bytes.toBytes(row)); +put.add(CF, ATTR, Bytes.toBytes( data)); +table.put(put); + +
    +
+ Explicit Version Example
+ The following Put has the version timestamp explicitly set.
+
+public static final byte[] CF = "cf".getBytes();
+public static final byte[] ATTR = "attr".getBytes();
+...
+Put put = new Put(Bytes.toBytes(row));
+long explicitTimeInMs = 555; // just an example
+put.add(CF, ATTR, explicitTimeInMs, Bytes.toBytes(data));
+table.put(put);
+
+ Caution: the version timestamp is used internally by HBase for things like time-to-live
+ calculations. It's usually best to avoid setting this timestamp yourself. Prefer using
+ a separate timestamp attribute of the row, or having the timestamp as part of the rowkey,
+ or both.
+
    + +
    + +
    + Delete + + There are three different types of internal delete markers. See Lars Hofhansl's blog + for discussion of his attempt adding another, Scanning + in HBase: Prefix Delete Marker. + + + Delete: for a specific version of a column. + + + Delete column: for all versions of a column. + + + Delete family: for all columns of a particular ColumnFamily + + + When deleting an entire row, HBase will internally create a tombstone for each + ColumnFamily (i.e., not each individual column). + Deletes work by creating tombstone markers. For example, let's + suppose we want to delete a row. For this you can specify a version, or else by default + the currentTimeMillis is used. What this means is delete all + cells where the version is less than or equal to this version. HBase never + modifies data in place, so for example a delete will not immediately delete (or mark as + deleted) the entries in the storage file that correspond to the delete condition. + Rather, a so-called tombstone is written, which will mask the + deleted values. When HBase does a major compaction, the tombstones are processed to + actually remove the dead values, together with the tombstones themselves. If the version + you specified when deleting a row is larger than the version of any value in the row, + then you can consider the complete row to be deleted. + For an informative discussion on how deletes and versioning interact, see the thread Put w/ + timestamp -> Deleteall -> Put w/ timestamp fails up on the user mailing + list. + Also see for more information on the internal KeyValue format. + Delete markers are purged during the next major compaction of the store, unless the + option is set in the column family. To keep the + deletes for a configurable amount of time, you can set the delete TTL via the + property in + hbase-site.xml. If + is not set, or set to 0, all + delete markers, including those with timestamps in the future, are purged during the + next major compaction. Otherwise, a delete marker with a timestamp in the future is kept + until the major compaction which occurs after the time represented by the marker's + timestamp plus the value of , in + milliseconds. + + This behavior represents a fix for an unexpected change that was introduced in + HBase 0.94, and was fixed in HBASE-10118. + The change has been backported to HBase 0.94 and newer branches. + +
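+ For illustration, the following snippet places a delete marker at an explicit timestamp,
+ masking all versions of the column at or below that timestamp. The method name is from the
+ newer client API (older clients use deleteColumns with the same arguments), and the values
+ are placeholders.
+
+public static final byte[] CF = "cf".getBytes();
+public static final byte[] ATTR = "attr".getBytes();
+...
+long ts = 1400000000000L;          // illustrative timestamp T
+Delete delete = new Delete(Bytes.toBytes("row1"));
+delete.addColumns(CF, ATTR, ts);   // masks all versions of the cell with timestamp <= T
+table.delete(delete);
+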
    +
    + +
    + Current Limitations + +
+ Deletes mask Puts
+
+ Deletes mask puts, even puts that happened after the delete
+ was entered. See HBASE-2256. Remember that a delete writes a tombstone, which only
+ disappears after the next major compaction has run. Suppose you do
+ a delete of everything <= T. After this you do a new put with a
+ timestamp <= T. This put, even if it happened after the delete,
+ will be masked by the delete tombstone. Performing the put will not
+ fail, but when you do a get you will notice the put had no
+ effect. It will start working again after the major compaction has
+ run. These issues should not be a problem if you use
+ always-increasing versions for new puts to a row. But they can occur
+ even if you do not care about time: just do a delete and a put
+ immediately after each other, and there is some chance they happen
+ within the same millisecond.
+
    + +
    + Major compactions change query results + + ...create three cell versions at t1, t2 and t3, with a maximum-versions + setting of 2. So when getting all versions, only the values at t2 and t3 will be + returned. But if you delete the version at t2 or t3, the one at t1 will appear again. + Obviously, once a major compaction has run, such behavior will not be the case + anymore... (See Garbage Collection in Bending time in + HBase.) +
    +
    +
    +
+ Sort Order
+ All data model operations in HBase return data in sorted order. First by row,
+ then by ColumnFamily, followed by column qualifier, and finally timestamp (sorted
+ in reverse, so newest records are returned first).
+
+
    +
+ Column Metadata
+ There is no store of column metadata outside of the internal KeyValue instances for a ColumnFamily.
+ Thus, while HBase can support not only a large number of columns per row, but a heterogeneous set of columns
+ between rows as well, it is your responsibility to keep track of the column names.
+
+ The only way to get a complete set of columns that exist for a ColumnFamily is to process all the rows.
+ For more information about how HBase stores data internally, see .
+
+
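+ For illustration, the following snippet enumerates the qualifiers seen in one column family
+ by scanning every row; it follows the same snippet conventions (CF and an
+ already-instantiated Table) as the rest of this chapter.
+
+Scan scan = new Scan();
+scan.addFamily(CF);                    // only fetch the family of interest
+Set<String> qualifiers = new TreeSet<String>();
+ResultScanner rs = table.getScanner(scan);
+try {
+  for (Result r = rs.next(); r != null; r = rs.next()) {
+    for (Cell cell : r.rawCells()) {
+      qualifiers.add(Bytes.toString(CellUtil.cloneQualifier(cell)));
+    }
+  }
+} finally {
+  rs.close();
+}
+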
    +
Joins
+ Whether HBase supports joins is a common question on the dist-list, and there is a simple answer: it doesn't,
+ at least not in the way that RDBMSs support them (e.g., with equi-joins or outer-joins in SQL). As has been illustrated
+ in this chapter, the read data model operations in HBase are Get and Scan.
+
+ However, that doesn't mean that equivalent join functionality can't be supported in your application, but
+ you have to do it yourself. The two primary strategies are either to denormalize the data when writing to HBase,
+ or to maintain lookup tables and do the join between HBase tables in your application or MapReduce code (and as RDBMSs
+ demonstrate, there are several strategies for this depending on the size of the tables, e.g., nested loops vs.
+ hash-joins). So which is the best approach? It depends on what you are trying to do, and as such there isn't a single
+ answer that works for every use case.
+
    +
    ACID + See ACID Semantics. + Lars Hofhansl has also written a note on + ACID in HBase. +
    + diff --git src/main/docbkx/developer.xml src/main/docbkx/developer.xml index a6b5dc2..47c78b4 100644 --- src/main/docbkx/developer.xml +++ src/main/docbkx/developer.xml @@ -743,8 +743,10 @@ $ mvn deploy -DskipTests -Papache-release You can always delete it if the build goes haywire. - Sign and upload your version directory to <link - xlink:href="http://people.apache.org">people.apache.org</link>. + Sign, upload, and 'stage' your version directory to <link + xlink:href="http://people.apache.org">people.apache.org</link> (TODO: + There is a new location to stage releases using svnpubsub. See + (<link xlink:href="https://issues.apache.org/jira/browse/HBASE-10554">HBASE-10554 Please delete old releases from mirroring system</link>). If all checks out, next put the version directory up on people.apache.org. You will need to sign and fingerprint them before you push them up. In the @@ -874,12 +876,13 @@ $ rsync -av 0.96.0RC0 people.apache.org:public_html Alternatively, you may limit the shell tests that run using the system variable - shell.test. This value may specify a particular test case by name. For - example, the tests that cover the shell commands for altering tables are contained in the test - case AdminAlterTableTest and you can run them with: + shell.test. This value should specify the ruby literal equivalent of a + particular test case by name. For example, the tests that cover the shell commands for + altering tables are contained in the test case AdminAlterTableTest + and you can run them with: - mvn clean test -Dtest=TestShell -Dshell.test=AdminAlterTableTest + mvn clean test -Dtest=TestShell -Dshell.test=/AdminAlterTableTest/ You may also use a + + + FAQ + + General + + When should I use HBase? + + See the in the Architecture chapter. + + + + + Are there other HBase FAQs? + + + See the FAQ that is up on the wiki, HBase Wiki FAQ. + + + + + Does HBase support SQL? + + + Not really. SQL-ish support for HBase via Hive is in development, however Hive is based on MapReduce which is not generally suitable for low-latency requests. + See the section for examples on the HBase client. + + + + + How can I find examples of NoSQL/HBase? + + See the link to the BigTable paper in in the appendix, as + well as the other papers. + + + + + What is the history of HBase? + + See . + + + + + + Upgrading + + + How do I upgrade Maven-managed projects from HBase 0.94 to HBase 0.96+? + + + In HBase 0.96, the project moved to a modular structure. Adjust your project's + dependencies to rely upon the hbase-client module or another + module as appropriate, rather than a single JAR. You can model your Maven depency + after one of the following, depending on your targeted version of HBase. See or for more + information. + + Maven Dependency for HBase 0.98 + + org.apache.hbase + hbase-client + 0.98.5-hadoop2 + + ]]> + + + Maven Dependency for HBase 0.96 + + org.apache.hbase + hbase-client + 0.96.2-hadoop2 + + ]]> + + + Maven Dependency for HBase 0.94 + + org.apache.hbase + hbase + 0.94.3 + + ]]> + + + + + Architecture + + How does HBase handle Region-RegionServer assignment and locality? + + + See . + + + + + Configuration + + How can I get started with my first cluster? + + + See . + + + + + Where can I learn about the rest of the configuration options? + + + See . + + + + + Schema Design / Data Access + + How should I design my schema in HBase? + + + See and + + + + + + How can I store (fill in the blank) in HBase? + + + + See . + + + + + + How can I handle secondary indexes in HBase? 
+ + + + See + + + + + Can I change a table's rowkeys? + + This is a very common question. You can't. See . + + + + What APIs does HBase support? + + + See , and . + + + + + MapReduce + + How can I use MapReduce with HBase? + + + See + + + + + Performance and Troubleshooting + + + How can I improve HBase cluster performance? + + + + See . + + + + + + How can I troubleshoot my HBase cluster? + + + + See . + + + + + Amazon EC2 + + + I am running HBase on Amazon EC2 and... + + + + EC2 issues are a special case. See Troubleshooting and Performance sections. + + + + + Operations + + + How do I manage my HBase cluster? + + + + See + + + + + + How do I back up my HBase cluster? + + + + See + + + + + HBase in Action + + Where can I find interesting videos and presentations on HBase? + + + See + + + + + + + diff --git src/main/docbkx/hbase_history.xml src/main/docbkx/hbase_history.xml new file mode 100644 index 0000000..f7b9064 --- /dev/null +++ src/main/docbkx/hbase_history.xml @@ -0,0 +1,41 @@ + + + + HBase History + + 2006: BigTable paper published by Google. + + 2006 (end of year): HBase development starts. + + 2008: HBase becomes Hadoop sub-project. + + 2010: HBase becomes Apache top-level project. + + + diff --git src/main/docbkx/hbck_in_depth.xml src/main/docbkx/hbck_in_depth.xml new file mode 100644 index 0000000..e2ee34f --- /dev/null +++ src/main/docbkx/hbck_in_depth.xml @@ -0,0 +1,237 @@ + + + + + hbck In Depth + HBaseFsck (hbck) is a tool for checking for region consistency and table integrity problems + and repairing a corrupted HBase. It works in two basic modes -- a read-only inconsistency + identifying mode and a multi-phase read-write repair mode. + +
+ Running hbck to identify inconsistencies
+ To check whether your HBase cluster has corruptions, run hbck against your HBase cluster:
+
+$ ./bin/hbase hbck
+
+
+ At the end of the command's output it prints OK or tells you the number of INCONSISTENCIES
+ present. You may also want to run hbck a few times because some inconsistencies can be
+ transient (e.g. the cluster is starting up or a region is splitting). Operationally you may want to run
+ hbck regularly and set up an alert (e.g. via Nagios) if it repeatedly reports inconsistencies.
+ A run of hbck will report a list of inconsistencies along with a brief description of the regions and
+ tables affected. Using the -details option will report more details, including a representative
+ listing of all the splits present in all the tables.
+
+
+$ ./bin/hbase hbck -details
+
+ If you just want to know if some tables are corrupted, you can limit hbck to identify inconsistencies
+ in only specific tables. For example the following command would only attempt to check tables
+ TableFoo and TableBar. The benefit is that hbck will run in less time.
+
+$ ./bin/hbase hbck TableFoo TableBar
+
    +
Inconsistencies
+
+ If, after several runs, inconsistencies continue to be reported, you may have encountered a
+ corruption. These should be rare, but in the event they occur, newer versions of HBase include
+ the hbck tool with automatic repair options.
+
+
+ There are two invariants that when violated create inconsistencies in HBase:
+
+
+ HBase’s region consistency invariant is satisfied if every region is assigned and
+ deployed on exactly one region server, and all places where this state is kept are in
+ accordance.
+
+ HBase’s table integrity invariant is satisfied if for each table, every possible row key
+ resolves to exactly one region.
+
+
+
+ Repairs generally work in three phases -- a read-only information gathering phase that identifies
+ inconsistencies, a table integrity repair phase that restores the table integrity invariant, and then
+ finally a region consistency repair phase that restores the region consistency invariant.
+ Starting from version 0.90.0, hbck could detect region consistency problems and report on a subset
+ of possible table integrity problems. It also included the ability to automatically fix the most
+ common inconsistency, region assignment and deployment consistency problems. This repair
+ could be done by using the -fix command line option. These repairs close regions if they are
+ open on the wrong server or on multiple region servers and also assign regions to region
+ servers if they are not open.
+
+
+ Starting from HBase versions 0.90.7, 0.92.2 and 0.94.0, several new command line options were
+ introduced to aid in repairing a corrupted HBase. This hbck sometimes goes by the nickname
+ “uberhbck”. Each particular version of uberhbck is compatible with HBase installations of the same
+ major version (the 0.90.7 uberhbck can repair a 0.90.4 cluster). However, versions <=0.90.6 and versions
+ <=0.92.1 may require restarting the master or failing over to a backup master.
+
    +
Localized repairs
+
+ When repairing a corrupted HBase, it is best to repair the lowest risk inconsistencies first.
+ These are generally region consistency repairs -- localized single region repairs, that only modify
+ in-memory data, ephemeral zookeeper data, or patch holes in the META table.
+ Region consistency requires that the HBase instance has the state of the region’s data in HDFS
+ (.regioninfo files), the region’s row in the hbase:meta table, and the region’s deployment/assignments on
+ region servers and the master all in accordance. Options for repairing region consistency include:
+
+ -fixAssignments (equivalent to the 0.90 -fix option) repairs unassigned, incorrectly
+ assigned or multiply assigned regions.
+
+ -fixMeta which removes meta rows when corresponding regions are not present in
+ HDFS and adds new meta rows if the regions are present in HDFS but not in META.
+
+
+ To fix deployment and assignment problems you can run this command:
+
+
+$ ./bin/hbase hbck -fixAssignments
+
+ To fix deployment and assignment problems as well as repairing incorrect meta rows you can
+ run this command:
+
+$ ./bin/hbase hbck -fixAssignments -fixMeta
+
+ There are a few classes of table integrity problems that are low risk repairs. The first two are
+ degenerate (startkey == endkey) regions and backwards regions (startkey > endkey). These are
+ automatically handled by sidelining the data to a temporary directory (/hbck/xxxx).
+ The third low-risk class is hdfs region holes. This can be repaired by using the:
+
+ -fixHdfsHoles option for fabricating new empty regions on the file system.
+ If holes are detected you can use -fixHdfsHoles and should include -fixMeta and -fixAssignments to make the new region consistent.
+
+
+
+$ ./bin/hbase hbck -fixAssignments -fixMeta -fixHdfsHoles
+
+ Since this is a common operation, we’ve added the -repairHoles flag that is equivalent to the
+ previous command:
+
+$ ./bin/hbase hbck -repairHoles
+
+ If inconsistencies still remain after these steps, you most likely have table integrity problems
+ related to orphaned or overlapping regions.
+
    +
Region Overlap Repairs
+ Table integrity problems can require repairs that deal with overlaps. This is a riskier operation
+ because it requires modifications to the file system, requires some decision making, and may
+ require some manual steps. For these repairs it is best to analyze the output of a hbck -details
+ run so that you limit repair attempts to the problems the checks identify. Because this is
+ riskier, there are safeguards that should be used to limit the scope of the repairs.
+ WARNING: These repairs are relatively new and have only been tested on online but idle HBase instances
+ (no reads/writes). Use at your own risk in an active production environment!
+ The options for repairing table integrity violations include:
+
+ -fixHdfsOrphans option for “adopting” a region directory that is missing a region
+ metadata file (the .regioninfo file).
+
+ -fixHdfsOverlaps ability for fixing overlapping regions
+
+
+ When repairing overlapping regions, a region’s data can be modified on the file system in two
+ ways: 1) by merging regions into a larger region or 2) by sidelining regions by moving data to
+ a “sideline” directory where data could be restored later. Merging a large number of regions is
+ technically correct but could result in an extremely large region that requires a series of costly
+ compactions and splitting operations. In these cases, it is probably better to sideline the regions
+ that overlap with the most other regions (likely the largest ranges) so that merges can happen on
+ a more reasonable scale. Since these sidelined regions are already laid out in HBase’s native
+ directory and HFile format, they can be restored by using HBase’s bulk load mechanism.
+ The default safeguard thresholds are conservative. These options let you override the default
+ thresholds and enable the large region sidelining feature.
+
+ -maxMerge <n> maximum number of overlapping regions to merge
+
+ -sidelineBigOverlaps if more than maxMerge regions are overlapping, attempt
+ to sideline the regions overlapping with the most other regions.
+
+ -maxOverlapsToSideline <n> if sidelining large overlapping regions, sideline at most n
+ regions.
+
+
+
+ Often you just want to get the tables repaired, so you can use this option to turn
+ on all repair options:
+
+ -repair includes all the region consistency options and only the hole repairing table
+ integrity options.
+
+
+ Finally, there are safeguards to limit repairs to only specific tables. For example the following
+ command would only attempt to check and repair tables TableFoo and TableBar.
+
+$ ./bin/hbase hbck -repair TableFoo TableBar
+
    Special cases: Meta is not properly assigned + There are a few special cases that hbck can handle as well. + Sometimes the meta table’s only region is inconsistently assigned or deployed. In this case + there is a special -fixMetaOnly option that can try to fix meta assignments. + +$ ./bin/hbase hbck -fixMetaOnly -fixAssignments + +
    +
Special cases: HBase version file is missing
+ HBase’s data on the file system requires a version file in order to start. If this file is missing, you
+ can use the -fixVersionFile option to fabricate a new HBase version file. This assumes that
+ the version of hbck you are running is the appropriate version for the HBase cluster.
+
    +
Special case: Root and META are corrupt.
+ The most drastic corruption scenario is the case where the ROOT or META is corrupted and
+ HBase will not start. In this case you can use the OfflineMetaRepair tool to create new ROOT
+ and META regions and tables.
+ This tool assumes that HBase is offline. It then marches through the existing HBase home
+ directory and loads as much information from region metadata files (.regioninfo files) as possible
+ from the file system. If the region metadata has proper table integrity, it sidelines the original root
+ and meta table directories, and builds new ones with pointers to the region directories and their
+ data.
+
+$ ./bin/hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
+
+ NOTE: This tool is not as clever as uberhbck but can be used to bootstrap repairs that uberhbck
+ can complete.
+ If the tool succeeds you should be able to start HBase and run online repairs if necessary.
+
    +
    Special cases: Offline split parent + + Once a region is split, the offline parent will be cleaned up automatically. Sometimes, daughter regions + are split again before their parents are cleaned up. HBase can clean up parents in the right order. However, + there could be some lingering offline split parents sometimes. They are in META, in HDFS, and not deployed. + But HBase can't clean them up. In this case, you can use the -fixSplitParents option to reset + them in META to be online and not split. Therefore, hbck can merge them with other regions if fixing + overlapping regions option is used. + + + This option should not normally be used, and it is not in -fixAll. + +
    +
    + +
    diff --git src/main/docbkx/mapreduce.xml src/main/docbkx/mapreduce.xml new file mode 100644 index 0000000..9e9e474 --- /dev/null +++ src/main/docbkx/mapreduce.xml @@ -0,0 +1,630 @@ + + + + + HBase and MapReduce + Apache MapReduce is a software framework used to analyze large amounts of data, and is + the framework used most often with Apache Hadoop. MapReduce itself is out of the + scope of this document. A good place to get started with MapReduce is . MapReduce version + 2 (MR2)is now part of YARN. + + This chapter discusses specific configuration steps you need to take to use MapReduce on + data within HBase. In addition, it discusses other interactions and issues between HBase and + MapReduce jobs. + + mapred and mapreduce + There are two mapreduce packages in HBase as in MapReduce itself: org.apache.hadoop.hbase.mapred + and org.apache.hadoop.hbase.mapreduce. The former does old-style API and the latter + the new style. The latter has more facility though you can usually find an equivalent in the older + package. Pick the package that goes with your mapreduce deploy. When in doubt or starting over, pick the + org.apache.hadoop.hbase.mapreduce. In the notes below, we refer to + o.a.h.h.mapreduce but replace with the o.a.h.h.mapred if that is what you are using. + + + + +
+ HBase, MapReduce, and the CLASSPATH
+ By default, MapReduce jobs deployed to a MapReduce cluster do not have access to either
+ the HBase configuration under $HBASE_CONF_DIR or the HBase classes.
+ To give the MapReduce jobs the access they need, you could add
+ hbase-site.xml to
+ $HADOOP_HOME/conf and add the
+ HBase JARs to $HADOOP_HOME/lib,
+ then copy these changes across your cluster, or you could edit
+ $HADOOP_HOME/conf/hadoop-env.sh and add
+ them to the HADOOP_CLASSPATH variable. However, this approach is not
+ recommended because it will pollute your Hadoop install with HBase references. It also
+ requires you to restart the Hadoop cluster before Hadoop can use the HBase data.
+ Since HBase 0.90.x, HBase adds its dependency JARs to the job configuration itself. The
+ dependencies only need to be available on the local CLASSPATH. The following example runs
+ the bundled HBase RowCounter
+ MapReduce job against a table named usertable. If you have not set
+ the environment variables expected in the command (the parts prefixed by a
+ $ sign and curly braces), you can use the actual system paths instead.
+ Be sure to use the correct version of the HBase JAR for your system. The backticks
+ (` symbols) cause the shell to execute the sub-commands, setting the
+ CLASSPATH as part of the command. This example assumes you use a BASH-compatible shell.
+ $ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server-VERSION.jar rowcounter usertable
+ When the command runs, internally, the HBase JAR finds the dependencies it needs, such as
+ zookeeper and guava, on the passed HADOOP_CLASSPATH
+ and adds the JARs to the MapReduce job configuration. See the source at
+ TableMapReduceUtil#addDependencyJars(org.apache.hadoop.mapreduce.Job) for how this is done.
+
+ The example may not work if you are running HBase from its build directory rather
+ than an installed location. You may see an error like the following:
+ java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper
+ If this occurs, try modifying the command as follows, so that it uses the HBase JARs
+ from the target/ directory within the build environment.
+ $ HADOOP_CLASSPATH=${HBASE_HOME}/hbase-server/target/hbase-server-VERSION-SNAPSHOT.jar:`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server/target/hbase-server-VERSION-SNAPSHOT.jar rowcounter usertable
+
+
+ Notice to Mapreduce users of HBase 0.96.1 and above
+ Some mapreduce jobs that use HBase fail to launch.
The symptom is an exception similar + to the following: + +Exception in thread "main" java.lang.IllegalAccessError: class + com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass + com.google.protobuf.LiteralByteString + at java.lang.ClassLoader.defineClass1(Native Method) + at java.lang.ClassLoader.defineClass(ClassLoader.java:792) + at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) + at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) + at java.net.URLClassLoader.access$100(URLClassLoader.java:71) + at java.net.URLClassLoader$1.run(URLClassLoader.java:361) + at java.net.URLClassLoader$1.run(URLClassLoader.java:355) + at java.security.AccessController.doPrivileged(Native Method) + at java.net.URLClassLoader.findClass(URLClassLoader.java:354) + at java.lang.ClassLoader.loadClass(ClassLoader.java:424) + at java.lang.ClassLoader.loadClass(ClassLoader.java:357) + at + org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:818) + at + org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:433) + at + org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:186) + at + org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:147) + at + org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:270) + at + org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:100) +... + + This is caused by an optimization introduced in HBASE-9867 that + inadvertently introduced a classloader dependency. + This affects both jobs using the -libjars option and "fat jar," those + which package their runtime dependencies in a nested lib folder. + In order to satisfy the new classloader requirements, hbase-protocol.jar must be + included in Hadoop's classpath. See for current recommendations for resolving + classpath errors. The following is included for historical purposes. + This can be resolved system-wide by including a reference to the hbase-protocol.jar in + hadoop's lib directory, via a symlink or by copying the jar into the new location. + This can also be achieved on a per-job launch basis by including it in the + HADOOP_CLASSPATH environment variable at job submission time. When + launching jobs that package their dependencies, all three of the following job launching + commands satisfy this requirement: + +$ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass +$ HADOOP_CLASSPATH=$(hbase mapredcp):/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass +$ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyJob.jar MyJobMainClass + + For jars that do not package their dependencies, the following command structure is + necessary: + +$ HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar MyApp.jar MyJobMainClass -libjars $(hbase mapredcp | tr ':' ',') ... + + See also HBASE-10304 for + further discussion of this issue. + +
    + +
+ MapReduce Scan Caching + TableMapReduceUtil now restores the option to set scanner caching (the number of rows + which are cached before returning the result to the client) on the Scan object that is + passed in. This functionality was lost due to a bug in HBase 0.95 (HBASE-11558), which + is fixed for HBase 0.98.5 and 0.96.3. The priority order for choosing the scanner caching is + as follows: + + + Caching settings which are set on the scan object. + + + Caching settings which are specified via the configuration option + hbase.client.scanner.caching, which can either be set manually in + hbase-site.xml or via the helper method + TableMapReduceUtil.setScannerCaching(). + + + The default value HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING, which is set to + 100. + + + Optimizing the caching settings is a balance between the time the client waits for a + result and the number of sets of results the client needs to receive. If the caching setting + is too large, the client could end up waiting for a long time or the request could even time + out. If the setting is too small, the scan needs to return results in several pieces. + If you think of the scan as a shovel, a bigger cache setting is analogous to a bigger + shovel, and a smaller cache setting is equivalent to more shoveling in order to fill the + bucket. + The list of priorities mentioned above allows you to set a reasonable default, and + override it for specific operations. + See the API documentation for Scan for more details. +
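The following is a minimal sketch of the two per-job override points described above. It assumes the org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil API; the table name and MyMapper class are placeholders.

// Highest priority: caching set directly on the Scan object.
Scan scan = new Scan();
scan.setCaching(500);        // overrides any configured hbase.client.scanner.caching for this Scan
scan.setCacheBlocks(false);  // recommended for MapReduce jobs

// Next priority: a job-wide default written into the job configuration.
TableMapReduceUtil.setScannerCaching(job, 200);  // sets hbase.client.scanner.caching on the job

TableMapReduceUtil.initTableMapperJob("usertable", scan, MyMapper.class, null, null, job);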
    + +
+ Bundled HBase MapReduce Jobs + The HBase JAR also serves as a Driver for several bundled MapReduce jobs. To learn about + the bundled MapReduce jobs, run the following command. + + $ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server-VERSION.jar +An example program must be given as the first argument. +Valid program names are: + copytable: Export a table from local cluster to peer cluster + completebulkload: Complete a bulk data load. + export: Write table data to HDFS. + import: Import data written by Export. + importtsv: Import data in TSV format. + rowcounter: Count rows in HBase table + + Each of the valid program names is a bundled MapReduce job. To run one of the jobs, + model your command after the following example. + $ ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server-VERSION.jar rowcounter myTable +
    + +
+ HBase as a MapReduce Job Data Source and Data Sink + HBase can be used as a data source, TableInputFormat, + and data sink, TableOutputFormat + or MultiTableOutputFormat, + for MapReduce jobs. When writing MapReduce jobs that read or write HBase, it is advisable to + subclass TableMapper + and/or TableReducer. + See the do-nothing pass-through classes IdentityTableMapper + and IdentityTableReducer + for basic usage. For a more involved example, see RowCounter + or review the org.apache.hadoop.hbase.mapreduce.TestTableMapReduce unit test. + If you run MapReduce jobs that use HBase as source or sink, you need to specify the source and + sink table and column names in your configuration. + + When you read from HBase, the TableInputFormat requests the list of regions + from HBase and creates one map task per region, or + mapreduce.job.maps map tasks, whichever is smaller. If your job only has two maps, + raise mapreduce.job.maps to a number greater than the number of regions. Maps + will run on the adjacent TaskTracker if you are running a TaskTracker and RegionServer per + node. When writing to HBase, it may make sense to avoid the Reduce step and write back into + HBase from within your map. This approach works when your job does not need the sort and + collation that MapReduce does on the map-emitted data. On insert, HBase 'sorts' so there is + no point double-sorting (and shuffling data around your MapReduce cluster) unless you need + to. If you do not need the Reduce, your map might emit counts of records processed for + reporting at the end of the job, or set the number of Reduces to zero and use + TableOutputFormat. If running the Reduce step makes sense in your case, you should typically + use multiple reducers so that load is spread across the HBase cluster. + + A new HBase partitioner, the HRegionPartitioner, + can run as many reducers as there are existing regions. The HRegionPartitioner is suitable + when your table is large and your upload will not greatly alter the number of existing + regions upon completion. Otherwise use the default partitioner. +
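As an illustration of the HRegionPartitioner case, the following sketch assumes the initTableReducerJob overload that accepts a partitioner class; targetTable and MyTableReducer are placeholders.

TableMapReduceUtil.initTableReducerJob(
  targetTable,                // output table
  MyTableReducer.class,       // reducer class
  job,
  HRegionPartitioner.class);  // one reducer per existing region of the output table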
    + +
    + Writing HFiles Directly During Bulk Import + If you are importing into a new table, you can bypass the HBase API and write your + content directly to the filesystem, formatted into HBase data files (HFiles). Your import + will run faster, perhaps an order of magnitude faster. For more on how this mechanism works, + see . +
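The following is a minimal job-setup sketch of this mechanism, assuming the HBase 1.0 client API (ConnectionFactory, RegionLocator) and a mapper that emits ImmutableBytesWritable/Put pairs; the class names, table name, and paths are placeholders.

Configuration conf = HBaseConfiguration.create();
Job job = new Job(conf, "ExampleBulkImport");
job.setJarByClass(MyBulkImportJob.class);      // placeholder driver class
job.setMapperClass(MyBulkImportMapper.class);  // placeholder mapper that emits Puts
job.setMapOutputKeyClass(ImmutableBytesWritable.class);
job.setMapOutputValueClass(Put.class);
FileInputFormat.addInputPath(job, new Path("/tmp/input"));
FileOutputFormat.setOutputPath(job, new Path("/tmp/hfiles"));

try (Connection connection = ConnectionFactory.createConnection(conf);
     Table table = connection.getTable(TableName.valueOf("usertable"));
     RegionLocator locator = connection.getRegionLocator(TableName.valueOf("usertable"))) {
  // Wires in the partitioner, reducer, and output format so the job writes HFiles
  // that line up with the table's current region boundaries.
  HFileOutputFormat2.configureIncrementalLoad(job, table, locator);
}

if (!job.waitForCompletion(true)) {
  throw new IOException("error with job!");
}
// The HFiles under /tmp/hfiles are then loaded into the table with the completebulkload tool.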
    + +
+ RowCounter Example + The included RowCounter + MapReduce job uses TableInputFormat and does a count of all rows in the specified + table. To run it, use the following command: + $ ./bin/hadoop jar hbase-X.X.X.jar + This will + invoke the HBase MapReduce Driver class. Select rowcounter from the choice of jobs + offered. This will print rowcounter usage advice to standard output. Specify the table name, + column to count, and output + directory. If you have classpath errors, see . +
    + +
    + Map-Task Splitting +
    + The Default HBase MapReduce Splitter + When TableInputFormat + is used to source an HBase table in a MapReduce job, its splitter will make a map task for + each region of the table. Thus, if there are 100 regions in the table, there will be 100 + map-tasks for the job - regardless of how many column families are selected in the + Scan. +
    +
    + Custom Splitters + For those interested in implementing custom splitters, see the method + getSplits in TableInputFormatBase. + That is where the logic for map-task assignment resides. +
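For illustration only, the skeleton below subclasses TableInputFormat and passes the default region-based splits through unchanged; the pass-through loop is where custom grouping or filtering logic would go. Class names are placeholders.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;

public class MyTableInputFormat extends TableInputFormat {
  @Override
  public List<InputSplit> getSplits(JobContext context) throws IOException {
    // The default implementation returns one split per region of the table.
    List<InputSplit> regionSplits = super.getSplits(context);
    List<InputSplit> customSplits = new ArrayList<InputSplit>();
    for (InputSplit split : regionSplits) {
      customSplits.add(split);  // e.g. skip, merge, or reorder splits here
    }
    return customSplits;
  }
}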
    +
    +
    + HBase MapReduce Examples +
+ HBase MapReduce Read Example + The following is an example of using HBase as a MapReduce source in a read-only manner. + Specifically, there is a Mapper instance but no Reducer, and nothing is being emitted from + the Mapper. The job would be defined as follows... + +Configuration config = HBaseConfiguration.create(); +Job job = new Job(config, "ExampleRead"); +job.setJarByClass(MyReadJob.class); // class that contains mapper + +Scan scan = new Scan(); +scan.setCaching(500); // 1 is the default in Scan, which will be bad for MapReduce jobs +scan.setCacheBlocks(false); // don't set to true for MR jobs +// set other scan attrs +... + +TableMapReduceUtil.initTableMapperJob( + tableName, // input HBase table name + scan, // Scan instance to control CF and attribute selection + MyMapper.class, // mapper + null, // mapper output key + null, // mapper output value + job); +job.setOutputFormatClass(NullOutputFormat.class); // because we aren't emitting anything from mapper + +boolean b = job.waitForCompletion(true); +if (!b) { + throw new IOException("error with job!"); +} + + ...and the mapper instance would extend TableMapper... + +public static class MyMapper extends TableMapper<Text, Text> { + + public void map(ImmutableBytesWritable row, Result value, Context context) throws InterruptedException, IOException { + // process data for the row from the Result instance. + } +} +
    +
+ HBase MapReduce Read/Write Example + The following is an example of using HBase both as a source and as a sink with + MapReduce. This example will simply copy data from one table to another. + +Configuration config = HBaseConfiguration.create(); +Job job = new Job(config,"ExampleReadWrite"); +job.setJarByClass(MyReadWriteJob.class); // class that contains mapper + +Scan scan = new Scan(); +scan.setCaching(500); // 1 is the default in Scan, which will be bad for MapReduce jobs +scan.setCacheBlocks(false); // don't set to true for MR jobs +// set other scan attrs + +TableMapReduceUtil.initTableMapperJob( + sourceTable, // input table + scan, // Scan instance to control CF and attribute selection + MyMapper.class, // mapper class + null, // mapper output key + null, // mapper output value + job); +TableMapReduceUtil.initTableReducerJob( + targetTable, // output table + null, // reducer class + job); +job.setNumReduceTasks(0); + +boolean b = job.waitForCompletion(true); +if (!b) { + throw new IOException("error with job!"); +} + + An explanation is required of what TableMapReduceUtil is doing, + especially with the reducer. TableOutputFormat + is being used as the outputFormat class, and several parameters are being set on the + config (e.g., TableOutputFormat.OUTPUT_TABLE), as well as setting the reducer output key + to ImmutableBytesWritable and reducer value to + Writable. These could be set by the programmer on the job and + conf, but TableMapReduceUtil tries to make things easier. + The following is the example mapper, which will create a Put + matching the input Result and emit it. Note: this is what the + CopyTable utility does. + +public static class MyMapper extends TableMapper<ImmutableBytesWritable, Put> { + + public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException, InterruptedException { + // this example is just copying the data from the source table... + context.write(row, resultToPut(row,value)); + } + + private static Put resultToPut(ImmutableBytesWritable key, Result result) throws IOException { + Put put = new Put(key.get()); + for (KeyValue kv : result.raw()) { + put.add(kv); + } + return put; + } +} + + There isn't actually a reducer step, so TableOutputFormat takes + care of sending the Put to the target table. + This is just an example; developers could choose not to use + TableOutputFormat and connect to the target table themselves. + +
    +
    + HBase MapReduce Read/Write Example With Multi-Table Output + TODO: example for MultiTableOutputFormat. +
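Pending a complete example, here is a hedged sketch of the idea: with MultiTableOutputFormat the output key names the destination table for each mutation, so a single map (or reduce) task can write to several tables. The table names below are hypothetical.

// In the job setup, use MultiTableOutputFormat instead of TableOutputFormat:
job.setOutputFormatClass(MultiTableOutputFormat.class);

// In the mapper (or reducer), the output key selects the destination table for each Put:
public static class MyMultiTableMapper extends TableMapper<ImmutableBytesWritable, Put> {
  private static final ImmutableBytesWritable TABLE_A =
      new ImmutableBytesWritable(Bytes.toBytes("tableA"));
  private static final ImmutableBytesWritable TABLE_B =
      new ImmutableBytesWritable(Bytes.toBytes("tableB"));

  public void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    Put put = new Put(row.get());
    for (KeyValue kv : value.raw()) {
      put.add(kv);
    }
    context.write(TABLE_A, put);  // written to tableA
    context.write(TABLE_B, put);  // and a copy to tableB
  }
}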
    +
    + HBase MapReduce Summary to HBase Example + The following example uses HBase as a MapReduce source and sink with a summarization + step. This example will count the number of distinct instances of a value in a table and + write those summarized counts in another table. + +Configuration config = HBaseConfiguration.create(); +Job job = new Job(config,"ExampleSummary"); +job.setJarByClass(MySummaryJob.class); // class that contains mapper and reducer + +Scan scan = new Scan(); +scan.setCaching(500); // 1 is the default in Scan, which will be bad for MapReduce jobs +scan.setCacheBlocks(false); // don't set to true for MR jobs +// set other scan attrs + +TableMapReduceUtil.initTableMapperJob( + sourceTable, // input table + scan, // Scan instance to control CF and attribute selection + MyMapper.class, // mapper class + Text.class, // mapper output key + IntWritable.class, // mapper output value + job); +TableMapReduceUtil.initTableReducerJob( + targetTable, // output table + MyTableReducer.class, // reducer class + job); +job.setNumReduceTasks(1); // at least one, adjust as required + +boolean b = job.waitForCompletion(true); +if (!b) { + throw new IOException("error with job!"); +} + + In this example mapper a column with a String-value is chosen as the value to summarize + upon. This value is used as the key to emit from the mapper, and an + IntWritable represents an instance counter. + +public static class MyMapper extends TableMapper<Text, IntWritable> { + public static final byte[] CF = "cf".getBytes(); + public static final byte[] ATTR1 = "attr1".getBytes(); + + private final IntWritable ONE = new IntWritable(1); + private Text text = new Text(); + + public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException, InterruptedException { + String val = new String(value.getValue(CF, ATTR1)); + text.set(val); // we can only emit Writables... + + context.write(text, ONE); + } +} + + In the reducer, the "ones" are counted (just like any other MR example that does this), + and then emits a Put. + +public static class MyTableReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> { + public static final byte[] CF = "cf".getBytes(); + public static final byte[] COUNT = "count".getBytes(); + + public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { + int i = 0; + for (IntWritable val : values) { + i += val.get(); + } + Put put = new Put(Bytes.toBytes(key.toString())); + put.add(CF, COUNT, Bytes.toBytes(i)); + + context.write(null, put); + } +} + + +
    +
+ HBase MapReduce Summary to File Example + This is very similar to the summary example above, with the exception that it uses + HBase as a MapReduce source but HDFS as the sink. The differences are in the job setup and + in the reducer. The mapper remains the same. + +Configuration config = HBaseConfiguration.create(); +Job job = new Job(config,"ExampleSummaryToFile"); +job.setJarByClass(MySummaryFileJob.class); // class that contains mapper and reducer + +Scan scan = new Scan(); +scan.setCaching(500); // 1 is the default in Scan, which will be bad for MapReduce jobs +scan.setCacheBlocks(false); // don't set to true for MR jobs +// set other scan attrs + +TableMapReduceUtil.initTableMapperJob( + sourceTable, // input table + scan, // Scan instance to control CF and attribute selection + MyMapper.class, // mapper class + Text.class, // mapper output key + IntWritable.class, // mapper output value + job); +job.setReducerClass(MyReducer.class); // reducer class +job.setNumReduceTasks(1); // at least one, adjust as required +FileOutputFormat.setOutputPath(job, new Path("/tmp/mr/mySummaryFile")); // adjust directories as required + +boolean b = job.waitForCompletion(true); +if (!b) { + throw new IOException("error with job!"); +} + + As stated above, the previous Mapper can run unchanged with this example. As for the + Reducer, it is a "generic" Reducer instead of extending TableReducer and emitting + Puts. + + public static class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> { + + public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { + int i = 0; + for (IntWritable val : values) { + i += val.get(); + } + context.write(key, new IntWritable(i)); + } +} +
    +
+ HBase MapReduce Summary to HBase Without Reducer + It is also possible to perform summaries without a reducer, by letting HBase itself do the + aggregation through atomic increments. + An HBase target table would need to exist for the job summary. The Table method + incrementColumnValue would be used to atomically increment values. From a + performance perspective, it might make sense to keep a Map of values with the amounts to + be incremented for each map task, and make one update per key during the + cleanup method of the mapper. However, your mileage may vary depending on the + number of rows to be processed and the number of unique keys. + In the end, the summary results are in HBase. +
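The following is a minimal sketch of that approach, assuming the HBase 1.0 client API (ConnectionFactory, TableName); the table, family, and qualifier names are placeholders. Counts are accumulated per map task and flushed with one increment per key in cleanup().

public static class MySummaryMapper extends TableMapper<ImmutableBytesWritable, Put> {
  public static final byte[] CF = "cf".getBytes();
  public static final byte[] ATTR1 = "attr1".getBytes();
  public static final byte[] COUNT = "count".getBytes();

  private final Map<String, Long> counts = new HashMap<String, Long>();
  private Connection connection;
  private Table summaryTable;

  @Override
  protected void setup(Context context) throws IOException {
    connection = ConnectionFactory.createConnection(context.getConfiguration());
    summaryTable = connection.getTable(TableName.valueOf("summary"));  // placeholder target table
  }

  @Override
  public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException {
    // Accumulate in memory instead of emitting anything.
    String val = new String(value.getValue(CF, ATTR1));
    Long current = counts.get(val);
    counts.put(val, current == null ? 1L : current + 1);
  }

  @Override
  protected void cleanup(Context context) throws IOException {
    // One atomic increment per distinct key, at the end of the map task.
    for (Map.Entry<String, Long> entry : counts.entrySet()) {
      summaryTable.incrementColumnValue(Bytes.toBytes(entry.getKey()), CF, COUNT, entry.getValue());
    }
    summaryTable.close();
    connection.close();
  }
}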
    +
+ HBase MapReduce Summary to RDBMS + Sometimes it is more appropriate to generate summaries to an RDBMS. For these cases, + it is possible to generate summaries directly to an RDBMS via a custom reducer. The + setup method can connect to an RDBMS (the connection information can be + passed via custom parameters in the context) and the cleanup method can close the + connection. + It is critical to understand that the number of reducers for the job affects the + summarization implementation, and you'll have to design this into your reducer. + Specifically, whether it is designed to run as a singleton (one reducer) or multiple + reducers. Neither is right or wrong; it depends on your use case. Recognize that the more + reducers that are assigned to the job, the more simultaneous connections to the RDBMS will + be created - this will scale, but only to a point. + + public static class MyRdbmsReducer extends Reducer<Text, IntWritable, Text, IntWritable> { + + private Connection c = null; // a java.sql.Connection, not an HBase Connection + + public void setup(Context context) { + // create DB connection... + } + + public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { + // do summarization + // in this example the keys are Text, but this is just an example + } + + public void cleanup(Context context) { + // close db connection + } + +} + + In the end, the summary results are written to your RDBMS table or tables. +
    + +
    + +
+ Accessing Other HBase Tables in a MapReduce Job + Although the framework currently allows one HBase table as input to a MapReduce job, + other HBase tables can be accessed as lookup tables, etc., in a MapReduce job by creating + a Table instance in the setup method of the Mapper. + public class MyMapper extends TableMapper<Text, LongWritable> { + private Table myOtherTable; + + public void setup(Context context) throws IOException { + // In here create a Connection to the cluster and save it or use the Connection + // from the existing table + myOtherTable = connection.getTable(TableName.valueOf("myOtherTable")); + } + + public void map(ImmutableBytesWritable row, Result value, Context context) throws IOException, InterruptedException { + // process Result... + // use 'myOtherTable' for lookups + } + } + +
    +
+ Speculative Execution + It is generally advisable to turn off speculative execution for MapReduce jobs that use + HBase as a source. This can either be done on a per-job basis through properties, or on the + entire cluster. Especially for longer-running jobs, speculative execution will create + duplicate map-tasks which will double-write your data to HBase; this is probably not what + you want. + See for more information. +
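A hedged per-job sketch follows, assuming the Hadoop 2 property names; older versions use mapred.map.tasks.speculative.execution and mapred.reduce.tasks.speculative.execution instead.

Configuration conf = HBaseConfiguration.create();
conf.setBoolean("mapreduce.map.speculative", false);     // no duplicate map tasks writing to HBase
conf.setBoolean("mapreduce.reduce.speculative", false);  // no duplicate reduce tasks either
Job job = new Job(conf, "ExampleWithoutSpeculation");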
    + +
    diff --git src/main/docbkx/ops_mgt.xml src/main/docbkx/ops_mgt.xml index 0af8f02..20788bf 100644 --- src/main/docbkx/ops_mgt.xml +++ src/main/docbkx/ops_mgt.xml @@ -215,6 +215,54 @@ private static final int ERROR_EXIT_CODE = 4; $ ${HBASE_HOME}/bin/hbase orghapache.hadoop.hbase.tool.Canary -t 600000 +
+ Running Canary in a Kerberos-enabled Cluster + To run Canary in a Kerberos-enabled cluster, configure the following two properties in + hbase-site.xml: + + + hbase.client.keytab.file + + + hbase.client.kerberos.principal + + + Kerberos credentials are refreshed every 30 seconds when Canary runs in daemon + mode. + To configure the DNS interface for the client, configure the following optional + properties in hbase-site.xml. + + + hbase.client.dns.interface + + + hbase.client.dns.nameserver + + + + Canary in a Kerberos-Enabled Cluster + This example shows each of the properties with valid values. + + hbase.client.kerberos.principal + hbase/_HOST@YOUR-REALM.COM + + + hbase.client.keytab.file + /etc/hbase/conf/keytab.krb5 + + + hbase.client.dns.interface + default + + + hbase.client.dns.nameserver + default + + ]]> + +
    This information was previously available at Cluster Replication. - HBase provides a replication mechanism to copy data between HBase - clusters. Replication can be used as a disaster recovery solution and as a mechanism for high - availability. You can also use replication to separate web-facing operations from back-end - jobs such as MapReduce. - - In terms of architecture, HBase replication is master-push. This takes advantage of the - fact that each region server has its own write-ahead log (WAL). One master cluster can - replicate to any number of slave clusters, and each region server replicates its own stream of - edits. For more information on the different properties of master/slave replication and other - types of replication, see the article How - Google Serves Data From Multiple Datacenters. - - Replication is asynchronous, allowing clusters to be geographically distant or to have - some gaps in availability. This also means that data between master and slave clusters will - not be instantly consistent. Rows inserted on the master are not immediately available or - consistent with rows on the slave clusters. rows inserted on the master cluster won’t be - available at the same time on the slave clusters. The goal is eventual consistency. - - The replication format used in this design is conceptually the same as the statement-based - replication design used by MySQL. Instead of SQL statements, entire - WALEdits (consisting of multiple cell inserts coming from Put and Delete operations on the - clients) are replicated in order to maintain atomicity. - + HBase provides a cluster replication mechanism which allows you to keep one cluster's + state synchronized with that of another cluster, using the write-ahead log (WAL) of the source + cluster to propagate the changes. Some use cases for cluster replication include: + + Backup and disaster recovery + Data aggregation + Geographic data distribution + Online data ingestion combined with offline data analytics + + Replication is enabled at the granularity of the column family. Before enabling + replication for a column family, create the table and all column families to be replicated, on + the destination cluster. + Cluster replication uses a source-push methodology. An HBase cluster can be a source (also + called master or active, meaning that it is the originator of new data), a destination (also + called slave or passive, meaning that it receives data via replication), or can fulfill both + roles at once. Replication is asynchronous, and the goal of replication is eventual + consistency. When the source receives an edit to a column family with replication enabled, + that edit is propagated to all destination clusters using the WAL for that for that column + family on the RegionServer managing the relevant region. + When data is replicated from one cluster to another, the original source of the data is + tracked via a cluster ID which is part of the metadata. In HBase 0.96 and newer (HBASE-7709), all + clusters which have already consumed the data are also tracked. This prevents replication + loops. The WALs for each region server must be kept in HDFS as long as they are needed to replicate data to any slave cluster. Each region server reads from the oldest log it needs to - replicate and keeps track of the current position inside ZooKeeper to simplify failure - recovery. That position, as well as the queue of WALs to process, may be different for every - slave cluster. 
- - The clusters participating in replication can be of different sizes. The master - cluster relies on randomization to attempt to balance the stream of replication on the slave clusters - - HBase supports master/master and cyclic replication as well as replication to multiple - slaves. - + replicate and keeps track of its progress processing WALs inside ZooKeeper to simplify failure + recovery. The position marker which indicates a slave cluster's progress, as well as the queue + of WALs to process, may be different for every slave cluster. + The clusters participating in replication can be of different sizes. The master cluster + relies on randomization to attempt to balance the stream of replication on the slave clusters. + It is expected that the slave cluster has storage capacity to hold the replicated data, as + well as any data it is responsible for ingesting. If a slave cluster does run out of room, or + is inaccessible for other reasons, it throws an error and the master retains the WAL and + retries the replication at intervals. + + Terminology Changes + Previously, terms such as master-master, + master-slave, and cyclical were used to + describe replication relationships in HBase. These terms added confusion, and have been + abandoned in favor of discussions about cluster topologies appropriate for different + scenarios. + + + Cluster Topologies + + A central source cluster might propagate changes out to multiple destination clusters, + for failover or due to geographic distribution. + + + A source cluster might push changes to a destination cluster, which might also push + its own changes back to the original cluster. + + Many different low-latency clusters might push changes to one centralized cluster for + backup or resource-intensive data analytics jobs. The processed data might then be + replicated back to the low-latency clusters. + + + Multiple levels of replication may be chained together to suit your organization's needs. + The following diagram shows a hypothetical scenario. Use the arrows to follow the data + paths.
    - Replication Architecture Overview + Example of a Complex Cluster Replication Configuration - - - -
    + + - - Enabling and Configuring Replication - See the - API documentation for replication for information on enabling and configuring - replication. - + HBase replication borrows many concepts from the statement-based replication design used by MySQL. Instead of SQL + statements, entire WALEdits (consisting of multiple cell inserts coming from Put and Delete + operations on the clients) are replicated in order to maintain atomicity.
    - Life of a WAL Edit - A single WAL edit goes through several steps in order to be replicated to a slave - cluster. - - - When the slave responds correctly: - - A HBase client uses a Put or Delete operation to manipulate data in HBase. - - - The region server writes the request to the WAL in a way that would allow it to be - replayed if it were not written successfully. - - - If the changed cell corresponds to a column family that is scoped for replication, - the edit is added to the queue for replication. - - - In a separate thread, the edit is read from the log, as part of a batch process. - Only the KeyValues that are eligible for replication are kept. Replicable KeyValues are - part of a column family whose schema is scoped GLOBAL, are not part of a catalog such as - hbase:meta, and did not originate from the target slave cluster, in the - case of cyclic replication. - - - The edit is tagged with the master's UUID and added to a buffer. When the buffer is - filled, or the reader reaches the end of the file, the buffer is sent to a random region - server on the slave cluster. - - - The region server reads the edits sequentially and separates them into buffers, one - buffer per table. After all edits are read, each buffer is flushed using HTable, HBase's normal client. The master's UUID is preserved in the edits - they are applied, in order to allow for cyclic replication. - - - In the master, the offset for the WAL that is currently being replicated is - registered in ZooKeeper. - - - - When the slave does not respond: - - The first three steps, where the edit is inserted, are identical. - + Configuring Cluster Replication + The following is a simplified procedure for configuring cluster replication. It may not + cover every edge case. For more information, see the API documentation for replication. + - Again in a separate thread, the region server reads, filters, and edits the log - edits in the same way as above. The slave region server does not answer the RPC - call. + Configure and start the source and destination clusters. Create tables with the same + names and column families on both the source and destination clusters, so that the + destination cluster knows where to store data it will receive. All hosts in the source + and destination clusters should be reachable to each other. - The master sleeps and tries again a configurable number of times. + On the source cluster, enable replication by setting hbase.replication + to true in hbase-site.xml. - If the slave region server is still not available, the master selects a new subset - of region server to replicate to, and tries again to send the buffer of edits. + On the source cluster, in HBase Shell, add the destination cluster as a peer, using + the add_peer command. The syntax is as follows: + hbase> add_peer 'ID' 'CLUSTER_KEY' + The ID is a string (prior to HBASE-11367, it + was a short integer), which must not contain a hyphen (see HBASE-11394). To + compose the CLUSTER_KEY, use the following template: + hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent + If both clusters use the same ZooKeeper cluster, you must use a different + zookeeper.znode.parent, because they cannot write in the same folder. - Meanwhile, the WALs are rolled and stored in a queue in ZooKeeper. Logs that are - archived by their region server, by moving them from the region - server's log directory to a central log directory, will update their paths in the - in-memory queue of the replicating thread. 
+ On the source cluster, configure each column family to be replicated by setting its + REPLICATION_SCOPE to 1, using commands such as the following in HBase Shell. + hbase> disable 'example_table' +hbase> alter 'example_table', {NAME => 'example_family', REPLICATION_SCOPE => '1'} +hbase> enable 'example_table' + You can verify that replication is taking place by examining the logs on the + source cluster for messages such as the following. + Considering 1 rs, with ratio 0.1 +Getting 1 rs from peer cluster # 0 +Choosing peer 10.10.1.49:62020 + - When the slave cluster is finally available, the buffer is applied in the same way - as during normal processing. The master region server will then replicate the backlog of - logs that accumulated during the outage. + To verify the validity of replicated data, you can use the included + VerifyReplication MapReduce job on the source cluster, providing it with + the ID of the replication peer and table name to verify. Other options are possible, + such as a time range or specific families to verify. + The command has the following form: + hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication [--starttime=timestamp1] [--stoptime=timestamp [--families=comma separated list of families] <peerId><tablename> + The VerifyReplication command prints out GOODROWS + and BADROWS counters to indicate rows that did and did not replicate + correctly. - - - - Spreading Queue Failover Load - When replication is active, a subset of RegionServers in the source cluster are - responsible for shipping edits to the sink. This function must be failed over like all - other RegionServer functions should a process or node crash. The following configuration - settings are recommended for maintaining an even distribution of replication activity - over the remaining live servers in the source cluster: Set - replication.source.maxretriesmultiplier to - 300 (5 minutes), and - replication.sleep.before.failover to - 30000 (30 seconds) in the source cluster site configuration. - - - - - Preserving Tags During Replication - By default, the codec used for replication between clusters strips tags, such as - cell-level ACLs, from cells. To prevent the tags from being stripped, you can use a - different codec which does not strip them. Configure - hbase.replication.rpc.codec to use - org.apache.hadoop.hbase.codec.KeyValueCodecWithTags, on both the - source and sink RegionServers involved in the replication. This option was introduced in - HBASE-10322. - +
    - Replication Internals - - - Replication State in ZooKeeper + Detailed Information About Cluster Replication + +
    + Replication Architecture Overview + + + + +
    + + + + +
    + Life of a WAL Edit + A single WAL edit goes through several steps in order to be replicated to a slave + cluster. + + + When the slave responds correctly: - HBase replication maintains its state in ZooKeeper. By default, the state is - contained in the base node /hbase/replication. This node contains - two child nodes, the Peers znode and the RS znode. - - Replication may be disrupted and data loss may occur if you delete the - replication tree (/hbase/replication/) from ZooKeeper. This is - despite the information about invariants at . Follow progress on this issue at HBASE-10295. - + An HBase client uses a Put or Delete operation to manipulate data in HBase. + + + The region server writes the request to the WAL in a way allows it to be replayed + if it is not written successfully. + + + If the changed cell corresponds to a column family that is scoped for replication, + the edit is added to the queue for replication. + + + In a separate thread, the edit is read from the log, as part of a batch process. + Only the KeyValues that are eligible for replication are kept. Replicable KeyValues + are part of a column family whose schema is scoped GLOBAL, are not part of a catalog + such as hbase:meta, did not originate from the target slave cluster, and + have not already been consumed by the target slave cluster. + + + The edit is tagged with the master's UUID and added to a buffer. When the buffer + is filled, or the reader reaches the end of the file, the buffer is sent to a random + region server on the slave cluster. + + + The region server reads the edits sequentially and separates them into buffers, + one buffer per table. After all edits are read, each buffer is flushed using HTable, HBase's normal client. The master's UUID and the UUIDs of slaves + which have already consumed the data are preserved in the edits they are applied, in + order to prevent replication loops. + + + In the master, the offset for the WAL that is currently being replicated is + registered in ZooKeeper. + + + + When the slave does not respond: + + The first three steps, where the edit is inserted, are identical. + + + Again in a separate thread, the region server reads, filters, and edits the log + edits in the same way as above. The slave region server does not answer the RPC + call. + + + The master sleeps and tries again a configurable number of times. + + + If the slave region server is still not available, the master selects a new subset + of region server to replicate to, and tries again to send the buffer of edits. + + + Meanwhile, the WALs are rolled and stored in a queue in ZooKeeper. Logs that are + archived by their region server, by moving them from the + region server's log directory to a central log directory, will update their paths in + the in-memory queue of the replicating thread. + + + When the slave cluster is finally available, the buffer is applied in the same way + as during normal processing. The master region server will then replicate the backlog + of logs that accumulated during the outage. + + + + + Spreading Queue Failover Load + When replication is active, a subset of region servers in the source cluster is + responsible for shipping edits to the sink. This responsibility must be failed over like + all other region server functions should a process or node crash. 
The following + configuration settings are recommended for maintaining an even distribution of + replication activity over the remaining live servers in the source cluster: + + + + Set replication.source.maxretriesmultiplier to + 300. + + + Set replication.source.sleepforretries to 1 (1 + second). This value, combined with the value of + replication.source.maxretriesmultiplier, causes the retry cycle to last + about 5 minutes. - - - The Peers Znode - The peers znode is stored in - /hbase/replication/peers by default. It consists of a list of - all peer replication clusters, along with the status of each of them. The value of - each peer is its cluster key, which is provided in the HBase Shell. The cluster key - contains a list of ZooKeeper nodes in the cluster's quorum, the client port for the - ZooKeeper quorum, and the base znode for HBase in HDFS on that cluster. - + Set replication.sleep.before.failover to 30000 (30 + seconds) in the source cluster site configuration. + + + + + Preserving Tags During Replication + By default, the codec used for replication between clusters strips tags, such as + cell-level ACLs, from cells. To prevent the tags from being stripped, you can use a + different codec which does not strip them. Configure + hbase.replication.rpc.codec to use + org.apache.hadoop.hbase.codec.KeyValueCodecWithTags, on both the + source and sink RegionServers involved in the replication. This option was introduced in + HBASE-10322. + +
    + +
    + Replication Internals + + + Replication State in ZooKeeper + + HBase replication maintains its state in ZooKeeper. By default, the state is + contained in the base node /hbase/replication. This node + contains two child nodes, the Peers znode and the RS + znode. + + Replication may be disrupted and data loss may occur if you delete the + replication tree (/hbase/replication/) from ZooKeeper. This + is despite the information about invariants at . Follow progress on this issue at HBASE-10295. + + + + + The Peers Znode + + The peers znode is stored in + /hbase/replication/peers by default. It consists of a list of + all peer replication clusters, along with the status of each of them. The value of + each peer is its cluster key, which is provided in the HBase Shell. The cluster key + contains a list of ZooKeeper nodes in the cluster's quorum, the client port for the + ZooKeeper quorum, and the base znode for HBase in HDFS on that cluster. + /hbase/replication/peers /1 [Value: zk1.host.com,zk2.host.com,zk3.host.com:2181:/hbase] /2 [Value: zk5.host.com,zk6.host.com,zk7.host.com:2181:/hbase] - - Each peer has a child znode which indicates whether or not replication is enabled - on that cluster. These peer-state znodes do not contain any child znodes, but only - contain a Boolean value. This value is read and maintained by the - ReplicationPeer.PeerStateTracker class. - + + Each peer has a child znode which indicates whether or not replication is + enabled on that cluster. These peer-state znodes do not contain any child znodes, + but only contain a Boolean value. This value is read and maintained by the + ReplicationPeer.PeerStateTracker class. + /hbase/replication/peers /1/peer-state [Value: ENABLED] /2/peer-state [Value: DISABLED] - - - - - The RS Znode - - The rs znode contains a list of WAL logs which need to be replicated. - This list is divided into a set of queues organized by region server and the peer - cluster the region server is shipping the logs to. The rs znode has one child znode - for each region server in the cluster. The child znode name is the region server's - hostname, client port, and start code. This list includes both live and dead region - servers. - + + + + + The RS Znode + + The rs znode contains a list of WAL logs which need to be + replicated. This list is divided into a set of queues organized by region server and + the peer cluster the region server is shipping the logs to. The rs znode has one + child znode for each region server in the cluster. The child znode name is the + region server's hostname, client port, and start code. This list includes both live + and dead region servers. + /hbase/replication/rs /hostname.example.org,6020,1234 /hostname2.example.org,6020,2856 - - Each rs znode contains a list of WAL replication queues, one queue - for each peer cluster it replicates to. These queues are represented by child znodes - named by the cluster ID of the peer cluster they represent. - + + Each rs znode contains a list of WAL replication queues, one queue + for each peer cluster it replicates to. These queues are represented by child znodes + named by the cluster ID of the peer cluster they represent. + /hbase/replication/rs /hostname.example.org,6020,1234 /1 /2 - - Each queue has one child znode for each WAL log that still needs to be replicated. - the value of these child znodes is the last position that was replicated. This - position is updated each time a WAL log is replicated. 
- + + Each queue has one child znode for each WAL log that still needs to be + replicated. the value of these child znodes is the last position that was + replicated. This position is updated each time a WAL log is replicated. + /hbase/replication/rs /hostname.example.org,6020,1234 /1 23522342.23422 [VALUE: 254] 12340993.22342 [VALUE: 0] - - - - -
    -
    - Replication Configuration Options - - -
    - - Option - Description - Default - - - - - zookeeper.znode.parent - The name of the base ZooKeeper znode used for HBase - /hbase - - - zookeeper.znode.replication - The name of the base znode used for replication - replication - - - zookeeper.znode.replication.peers - The name of the peer znode - peers - - - zookeeper.znode.replication.peers.state - The name of peer-state znode - peer-state - - - zookeeper.znode.replication.rs - The name of the rs znode - rs - - - hbase.replication - Whether replication is enabled or disabled on a given cluster - false - - - eplication.sleep.before.failover - How many milliseconds a worker should sleep before attempting to replicate - a dead region server's WAL queues. - - - - replication.executor.workers - The number of region servers a given region server should attempt to - failover simultaneously. - 1 - - - - - - -
    - Replication Implementation Details - + + + + +
    +
    Choosing Region Servers to Replicate To When a master cluster region server initiates a replication source to a slave cluster, it first connects to the slave's ZooKeeper ensemble using the provided cluster key . It @@ -1802,17 +1904,16 @@ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart -- high, and this method works for clusters of any size. For example, a master cluster of 10 machines replicating to a slave cluster of 5 machines with a ratio of 10% causes the master cluster region servers to choose one machine each at random. - - A ZooKeeper watcher is placed on the - ${zookeeper.znode.parent}/rs node of the - slave cluster by each of the master cluster's region servers. This watch is used to monitor - changes in the composition of the slave cluster. When nodes are removed from the slave - cluster, or if nodes go down or come back up, the master cluster's region servers will - respond by selecting a new pool of slave region servers to replicate to. + A ZooKeeper watcher is placed on the + ${zookeeper.znode.parent}/rs node of + the slave cluster by each of the master cluster's region servers. This watch is used to + monitor changes in the composition of the slave cluster. When nodes are removed from the + slave cluster, or if nodes go down or come back up, the master cluster's region servers + will respond by selecting a new pool of slave region servers to replicate to. +
    - +
Keeping Track of Logs - Each master cluster region server has its own znode in the replication znodes hierarchy. It contains one znode per peer cluster (if 5 slave clusters, 5 znodes are created), and each of these contain a queue of WALs to process. Each of these queues will @@ -1820,26 +1921,26 @@ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart -- one slave cluster becomes unavailable for some time, the WALs should not be deleted, so they need to stay in the queue while the others are processed. See for an example. - - When a source is instantiated, it contains the current WAL that the region server is - writing to. During log rolling, the new file is added to the queue of each slave cluster's - znode just before it is made available. This ensures that all the sources are aware that a - new log exists before the region server is able to append edits into it, but this operations - is now more expensive. The queue items are discarded when the replication thread cannot read - more entries from a file (because it reached the end of the last block) and there are other - files in the queue. This means that if a source is up to date and replicates from the log - that the region server writes to, reading up to the "end" of the current file will not - delete the item in the queue. - A log can be archived if it is no longer used or if the number of logs exceeds - hbase.regionserver.maxlogs because the insertion rate is faster than regions - are flushed. When a log is archived, the source threads are notified that the path for that - log changed. If a particular source has already finished with an archived log, it will just - ignore the message. If the log is in the queue, the path will be updated in memory. If the - log is currently being replicated, the change will be done atomically so that the reader - doesn't attempt to open the file when has already been moved. Because moving a file is a - NameNode operation , if the reader is currently reading the log, it won't generate any - exception. - + When a source is instantiated, it contains the current WAL that the region server is + writing to. During log rolling, the new file is added to the queue of each slave cluster's + znode just before it is made available. This ensures that all the sources are aware that a + new log exists before the region server is able to append edits into it, but this + operation is now more expensive. The queue items are discarded when the replication + thread cannot read more entries from a file (because it reached the end of the last block) + and there are other files in the queue. This means that if a source is up to date and + replicates from the log that the region server writes to, reading up to the "end" of the + current file will not delete the item in the queue. + A log can be archived if it is no longer used or if the number of logs exceeds + hbase.regionserver.maxlogs because the insertion rate is faster than + regions are flushed. When a log is archived, the source threads are notified that the path + for that log changed. If a particular source has already finished with an archived log, it + will just ignore the message. If the log is in the queue, the path will be updated in + memory. If the log is currently being replicated, the change will be done atomically so + that the reader doesn't attempt to open the file when it has already been moved. Because + moving a file is a NameNode operation, if the reader is currently reading the log, it + won't generate any exception.
    +
Reading, Filtering and Sending Edits By default, a source attempts to read from a WAL and ship log entries to a sink as quickly as possible. Speed is limited by the filtering of log entries Only KeyValues that @@ -1848,16 +1949,17 @@ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart -- MB by default. With this configuration, a master cluster region server with three slaves would use at most 192 MB to store data to replicate. This does not account for the data which was filtered but not garbage collected. - - Once the maximum size of edits has been buffered or the reader reaces the end of the - WAL, the source thread stops reading and chooses at random a sink to replicate to (from the - list that was generated by keeping only a subset of slave region servers). It directly - issues a RPC to the chosen region server and waits for the method to return. If the RPC was - successful, the source determines whether the current file has been emptied or it contains - more data which needs to be read. If the file has been emptied, the source deletes the znode - in the queue. Otherwise, it registers the new offset in the log's znode. If the RPC threw an - exception, the source will retry 10 times before trying to find a different sink. - + Once the maximum size of edits has been buffered or the reader reaches the end of the + WAL, the source thread stops reading and chooses at random a sink to replicate to (from + the list that was generated by keeping only a subset of slave region servers). It directly + issues an RPC to the chosen region server and waits for the method to return. If the RPC + was successful, the source determines whether the current file has been emptied or it + contains more data which needs to be read. If the file has been emptied, the source + deletes the znode in the queue. Otherwise, it registers the new offset in the log's znode. + If the RPC threw an exception, the source will retry 10 times before trying to find a + different sink.
    +
    Cleaning Logs If replication is not enabled, the master's log-cleaning thread deletes old logs using a configured TTL. This TTL-based method does not work well with replication, because @@ -1866,33 +1968,32 @@ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart -- until it finds the log, while caching queues it has found. If the log is not found in any queues, the log will be deleted. The next time the cleaning process needs to look for a log, it starts by using its cached list. - - +
    +
    Region Server Failover When no region servers are failing, keeping track of the logs in ZooKeeper adds no value. Unfortunately, region servers do fail, and since ZooKeeper is highly available, it is useful for managing the transfer of the queues in the event of a failure. - - Each of the master cluster region servers keeps a watcher on every other region server, - in order to be notified when one dies (just as the master does). When a failure happens, - they all race to create a znode called lock inside the dead region - server's znode that contains its queues. The region server that creates it successfully then - transfers all the queues to its own znode, one at a time since ZooKeeper does not support - renaming queues. After queues are all transferred, they are deleted from the old location. - The znodes that were recovered are renamed with the ID of the slave cluster appended with - the name of the dead server. - Next, the master cluster region server creates one new source thread per copied queue, - and each of the source threads follows the read/filter/ship pattern. The main difference is - that those queues will never receive new data, since they do not belong to their new region - server. When the reader hits the end of the last log, the queue's znode is deleted and the - master cluster region server closes that replication source. - Given a master cluster with 3 region servers replicating to a single slave with id - 2, the following hierarchy represents what the znodes layout could be - at some point in time. The region servers' znodes all contain a peers - znode which contains a single queue. The znode names in the queues represent the actual file - names on HDFS in the form - address,port.timestamp. - + Each of the master cluster region servers keeps a watcher on every other region + server, in order to be notified when one dies (just as the master does). When a failure + happens, they all race to create a znode called lock inside the dead + region server's znode that contains its queues. The region server that creates it + successfully then transfers all the queues to its own znode, one at a time since ZooKeeper + does not support renaming queues. After queues are all transferred, they are deleted from + the old location. The znodes that were recovered are renamed with the ID of the slave + cluster appended with the name of the dead server. + Next, the master cluster region server creates one new source thread per copied queue, + and each of the source threads follows the read/filter/ship pattern. The main difference + is that those queues will never receive new data, since they do not belong to their new + region server. When the reader hits the end of the last log, the queue's znode is deleted + and the master cluster region server closes that replication source. + Given a master cluster with 3 region servers replicating to a single slave with id + 2, the following hierarchy represents what the znodes layout could be + at some point in time. The region servers' znodes all contain a peers + znode which contains a single queue. The znode names in the queues represent the actual + file names on HDFS in the form + address,port.timestamp. + /hbase/replication/rs/ 1.1.1.1,60020,123456780/ 2/ @@ -1907,11 +2008,11 @@ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart -- 2/ 1.1.1.3,60020.1280 (Contains a position) - Assume that 1.1.1.2 loses its ZooKeeper session. The survivors will race to create a - lock, and, arbitrarily, 1.1.1.3 wins. 
It will then start transferring all the queues to its - local peers znode by appending the name of the dead server. Right before 1.1.1.3 is able to - clean up the old znodes, the layout will look like the following: - + Assume that 1.1.1.2 loses its ZooKeeper session. The survivors will race to create a + lock, and, arbitrarily, 1.1.1.3 wins. It will then start transferring all the queues to + its local peers znode by appending the name of the dead server. Right before 1.1.1.3 is + able to clean up the old znodes, the layout will look like the following: + /hbase/replication/rs/ 1.1.1.1,60020,123456780/ 2/ @@ -1932,11 +2033,11 @@ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart -- 1.1.1.2,60020.1248 1.1.1.2,60020.1312 - Some time later, but before 1.1.1.3 is able to finish replicating the last WAL from - 1.1.1.2, it dies too. Some new logs were also created in the normal queues. The last region - server will then try to lock 1.1.1.3's znode and will begin transferring all the queues. The - new layout will be: - + Some time later, but before 1.1.1.3 is able to finish replicating the last WAL from + 1.1.1.2, it dies too. Some new logs were also created in the normal queues. The last + region server will then try to lock 1.1.1.3's znode and will begin transferring all the + queues. The new layout will be: + /hbase/replication/rs/ 1.1.1.1,60020,123456780/ 2/ @@ -1957,11 +2058,12 @@ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart -- 2-1.1.1.2,60020,123456790/ 1.1.1.2,60020.1312 (Contains a position) - - Replication Metrics - The following metrics are exposed at the global region server level and (since HBase - 0.95) at the peer level: - +
    + +
    + Replication Metrics + The following metrics are exposed at the global region server level and (since HBase + 0.95) at the peer level: source.sizeOfLogQueue @@ -1989,7 +2091,65 @@ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart -- - +
    +
    + Replication Configuration Options + + +
    + + Option + Description + Default + + + + + zookeeper.znode.parent + The name of the base ZooKeeper znode used for HBase + /hbase + + + zookeeper.znode.replication + The name of the base znode used for replication + replication + + + zookeeper.znode.replication.peers + The name of the peer znode + peers + + + zookeeper.znode.replication.peers.state + The name of peer-state znode + peer-state + + + zookeeper.znode.replication.rs + The name of the rs znode + rs + + + hbase.replication + Whether replication is enabled or disabled on a given + cluster + false + + + eplication.sleep.before.failover + How many milliseconds a worker should sleep before attempting to replicate + a dead region server's WAL queues. + + + + replication.executor.workers + The number of region servers a given region server should attempt to + failover simultaneously. + 1 + + + +
    + + + Apache HBase Orca +
    + Apache HBase Orca + + + + + +
+ An Orca is the Apache + HBase mascot. + See NOTICES.txt. We got our Orca logo here: http://www.vectorfree.com/jumping-orca + It is licensed under Creative Commons Attribution 3.0. See https://creativecommons.org/licenses/by/3.0/us/ + We changed the logo by stripping the colored background, inverting + it and then rotating it some. + +
    diff --git src/main/docbkx/other_info.xml src/main/docbkx/other_info.xml new file mode 100644 index 0000000..72ff274 --- /dev/null +++ src/main/docbkx/other_info.xml @@ -0,0 +1,83 @@ + + + + Other Information About HBase +
    HBase Videos + Introduction to HBase + + Introduction to HBase by Todd Lipcon (Chicago Data Summit 2011). + + Introduction to HBase by Todd Lipcon (2010). + + + + Building Real Time Services at Facebook with HBase by Jonathan Gray (Hadoop World 2011). + + HBase and Hadoop, Mixing Real-Time and Batch Processing at StumbleUpon by JD Cryans (Hadoop World 2010). + +
    +
    HBase Presentations (Slides) + Advanced HBase Schema Design by Lars George (Hadoop World 2011). + + Introduction to HBase by Todd Lipcon (Chicago Data Summit 2011). + + Getting The Most From Your HBase Install by Ryan Rawson, Jonathan Gray (Hadoop World 2009). + +
    +
    HBase Papers + BigTable by Google (2006). + + HBase and HDFS Locality by Lars George (2010). + + No Relation: The Mixed Blessings of Non-Relational Databases by Ian Varley (2009). + +
    +
    HBase Sites + Cloudera's HBase Blog has a lot of links to useful HBase information. + + CAP Confusion is a relevant entry for background information on + distributed storage systems. + + + + HBase Wiki has a page with a number of presentations. + + HBase RefCard from DZone. + +
    +
    HBase Books + HBase: The Definitive Guide by Lars George. + +
    +
    Hadoop Books + Hadoop: The Definitive Guide by Tom White. + +
    + +
    diff --git src/main/docbkx/performance.xml src/main/docbkx/performance.xml index 1757d3f..42ed79b 100644 --- src/main/docbkx/performance.xml +++ src/main/docbkx/performance.xml @@ -273,7 +273,7 @@ tableDesc.addFamily(cfDesc); If there is enough RAM, increasing this can help.
    -
    +
<varname>hbase.regionserver.checksum.verify</varname> Have HBase write the checksum into the datablock and save having to do the checksum seek whenever you read. diff --git src/main/docbkx/schema_design.xml src/main/docbkx/schema_design.xml index 65e64b0..e4632ec 100644 --- src/main/docbkx/schema_design.xml +++ src/main/docbkx/schema_design.xml @@ -509,6 +509,21 @@ public static byte[][] getHexSplits(String startKey, String endKey, int numRegio See HColumnDescriptor for more information. + Recent versions of HBase also support setting time to live on a per-cell basis. See HBASE-10560 for more + information. Cell TTLs are submitted as an attribute on mutation requests (Appends, + Increments, Puts, etc.) using Mutation#setTTL. If the TTL attribute is set, it will be applied + to all cells updated on the server by the operation. There are two notable differences + between cell TTL handling and ColumnFamily TTLs, as illustrated in the sketch below: + + + Cell TTLs are expressed in units of milliseconds instead of seconds. + + + A cell TTL cannot extend the effective lifetime of a cell beyond a ColumnFamily level + TTL setting. + +
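A minimal sketch of setting a per-cell TTL with Mutation#setTTL, assuming the 1.0-style client API (Connection, Table); the table, family, qualifier, and row names are placeholders.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class CellTtlSketch {
  public static void main(String[] args) throws Exception {
    try (Connection connection = ConnectionFactory.createConnection();
         Table table = connection.getTable(TableName.valueOf("t1"))) {
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      // Cell TTL is in milliseconds, not seconds, and cannot outlive the
      // column family's own TTL setting.
      put.setTTL(5 * 60 * 1000L);
      table.put(put);
    }
  }
}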
diff --git src/main/docbkx/security.xml src/main/docbkx/security.xml index d649f95..c9db37a 100644 --- src/main/docbkx/security.xml +++ src/main/docbkx/security.xml @@ -28,7 +28,37 @@ * limitations under the License. */ --> - Secure Apache HBase + Securing Apache HBase + HBase provides mechanisms to secure its various components and aspects, its integration + with the rest of the Hadoop infrastructure, and its interactions with clients and resources + outside Hadoop. +
+ Using Secure HTTP (HTTPS) for the Web UI + A default HBase install uses insecure HTTP connections for web UIs for the master and + region servers. To enable secure HTTP (HTTPS) connections instead, set + hadoop.ssl.enabled to true in + hbase-site.xml. This does not change the port used by the Web UI. To + change the port for the web UI for a given HBase component, configure that port's setting in + hbase-site.xml. These settings are: + + hbase.master.info.port + hbase.regionserver.info.port + + + If you enable secure HTTP, clients should avoid the insecure HTTP connection and should + connect to HBase using the https:// URL. Clients using the http:// URL will receive an HTTP + response of 200, but will not receive any data. The following exception is logged: + javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection? + This is because the same port is used for HTTP and HTTPS. + HBase uses Jetty for the Web UI. Without modifying Jetty itself, it does not seem + possible to configure Jetty to redirect one port to another on the same host. See Nick + Dimiduk's contribution on this Stack Overflow thread for more information. If you know how to fix this without + opening a second port for HTTPS, patches are appreciated. +
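The property keys involved are shown below on a Configuration object purely to illustrate their names; in practice they are set in hbase-site.xml, and the port values are example choices under that assumption, not values you must use.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WebUiHttpsSettings {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hadoop.ssl.enabled", true);         // serve the master/RS web UIs over HTTPS
    conf.setInt("hbase.master.info.port", 60010);        // example master UI port
    conf.setInt("hbase.regionserver.info.port", 60030);  // example region server UI port
    System.out.println("Master UI: https://master.example.com:"
        + conf.getInt("hbase.master.info.port", 60010));
  }
}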
Secure Client Access to Apache HBase @@ -1255,7 +1285,7 @@ public static void verifyAllowed(User user, AccessTestAction action, int count) Define the List of Visibility Labels HBase Shell - hbase< add_labels [ 'admin', 'service', 'developer', 'test' ] + hbase> add_labels [ 'admin', 'service', 'developer', 'test' ] Java API @@ -1283,9 +1313,9 @@ public static void addLabels() throws Exception { Associate Labels with Users HBase Shell - hbase< set_auths 'service', [ 'service' ] - hbase< set_auths 'testuser', [ 'test' ] - hbase< set_auths 'qa', [ 'test', 'developer' ] + hbase> set_auths 'service', [ 'service' ] + hbase> set_auths 'testuser', [ 'test' ] + hbase> set_auths 'qa', [ 'test', 'developer' ] Java API @@ -1309,9 +1339,9 @@ public void testSetAndGetUserAuths() throws Throwable { Clear Labels From Users HBase Shell - hbase< clear_auths 'service', [ 'service' ] - hbase< clear_auths 'testuser', [ 'test' ] - hbase< clear_auths 'qa', [ 'test', 'developer' ] + hbase> clear_auths 'service', [ 'service' ] + hbase> clear_auths 'testuser', [ 'test' ] + hbase> clear_auths 'qa', [ 'test', 'developer' ] Java API @@ -1333,11 +1363,11 @@ try { given version of the cell. HBase Shell - hbase< set_visibility 'user', 'admin|service|developer', \ + hbase> set_visibility 'user', 'admin|service|developer', \ { COLUMNS => 'i' } - hbase< set_visibility 'user', 'admin|service', \ + hbase> set_visibility 'user', 'admin|service', \ { COLUMNS => ' pii' } - hbase< COLUMNS => [ 'i', 'pii' ], \ + hbase> COLUMNS => [ 'i', 'pii' ], \ FILTER => "(PrefixFilter ('test'))" } diff --git src/main/docbkx/sql.xml src/main/docbkx/sql.xml new file mode 100644 index 0000000..40f43d6 --- /dev/null +++ src/main/docbkx/sql.xml @@ -0,0 +1,40 @@ + + + + SQL over HBase
    + Apache Phoenix + Apache Phoenix +
    +
    + Trafodion + Trafodion: Transactional SQL-on-HBase +
    + +
    diff --git src/main/docbkx/upgrading.xml src/main/docbkx/upgrading.xml index d5708a4..5d71e0f 100644 --- src/main/docbkx/upgrading.xml +++ src/main/docbkx/upgrading.xml @@ -240,7 +240,7 @@
    - Illustration of the replication architecture in HBase, as described in the prior - text. - + At the top of the diagram, the San Jose and Tokyo clusters, shown in red, + replicate changes to each other, and each also replicates changes to a User Data and a + Payment Data cluster. + Each cluster in the second row, shown in blue, replicates its changes to the All Data + Backup 1 cluster, shown in grey. The All Data Backup 1 cluster replicates changes to the + All Data Backup 2 cluster (also shown in grey), as well as the Data Analysis cluster + (shown in green). All Data Backup 2 also propagates any of its own changes back to All + Data Backup 1. + The Data Analysis cluster runs MapReduce jobs on its data, and then pushes the + processed data back to the San Jose and Tokyo clusters. + + Illustration of the replication architecture in HBase, as described in the prior + text. +
    -
    +
    HBase API surface HBase has a lot of API points, but for the compatibility matrix above, we differentiate between Client API, Limited Private API, and Private API. HBase uses a version of Hadoop's Interface classification. HBase's Interface classification classes can be found here. diff --git src/main/docbkx/ycsb.xml src/main/docbkx/ycsb.xml new file mode 100644 index 0000000..695614c --- /dev/null +++ src/main/docbkx/ycsb.xml @@ -0,0 +1,36 @@ + + + + YCSB + YCSB: The + Yahoo! Cloud Serving Benchmark and HBase + TODO: Describe how YCSB is poor for putting up a decent cluster load. + TODO: Describe setup of YCSB for HBase. In particular, presplit your tables before you + start a run. See HBASE-4163 Create Split Strategy for YCSB Benchmark for why and a little shell + command for how to do it. + Ted Dunning redid YCSB so it's mavenized and added facility for verifying workloads. See + Ted Dunning's YCSB. + + + diff --git src/main/site/resources/css/site.css src/main/site/resources/css/site.css index f26d03c..17f0ff0 100644 --- src/main/site/resources/css/site.css +++ src/main/site/resources/css/site.css @@ -72,8 +72,10 @@ h4 { #banner { background: none; + padding: 10px; } +/* #banner img { padding: 10px; margin: auto; @@ -82,6 +84,7 @@ h4 { float: center; height:; } + */ #breadcrumbs { background-image: url(); diff --git src/main/site/resources/images/bc_basic.png src/main/site/resources/images/bc_basic.png new file mode 100644 index 0000000..231de93 Binary files /dev/null and src/main/site/resources/images/bc_basic.png differ diff --git src/main/site/resources/images/bc_config.png src/main/site/resources/images/bc_config.png new file mode 100644 index 0000000..53250cf Binary files /dev/null and src/main/site/resources/images/bc_config.png differ diff --git src/main/site/resources/images/bc_l1.png src/main/site/resources/images/bc_l1.png new file mode 100644 index 0000000..36d7e55 Binary files /dev/null and src/main/site/resources/images/bc_l1.png differ diff --git src/main/site/resources/images/bc_l2_buckets.png src/main/site/resources/images/bc_l2_buckets.png new file mode 100644 index 0000000..5163928 Binary files /dev/null and src/main/site/resources/images/bc_l2_buckets.png differ diff --git src/main/site/resources/images/bc_stats.png src/main/site/resources/images/bc_stats.png new file mode 100644 index 0000000..d8c6384 Binary files /dev/null and src/main/site/resources/images/bc_stats.png differ diff --git src/main/site/resources/images/coprocessor_stats.png src/main/site/resources/images/coprocessor_stats.png new file mode 100644 index 0000000..2fc8703 Binary files /dev/null and src/main/site/resources/images/coprocessor_stats.png differ diff --git src/main/site/resources/images/data_block_diff_encoding.png src/main/site/resources/images/data_block_diff_encoding.png new file mode 100644 index 0000000..0bd03a4 Binary files /dev/null and src/main/site/resources/images/data_block_diff_encoding.png differ diff --git src/main/site/resources/images/data_block_no_encoding.png src/main/site/resources/images/data_block_no_encoding.png new file mode 100644 index 0000000..56498b4 Binary files /dev/null and src/main/site/resources/images/data_block_no_encoding.png differ diff --git src/main/site/resources/images/data_block_prefix_encoding.png src/main/site/resources/images/data_block_prefix_encoding.png new file mode 100644 index 0000000..4271847 Binary files /dev/null and src/main/site/resources/images/data_block_prefix_encoding.png differ diff --git 
src/main/site/resources/images/hbase_replication_diagram.jpg src/main/site/resources/images/hbase_replication_diagram.jpg new file mode 100644 index 0000000..c110309 Binary files /dev/null and src/main/site/resources/images/hbase_replication_diagram.jpg differ diff --git src/main/site/resources/images/jumping-orca_rotated.png src/main/site/resources/images/jumping-orca_rotated.png new file mode 100644 index 0000000..4c2c72e Binary files /dev/null and src/main/site/resources/images/jumping-orca_rotated.png differ diff --git src/main/site/resources/images/jumping-orca_rotated.xcf src/main/site/resources/images/jumping-orca_rotated.xcf new file mode 100644 index 0000000..01be6ff Binary files /dev/null and src/main/site/resources/images/jumping-orca_rotated.xcf differ diff --git src/main/site/resources/images/jumping-orca_rotated_12percent.png src/main/site/resources/images/jumping-orca_rotated_12percent.png new file mode 100644 index 0000000..1942f9a Binary files /dev/null and src/main/site/resources/images/jumping-orca_rotated_12percent.png differ diff --git src/main/site/resources/images/jumping-orca_rotated_25percent.png src/main/site/resources/images/jumping-orca_rotated_25percent.png new file mode 100644 index 0000000..219c657 Binary files /dev/null and src/main/site/resources/images/jumping-orca_rotated_25percent.png differ diff --git src/main/site/resources/images/region_states.png src/main/site/resources/images/region_states.png new file mode 100644 index 0000000..ba69e97 Binary files /dev/null and src/main/site/resources/images/region_states.png differ diff --git src/main/site/site.vm src/main/site/site.vm deleted file mode 100644 index 0e25195..0000000 --- src/main/site/site.vm +++ /dev/null @@ -1,547 +0,0 @@ - -#* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. -*# - -#macro ( link $href $name $target $img $position $alt $border $width $height ) - #set ( $linkTitle = ' title="' + $name + '"' ) - #if( $target ) - #set ( $linkTarget = ' target="' + $target + '"' ) - #else - #set ( $linkTarget = "" ) - #end - #if ( ( $href.toLowerCase().startsWith("http") || $href.toLowerCase().startsWith("https") ) ) - #set ( $linkClass = ' class="externalLink"' ) - #else - #set ( $linkClass = "" ) - #end - #if ( $img ) - #if ( $position == "left" ) - #image($img $alt $border $width $height)$name - #else - $name #image($img $alt $border $width $height) - #end - #else - $name - #end -#end -## -#macro ( image $img $alt $border $width $height ) - #if( $img ) - #if ( ! 
( $img.toLowerCase().startsWith("http") || $img.toLowerCase().startsWith("https") ) ) - #set ( $imgSrc = $PathTool.calculateLink( $img, $relativePath ) ) - #set ( $imgSrc = $imgSrc.replaceAll( "\\", "/" ) ) - #set ( $imgSrc = ' src="' + $imgSrc + '"' ) - #else - #set ( $imgSrc = ' src="' + $img + '"' ) - #end - #if( $alt ) - #set ( $imgAlt = ' alt="' + $alt + '"' ) - #else - #set ( $imgAlt = ' alt=""' ) - #end - #if( $border ) - #set ( $imgBorder = ' border="' + $border + '"' ) - #else - #set ( $imgBorder = "" ) - #end - #if( $width ) - #set ( $imgWidth = ' width="' + $width + '"' ) - #else - #set ( $imgWidth = "" ) - #end - #if( $height ) - #set ( $imgHeight = ' height="' + $height + '"' ) - #else - #set ( $imgHeight = "" ) - #end - - #end -#end -#macro ( banner $banner $id ) - #if ( $banner ) - #if( $banner.href ) - - #else - - #end - #end -#end -## -#macro ( links $links ) - #set ( $counter = 0 ) - #foreach( $item in $links ) - #set ( $counter = $counter + 1 ) - #set ( $currentItemHref = $PathTool.calculateLink( $item.href, $relativePath ) ) - #set ( $currentItemHref = $currentItemHref.replaceAll( "\\", "/" ) ) - #link( $currentItemHref $item.name $item.target $item.img $item.position $item.alt $item.border $item.width $item.height ) - #if ( $links.size() > $counter ) - | - #end - #end -#end -## -#macro ( breadcrumbs $breadcrumbs ) - #set ( $counter = 0 ) - #foreach( $item in $breadcrumbs ) - #set ( $counter = $counter + 1 ) - #set ( $currentItemHref = $PathTool.calculateLink( $item.href, $relativePath ) ) - #set ( $currentItemHref = $currentItemHref.replaceAll( "\\", "/" ) ) -## - #if ( $currentItemHref == $alignedFileName || $currentItemHref == "" ) - $item.name - #else - #link( $currentItemHref $item.name $item.target $item.img $item.position $item.alt $item.border $item.width $item.height ) - #end - #if ( $breadcrumbs.size() > $counter ) - > - #end - #end -#end -## -#macro ( displayTree $display $item ) - #if ( $item && $item.items && $item.items.size() > 0 ) - #foreach( $subitem in $item.items ) - #set ( $subitemHref = $PathTool.calculateLink( $subitem.href, $relativePath ) ) - #set ( $subitemHref = $subitemHref.replaceAll( "\\", "/" ) ) - #if ( $alignedFileName == $subitemHref ) - #set ( $display = true ) - #end -## - #displayTree( $display $subitem ) - #end - #end -#end -## -#macro ( menuItem $item ) - #set ( $collapse = "none" ) - #set ( $currentItemHref = $PathTool.calculateLink( $item.href, $relativePath ) ) - #set ( $currentItemHref = $currentItemHref.replaceAll( "\\", "/" ) ) -## - #if ( $item && $item.items && $item.items.size() > 0 ) - #if ( $item.collapse == false ) - #set ( $collapse = "expanded" ) - #else - ## By default collapsed - #set ( $collapse = "collapsed" ) - #end -## - #set ( $display = false ) - #displayTree( $display $item ) -## - #if ( $alignedFileName == $currentItemHref || $display ) - #set ( $collapse = "expanded" ) - #end - #end -
  • - #if ( $item.img ) - #if ( $item.position == "left" ) - #if ( $alignedFileName == $currentItemHref ) - #image($item.img $item.alt $item.border $item.width $item.height) $item.name - #else - #link($currentItemHref $item.name $item.target $item.img $item.position $item.alt $item.border $item.width $item.height) - #end - #else - #if ( $alignedFileName == $currentItemHref ) - $item.name #image($item.img $item.alt $item.border $item.width $item.height) - #else - #link($currentItemHref $item.name $item.target $item.img $item.position $item.alt $item.border $item.width $item.height) - #end - #end - #else - #if ( $alignedFileName == $currentItemHref ) - $item.name - #else - #link( $currentItemHref $item.name $item.target $item.img $item.position $item.alt $item.border $item.width $item.height ) - #end - #end - #if ( $item && $item.items && $item.items.size() > 0 ) - #if ( $collapse == "expanded" ) -
      - #foreach( $subitem in $item.items ) - #menuItem( $subitem ) - #end -
    - #end - #end -
  • -#end -## -#macro ( mainMenu $menus ) - #foreach( $menu in $menus ) - #if ( $menu.name ) - #if ( $menu.img ) - #if( $menu.position ) - #set ( $position = $menu.position ) - #else - #set ( $position = "left" ) - #end -## - #if ( ! ( $menu.img.toLowerCase().startsWith("http") || $menu.img.toLowerCase().startsWith("https") ) ) - #set ( $src = $PathTool.calculateLink( $menu.img, $relativePath ) ) - #set ( $src = $src.replaceAll( "\\", "/" ) ) - #set ( $src = ' src="' + $src + '"' ) - #else - #set ( $src = ' src="' + $menu.img + '"' ) - #end -## - #if( $menu.alt ) - #set ( $alt = ' alt="' + $menu.alt + '"' ) - #else - #set ( $alt = ' alt="' + $menu.name + '"' ) - #end -## - #if( $menu.border ) - #set ( $border = ' border="' + $menu.border + '"' ) - #else - #set ( $border = ' border="0"' ) - #end -## - #if( $menu.width ) - #set ( $width = ' width="' + $menu.width + '"' ) - #else - #set ( $width = "" ) - #end - #if( $menu.height ) - #set ( $height = ' height="' + $menu.height + '"' ) - #else - #set ( $height = "" ) - #end -## - #set ( $img = '" ) -## - #if ( $position == "left" ) -
    $img $menu.name
    - #else -
    $menu.name $img
    - #end - #else -
    $menu.name
    - #end - #end - #if ( $menu.items && $menu.items.size() > 0 ) -
      - #foreach( $item in $menu.items ) - #menuItem( $item ) - #end -
    - #end - #end -#end -## -#macro ( copyright ) - #if ( $project ) - #if ( ${project.organization} && ${project.organization.name} ) - #set ( $period = "" ) - #else - #set ( $period = "." ) - #end -## - #set ( $currentYear = ${currentDate.year} + 1900 ) -## - #if ( ${project.inceptionYear} && ( ${project.inceptionYear} != ${currentYear.toString()} ) ) - ${project.inceptionYear}-${currentYear}${period} - #else - ${currentYear}${period} - #end -## - #if ( ${project.organization} ) - #if ( ${project.organization.name} && ${project.organization.url} ) - ${project.organization.name}. - #elseif ( ${project.organization.name} ) - ${project.organization.name}. - #end - #end - #end -#end -## -#macro ( publishDate $position $publishDate $version ) - #if ( $publishDate && $publishDate.format ) - #set ( $format = $publishDate.format ) - #else - #set ( $format = "yyyy-MM-dd" ) - #end -## - $dateFormat.applyPattern( $format ) -## - #set ( $dateToday = $dateFormat.format( $currentDate ) ) -## - #if ( $publishDate && $publishDate.position ) - #set ( $datePosition = $publishDate.position ) - #else - #set ( $datePosition = "left" ) - #end -## - #if ( $version ) - #if ( $version.position ) - #set ( $versionPosition = $version.position ) - #else - #set ( $versionPosition = "left" ) - #end - #else - #set ( $version = "" ) - #set ( $versionPosition = "left" ) - #end -## - #set ( $breadcrumbs = $decoration.body.breadcrumbs ) - #set ( $links = $decoration.body.links ) - - #if ( $datePosition.equalsIgnoreCase( "right" ) && $links && $links.size() > 0 ) - #set ( $prefix = " |" ) - #else - #set ( $prefix = "" ) - #end -## - #if ( $datePosition.equalsIgnoreCase( $position ) ) - #if ( ( $datePosition.equalsIgnoreCase( "right" ) ) || ( $datePosition.equalsIgnoreCase( "bottom" ) ) ) - $prefix $i18n.getString( "site-renderer", $locale, "template.lastpublished" ): $dateToday - #if ( $versionPosition.equalsIgnoreCase( $position ) ) -  | $i18n.getString( "site-renderer", $locale, "template.version" ): ${project.version} - #end - #elseif ( ( $datePosition.equalsIgnoreCase( "navigation-bottom" ) ) || ( $datePosition.equalsIgnoreCase( "navigation-top" ) ) ) -
    - $i18n.getString( "site-renderer", $locale, "template.lastpublished" ): $dateToday - #if ( $versionPosition.equalsIgnoreCase( $position ) ) -  | $i18n.getString( "site-renderer", $locale, "template.version" ): ${project.version} - #end -
    - #elseif ( $datePosition.equalsIgnoreCase("left") ) -
    - $i18n.getString( "site-renderer", $locale, "template.lastpublished" ): $dateToday - #if ( $versionPosition.equalsIgnoreCase( $position ) ) -  | $i18n.getString( "site-renderer", $locale, "template.version" ): ${project.version} - #end - #if ( $breadcrumbs && $breadcrumbs.size() > 0 ) - | #breadcrumbs( $breadcrumbs ) - #end -
    - #end - #elseif ( $versionPosition.equalsIgnoreCase( $position ) ) - #if ( ( $versionPosition.equalsIgnoreCase( "right" ) ) || ( $versionPosition.equalsIgnoreCase( "bottom" ) ) ) - $prefix $i18n.getString( "site-renderer", $locale, "template.version" ): ${project.version} - #elseif ( ( $versionPosition.equalsIgnoreCase( "navigation-bottom" ) ) || ( $versionPosition.equalsIgnoreCase( "navigation-top" ) ) ) -
    - $i18n.getString( "site-renderer", $locale, "template.version" ): ${project.version} -
    - #elseif ( $versionPosition.equalsIgnoreCase("left") ) -
    - $i18n.getString( "site-renderer", $locale, "template.version" ): ${project.version} - #if ( $breadcrumbs && $breadcrumbs.size() > 0 ) - | #breadcrumbs( $breadcrumbs ) - #end -
    - #end - #elseif ( $position.equalsIgnoreCase( "left" ) ) - #if ( $breadcrumbs && $breadcrumbs.size() > 0 ) -
    - #breadcrumbs( $breadcrumbs ) -
    - #end - #end -#end -## -#macro ( poweredByLogo $poweredBy ) - #if( $poweredBy ) - #foreach ($item in $poweredBy) - #if( $item.href ) - #set ( $href = $PathTool.calculateLink( $item.href, $relativePath ) ) - #set ( $href = $href.replaceAll( "\\", "/" ) ) - #else - #set ( $href="http://maven.apache.org/" ) - #end -## - #if( $item.name ) - #set ( $name = $item.name ) - #else - #set ( $name = $i18n.getString( "site-renderer", $locale, "template.builtby" ) ) - #set ( $name = "${name} Maven" ) - #end -## - #if( $item.img ) - #set ( $img = $item.img ) - #else - #set ( $img = "images/logos/maven-feather.png" ) - #end -## - #if ( ! ( $img.toLowerCase().startsWith("http") || $img.toLowerCase().startsWith("https") ) ) - #set ( $img = $PathTool.calculateLink( $img, $relativePath ) ) - #set ( $img = $src.replaceAll( "\\", "/" ) ) - #end -## - #if( $item.alt ) - #set ( $alt = ' alt="' + $item.alt + '"' ) - #else - #set ( $alt = ' alt="' + $name + '"' ) - #end -## - #if( $item.border ) - #set ( $border = ' border="' + $item.border + '"' ) - #else - #set ( $border = "" ) - #end -## - #if( $item.width ) - #set ( $width = ' width="' + $item.width + '"' ) - #else - #set ( $width = "" ) - #end - #if( $item.height ) - #set ( $height = ' height="' + $item.height + '"' ) - #else - #set ( $height = "" ) - #end -## - - - - #end - #if( $poweredBy.isEmpty() ) - - $i18n.getString( - - #end - #else - - $i18n.getString( - - #end -#end -## - - - - $title - - - -#foreach( $author in $authors ) - -#end -#if ( $dateCreation ) - -#end -#if ( $dateRevision ) - -#end -#if ( $locale ) - -#end - #if ( $decoration.body.head ) - #foreach( $item in $decoration.body.head.getChildren() ) - ## Workaround for DOXIA-150 due to a non-desired behaviour in p-u - ## @see org.codehaus.plexus.util.xml.Xpp3Dom#toString() - ## @see org.codehaus.plexus.util.xml.Xpp3Dom#toUnescapedString() - #set ( $documentHeader = "" ) - #set ( $documentHeader = $documentHeader.replaceAll( "\\", "" ) ) - #if ( $item.name == "script" ) - $StringUtils.replace( $item.toUnescapedString(), $documentHeader, "" ) - #else - $StringUtils.replace( $item.toString(), $documentHeader, "" ) - #end - #end - #end - ## $headContent - - - - - - -
    - -
    -
    -
    - $bodyContent -
    -
    -
    -
    -
    - - - diff --git src/main/site/site.xml src/main/site/site.xml index 0d60e00..14303ba 100644 --- src/main/site/site.xml +++ src/main/site/site.xml @@ -19,18 +19,34 @@ */ --> - + + lt.velykis.maven.skins + reflow-maven-skin + 1.1.1 + + + + bootswatch-spacelab + + Apache HBase Project + ^Documentation + 0.94 Documentation|ASF + + + Apache HBase images/hbase_logo.png http://hbase.apache.org/ - - - + + Apache HBase Orca + images/jumping-orca_rotated_25percent.png + http://hbase.apache.org/ + @@ -47,35 +63,31 @@ - + - - + + - - - + + + - - - + + + - - - + + + - - org.apache.maven.skins - maven-stylus-skin - diff --git src/main/site/xdoc/index.xml src/main/site/xdoc/index.xml index 964d887..2956e09 100644 --- src/main/site/xdoc/index.xml +++ src/main/site/xdoc/index.xml @@ -68,6 +68,13 @@ Apache HBase is an open-source, distributed, versioned, non-relational database

    +

    February 17th, 2015 HBase meetup around Strata+Hadoop World in San Jose

    +

    January 15th, 2015 HBase meetup @ AppDynamics in San Francisco

    +

    November 20th, 2014 HBase meetup @ WANdisco in San Ramon

    +

    October 27th, 2014 HBase Meetup @ Apple in Cupertino

    +

    October 15th, 2014 HBase Meetup @ Google on the night before Strata/HW in NYC

    +

    September 25th, 2014 HBase Meetup @ Continuuity in Palo Alto

    +

    August 28th, 2014 HBase Meetup @ Sift Science in San Francisco

    July 17th, 2014 HBase Meetup @ HP in Sunnyvale

    June 5th, 2014 HBase BOF at Hadoop Summit, San Jose Convention Center

    May 5th, 2014 HBaseCon2014 at the Hilton San Francisco on Union Square

    diff --git src/main/site/xdoc/replication.xml src/main/site/xdoc/replication.xml index 97aaf51..2633f08 100644 --- src/main/site/xdoc/replication.xml +++ src/main/site/xdoc/replication.xml @@ -26,520 +26,6 @@ -
    -

    - The replication feature of Apache HBase (TM) provides a way to copy data between HBase deployments. It - can serve as a disaster recovery solution and can contribute to provide - higher availability at the HBase layer. It can also serve more practically; - for example, as a way to easily copy edits from a web-facing cluster to a "MapReduce" - cluster which will process old and new data and ship back the results - automatically. -

    -

    - The basic architecture pattern used for Apache HBase replication is (HBase cluster) master-push; - it is much easier to keep track of what’s currently being replicated since - each region server has its own write-ahead-log (aka WAL or HLog), just like - other well known solutions like MySQL master/slave replication where - there’s only one bin log to keep track of. One master cluster can - replicate to any number of slave clusters, and each region server will - participate to replicate their own stream of edits. For more information - on the different properties of master/slave replication and other types - of replication, please consult - How Google Serves Data From Multiple Datacenters. -

    -

    - The replication is done asynchronously, meaning that the clusters can - be geographically distant, the links between them can be offline for - some time, and rows inserted on the master cluster won’t be - available at the same time on the slave clusters (eventual consistency). -

    -

    - The replication format used in this design is conceptually the same as - - MySQL’s statement-based replication . Instead of SQL statements, whole - WALEdits (consisting of multiple cell inserts coming from the clients' - Put and Delete) are replicated in order to maintain atomicity. -

    -

    - The HLogs from each region server are the basis of HBase replication, - and must be kept in HDFS as long as they are needed to replicate data - to any slave cluster. Each RS reads from the oldest log it needs to - replicate and keeps the current position inside ZooKeeper to simplify - failure recovery. That position can be different for every slave - cluster, same for the queue of HLogs to process. -

    -

    - The clusters participating in replication can be of asymmetric sizes - and the master cluster will do its “best effort” to balance the stream - of replication on the slave clusters by relying on randomization. -

    -

    - As of version 0.92, Apache HBase supports master/master and cyclic - replication as well as replication to multiple slaves. -

    - -
    -
    -

    - The guide on enabling and using cluster replication is contained - in the API documentation shipped with your Apache HBase distribution. -

    -

    - The most up-to-date documentation is - - available at this address. -

    -
    -
    -

    - The following sections describe the life of a single edit going from a - client that communicates with a master cluster all the way to a single - slave cluster. -

    -
    -

    - The client uses an API that sends a Put, Delete or ICV to a region - server. The key values are transformed into a WALEdit by the region - server and is inspected by the replication code that, for each family - that is scoped for replication, adds the scope to the edit. The edit - is appended to the current WAL and is then applied to its MemStore. -

    -

    - In a separate thread, the edit is read from the log (as part of a batch) - and only the KVs that are replicable are kept (that is, that they are part - of a family scoped GLOBAL in the family's schema, non-catalog so not - hbase:meta or -ROOT-, and did not originate in the target slave cluster - in - case of cyclic replication). -

    -

    - The edit is then tagged with the master's cluster UUID. - When the buffer is filled, or the reader hits the end of the file, - the buffer is sent to a random region server on the slave cluster. -

    -

    - Synchronously, the region server that receives the edits reads them - sequentially and separates each of them into buffers, one per table. - Once all edits are read, each buffer is flushed using HTable, the normal - HBase client.The master's cluster UUID is retained in the edits applied at - the slave cluster in order to allow cyclic replication. -

    -

    - Back in the master cluster's region server, the offset for the current - WAL that's being replicated is registered in ZooKeeper. -

    -
    -
    -

    - The edit is inserted in the same way. -

    -

    - In the separate thread, the region server reads, filters and buffers - the log edits the same way as during normal processing. The slave - region server that's contacted doesn't answer to the RPC, so the master - region server will sleep and retry up to a configured number of times. - If the slave RS still isn't available, the master cluster RS will select a - new subset of RS to replicate to and will retry sending the buffer of - edits. -

    -

    - In the mean time, the WALs will be rolled and stored in a queue in - ZooKeeper. Logs that are archived by their region server (archiving is - basically moving a log from the region server's logs directory to a - central logs archive directory) will update their paths in the in-memory - queue of the replicating thread. -

    -

    - When the slave cluster is finally available, the buffer will be applied - the same way as during normal processing. The master cluster RS will then - replicate the backlog of logs. -

    -
    -
    -
    -

    - This section describes in depth how each of replication's internal - features operate. -

    -
    -

    - HBase replication maintains all of its state in Zookeeper. By default, this state is - contained in the base znode: -

    -
    -                /hbase/replication
    -        
    -

    - There are two major child znodes in the base replication znode: -

      -
    • Peers znode: /hbase/replication/peers
    • -
    • RS znode: /hbase/replication/rs
    • -
    -

    -
    -

    - The peers znode contains a list of all peer replication clusters and the - current replication state of those clusters. It has one child peer znode - for each peer cluster. The peer znode is named with the cluster id provided - by the user in the HBase shell. The value of the peer znode contains - the peers cluster key provided by the user in the HBase Shell. The cluster key - contains a list of zookeeper nodes in the clusters quorum, the client port for the - zookeeper quorum, and the base znode for HBase - (i.e. “zk1.host.com,zk2.host.com,zk3.host.com:2181:/hbase”). -

    -
    -                /hbase/replication/peers
    -                    /1 [Value: zk1.host.com,zk2.host.com,zk3.host.com:2181:/hbase]
    -                    /2 [Value: zk5.host.com,zk6.host.com,zk7.host.com:2181:/hbase]
    -            
    -

    - Each of these peer znodes has a child znode that indicates whether or not - replication is enabled on that peer cluster. These peer-state znodes do not - have child znodes and simply contain a boolean value (i.e. ENABLED or DISABLED). - This value is read/maintained by the ReplicationPeer.PeerStateTracker class. -

    -
    -                /hbase/replication/peers
    -                    /1/peer-state [Value: ENABLED]
    -                    /2/peer-state [Value: DISABLED]
    -            
    -
    -
    -

    - The rs znode contains a list of all outstanding HLog files in the cluster - that need to be replicated. The list is divided into a set of queues organized by - region server and the peer cluster the region server is shipping the HLogs to. The - rs znode has one child znode for each region server in the cluster. The child - znode name is simply the regionserver name (a concatenation of the region server’s - hostname, client port and start code). These region servers could either be dead or alive. -

    -
    -                /hbase/replication/rs
    -                    /hostname.example.org,6020,1234
    -                    /hostname2.example.org,6020,2856
    -            
    -

    - Within each region server znode, the region server maintains a set of HLog replication - queues. Each region server has one queue for every peer cluster it replicates to. - These queues are represented by child znodes named using the cluster id of the peer - cluster they represent (see the peer znode section). -

    -
    -                /hbase/replication/rs
    -                    /hostname.example.org,6020,1234
    -                        /1
    -                        /2
    -            
    -

    - Each queue has one child znode for every HLog that still needs to be replicated. - The value of these HLog child znodes is the latest position that has been replicated. - This position is updated every time a HLog entry is replicated. -

    -
    -                /hbase/replication/rs
    -                    /hostname.example.org,6020,1234
    -                        /1
    -                            23522342.23422 [VALUE: 254]
    -                            12340993.22342 [VALUE: 0]
    -            
    -
    -
    -
    -
    -

    - All of the base znode names are configurable through parameters: -

    - - - - - - - - - - - - - - - - - - - - - - - - - -
    ParameterDefault Value
    zookeeper.znode.parent/hbase
    zookeeper.znode.replicationreplication
    zookeeper.znode.replication.peerspeers
    zookeeper.znode.replication.peers.statepeer-state
    zookeeper.znode.replication.rsrs
    -

    - The default replication znode structure looks like the following: -

    -
    -                /hbase/replication/peers/{peerId}/peer-state
    -                /hbase/replication/rs
    -            
    -
    -
    -
      -
    • hbase.replication (Default: false) - Controls whether replication is enabled - or disabled for the cluster.
    • -
    • replication.sleep.before.failover (Default: 2000) - The amount of time a failover - worker waits before attempting to replicate a dead region server’s HLog queues.
    • -
    • replication.executor.workers (Default: 1) - The number of dead region servers - one region server should attempt to failover simultaneously.
    • -
    -
    -
    -
    -

    - When a master cluster RS initiates a replication source to a slave cluster, - it first connects to the slave's ZooKeeper ensemble using the provided - cluster key (that key is composed of the value of hbase.zookeeper.quorum, - zookeeper.znode.parent and hbase.zookeeper.property.clientPort). It - then scans the "rs" directory to discover all the available sinks - (region servers that are accepting incoming streams of edits to replicate) - and will randomly choose a subset of them using a configured - ratio (which has a default value of 10%). For example, if a slave - cluster has 150 machines, 15 will be chosen as potential recipient for - edits that this master cluster RS will be sending. Since this is done by all - master cluster RSs, the probability that all slave RSs are used is very high, - and this method works for clusters of any size. For example, a master cluster - of 10 machines replicating to a slave cluster of 5 machines with a ratio - of 10% means that the master cluster RSs will choose one machine each - at random, thus the chance of overlapping and full usage of the slave - cluster is higher. -

    -

    - A ZK watcher is placed on the ${zookeeper.znode.parent}/rs node of - the slave cluster by each of the master cluster's region servers. - This watch is used to monitor changes in the composition of the - slave cluster. When nodes are removed from the slave cluster (or - if nodes go down and/or come back up), the master cluster's region - servers will respond by selecting a new pool of slave region servers - to replicate to. -

    -
    -
    -

    - Every master cluster RS has its own znode in the replication znodes hierarchy. - It contains one znode per peer cluster (if 5 slave clusters, 5 znodes - are created), and each of these contain a queue - of HLogs to process. Each of these queues will track the HLogs created - by that RS, but they can differ in size. For example, if one slave - cluster becomes unavailable for some time then the HLogs should not be deleted, - thus they need to stay in the queue (while the others are processed). - See the section named "Region server failover" for an example. -

    -

    - When a source is instantiated, it contains the current HLog that the - region server is writing to. During log rolling, the new file is added - to the queue of each slave cluster's znode just before it's made available. - This ensures that all the sources are aware that a new log exists - before HLog is able to append edits into it, but this operations is - now more expensive. - The queue items are discarded when the replication thread cannot read - more entries from a file (because it reached the end of the last block) - and that there are other files in the queue. - This means that if a source is up-to-date and replicates from the log - that the region server writes to, reading up to the "end" of the - current file won't delete the item in the queue. -

    -

    - When a log is archived (because it's not used anymore or because there's - too many of them per hbase.regionserver.maxlogs typically because insertion - rate is faster than region flushing), it will notify the source threads that the path - for that log changed. If the a particular source was already done with - it, it will just ignore the message. If it's in the queue, the path - will be updated in memory. If the log is currently being replicated, - the change will be done atomically so that the reader doesn't try to - open the file when it's already moved. Also, moving a file is a NameNode - operation so, if the reader is currently reading the log, it won't - generate any exception. -

    -
    -
    -

    - By default, a source will try to read from a log file and ship log - entries as fast as possible to a sink. This is first limited by the - filtering of log entries; only KeyValues that are scoped GLOBAL and - that don't belong to catalog tables will be retained. A second limit - is imposed on the total size of the list of edits to replicate per slave, - which by default is 64MB. This means that a master cluster RS with 3 slaves - will use at most 192MB to store data to replicate. This doesn't account - the data filtered that wasn't garbage collected. -

    -

    - Once the maximum size of edits was buffered or the reader hits the end - of the log file, the source thread will stop reading and will choose - at random a sink to replicate to (from the list that was generated by - keeping only a subset of slave RSs). It will directly issue a RPC to - the chosen machine and will wait for the method to return. If it's - successful, the source will determine if the current file is emptied - or if it should continue to read from it. If the former, it will delete - the znode in the queue. If the latter, it will register the new offset - in the log's znode. If the RPC threw an exception, the source will retry - 10 times until trying to find a different sink. -

    -
    -
    -

    - If replication isn't enabled, the master's logs cleaning thread will - delete old logs using a configured TTL. This doesn't work well with - replication since archived logs passed their TTL may still be in a - queue. Thus, the default behavior is augmented so that if a log is - passed its TTL, the cleaning thread will lookup every queue until it - finds the log (while caching the ones it finds). If it's not found, - the log will be deleted. The next time it has to look for a log, - it will first use its cache. -

    -
    -
    -

    - As long as region servers don't fail, keeping track of the logs in ZK - doesn't add any value. Unfortunately, they do fail, so since ZooKeeper - is highly available we can count on it and its semantics to help us - managing the transfer of the queues. -

    -

    - All the master cluster RSs keep a watcher on every other one of them to be - notified when one dies (just like the master does). When it happens, - they all race to create a znode called "lock" inside the dead RS' znode - that contains its queues. The one that creates it successfully will - proceed by transferring all the queues to its own znode (one by one - since ZK doesn't support the rename operation) and will delete all the - old ones when it's done. The recovered queues' znodes will be named - with the id of the slave cluster appended with the name of the dead - server. -

    -

    - Once that is done, the master cluster RS will create one new source thread per - copied queue, and each of them will follow the read/filter/ship pattern. - The main difference is that those queues will never have new data since - they don't belong to their new region server, which means that when - the reader hits the end of the last log, the queue's znode will be - deleted and the master cluster RS will close that replication source. -

    -

    - For example, consider a master cluster with 3 region servers that's - replicating to a single slave with id '2'. The following hierarchy - represents what the znodes layout could be at some point in time. We - can see the RSs' znodes all contain a "peers" znode that contains a - single queue. The znode names in the queues represent the actual file - names on HDFS in the form "address,port.timestamp". -

    -
    -/hbase/replication/rs/
    -                      1.1.1.1,60020,123456780/
    -                          2/
    -                              1.1.1.1,60020.1234  (Contains a position)
    -                              1.1.1.1,60020.1265
    -                      1.1.1.2,60020,123456790/
    -                          2/
    -                              1.1.1.2,60020.1214  (Contains a position)
    -                              1.1.1.2,60020.1248
    -                              1.1.1.2,60020.1312
    -                      1.1.1.3,60020,    123456630/
    -                          2/
    -                              1.1.1.3,60020.1280  (Contains a position)
    -        
    -

    - Now let's say that 1.1.1.2 loses its ZK session. The survivors will race - to create a lock, and for some reasons 1.1.1.3 wins. It will then start - transferring all the queues to its local peers znode by appending the - name of the dead server. Right before 1.1.1.3 is able to clean up the - old znodes, the layout will look like the following: -

    -
    -/hbase/replication/rs/
    -                      1.1.1.1,60020,123456780/
    -                          2/
    -                              1.1.1.1,60020.1234  (Contains a position)
    -                              1.1.1.1,60020.1265
    -                      1.1.1.2,60020,123456790/
    -                          lock
    -                          2/
    -                              1.1.1.2,60020.1214  (Contains a position)
    -                              1.1.1.2,60020.1248
    -                              1.1.1.2,60020.1312
    -                      1.1.1.3,60020,123456630/
    -                          2/
    -                              1.1.1.3,60020.1280  (Contains a position)
    -
    -                          2-1.1.1.2,60020,123456790/
    -                              1.1.1.2,60020.1214  (Contains a position)
    -                              1.1.1.2,60020.1248
    -                              1.1.1.2,60020.1312
    -        
    -

    - Some time later, but before 1.1.1.3 is able to finish replicating the - last HLog from 1.1.1.2, let's say that it dies too (also some new logs - were created in the normal queues). The last RS will then try to lock - 1.1.1.3's znode and will begin transferring all the queues. The new - layout will be: -

    -
    -/hbase/replication/rs/
    -                      1.1.1.1,60020,123456780/
    -                          2/
    -                              1.1.1.1,60020.1378  (Contains a position)
    -
    -                          2-1.1.1.3,60020,123456630/
    -                              1.1.1.3,60020.1325  (Contains a position)
    -                              1.1.1.3,60020.1401
    -
    -                          2-1.1.1.2,60020,123456790-1.1.1.3,60020,123456630/
    -                              1.1.1.2,60020.1312  (Contains a position)
    -                      1.1.1.3,60020,123456630/
    -                          lock
    -                          2/
    -                              1.1.1.3,60020.1325  (Contains a position)
    -                              1.1.1.3,60020.1401
    -
    -                          2-1.1.1.2,60020,123456790/
    -                              1.1.1.2,60020.1312  (Contains a position)
    -        
    -
    -
    -
    - Following the some useful metrics which can be used to check the replication progress: -
      -
    • source.sizeOfLogQueue: number of HLogs to process (excludes the one which is being - processed) at the Replication source
    • -
    • source.shippedOps: number of mutations shipped
    • -
    • source.logEditsRead: number of mutations read from HLogs at the replication source
    • -
    • source.ageOfLastShippedOp: age of last batch that was shipped by the replication source
    • -
    - Please note that the above metrics are at the global level at this regionserver. In 0.95.0 and onwards, these - metrics are also exposed per peer level. -
    - -
    -
    -

    - Yes, this is for much later. -

    -
    -
    -

    - You can use the HBase-provided utility called CopyTable from the package - org.apache.hadoop.hbase.mapreduce in order to have a discp-like tool to - bulk copy data. -

    -
    -
    -

    - Yes, this behavior would help a lot but it's not currently available - in HBase (BatchUpdate had that, but it was lost in the new API). -

    -
    -
    -

    - Yes. See HDFS-2757. -

    -
    -
    +

    This information has been moved to the Cluster Replication section of the Apache HBase Reference Guide.