./hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/MultiRowResource.java: * Constructor nn * @throws java.io.IOException
./hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/RemoteAdmin.java: * @return string representing the rest api's version n * if the endpoint does not exist, there is
./hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/RemoteAdmin.java: * @return string representing the cluster's version n * if the endpoint does not exist, there is
./hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/RemoteAdmin.java: * @return string representing the cluster's version n * if the endpoint does not exist, there is
./hbase-balancer/src/main/java/org/apache/hadoop/hbase/master/AssignmentVerificationReport.java: * their favored nodes n * @return the number of regions
./hbase-balancer/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodeAssignmentHelper.java: * n * @return PB'ed bytes of {@link FavoredNodes} generated by the server list.
./hbase-balancer/src/main/java/org/apache/hadoop/hbase/favored/FavoredNodeAssignmentHelper.java: * for generating new assignments for the primary/secondary/tertiary RegionServers n * @return the
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java: * {@link Region#startRegionOperation()}. n * @param operation The operation is about to be taken
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java: * need a Cell reference for later use, copy the cell and use that. nn * @param success true if
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java: * @param size Full size of the file n * @param r original reference file. This will be not null
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java: * @param size Full size of the file n * @param r original reference file. This will be not null
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java: * Called before remove a replication peer n * @param peerId a short name that identifies the peer
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java: * Called after remove a replication peer n * @param peerId a short name that identifies the peer
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java: * Called before enable a replication peer n * @param peerId a short name that identifies the peer
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java: * Called after enable a replication peer n * @param peerId a short name that identifies the peer
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java: * Called before disable a replication peer n * @param peerId a short name that identifies the
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java: * Called after disable a replication peer n * @param peerId a short name that identifies the peer
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java: * Called before get the configured ReplicationPeerConfig for the specified peer n * @param peerId
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java: * Called after get the configured ReplicationPeerConfig for the specified peer n * @param peerId
./hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java: * Called before update peerConfig for the specified peer n * @param peerId a short name that
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java: * Delete the specified snapshot n * @throws SnapshotDoesNotExistException If the specified
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java: * Check if the specified snapshot is done n * @return true if snapshot is ready to be restored,
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java: * Take a snapshot based on the enabled/disabled state of the table. n * @throws
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java: * Exposed for TESTING n * @param handler handler the master should use TODO get rid of this if
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java: * Restore or Clone the specified snapshot n * @param nonceKey unique identifier to prevent
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.java: * Generate the assignment plan for the existing table nnnn * @param
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java: * @param splitKeys Starting row keys for the initial table regions. If null nn * a single region
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterServices.java: * n * @return the timestamp of the last successful major compaction for the passed region or 0 if
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java: * Utility for constructing an instance of the passed HMaster class. n * @return HMaster instance.
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java: * @param serverName Incoming servers's name n * @throws ClockOutOfSyncException if the skew
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java: * n * @return ServerMetrics if serverName is known else null
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java: * Add the server to the drain list. n * @return True if the server is added or the server is
./hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java: * nn * @return null on success, existing task on error
./hbase-server/src/main/java/org/apache/hadoop/hbase/client/VersionInfoUtil.java: * n * @return the passed-in version int as a version String (e.g. 0x0103004 is
./hbase-server/src/main/java/org/apache/hadoop/hbase/backup/HFileArchiver.java: * Move the file to the given destination n * @return true on success n
./hbase-server/src/main/java/org/apache/hadoop/hbase/coordination/ZKSplitLogManagerCoordination.java: * nn * @return DONE if task completed successfully, ERR otherwise
./hbase-server/src/main/java/org/apache/hadoop/hbase/LocalHBaseCluster.java: * n * @return region server
./hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java: * region in HBase. It's internally used only. n * @return A dummy mob region info.
./hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileName.java: * n * The start key. n * The string of the latest timestamp of cells in this file, the format is
./hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileName.java: * yyyymmdd. n * The uuid
./hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileName.java: * n * The md5 hex string of the start key. n * The string of the latest timestamp of cells in
./hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileName.java: * this file, the format is yyyymmdd. n * The uuid
./hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileName.java: * Creates an instance of MobFileName n * The md5 hex string of the start key. n * The string of
./hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileName.java: * Creates an instance of MobFileName n * The md5 hex string of the start key. n * The string of
./hbase-server/src/main/java/org/apache/hadoop/hbase/errorhandling/ForeignException.java: * Takes a series of bytes and tries to generate an ForeignException instance for it. n * @return
./hbase-server/src/main/java/org/apache/hadoop/hbase/errorhandling/ForeignExceptionSnare.java: * exception this is a no-op n * all exceptions from remote sources are procedure exceptions
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/util/MemorySizeUtil.java: * 'hbase.regionserver.global.memstore.size'. n * @return the onheap global memstore limt
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpIndexBlockEncoder.java: * The indexed key at the ith position in the nonRootIndex. The position starts at 0. n * @param
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheUtil.java: * n * @return The block content as String.
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheUtil.java: * n * @return The block content of bc as a String minus the filename.
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheUtil.java: * n * @return True if full.... if we won't be adding any more.
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java: * that case and load the previous block as appropriate. n * the key to find n * find the key
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java: * in a Scanner. Letting go of your references to the scanner is sufficient. n * Store
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java: * configuration. n * True if we should cache blocks read in by this scanner. n * Use positional
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java: * scanning). n * is scanner being used for a compaction?
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CompoundBloomFilterWriter.java: * n * each chunk's size in bytes. The real chunk size might be different as required by the fold
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CompoundBloomFilterWriter.java: * factor. n * target false positive rate n * hash function type to use n * maximum degree of
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CompoundBloomFilterWriter.java: * folding allowed n * the bloom type
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/InlineBlockWriter.java: * Determines whether there is a new block to be written out. n * whether the file is being
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/InlineBlockWriter.java: * {@link #shouldWriteBlock(boolean)} returned true. n * a stream (usually a compressing stream)
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java: * {@link #appendFileInfo(byte[], byte[])} n * name of the block n * will call readFields to get
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java: * construction. n * Cell to add. Cannot be empty nor null.
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * n * from 0 to {@link #getRootBlockCount() - 1}
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * n * from 0 to {@link #getRootBlockCount() - 1}
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * n * from 0 to {@link #getRootBlockCount() - 1}
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * @param currentBlock the current block, to avoid re-reading the same block nnn * @param
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * Finds the root-level index block containing the given key. n * Key to find n * the comparator
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * Finds the root-level index block containing the given key. n * Key to find
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * Finds the root-level index block containing the given key. n * Key to find
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * The indexed key at the ith position in the nonRootIndex. The position starts at 0. n * @param
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * which records the offsets of (offset, onDiskSize, firstKey) tuples of all entries. n * the
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * key we are searching for offsets to individual entries in the blockIndex buffer n * the
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * on-disk-size can be read. n * a non-root block without header. Initial position does not
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * matter. n * the byte array containing the key
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java: * intermediate-level blocks. n * @param currentLevel the current level of the block index, such
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java: * c[0] .. c[n], where there are n cells in the file. n * @return -1, if cell <
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java: * the cells in the file, c[0] .. c[n], where there are n cellc in the file after
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileScanner.java: * are n cells in the file.
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/FileIOEngine.java: * Get the absolute offset in given file with the relative global offset. nn * @return the
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java: * Get the IOEngine from the IO engine name nnn * @return the IOEngine n
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/ByteBufferIOEngine.java: * Construct the ByteBufferIOEngine with the given capacity n * @throws IOException ideally here
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketAllocator.java: * @param blockSize size of block nn * @return the offset in the IOEngine
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/Reference.java: * n * @return A {@link Reference} that points at top half of a an hfile
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/Reference.java: * n * @return A {@link Reference} that points at the bottom half of a an hfile
./hbase-server/src/main/java/org/apache/hadoop/hbase/io/Reference.java: * Read a Reference from FileSystem. nn * @return New Reference made from passed p n
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java: * Open all Stores. nn * @return Highest sequenceId found out in a Store. n
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java: * @param compaction Compaction details, obtained by requestCompaction() n * @return whether the
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java: * Check the collection of families for valid timestamps n * @param now current timestamp n
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java: * n * @return True if size is over the flush threshold
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java: * bulk loaded n * @return Map from family to List of store file paths if
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java: * @param familyPaths List of Pair<byte[] column family, String hfilePath> n * @param
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java: * @param limit the maximum number of results to return n * @return 'has more' indication to
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java: * @param size Full size of the file n * @param r original reference file. This will be not null
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java: * @param size Full size of the file n * @param r original reference file. This will be not null
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/snapshot/RegionServerSnapshotManager.java: * snapshot verification step. n * @return Subprocedure to submit to the ProcedureMember.
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/snapshot/RegionServerSnapshotManager.java: * explicitly fail the snapshot. n * @return the list of online regions. Empty list is returned if
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/snapshot/RegionServerSnapshotManager.java: * @return true on success, false otherwise n * @throws
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ReversedStoreScanner.java: * @param store who we scan n * @param scan the spec
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionScanner.java: * current position. Always seeks to the beginning of a row boundary. nn * if row is null
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/InternalScanner.java: * @param result return output array n * @return true if more rows exist after this one, false if
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java: * Perform one or more append operations on a row. n * @return result of the operation n
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java: * @param scan configured {@link Scan} n * @throws IOException read exceptions
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java: * @param additionalScanners Any additional scanners to be used n * @throws IOException read
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java: * Perform one or more increment operations on a row. n * @return result of the operation n
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SecureBulkLoadManager.java: * Check if the path is referencing a file. This is mainly needed to avoid symlinks. n * @return
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScannerContext.java: * new state, thus preserving the immutability of {@link NoLimitScannerContext} n * @return The
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScannerContext.java: * n * @return true if the batch limit can be enforced in the checker's scope
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScannerContext.java: * n * @return true if the size limit can be enforced in the checker's scope
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScannerContext.java: * n * @return true if the time limit can be enforced in the checker's scope
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScannerContext.java: * n * @return true if any limit can be enforced within the checker's scope
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScannerContext.java: * n * @return true when the limit can be enforced from the scope of the checker
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScannerContext.java: * n * @return true when the limit can be enforced from the scope of the checker
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScannerContext.java: * n * @return true when the limit can be enforced from the scope of the checker
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java: * Clears the current snapshot of the Memstore. nn * @see #snapshot()
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java: * Write an update n * @param memstoreSizing The delta in memstore size will be passed back via
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java: * Write the updates n * @param memstoreSizing The delta in memstore size will be passed back via
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStore.java: * only see each KeyValue update as atomic. n * @param readpoint readpoint below which we can
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/SortedCompactionPolicy.java: * n * @return When to run next major compaction
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MultiVersionConcurrencyControl.java: * write of S. n *
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileReader.java: * concepts. n * should we cache the blocks? n * use pread (for concurrent small
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileReader.java: * readers) n * is scanner being used for compaction?
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileReader.java: * multi-column query. n * the cell to check if present in BloomFilter
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MiniBatchOperationInProgress.java: * n * @return The operation(Mutation) at the specified position.
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MiniBatchOperationInProgress.java: * n * @return Gets the status code for the operation(Mutation) at the specified position.
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MiniBatchOperationInProgress.java: * n * @return Gets the walEdit for the operation(Mutation) at the specified position.
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java: * later. n * @return true if the region was successfully flushed, false otherwise. If false,
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java: * Unregister the listener from MemstoreFlushListeners n * @return true when passed listener is
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java: * n * @return True if we have been delayed > maximumWait milliseconds.
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.java: * Create the RegionSplitPolicy configured for the given table. nn * @return a RegionSplitPolicy n
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/AbstractMemStore.java: * nn * @return Return lowest of a or b or null if both a and b are null
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java: * nn * @return false if not found or if k is after the end. n
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java: * n * @return Content of the file we write out to the filesystem under a region n
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java: * checked for this directory existence. nnn * @return the result of fs.mkdirs(). In case
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java: * yanked out from under hbase or we OOME. n * the reason we are aborting n * the exception that
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/querymatcher/ColumnTracker.java: * out. n * @return true to early out based on timestamp.
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/querymatcher/ScanQueryMatcher.java: * nn * @return true if the cell is expired
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FavoredNodesForRegion.java: * in favored nodes hints for new region files. n * @return array containing the favored nodes'
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanInfo.java: * n * @param family {@link ColumnFamilyDescriptor} describing the column family
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanInfo.java: * n * @param family Name of this store's column family
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ReversedKeyValueHeap.java: * Compares rows of two KeyValue nn * @return less than 0 if left is smaller, 0 if equal etc..
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java: * Seek the specified scanners with the given key nn * @param isLazy true if using lazy seek
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java: * Get the next row of values from this Store. nn * @return true if there are more rows, false if
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreScanner.java: * Do a reseek in a normal StoreScanner(scan forward) n * @return true if scanner has values left,
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HeapMemoryTuner.java: * Perform the heap memory tuning operation. n * @return TunerResult including the
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/OnlineRegions.java: * not serializable. n * @return Region for the passed encoded encodedRegionName or
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/OnlineRegions.java: * Get all online regions of a table in this RS. n * @return List of Region
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceIdAccounting.java: * older. nnn * @param lowest Whether to keep running account of oldest sequence id.
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SequenceIdAccounting.java: * n * @return New Map that has same keys as src but instead of a Map for a value, it
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/KeyValueHeap.java: * Compares two KeyValue nn * @return less than 0 if left is smaller, 0 if equal etc..
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ShutdownHook.java: * false. This configuration value is checked when the hook code runs. n * @param fs
./hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushRequester.java: * Unregister the given FlushRequestListener n * @return true when passed listener is unregistered
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/DirectMemoryUtils.java: * explicitly calls the Cleaner method of a DirectByteBuffer. n * The DirectByteBuffer that will
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/BloomFilter.java: * number of bits in the Bloom filter (bitSize) n denotes the number of elements inserted into the
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/BloomFilter.java: * of entries, then the optimal bloom size m = -(n * ln(err) / (ln(2)^2) ~= n * ln(err) / ln(0.6185)
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java: * @param pathToSearch Path we will be trying to match. n * @return True if pathTail
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java: * nn * @return All the table directories under rootdir. Ignore non table hbase
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java: * the tool {@link org.apache.hadoop.hbase.master.RegionPlacementMaintainer} n * the configuration
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java: * @return the mapping from region encoded name to a map of server names to locality fraction n *
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java: * each region on each of the servers having at least one block of that region. n * the
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java: * configuration to use n * the table you wish to scan locality for n * the thread pool size to
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java: * @return the mapping from region encoded name to a map of server names to locality fraction n *
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java: * both optional. n * the configuration to use n * the table you wish to scan locality for n * the
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java: * thread pool size to use n * the map into which to put the locality degree mapping or null, must
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java: * be a thread-safe implementation n * in case of file system errors or interrupts
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/FSUtils.java: * n * @return The DFSClient DFSHedgedReadMetrics instance or null if can't be found or not on
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitCalculator.java: * 0 entries if empty or at most n+1 values where n == number of added ranges.
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java: * Checks a path to see if it is a valid hfile. n * full Path to an HFile n * This is a
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java: * trailer). n * Path to a corrupt hfile (assumes that it is HBASE_DIR/ table /region/cf/file)
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java: * Check all files in a column family dir. n * column family directory n
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java: * Check all files in a mob column family dir. n * mob column family directory n
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java: * Checks a path to see if it is a valid mob file. n * full Path to a mob file. n * This is a
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java: * Check all column families in a region dir. n * region directory n
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/HFileCorruptionChecker.java: * Check all the regiondirs in the specified tableDir n * path to a table n
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java: * Constructor n * Configuration object n * if the master is not running n * if unable to connect
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java: * To get the column family list according to the column family dirs nn * @return a set of column
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/BloomFilterUtil.java: * nn * @return the number of bits for a Bloom filter than can hold the given number of keys and
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/BloomFilterUtil.java: * have to be an integer (hence the "ideal" in the function name). nn * @return maximum number of
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/BloomFilterUtil.java: * error rate, with the given number of hash functions. nnn * @return the maximum number of keys
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/BloomFilterUtil.java: * Bloom filter article. nnn * @return the actual error rate
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/BloomFilterUtil.java: * @param hashType Bloom filter hash function type nn * @return the new Bloom filter of the
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/MunkresAssignment.java: * O(n^2) auxiliary space where n is the number of jobs or workers, whichever is greater.
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/BloomFilterFactory.java: * {@link org.apache.hadoop.hbase.regionserver.HStoreFile} writing. nnn * @param maxKeys an
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/BloomFilterFactory.java: * {@link org.apache.hadoop.hbase.regionserver.HStoreFile} writing. nn * @param maxKeys an
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionMover.java: * n * @return RegionMoverBuilder object
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java: * Split a pre-existing region into 2 regions. n * first row (inclusive) n * last row
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java: * Split an entire table. n * number of regions to split the table into n * user input is
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java: * split code understand how to evenly divide the first region. n * raw user input (may throw
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java: * inclusive for all rows sharing the same prefix. n * raw user input (may throw
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java: * n * user or file input for row
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java: * n * byte array representing a row in HBase
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java: * source code for details. n * Usage: RegionSplitter <TABLE> <SPLITALGORITHM> <-c
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java: * <conf.param=value>] n * HBase IO problem n * user requested exit n * problem parsing user
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java: * Alternative getCurrentNrHRS which is no longer available. n * @return Rough count of
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RegionSplitter.java: * nn * @return A Pair where first item is table dir and second is the split file.
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/ZKDataMigrator.java: * found in znode. n * @deprecated Since 2.0.0. To be removed in hbase-3.0.0.
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/RollingStatCalculator.java: * n * @return an array of given size initialized with zeros
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/JVMClusterUtil.java: * @param index Used distinguishing the object returned. n * @return Region server added.
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/JVMClusterUtil.java: * @param index Used distinguishing the object returned. n * @return Master added.
./hbase-server/src/main/java/org/apache/hadoop/hbase/util/JVMClusterUtil.java: * nn * @return Address to use contacting primary master.
./hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ProcedureCoordinator.java: * Default thread pool for the procedure n * @param opThreads the maximum number of threads to
./hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ProcedureCoordinator.java: * Default thread pool for the procedure n * @param opThreads the maximum number of threads to
./hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ProcedureCoordinator.java: * Exposed for hooking with unit tests. nnn * @return the newly created procedure
./hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ProcedureMember.java: * Default thread pool for the procedure n * @param procThreads the maximum number of threads to
./hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ProcedureMember.java: * Default thread pool for the procedure n * @param procThreads the maximum number of threads to
./hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/flush/RegionServerFlushTableProcedureManager.java: * of a race where regions may be missed. nn * @return Subprocedure to submit to the
./hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/flush/RegionServerFlushTableProcedureManager.java: * moves somewhere between the calls we'll miss the region. n * @return the list of online
./hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HBaseReplicationEndpoint.java: * Report that a {@code SinkPeer} successfully replicated a chunk of data. n * The SinkPeer that
./hbase-server/src/main/java/org/apache/hadoop/hbase/SplitLogTask.java: * @return An SplitLogTaskState instance made of the passed data n * @see
./hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/ReplicationProtobufUtil.java: * n * @return cells packaged as a CellScanner
./hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java: * @param bindAddress Where to listen nn * @param reservoirEnabled Enable ByteBufferPool or not.
./hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcServer.java: * @param bindAddress Where to listen nn * @param reservoirEnabled Enable ByteBufferPool or not.
./hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/ServerRpcConnection.java: * n * Has the request header and the request param and optionally encoded data buffer all in this
./hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/PriorityFunction.java: * select the dispatch queue. nnn * @return Priority of this request.
./hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/PriorityFunction.java: * queue. nn * @return Deadline of this request. 0 now, otherwise msec of 'delay'
./hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java: * n * @param handlerCount the number of handler threads that will be used to process calls
./hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcScheduler.java: * @param replicationHandlerCount How many threads for replication handling. n * @param priority
./hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WAL.java: * given time. n * If true, force creation of a new writer even if no entries have been written to
./hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALKeyImpl.java: * HRegionInfo#getEncodedNameAsBytes(). n * @param now Time
./hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java: * sets the region by which output will be filtered n * when nonnegative, serves as a filter; only
./hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java: * sets the region by which output will be filtered n * when not null, serves as a filter; only
./hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java: * sets the row key by which output will be filtered n * when not null, serves as a filter; only
./hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java: * sets the rowPrefix key prefix by which output will be filtered n * when not null, serves as a
./hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java: * sets the position to start seeking the WAL file n * initial position to start seeking the given
./hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java: * currently configured options n * the HBase configuration relevant to this log file n * the path
./hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java: * of the log file to be read n * may be unable to access the configured filesystem or requested
./hbase-server/src/main/java/org/apache/hadoop/hbase/wal/WALPrettyPrinter.java: * the contents on stdout. n * Command line arguments n * Thrown upon file system
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityController.java: * n * @return NameValuePair of the exception name to stringified version os exception.
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/ZKVisibilityLabelWatcher.java: * Write a labels mirror or user auths mirror into zookeeper n * @param labelsOrUserAuths true for
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelsCache.java: * @return Singleton instance of VisibilityLabelsCache n * when this is called before calling
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelsCache.java: * Returns the list of ordinals of labels associated with the groups n * @return the list of
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * any initialization logic. n * the region coprocessor env
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * Adds the set of labels into the system. n * Labels to add to the system.
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * Sets given labels globally authorized for the user. n * The authorizing user n * Labels which
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * Removes given labels from user's globally authorized list of labels. n * The user whose
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * authorization to be removed n * Labels which are getting removed from authorization set
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * Retrieve the visibility labels for the user. n * Name of the user whose authorization to be
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * retrieved n * Whether a system or user originated call.
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * Retrieve the visibility labels for the groups. n * Name of the groups whose authorization to be
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * retrieved n * Whether a system or user originated call.
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * care of thread safety. n * Authorizations for the read request
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * user has system auth, he can view all the data irrespective of its labels. n * User for whom
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * care of thread safety. n * The visibility tags present in the Put mutation n * The
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * means the tags are written with unsorted label ordinals n * - The visibility tags in the delete
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * mutation (the specified Cell Visibility) n * The serialization format for the Delete visibility
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * API to provide an opportunity to modify the visibility tags before replicating. n * the
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelService.java: * visibility tags associated with the cell n * the serialization format associated with the tag
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityUtils.java: * Creates the labels data to be written to zookeeper. n * @return Bytes form of labels and their
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityUtils.java: * Creates the user auth data to be written to zookeeper. n * @return Bytes form of user auths
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityUtils.java: * writeToZooKeeper(Map<byte[], Integer> entries). n * @return Labels and their ordinal
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityUtils.java: * Reads back User auth data written to zookeeper. n * @return User auth details n
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityUtils.java: * "hbase.regionserver.scan.visibility.label.generator.class" n * when any of the
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityUtils.java: * visibility tags n * - all the visibilty tags of type
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityUtils.java: * the passed stream. n * Unsorted label ordinals n * Stream where to write the labels. n * When
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/DefaultVisibilityLabelServiceImpl.java: * others in the order. nn * @return whether we need a ZK update or not.
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/DefaultVisibilityLabelServiceImpl.java: * n * - all the visibility tags associated with the current Cell
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/ScanLabelGenerator.java: * Helps to get a list of lables associated with an UGI nn * @return The labels
./hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityLabelServiceManager.java: * n * @return singleton instance of {@link VisibilityLabelService}. The FQCN of the
./hbase-server/src/main/java/org/apache/hadoop/hbase/namespace/NamespaceStateManager.java: * Check if adding a region violates namespace quota, if not update namespace cache. nnn * @return
./hbase-server/src/test/java/org/apache/hadoop/hbase/snapshot/SnapshotTestingUtils.java: * Assert that we don't have any snapshots lists n * if the admin operation fails
./hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterTransitions.java: * n * @return Null if not the wanted ProcessRegionClose, else op cast as a
./hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterTransitions.java: * n * @return Start key for hri (If start key is '', then return 'aaa'.
./hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailoverBalancerPersistence.java: * Kill the master and wait for a new active master to show up n * @return the new active master n
./hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java: * @param sn Name of this mock regionserver n * @throws
./hbase-server/src/test/java/org/apache/hadoop/hbase/master/janitor/TestCatalogJanitorInMemoryStates.java: * happen. Caller should check. n * @return Daughter regions; caller needs to check table actually
./hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestSizeFailures.java: * Count the number of rows and the number of entries from a scanner n * The Scanner
./hbase-server/src/test/java/org/apache/hadoop/hbase/client/FromClientSideBase.java: * n * @return Scan with RowFilter that does LESS than passed key.
./hbase-server/src/test/java/org/apache/hadoop/hbase/client/FromClientSideBase.java: * nnn * @return Scan with RowFilter that does CompareOp op on passed key.
./hbase-server/src/test/java/org/apache/hadoop/hbase/client/FromClientSideBase.java: * happen. Caller should check. n * @return Map of table regions; caller needs to check table
./hbase-server/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java: * Write a test HFile with the given codec & cipher nnn * @param codec "none", "lzo", "gz",
./hbase-server/src/test/java/org/apache/hadoop/hbase/HFilePerformanceEvaluation.java: * Run all the read benchmarks for the test HFile nnn * @param codec "none", "lzo", "gz", "snappy"
./hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java: * @param hosts hostnames DNs to run on. n * @see #shutdownMiniDFSCluster()
./hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java: * @param hosts hostnames DNs to run on. n * @see #shutdownMiniDFSCluster()
./hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java: * Create a table. nn * @return A Table instance for the created table. n
./hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java: * Create a table. nn * @return A Table instance for the created table. n
./hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java: * Create a table. nn * @return A Table instance for the created table. n
./hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java: * Create a table with multiple regions. nnn * @return A Table instance for the created table. n
./hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java: * Create a table. nn * @return A Table instance for the created table. n
./hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java: * Create a table with multiple regions. nn * @return A Table instance for the created table. n
./hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java: * Create a table with multiple regions. n * @param replicaCount replica count. n * @return A
./hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java: * Create a table. nnn * @return A Table instance for the created table. n
./hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtil.java: * nnnnn * @return A region on which you must call
./hbase-server/src/test/java/org/apache/hadoop/hbase/mob/MobTestUtil.java: * Writes HStoreKey and ImmutableBytes data to passed writer and then closes it. n * n
./hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/NanoTimer.java: * Constructor n * Start the timer upon construction.
./hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/NanoTimer.java: * n * Time duration in nano seconds.
./hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/RandomKeyValueUtil.java: * @param rand random number generator to use n * @return the random key
./hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java: * Test data block encoding of empty KeyValue. n * On test failure.
./hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java: * Test KeyValues with negative timestamp. n * On test failure.
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionState.java: * right and wait till it is done. nnn * @param singleFamily otherwise, run compaction on all cfs
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDeleteMobTable.java: * Generate the mob value. n * the size of the value
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java: * the returned results are always of the same or later update as the previous results. n * scan /
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java: * compact n * thread join
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java: * every now and then to keep things realistic. n * by flush / scan / compaction n * when joining
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java: * aggressivly to catch issues. n * by flush / scan / compaction n * when joining threads
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java: * Assert first value in the passed region is firstValue. n * n * n * n
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java: * a scan out-of-order assertion error before HBASE-16931 n * if error occurs during the test
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java: * @return Index of the server hosting the single table region nn * @throws
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java: * since crashed servers occupy an index. nn * @return A regionserver that is not
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestJoinedScanners.java: * Command line interface: n * @throws IOException if there is a bug while reading from disk
./hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/AbstractTestFSWAL.java: * all of its appends had made it out to the WAL (HBASE-11109). n * @see
./hbase-server/src/test/java/org/apache/hadoop/hbase/util/BaseTestHBaseFsck.java: * legitimate hfile and return it. nn * @return Path of a flushed hfile. n
./hbase-server/src/test/java/org/apache/hadoop/hbase/util/test/LoadTestDataGenerator.java: * initialize the LoadTestDataGenerator n * init args
./hbase-server/src/test/java/org/apache/hadoop/hbase/util/test/LoadTestDataGenerator.java: * Giving a chance for the LoadTestDataGenerator to change the Mutation load. nn * @return updated
./hbase-server/src/test/java/org/apache/hadoop/hbase/util/test/LoadTestDataGenerator.java: * Giving a chance for the LoadTestDataGenerator to change the Get load. nn * @return updated Get
./hbase-server/src/test/java/org/apache/hadoop/hbase/TestPartialResultsFromClientSide.java: * n * @return the result size that should be used in {@link Scan#setMaxResultSize(long)} if you
./hbase-server/src/test/java/org/apache/hadoop/hbase/TimestampTestBase.java: * Assert that the scan returns only values < timestamp. nn * @return Count of items scanned. n
./hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestZKProcedureControllers.java: * n * @return a mock {@link ProcedureCoordinator} that just counts down the prepared and
./hbase-server/src/test/java/org/apache/hadoop/hbase/wal/WALPerformanceEvaluation.java: * @param wals may not be null n * @return Count of edits. n
./hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupAdmin.java: * @param n last n backup sessions
./hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/BackupAdmin.java: * @param n last n backup sessions
./hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.java: * Get first n backup history records
./hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.java: * @param n number of records, if n== -1 - max number is ignored
./hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.java: * @param n max number of records, if n == -1 , then max number is ignored
./hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupManager.java: * @param bandwidth bandwidth per worker in MB per sec n * @throws BackupException exception
./hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java: * nn * @return Return new byte array that has ordinal as prefix on front taking
./hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java: * n * @return Type from the Counts enum of this row. Reads prefix added by
./hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java: * n * @return Row bytes minus the type flag.
./hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java: * Verify the values in the Counters against the expected number of entries written. n *
./hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java: * Expected number of referenced entrires n * The Job's Counters object
./hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestBigLinkedList.java: * Reducers. n * The Job's counters
./hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALPrettyPrinter.java: * the contents on stdout. n * Command line arguments n * Thrown upon file system
./hbase-hadoop-compat/src/main/java/org/apache/hadoop/metrics2/util/MetricSampleQuantiles.java: * rank can be. n * the index in the list of samples
./hbase-hadoop-compat/src/main/java/org/apache/hadoop/metrics2/util/MetricSampleQuantiles.java: * @return snapshot of the tracked quantiles n * if no items have been added to the estimator
./hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java: * Not recommended for usage as this is old-style API. nn * @return family:qualifier
./hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java: * n * @return CellScanner interface over cellIterables
./hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java: * n * @return CellScanner interface over cellIterable
./hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java: * n * @return CellScanner interface over cellIterable or null if cells
./hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java: * n * @return CellScanner interface over cellArray
./hbase-common/src/main/java/org/apache/hadoop/hbase/AuthUtil.java: * @param conf configuartion file n * @throws IOException login exception
./hbase-common/src/main/java/org/apache/hadoop/hbase/AuthUtil.java: * @param conf configuration file n * @throws IOException login exception
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueTestUtil.java: * made without distinguishing MVCC version of the KeyValues nn * @return true if KeyValues from
./hbase-common/src/main/java/org/apache/hadoop/hbase/nio/ByteBuff.java: * Sets this ByteBuff's position to the given value. n * @return this object
./hbase-common/src/main/java/org/apache/hadoop/hbase/nio/ByteBuff.java: * Copies from the given byte[] to this ByteBuff n * @return this ByteBuff
./hbase-common/src/main/java/org/apache/hadoop/hbase/nio/ByteBuff.java: * the next item and fetch the remaining bytes forming the short n * @return the short value at
./hbase-common/src/main/java/org/apache/hadoop/hbase/nio/ByteBuff.java: * the underlying ByteBuffers. n * @return the short value at the given index.
./hbase-common/src/main/java/org/apache/hadoop/hbase/nio/ByteBuff.java: * fetch the remaining bytes forming the long n * @return the long value at the given index
./hbase-common/src/main/java/org/apache/hadoop/hbase/nio/ByteBuff.java: * the underlying ByteBuffers. n * @return the long value at the given index.
./hbase-common/src/main/java/org/apache/hadoop/hbase/nio/MultiByteBuff.java: * Sets this MBB's position to the given value. n * @return this object
./hbase-common/src/main/java/org/apache/hadoop/hbase/nio/MultiByteBuff.java: * Marks the limit of this MBB. n * @return This MBB
./hbase-common/src/main/java/org/apache/hadoop/hbase/nio/MultiByteBuff.java: * Writes a byte to this MBB at the current position and increments the position n * @return this
./hbase-common/src/main/java/org/apache/hadoop/hbase/nio/MultiByteBuff.java: * Copy the content from this MBB to a byte[] based on the given offset and length n * the
./hbase-common/src/main/java/org/apache/hadoop/hbase/nio/MultiByteBuff.java: * position from where the copy should start n * the length upto which the copy has to be done
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/ByteBuffInputStream.java: * @param n the number of bytes to be skipped.
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/KeyProvider.java: * Retrieve the key for a given key aliase n * @return the keys corresponding to the supplied
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryption.java: * Resolves a key for the given subject nn * @return a key for the given subject
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/StreamUtils.java: * Reads a varInt value stored in an array. n * Input array where the varInt is available n *
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/ByteBufferInputStream.java: * @param n the number of bytes to be skipped.
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/ImmutableBytesWritable.java: * Compares the bytes in this object to the specified byte array n * @return Positive if left is
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDecodingContext.java: * compression algorithm. n * numBytes after block and encoding headers n * numBytes without
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDecodingContext.java: * header required to store the block after decompressing (not decoding) n * ByteBuffer pointed
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDecodingContext.java: * after the header but before the data n * on disk data to be decoded
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java: * Creates a encoder specific encoding context n * store configuration n * encoding strategy used
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java: * n * header bytes to be written, put a dummy header here if the header is unknown n * HFile meta
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java: * decoding n * store configuration n * HFile meta data
./hbase-common/src/main/java/org/apache/hadoop/hbase/io/hadoopbackport/ThrottledInputStream.java: * {@link PositionedReadable}. nnnn * @return the number of bytes read
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/AbstractPositionedByteRange.java: * {@code bytes.length}. Resets {@code position} to 0. n * the new start of this range.
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/AbstractPositionedByteRange.java: * {@code position} to {@code length}. n * The new length of this range.
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/JenkinsHash.java: * If you are hashing n strings byte[][] k, do it like this: for (int i = 0, h = 0; i <
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java: * Compares the bytes in this object to the specified byte array n * @return Positive if left is
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java: * @param b byte array n * @see #toStringBinary(byte[], int, int)
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java: * Create a byte array which is multiple given bytes nn * @return byte array
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/Classes.java: * boolean, etc. n * The name of the class to retrieve. Can be either a normal class
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/Classes.java: * @return The class specified by className n * If the requested class can not be
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/SimplePositionedMutableByteRange.java: * range's offset and length are 0 and {@code capacity}, respectively. n * the size of the backing
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/SimplePositionedMutableByteRange.java: * Create a new {@code PositionedByteRange} over the provided {@code bytes}. n * The array to
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/SimplePositionedMutableByteRange.java: * Create a new {@code PositionedByteRange} over the provided {@code bytes}. n * The array to
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/SimplePositionedMutableByteRange.java: * wrap. n * The offset into {@code bytes} considered the beginning of this range. n * The length
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/SimplePositionedMutableByteRange.java: * {@code bytes.length}. Resets {@code position} to 0. n * the new start of this range.
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/SimplePositionedMutableByteRange.java: * {@code position} to {@code length}. n * The new length of this range.
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/ChecksumType.java: * Map a checksum name to a specific type. Do our own names. n * @return Type associated with
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/SimpleMutableByteRange.java: * offset and length are 0 and {@code capacity}, respectively. n * the size of the backing array.
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/SimpleMutableByteRange.java: * Create a new {@code ByteRange} over the provided {@code bytes}. n * The array to wrap.
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/SimpleMutableByteRange.java: * Create a new {@code ByteRange} over the provided {@code bytes}. n * The array to wrap. n * The
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/SimpleMutableByteRange.java: * offset into {@code bytes} considered the beginning of this range. n * The length of this range.
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/UnsafeAccess.java: * nn * @return int value at offset
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/PrettyPrinter.java: * @see org.apache.hadoop.hbase.util.PrettyPrinter#format(String, Unit) nn * @return the value
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/PrettyPrinter.java: * unit, it is assumed to be in seconds. n * @return value in seconds
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java: * n * ByteBuffer to hash n * offset to start from n * length to hash
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java: * @return the int value n * if there's not enough bytes left in the buffer after the given offset
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/MD5Hash.java: * Given a byte array, returns in MD5 hash as a hex string. n * @return SHA1 hash as a 32
./hbase-common/src/main/java/org/apache/hadoop/hbase/util/MD5Hash.java: * @param key the key to hash (variable length byte array) nn * @return MD5 hash as a 32 character
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java: * we don't validate that assumption here. n * the length of the common prefix of the two
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValue.java: * Compare two keys assuming that the first n bytes are the same.
./hbase-common/src/main/java/org/apache/hadoop/hbase/PrivateCellUtil.java: * sequenceid is an internal implementation detail not for general public use. nn * @throws
./hbase-common/src/main/java/org/apache/hadoop/hbase/PrivateCellUtil.java: * Sets the given timestamp to the cell. nn * @throws IOException when the passed cell is not of
./hbase-common/src/main/java/org/apache/hadoop/hbase/PrivateCellUtil.java: * Converts the rowkey bytes of the given cell into an int value n * @return rowkey as int
./hbase-common/src/main/java/org/apache/hadoop/hbase/PrivateCellUtil.java: * Converts the value bytes of the given cell into a long value n * @return value as long
./hbase-common/src/main/java/org/apache/hadoop/hbase/PrivateCellUtil.java: * Converts the value bytes of the given cell into a int value n * @return value as int
./hbase-common/src/main/java/org/apache/hadoop/hbase/PrivateCellUtil.java: * Converts the value bytes of the given cell into a double value n * @return value as double
./hbase-common/src/main/java/org/apache/hadoop/hbase/PrivateCellUtil.java: * Converts the value bytes of the given cell into a BigDecimal n * @return value as BigDecimal
./hbase-common/src/main/java/org/apache/hadoop/hbase/PrivateCellUtil.java: * cell's are serialized in a contiguous format (For eg in RPCs). n * @return Estimate of the
./hbase-common/src/main/java/org/apache/hadoop/hbase/PrivateCellUtil.java: * and further, how to serialize the key for inclusion in hfile index. TODO. n * @return The key
./hbase-common/src/main/java/org/apache/hadoop/hbase/PrivateCellUtil.java: * passed qualifier. nnnn * @return Last possible Cell on passed Cell's rk:cf and passed
./hbase-common/src/main/java/org/apache/hadoop/hbase/PrivateCellUtil.java: * we already know is not in the file. n * @return Last possible Cell on passed Cell's rk:cf:q.
./hbase-common/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java: * Return the ZK Quorum servers string given the specified configuration n * @return Quorum
./hbase-common/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKConfig.java: * hbase.zookeeper.client.port and zookeeper.znode.parent n * @return the three configuration in
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * {@link KeyValue}. Key includes rowkey, family, qualifier, timestamp and type. n * @return the
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * The position will be set to the beginning of the new ByteBuffer n * @return the Bytebuffer
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * Copies the key to a new KeyValue n * @return the KeyValue that consists only the key part of
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * chronologically. n * @return previous key
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * reseeking. Should NEVER be returned to a client. n * row key n * row offset n * row length n *
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * family name n * family offset n * family length n * column qualifier n * qualifier offset n *
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * n * @return cell if it is an object of class {@link KeyValue} else we will return
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * * @return Length written on stream n * @see #create(DataInput) for the inverse function
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * n * @return A KeyValue made of a byte array that holds the key-only part. Needed to convert
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * n * @return A KeyValue made of a byte buffer that holds the key-only part. Needed to convert
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * nnn * @return A KeyValue made of a byte array that holds the key-only part. Needed to convert
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * n * Where to read bytes from. Creates a byte array to hold the KeyValue backing bytes copied
./hbase-common/src/main/java/org/apache/hadoop/hbase/KeyValueUtil.java: * Create a KeyValue reading length from in nn * @return Created
./hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparatorImpl.java: * {{@link MetaCellComparator#META_COMPARATOR} should be used n * the cell to be compared n * the
./hbase-common/src/main/java/org/apache/hadoop/hbase/CellComparatorImpl.java: * kv serialized byte[] to be compared with n * the offset in the byte[] n * the length in the
./hbase-common/src/main/java/org/apache/hadoop/hbase/CompoundConfiguration.java: * ones if there are name collisions. n * Bytes map
./hbase-common/src/test/java/org/apache/hadoop/hbase/util/RandomDistribution.java: * n * The basic random number generator. n * Minimum integer n * maximum integer (exclusive).
./hbase-common/src/test/java/org/apache/hadoop/hbase/util/RandomDistribution.java: * Constructor n * The random number generator. n * minimum integer (inclusvie) n * maximum
./hbase-common/src/test/java/org/apache/hadoop/hbase/util/RandomDistribution.java: * integer (exclusive) n * parameter sigma. (sigma > 1.0)
./hbase-common/src/test/java/org/apache/hadoop/hbase/util/RandomDistribution.java: * Constructor. n * The random number generator. n * minimum integer (inclusvie) n * maximum
./hbase-common/src/test/java/org/apache/hadoop/hbase/util/RandomDistribution.java: * integer (exclusive) n * parameter sigma. (sigma > 1.0) n * Allowable error percentage (0 <
./hbase-common/src/test/java/org/apache/hadoop/hbase/util/RandomDistribution.java: * distribution. n * The basic random number generator. n * Minimum integer n * maximum integer
./hbase-common/src/test/java/org/apache/hadoop/hbase/util/RandomDistribution.java: * (exclusive). n * parameter.
./hbase-common/src/test/java/org/apache/hadoop/hbase/TestHBaseConfiguration.java: * Wrapper to fetch the configured {@code List}s. n * Configuration with
./hbase-common/src/test/java/org/apache/hadoop/hbase/TestHBaseConfiguration.java: * configured, the first will be used. n * Configuration for the CredentialProvider n *
./hbase-common/src/test/java/org/apache/hadoop/hbase/TestHBaseConfiguration.java: * CredentialEntry name (alias) n * The credential
./hbase-common/src/test/java/org/apache/hadoop/hbase/TestHBaseConfiguration.java: * credentialProvider argument must be an instance of Hadoop CredentialProvider. n * Instance of
./hbase-common/src/test/java/org/apache/hadoop/hbase/TestHBaseConfiguration.java: * CredentialProvider n * CredentialEntry name (alias) n * The credential to store
./hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java: * nnn * @return value of type T n
./hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java: * nn * @return sum or non null value among (if either of them is null); otherwise returns a null.
./hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java: * This method gets the PB message corresponding to the cell type n * @return the PB message for
./hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java: * This method gets the PB message corresponding to the cell type n * @return the cell-type
./hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java: * This method gets the PB message corresponding to the promoted type n * @return the PB message
./hbase-client/src/main/java/org/apache/hadoop/hbase/coprocessor/ColumnInterpreter.java: * This method gets the promoted type from the proto message n * @return the promoted-type
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionInfoDisplay.java: * Get the start key for display. Optionally hide the real start key. nn * @return the startkey
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionInfoDisplay.java: * Get the region name for display. Optionally hide the start key. nn * @return region name as
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/RegionInfoDisplay.java: * Get the region name for display. Optionally hide the start key. nn * @return region name bytes
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/OperationWithAttributes.java: * see where the slow query is coming from. n * id to set for the scan
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java: * Turn the cleaner chore on/off. n * @return Previous cleaner state wrapped by a
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java: * Turn the catalog janitor on/off. n * @return the previous state wrapped by a
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java: * @param family the family n * @return a list of Cells for this column or empty list if the
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java: * The Cell for the most recent timestamp for a given column. nn *
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java: * Get total size of raw cells n * @return Total size.
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/RowMutations.java: * @param mutations the mutations to send n * @throws IOException if any row in mutations is
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java: * array corresponds to the order of actions in the request list. n * @since 0.90.0
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Increment.java: * @param cell individual Cell n * @throws java.io.IOException e
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java: * @param cell individual cell n * @throws java.io.IOException e
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.java: * DataBlockEncoding is been used, this is having no effect. n * @return this (for chained
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java: * enabled state for it to be disabled. n * @throws IOException There could be couple types of
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Append.java: * n * True (default) if the append operation should return the results. A client that is not
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java: * @param startRow row to start scanner at or after n * @throws IllegalArgumentException if
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java: * @param inclusive whether we should include the start row when scan n * @throws
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java: * @param stopRow row to end at (exclusive) n * @throws IllegalArgumentException if stopRow does
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java: * @param inclusive whether we should include the stop row when scan n * @throws
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java: * delivered to the caller. nn * @see Result#mayHaveMoreCellsInRow()
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ServerSideScanMetrics.java: * Create a new counter with the specified name n * @return {@link AtomicLong} instance for the
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ServerSideScanMetrics.java: * n * @return true if a counter exists with the counterName
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ServerSideScanMetrics.java: * n * @return {@link AtomicLong} instance for this counter name, null if counter does not exist.
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptorBuilder.java: * @param className Full class name. n * @return the modifyable TD
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/TableDescriptorBuilder.java: * @param specStr The Coprocessor specification all in in one String n * @return the modifyable
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java: * Create a protocol buffer CellVisibility based on a client CellVisibility. n * @return a
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java: * Convert a protocol buffer CellVisibility to a client CellVisibility n * @return the converted
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java: * Convert a protocol buffer CellVisibility bytes to a client CellVisibility n * @return the
./hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java: * @param row Row to check nn * @throws IllegalArgumentException Thrown if row is
./hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterId.java: * @return An instance of {@link ClusterId} made from bytes n * @see #toByteArray()
./hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterId.java: * n * @return A {@link ClusterId} made from the passed in cid
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Create a protocol buffer Mutate based on a client Mutation nn * @return a protobuf'd Mutation n
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Understanding is that the Cell will be transported other than via protobuf. nnn * @return a
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Understanding is that the Cell will be transported other than via protobuf. nn * @return a
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * {@link #toMutationNoData(MutationType, Mutation)} nn * @return A partly-filled out protobuf'd
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a delete KeyValue type to protocol buffer DeleteType. n * @return protocol buffer
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * A helper to warmup a region given a region name using admin protocol nn *
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * A helper to get the all the online regions on a region server using admin protocol. n * @return
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * judiciously. n * @return toString of passed m
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a protocol buffer CellVisibility to a client CellVisibility n * @return the converted
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a protocol buffer CellVisibility bytes to a client CellVisibility n * @return the
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Create a protocol buffer CellVisibility based on a client CellVisibility. n * @return a
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a protocol buffer Authorizations to a client Authorizations n * @return the converted
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a protocol buffer Authorizations bytes to a client Authorizations n * @return the
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Create a protocol buffer Authorizations based on a client Authorizations. n * @return a
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a protocol buffer TimeUnit to a client TimeUnit n * @return the converted client
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a client TimeUnit to a protocol buffer TimeUnit n * @return the converted protocol
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a protocol buffer ThrottleType to a client ThrottleType n * @return the converted
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a client ThrottleType to a protocol buffer ThrottleType n * @return the converted
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a protocol buffer QuotaScope to a client QuotaScope n * @return the converted client
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a client QuotaScope to a protocol buffer QuotaScope n * @return the converted protocol
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a protocol buffer QuotaType to a client QuotaType n * @return the converted client
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * Convert a client QuotaType to a protocol buffer QuotaType n * @return the converted protocol
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ProtobufUtil.java: * n * @return A String version of the passed in msg
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ResponseConverter.java: * Wrap a throwable to an action result. n * @return an action result builder
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ResponseConverter.java: * Wrap a throwable to an action result. n * @return an action result builder
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ResponseConverter.java: * n * @return NameValuePair of the exception name to stringified version os exception.
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ResponseConverter.java: * A utility to build a GetServerInfoResponse. nn * @return the response
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ResponseConverter.java: * A utility to build a GetOnlineRegionResponse. n * @return the response
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer MutateRequest for a put nn * @return a mutate request n
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer MutateRequest for an append nn * @return a mutate request n
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer MutateRequest for a client increment nn * @return a mutate request
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer MutateRequest for a delete nn * @return a mutate request n
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer ScanRequest for a client Scan nnnn * @return a scan request n
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer ScanRequest for a scanner id nnn * @return a scan request
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer ScanRequest for a scanner id nnnn * @return a scan request
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer bulk load request nnnnnn * @return a bulk load request
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * @param major indicator if it is a major compaction n * @return a CompactRegionRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer AddColumnRequest nn * @return an AddColumnRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer DeleteColumnRequest nn * @return a DeleteColumnRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer ModifyColumnRequest nn * @return an ModifyColumnRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer MoveRegionRequest nn * @return A MoveRegionRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Create a protocol buffer AssignRegionRequest n * @return an AssignRegionRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer UnassignRegionRequest n * @return an UnassignRegionRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer OfflineRegionRequest n * @return an OfflineRegionRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer DeleteTableRequest n * @return a DeleteTableRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer EnableTableRequest n * @return an EnableTableRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer DisableTableRequest n * @return a DisableTableRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer CreateTableRequest nn * @return a CreateTableRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer ModifyTableRequest nn * @return a ModifyTableRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer GetTableDescriptorsRequest n * @return a GetTableDescriptorsRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer SetBalancerRunningRequest nn * @return a SetBalancerRunningRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a request for querying the master the last flushed sequence Id for a region n * @return
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer SetNormalizerRunningRequest n * @return a SetNormalizerRunningRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer CreateNamespaceRequest n * @return a CreateNamespaceRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer ModifyNamespaceRequest n * @return a ModifyNamespaceRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer DeleteNamespaceRequest n * @return a DeleteNamespaceRequest
./hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java: * Creates a protocol buffer GetNamespaceDescriptorRequest n * @return a
./hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/CellBlockBuilder.java: * compressor. nnn * @return Null or byte buffer filled with a cellblock filled with
./hbase-client/src/main/java/org/apache/hadoop/hbase/filter/Filter.java: * {@link IOException}. n * @return true if and only if the fields of the filter that are
./hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java: * fixed positions n * @return mask array
./hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FuzzyRowFilter.java: * of fuzzyKeyMeta hint = '\x01\x01\x01\x00\x00' will skip valid row '\x01\x01\x01' nn * @param
./hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlUtil.java: * @param actions the permissions to be granted n * @deprecated Use
./hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlUtil.java: * @param actions the permissions to be granted n * @deprecated Use
./hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlUtil.java: * @param actions the permissions to be granted n * @deprecated Use
./hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlUtil.java: * @param t optional table name n * @deprecated Use
./hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlUtil.java: * @param namespace name of the namespace n * @deprecated Use
./hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlUtil.java: * @return true if access allowed, otherwise false n * @deprecated Use
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/MiniHBaseCluster.java: * n * @param currentfs We return this if we did not make a new one.
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/MiniHBaseCluster.java: * Starts a region server thread running n * @return New RegionServerThread
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/MiniHBaseCluster.java: * n * @return Name of region server that just went down.
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/MiniHBaseCluster.java: * Grab a numbered region server of your choice. n * @return region server
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * @param servers How many DNs to start. n * @see #shutdownMiniDFSCluster()
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * @param hosts hostnames DNs to run on. n * @see #shutdownMiniDFSCluster()
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * @param hosts hostnames DNs to run on. n * @see #shutdownMiniDFSCluster()
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * Create a table. nn * @return A Table instance for the created table. n
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * Create a table. nn * @return A Table instance for the created table. n
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * Create a table. nn * @return A Table instance for the created table. n
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * Create a table with multiple regions. nnn * @return A Table instance for the created table. n
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * Create a table. nn * @return A Table instance for the created table. n
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * Create a table with multiple regions. nn * @return A Table instance for the created table. n
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * Create a table with multiple regions. n * @param replicaCount replica count. n * @return A
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * Create a table. nnn * @return A Table instance for the created table. n
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * nnnnn * @return A region on which you must call
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * nnn * @return list of region info for regions added to meta n
./hbase-testing-util/src/main/java/org/apache/hadoop/hbase/HBaseTestingUtility.java: * Create region split keys between startkey and endKey nn * @param numRegions the number of
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java: * n * HBaseConfiguration to used n * whether to use write ahead logging. This can be turned off
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java: * n * the name of the table, as a string
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java: * @return the named mutator n * if there is a problem opening a table
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java: * Writes an action (Put or Delete) to the specified table. n * the table being updated. n * the
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/MultiTableOutputFormat.java: * update, either a put or a delete. n * if the action is not a put or a delete.
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java: * Create n splits for one InputSplit, For now only support uniform distribution
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java: * @param n Number of ranges after splitting. Pass 1 means no split for the range Pass 2 if
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableRecordReader.java: * @return The current key. n * @throws InterruptedException When the job is aborted.
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/ImportTsv.java: * Return starting position and length of row key from the specified line bytes. nn * @return
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCreator.java: * @return created Cell n * @deprecated since 0.98.9
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CellCreator.java: * @param vlength value length n * @return created Cell n
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.java: * n *
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java: * Allows subclasses to set the {@link TableRecordReader}. n * to provide other
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReader.java: * n *
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReader.java: * n *
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java: * n *
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableRecordReaderImpl.java: * n *
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/RowCounter.java: * n * @return the JobConf n
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/TableOutputFormat.java: * @param name Name of the job n * @return The newly created writer instance.
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/GroupingTableMap.java: * are not found. Override this method if you want to deal with nulls differently. n * @return
./hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapred/GroupingTableMap.java: * produce different types of keys. n * @return key generated by concatenating multiple column
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormat.java: * Setup a table with two rows and values. n * @return A Table instance for the created table. n
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormat.java: * Setup a table with two rows and values per column family. n * @return A Table instance for the
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormat.java: * Run test assuming NotServingRegionException using newer mapreduce api n * @throws
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableInputFormat.java: * Run test assuming NotServingRegionException using newer mapreduce api n * @throws
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithOperationAttributes.java: * necessary. This method is static to insure non-reliance on instance's util/conf facilities. n *
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithOperationAttributes.java: * Any arguments to pass BEFORE inputFile path is appended. n * @return The Tool instance used to
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java: * Runs an export job with the specified command line args n * @return true if job completed
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportExport.java: * Runs an import job with the specified command line args n * @return true if job completed
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableMapReduceBase.java: * @param table Table to scan. n * @throws NullPointerException if we failed to find a cell value
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestImportTSVWithVisibilityLabels.java: * necessary. This method is static to insure non-reliance on instance's util/conf facilities. n *
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultithreadedTableMapper.java: * @param table Table to scan. n * @throws NullPointerException if we failed to find a cell value
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java: * Format passed integer. n * @return Returns zero-prefixed ROW_LENGTH-byte wide decimal version
./hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapred/TestTableInputFormat.java: * Setup a table with two rows and values per column family. n * @return A Table instance for the
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKNodeTracker.java: * @param timeout maximum time to wait for the node data to be available, n milliseconds. Pass 0
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * retry the operations one-by-one (sequentially). n * - zk reference n * - if true when we get a
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * ZooKeeper exception that could retry the operations one-by-one (sequentially) n * - path of the
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * @throws KeeperException.NotEmptyException if node has children while deleting n * if unexpected
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * ZooKeeper exception n * if an invalid path is
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * retry the operations one-by-one (sequentially). n * - zk reference n * - if true when we get a
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * ZooKeeper exception that could retry the operations one-by-one (sequentially) n * - path of the
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * @throws KeeperException.NotEmptyException if node has children while deleting n * if unexpected
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * ZooKeeper exception n * if an invalid path is
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * as that of the traversal. Lists all the children without setting any watches. n * - zk
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * reference n * - path of node
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * @return list of children znodes under the path n * if unexpected ZooKeeper exception
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * as that of the traversal. Lists all the children and set watches on to them. n * - zk reference
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * n * - path of node
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java: * @return list of children znodes under the path n * if unexpected ZooKeeper exception
./hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/MetaTableLocator.java: * @param replicaId the ID of the replica n * @throws KeeperException if a ZooKeeper operation
./hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java: * @param in Thrift ColumnDescriptor object n * @throws IllegalArgument if the column name is
./hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java: * HColumnDescriptor object. n * Hbase HColumnDescriptor object
./hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java: * empty list is returned if the input is null. n * Hbase Cell object
./hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java: * object. The empty list is returned if the input is null. n * Hbase RowResult object n * This
./hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java: * RowResult objects. The empty list is returned if the input is null. n * Array of Hbase
./hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/HBaseServiceHandler.java: * Creates and returns a Table instance from a given table name. n * name of table
./hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java: * @param n The number to pad.
./hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandlerWithLabels.java: * Padding numbers to make comparison of sort order easier in a for loop n * The number to pad. n
./hbase-metrics-api/src/main/java/org/apache/hadoop/hbase/metrics/Counter.java: * @param n The amount to increment.
./hbase-metrics-api/src/main/java/org/apache/hadoop/hbase/metrics/Counter.java: * @param n The amount to decrement.
./hbase-http/src/main/java/org/apache/hadoop/hbase/http/jmx/JMXJsonServlet.java: * Process a GET request for the specified resource. n * The servlet request we are processing n *
./hbase-http/src/main/java/org/apache/hadoop/hbase/http/HttpServer.java: * Add an endpoint that the HTTP server should listen to. n * the endpoint of that the HTTP
./hbase-http/src/main/java/org/apache/hadoop/hbase/http/ProxyUserAuthenticationFilter.java: * n * @return doAs parameter if exists or null otherwise