diff --git a/CHANGES.txt b/CHANGES.txt
index 27e1f3a..f283803 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -239,20 +239,6 @@ Release 0.91.0 - Unreleased
HBASE-4027 Off Heap Cache never creates Slabs (Li Pi)
HBASE-4265 zookeeper.KeeperException$NodeExistsException if HMaster restarts
while table is being disabled (Ming Ma)
- HBASE-4338 Package build for rpm and deb are broken (Eric Yang)
- HBASE-4309 slow query log metrics spewing warnings (Riley Patterson)
- HBASE-4302 Only run Snappy compression tests if Snappy is available
- (Alejandro Abdelnur via todd)
- HBASE-4271 Clean up coprocessor handling of table operations
- (Ming Ma via garyh)
- HBASE-4341 HRS#closeAllRegions should take care of HRS#onlineRegions's
- weak consistency (Jieshan Bean)
- HBASE-4297 TableMapReduceUtil overwrites user supplied options
- (Jan Lukavsky)
- HBASE-4015 Refactor the TimeoutMonitor to make it less racy
- (ramkrishna.s.vasudevan)
-
-
IMPROVEMENTS
HBASE-3290 Max Compaction Size (Nicolas Spiegelberg via Stack)
@@ -457,14 +443,6 @@ Release 0.91.0 - Unreleased
HBASE-1730 Online Schema Changes
HBASE-4206 jenkins hash implementation uses longs unnecessarily
(Ron Yang)
- HBASE-3842 Refactor Coprocessor Compaction API
- HBASE-4312 Deploy new hbase logo
- HBASE-4327 Compile HBase against hadoop 0.22 (Joep Rottinghuis)
- HBASE-4339 Improve eclipse documentation and project file generation
- (Eric Charles)
- HBASE-4342 Update Thrift to 0.7.0 (Moaz Reyad)
- HBASE-4260 Expose a command to manually trigger an HLog roll
- (ramkrishna.s.vasudevan)
TASKS
HBASE-3559 Move report of split to master OFF the heartbeat channel
@@ -485,8 +463,6 @@ Release 0.91.0 - Unreleased
HBASE-4315 RS requestsPerSecond counter seems to be off (subramanian raghunathan)
HBASE-4289 Move spinlock to SingleSizeCache rather than the slab allocator
(Li Pi)
- HBASE-4296 Deprecate HTable[Interface].getRowOrBefore(...) (Lars Hofhansl)
- HBASE-2195 Support cyclic replication (Lars Hofhansl)
NEW FEATURES
HBASE-2001 Coprocessors: Colocate user code with regions (Mingjie Lai via
diff --git a/pom.xml b/pom.xml
index 6cb67e5..7c154d0 100644
--- a/pom.xml
+++ b/pom.xml
@@ -586,7 +586,7 @@
- ${project.build.directory}/generated-jamon
+ ${basedir}/target/jspc
${project.build.directory}/generated-sources/java
@@ -663,7 +663,7 @@
2.4.0a
1.5.8
1.0.1
- 0.7.0
+ 0.6.1
3.3.3
0.0.1-SNAPSHOT
@@ -674,7 +674,6 @@
10.91.0
- ${artifactId}-${version}
@@ -1123,12 +1122,12 @@
-
+
hadoop-0.20
- !hadoop.profile
+ !hadoop23
@@ -1177,161 +1176,14 @@
-
- hadoop-0.22
-
-
- hadoop.profile
- 22
-
-
-
- 0.22.0-SNAPSHOT
-
-
-
- org.apache.hadoop
- hadoop-common
- ${hadoop.version}
-
-
-
- hsqldb
- hsqldb
-
-
- net.sf.kosmosfs
- kfs
-
-
- org.eclipse.jdt
- core
-
-
- net.java.dev.jets3t
- jets3t
-
-
- oro
- oro
-
-
- jdiff
- jdiff
-
-
- org.apache.lucene
- lucene-core
-
-
-
-
- org.apache.hadoop
- hadoop-hdfs
- ${hadoop.version}
-
-
-
- hsqldb
- hsqldb
-
-
- net.sf.kosmosfs
- kfs
-
-
- org.eclipse.jdt
- core
-
-
- net.java.dev.jets3t
- jets3t
-
-
- oro
- oro
-
-
- jdiff
- jdiff
-
-
- org.apache.lucene
- lucene-core
-
-
-
-
- org.apache.hadoop
- hadoop-mapred
- ${hadoop.version}
-
-
-
- hsqldb
- hsqldb
-
-
- net.sf.kosmosfs
- kfs
-
-
- org.eclipse.jdt
- core
-
-
- net.java.dev.jets3t
- jets3t
-
-
- oro
- oro
-
-
- jdiff
- jdiff
-
-
- org.apache.lucene
- lucene-core
-
-
-
-
-
- org.apache.hadoop
- hadoop-common-test
- ${hadoop.version}
- test
-
-
- org.apache.hadoop
- hadoop-hdfs-test
- ${hadoop.version}
- test
-
-
- org.apache.hadoop
- hadoop-mapred-test
- ${hadoop.version}
- test
-
-
-
-
-
hadoop-0.23
- hadoop.profile
- 23
+ hadoop23
diff --git a/src/docbkx/book.xml b/src/docbkx/book.xml
index 5534acd..3bed801 100644
--- a/src/docbkx/book.xml
+++ b/src/docbkx/book.xml
@@ -28,14 +28,14 @@
xmlns:html="http://www.w3.org/1999/xhtml"
xmlns:db="http://docbook.org/ns/docbook">
-
+ The Apache
-
+
-
+ Book
2011
Apache Software Foundation
This is the official book of
diff --git a/src/docbkx/build.xml b/src/docbkx/build.xml
index 6541727..25dc0b2 100644
--- a/src/docbkx/build.xml
+++ b/src/docbkx/build.xml
@@ -12,8 +12,8 @@
Building in snappy compression support
- Pass -Dsnappy to trigger the snappy maven profile for building
- snappy native libs into hbase.
+ Pass -Dsnappy to trigger the snappy maven profile for building
+ snappy native libs into hbase.
diff --git a/src/docbkx/developer.xml b/src/docbkx/developer.xml
index 9f3d8bd..70c40f8 100644
--- a/src/docbkx/developer.xml
+++ b/src/docbkx/developer.xml
@@ -47,8 +47,7 @@ git clone git://git.apache.org/hbase.git
mvn eclipse:eclipse
- ... from your local HBase project directory in your workspace to generate some new .project
- and .classpath files. Then reopen Eclipse.
+ ... from your local HBase project directory in your workspace to generate a new .project file. Then reopen Eclipse.
Maven Plugin
@@ -67,16 +66,6 @@ Unbound classpath variable: 'M2_REPO/com/github/stephenc/high-scale-lib/high-sca
Unbound classpath variable: 'M2_REPO/com/google/guava/guava/r09/guava-r09.jar' in project 'hbase' hbase Build path Build Path Problem
Unbound classpath variable: 'M2_REPO/com/google/protobuf/protobuf-java/2.3.0/protobuf-java-2.3.0.jar' in project 'hbase' hbase Build path Build Path Problem Unbound classpath variable:
-
-
- Import via m2eclipse
- If you install the m2eclipse and import the HBase pom.xml in your workspace, you will have to fix your eclipse Build Path.
- Remove target folder, add target/generated-jamon
- and target/generated-sources/java folders. You may also remove from your Build Path
- the exclusions on the src/main/resources and src/test/resources
- to avoid error message in the console 'Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project hbase:
- 'An Ant BuildException has occured: Replace: source file .../target/classes/hbase-default.xml doesn't exist'. This will also
- reduce the eclipse build cycles and make your life easier when developing.
Eclipse Known Issues
diff --git a/src/main/jamon/org/apache/hbase/tmpl/master/MasterStatusTmpl.jamon b/src/main/jamon/org/apache/hbase/tmpl/master/MasterStatusTmpl.jamon
index 38731bc..e3e13e7 100644
--- a/src/main/jamon/org/apache/hbase/tmpl/master/MasterStatusTmpl.jamon
+++ b/src/main/jamon/org/apache/hbase/tmpl/master/MasterStatusTmpl.jamon
@@ -49,7 +49,7 @@ org.apache.hadoop.hbase.HTableDescriptor;
-
+
Local logs,
diff --git a/src/main/jamon/org/apache/hbase/tmpl/regionserver/RSStatusTmpl.jamon b/src/main/jamon/org/apache/hbase/tmpl/regionserver/RSStatusTmpl.jamon
index d974e91..864e62c 100644
--- a/src/main/jamon/org/apache/hbase/tmpl/regionserver/RSStatusTmpl.jamon
+++ b/src/main/jamon/org/apache/hbase/tmpl/regionserver/RSStatusTmpl.jamon
@@ -54,7 +54,7 @@ org.apache.hadoop.hbase.HRegionInfo;
-
+
Region Server: <% serverInfo.getServerAddress().getHostname() %>:<% serverInfo.getServerAddress().getPort() %>
Local logs,
diff --git a/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java b/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java
index 06fd29c..55cf55e 100644
--- a/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java
+++ b/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java
@@ -54,7 +54,9 @@ public class HBaseConfiguration extends Configuration {
public HBaseConfiguration(final Configuration c) {
//TODO:replace with private constructor
this();
- merge(this, c);
+ for (Entry<String, String> e : c) {
+ set(e.getKey(), e.getValue());
+ }
}
private static void checkDefaultsVersion(Configuration conf) {
@@ -107,19 +109,9 @@ public class HBaseConfiguration extends Configuration {
*/
public static Configuration create(final Configuration that) {
Configuration conf = create();
- merge(conf, that);
- return conf;
- }
-
- /**
- * Merge two configurations.
- * @param destConf the configuration that will be overwritten with items
- * from the srcConf
- * @param srcConf the source configuration
- **/
- public static void merge(Configuration destConf, Configuration srcConf) {
- for (Entry<String, String> e : srcConf) {
- destConf.set(e.getKey(), e.getValue());
+ for (Entry<String, String> e : that) {
+ conf.set(e.getKey(), e.getValue());
}
+ return conf;
}
}
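
The hunk above removes the public HBaseConfiguration.merge() helper and inlines its copy loop at both remaining call sites. A minimal sketch of the helper's behavior, assuming only the stock Hadoop Configuration API (Configuration implements Iterable over Map.Entry<String, String>):

import java.util.Map.Entry;
import org.apache.hadoop.conf.Configuration;

public final class ConfigMergeSketch {
  // Copy every entry of srcConf into destConf, overwriting duplicate
  // keys: the behavior of the merge() helper removed in the hunk above.
  static void merge(Configuration destConf, Configuration srcConf) {
    for (Entry<String, String> e : srcConf) {
      destConf.set(e.getKey(), e.getValue());
    }
  }

  public static void main(String[] args) {
    Configuration dest = new Configuration(false); // no default resources
    dest.set("a", "1");
    Configuration src = new Configuration(false);
    src.set("a", "2");
    src.set("b", "3");
    merge(dest, src);
    System.out.println(dest.get("a") + " " + dest.get("b")); // prints: 2 3
  }
}

Note the overwrite semantics: source entries win, which is why call order matters at the call sites.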
diff --git a/src/main/java/org/apache/hadoop/hbase/HConstants.java b/src/main/java/org/apache/hadoop/hbase/HConstants.java
index 45fefe4..3e3e902 100644
--- a/src/main/java/org/apache/hadoop/hbase/HConstants.java
+++ b/src/main/java/org/apache/hadoop/hbase/HConstants.java
@@ -25,7 +25,6 @@ import org.apache.hadoop.hbase.util.Bytes;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
-import java.util.UUID;
import java.util.regex.Pattern;
/**
@@ -204,12 +203,6 @@ public final class HConstants {
/** Configuration key storing the cluster ID */
public static final String CLUSTER_ID = "hbase.cluster.id";
- /**
- * Attribute used in Puts and Gets to indicate the originating
- * cluster.
- */
- public static final String CLUSTER_ID_ATTR = "_c.id_";
-
// Always store the location of the root table's HRegion.
// This HRegion is never split.
@@ -371,7 +364,7 @@ public final class HConstants {
* Default cluster ID, cannot be used to identify a cluster so a key with
* this value means it wasn't meant for replication.
*/
- public static final UUID DEFAULT_CLUSTER_ID = new UUID(0L,0L);
+ public static final byte DEFAULT_CLUSTER_ID = 0;
/**
* Parameter name for maximum number of bytes returned when calling a
diff --git a/src/main/java/org/apache/hadoop/hbase/client/Delete.java b/src/main/java/org/apache/hadoop/hbase/client/Delete.java
index cb20b46..01b77c1 100644
--- a/src/main/java/org/apache/hadoop/hbase/client/Delete.java
+++ b/src/main/java/org/apache/hadoop/hbase/client/Delete.java
@@ -37,7 +37,6 @@ import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;
-import java.util.UUID;
/**
* Used to perform Delete operations on a single row.
@@ -514,26 +513,4 @@ public class Delete extends Operation
public void setWriteToWAL(boolean write) {
this.writeToWAL = write;
}
-
- /**
- * Set the replication custer id.
- * @param clusterId
- */
- public void setClusterId(UUID clusterId) {
- byte[] val = new byte[2*Bytes.SIZEOF_LONG];
- Bytes.putLong(val, 0, clusterId.getMostSignificantBits());
- Bytes.putLong(val, Bytes.SIZEOF_LONG, clusterId.getLeastSignificantBits());
- setAttribute(HConstants.CLUSTER_ID_ATTR, val);
- }
-
- /**
- * @return The replication cluster id.
- */
- public UUID getClusterId() {
- byte[] attr = getAttribute(HConstants.CLUSTER_ID_ATTR);
- if (attr == null) {
- return HConstants.DEFAULT_CLUSTER_ID;
- }
- return new UUID(Bytes.toLong(attr,0), Bytes.toLong(attr, Bytes.SIZEOF_LONG));
- }
}
diff --git a/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java b/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
index 29e267f..25c6662 100644
--- a/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
+++ b/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
@@ -56,7 +56,6 @@ import org.apache.hadoop.hbase.catalog.MetaReader;
import org.apache.hadoop.hbase.client.MetaScanner.MetaScannerVisitor;
import org.apache.hadoop.hbase.ipc.HMasterInterface;
import org.apache.hadoop.hbase.ipc.HRegionInterface;
-import org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException;
import org.apache.hadoop.hbase.util.Addressing;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;
@@ -107,10 +106,10 @@ public class HBaseAdmin implements Abortable, Closeable {
break;
} catch (MasterNotRunningException mnre) {
HConnectionManager.deleteStaleConnection(this.connection);
- this.connection = HConnectionManager.getConnection(this.conf);
+ this.connection = HConnectionManager.getConnection(this.conf);
} catch (UndeclaredThrowableException ute) {
HConnectionManager.deleteStaleConnection(this.connection);
- this.connection = HConnectionManager.getConnection(this.conf);
+ this.connection = HConnectionManager.getConnection(this.conf);
}
try { // Sleep
Thread.sleep(getPauseTime(tries));
@@ -486,23 +485,8 @@ public class HBaseAdmin implements Abortable, Closeable {
firstMetaServer.getRegionInfo().getRegionName(), scan);
// Get a batch at a time.
Result values = server.next(scannerId);
-
- // let us wait until .META. table is updated and
- // HMaster removes the table from its HTableDescriptors
if (values == null) {
- boolean tableExists = false;
- HTableDescriptor[] htds = getMaster().getHTableDescriptors();
- if (htds != null && htds.length > 0) {
- for (HTableDescriptor htd: htds) {
- if (Bytes.equals(tableName, htd.getName())) {
- tableExists = true;
- break;
- }
- }
- }
- if (!tableExists) {
- break;
- }
+ break;
}
} catch (IOException ex) {
if(tries == numRetries - 1) { // no more tries left
@@ -1597,24 +1581,4 @@ public class HBaseAdmin implements Abortable, Closeable {
return this.connection.getHTableDescriptors(tableNames);
}
- /**
- * Roll the log writer. That is, start writing log messages to a new file.
- *
- * @param serverName
- * The servername of the regionserver. A server name is made of host,
- * port and startcode. This is mandatory. Here is an example:
- * host187.example.com,60020,1289493121758
- * @return If lots of logs, flush the returned regions so next time through
- * we can clean logs. Returns null if nothing to flush. Names are actual
- * region names as returned by {@link HRegionInfo#getEncodedName()}
- * @throws IOException if a remote or network exception occurs
- * @throws FailedLogCloseException
- */
- public synchronized byte[][] rollHLogWriter(String serverName)
- throws IOException, FailedLogCloseException {
- ServerName sn = new ServerName(serverName);
- HRegionInterface rs = this.connection.getHRegionConnection(
- sn.getHostname(), sn.getPort());
- return rs.rollHLogWriter();
- }
}
diff --git a/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java b/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
index 39ab770..6fec787 100644
--- a/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
+++ b/src/main/java/org/apache/hadoop/hbase/client/HTableInterface.java
@@ -136,12 +136,6 @@ public interface HTableInterface {
* @param family Column family to include in the {@link Result}.
* @throws IOException if a remote or network exception occurs.
* @since 0.20.0
- *
- * @deprecated As of version 0.92 this method is deprecated without
- * replacement.
- * getRowOrBefore is used internally to find entries in .META. and makes
- * various assumptions about the table (which are true for .META. but not
- * in general) to be efficient.
*/
Result getRowOrBefore(byte[] row, byte[] family) throws IOException;
diff --git a/src/main/java/org/apache/hadoop/hbase/client/Put.java b/src/main/java/org/apache/hadoop/hbase/client/Put.java
index c39626e..fa25c63 100644
--- a/src/main/java/org/apache/hadoop/hbase/client/Put.java
+++ b/src/main/java/org/apache/hadoop/hbase/client/Put.java
@@ -40,7 +40,6 @@ import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;
-import java.util.UUID;
/**
@@ -657,26 +656,4 @@ public class Put extends Operation
byte [][] parts = KeyValue.parseColumn(column);
return add(parts[0], parts[1], ts, value);
}
-
- /**
- * Set the replication custer id.
- * @param clusterId
- */
- public void setClusterId(UUID clusterId) {
- byte[] val = new byte[2*Bytes.SIZEOF_LONG];
- Bytes.putLong(val, 0, clusterId.getMostSignificantBits());
- Bytes.putLong(val, Bytes.SIZEOF_LONG, clusterId.getLeastSignificantBits());
- setAttribute(HConstants.CLUSTER_ID_ATTR, val);
- }
-
- /**
- * @return The replication cluster id.
- */
- public UUID getClusterId() {
- byte[] attr = getAttribute(HConstants.CLUSTER_ID_ATTR);
- if (attr == null) {
- return HConstants.DEFAULT_CLUSTER_ID;
- }
- return new UUID(Bytes.toLong(attr,0), Bytes.toLong(attr, Bytes.SIZEOF_LONG));
- }
}
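
Both Delete (earlier in this diff) and Put lose the same pair of methods here: setClusterId() packed the UUID's two longs into a 16-byte attribute value, and getClusterId() reassembled them. A round-trip sketch of that encoding, using java.nio.ByteBuffer in place of HBase's Bytes utility so it runs standalone:

import java.nio.ByteBuffer;
import java.util.UUID;

public final class UuidAttrSketch {
  // Pack a UUID into 16 bytes, most-significant long first, mirroring
  // the removed setClusterId(): two putLong calls into a byte[16].
  static byte[] toBytes(UUID id) {
    ByteBuffer buf = ByteBuffer.allocate(16);
    buf.putLong(id.getMostSignificantBits());
    buf.putLong(id.getLeastSignificantBits());
    return buf.array();
  }

  // Rebuild the UUID from the attribute bytes, as getClusterId() did.
  static UUID fromBytes(byte[] attr) {
    ByteBuffer buf = ByteBuffer.wrap(attr);
    return new UUID(buf.getLong(), buf.getLong());
  }

  public static void main(String[] args) {
    UUID id = UUID.randomUUID();
    System.out.println(id.equals(fromBytes(toBytes(id)))); // prints: true
  }
}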
diff --git a/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java b/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
index 81171ab..280d2d7 100644
--- a/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
+++ b/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationAdmin.java
@@ -67,6 +67,7 @@ import org.apache.zookeeper.KeeperException;
public class ReplicationAdmin implements Closeable {
private final ReplicationZookeeper replicationZk;
+ private final Configuration configuration;
private final HConnection connection;
/**
@@ -80,6 +81,7 @@ public class ReplicationAdmin implements Closeable {
throw new RuntimeException("hbase.replication isn't true, please " +
"enable it in order to use replication");
}
+ this.configuration = conf;
this.connection = HConnectionManager.getConnection(conf);
ZooKeeperWatcher zkw = this.connection.getZooKeeperWatcher();
try {
diff --git a/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterObserver.java b/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterObserver.java
index c6374a0..298b2e1 100644
--- a/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterObserver.java
+++ b/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseMasterObserver.java
@@ -32,12 +32,12 @@ import java.io.IOException;
public class BaseMasterObserver implements MasterObserver {
@Override
public void preCreateTable(ObserverContext ctx,
- HTableDescriptor desc, HRegionInfo[] regions) throws IOException {
+ HTableDescriptor desc, byte[][] splitKeys) throws IOException {
}
@Override
public void postCreateTable(ObserverContext ctx,
- HTableDescriptor desc, HRegionInfo[] regions) throws IOException {
+ HRegionInfo[] regions, boolean sync) throws IOException {
}
@Override
@@ -112,17 +112,17 @@ public class BaseMasterObserver implements MasterObserver {
@Override
public void preAssign(ObserverContext ctx,
- HRegionInfo regionInfo, boolean force) throws IOException {
+ byte[] regionName, boolean force) throws IOException {
}
@Override
public void postAssign(ObserverContext ctx,
- HRegionInfo regionInfo, boolean force) throws IOException {
+ HRegionInfo regionInfo) throws IOException {
}
@Override
public void preUnassign(ObserverContext ctx,
- HRegionInfo regionInfo, boolean force) throws IOException {
+ byte[] regionName, boolean force) throws IOException {
}
@Override
diff --git a/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java b/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
index 54106f0..d473ba7 100644
--- a/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
+++ b/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
@@ -19,7 +19,6 @@ package org.apache.hadoop.hbase.coprocessor;
import java.util.List;
import java.util.Map;
-import com.google.common.collect.ImmutableList;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.KeyValue;
@@ -34,8 +33,6 @@ import org.apache.hadoop.hbase.filter.WritableByteArrayComparable;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.RegionScanner;
-import org.apache.hadoop.hbase.regionserver.Store;
-import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
@@ -81,22 +78,12 @@ public abstract class BaseRegionObserver implements RegionObserver {
HRegion l, HRegion r) { }
@Override
- public void preCompactSelection(final ObserverContext c,
- final Store store, final List candidates) { }
-
- @Override
- public void postCompactSelection(final ObserverContext c,
- final Store store, final ImmutableList selected) { }
-
- @Override
- public InternalScanner preCompact(ObserverContext e,
- final Store store, final InternalScanner scanner) {
- return scanner;
- }
+ public void preCompact(ObserverContext e,
+ boolean willSplit) { }
@Override
public void postCompact(ObserverContext e,
- final Store store, final StoreFile resultFile) { }
+ boolean willSplit) { }
@Override
public void preGetClosestRowBefore(final ObserverContext e,
diff --git a/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java b/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
index 96774d6..0008c49 100644
--- a/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
+++ b/src/main/java/org/apache/hadoop/hbase/coprocessor/MasterObserver.java
@@ -1,5 +1,5 @@
/*
- * Copyright 2011 The Apache Software Foundation
+ * Copyright 2010 The Apache Software Foundation
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
@@ -33,207 +33,142 @@ public interface MasterObserver extends Coprocessor {
/**
* Called before a new table is created by
* {@link org.apache.hadoop.hbase.master.HMaster}.
- * It can't bypass the default action, e.g., ctx.bypass() won't have effect.
- * @param ctx the environment to interact with the framework and master
- * @param desc the HTableDescriptor for the table
- * @param regions the initial regions created for the table
- * @throws IOException
*/
void preCreateTable(final ObserverContext ctx,
- HTableDescriptor desc, HRegionInfo[] regions) throws IOException;
+ HTableDescriptor desc, byte[][] splitKeys) throws IOException;
/**
- * Called after the createTable operation has been requested.
+ * Called after the initial table regions have been created.
* @param ctx the environment to interact with the framework and master
- * @param desc the HTableDescriptor for the table
* @param regions the initial regions created for the table
+ * @param sync whether the client call is waiting for region assignment to
+ * complete before returning
* @throws IOException
*/
void postCreateTable(final ObserverContext ctx,
- HTableDescriptor desc, HRegionInfo[] regions) throws IOException;
+ HRegionInfo[] regions, boolean sync) throws IOException;
/**
* Called before {@link org.apache.hadoop.hbase.master.HMaster} deletes a
* table
- * It can't bypass the default action, e.g., ctx.bypass() won't have effect.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
*/
void preDeleteTable(final ObserverContext ctx,
byte[] tableName) throws IOException;
/**
- * Called after the deleteTable operation has been requested.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
+ * Called after the table has been deleted, before returning to the client.
*/
void postDeleteTable(final ObserverContext ctx,
byte[] tableName) throws IOException;
/**
* Called prior to modifying a table's properties.
- * It can't bypass the default action, e.g., ctx.bypass() won't have effect.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
- * @param htd the HTableDescriptor
*/
void preModifyTable(final ObserverContext ctx,
final byte[] tableName, HTableDescriptor htd) throws IOException;
/**
- * Called after the modifyTable operation has been requested.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
- * @param htd the HTableDescriptor
+ * Called after {@link org.apache.hadoop.hbase.master.HMaster} has modified
+ * the table's properties in all the table regions.
*/
void postModifyTable(final ObserverContext ctx,
final byte[] tableName, HTableDescriptor htd) throws IOException;
/**
* Called prior to adding a new column family to the table.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
- * @param column the HColumnDescriptor
*/
void preAddColumn(final ObserverContext ctx,
byte[] tableName, HColumnDescriptor column) throws IOException;
/**
* Called after the new column family has been created.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
- * @param column the HColumnDescriptor
*/
void postAddColumn(final ObserverContext ctx,
byte[] tableName, HColumnDescriptor column) throws IOException;
/**
* Called prior to modifying a column family's attributes.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
- * @param descriptor the HColumnDescriptor
*/
void preModifyColumn(final ObserverContext ctx,
byte [] tableName, HColumnDescriptor descriptor) throws IOException;
/**
* Called after the column family has been updated.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
- * @param descriptor the HColumnDescriptor
*/
void postModifyColumn(final ObserverContext ctx,
byte[] tableName, HColumnDescriptor descriptor) throws IOException;
/**
* Called prior to deleting the entire column family.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
- * @param c the column
*/
void preDeleteColumn(final ObserverContext ctx,
final byte [] tableName, final byte[] c) throws IOException;
/**
* Called after the column family has been deleted.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
- * @param c the column
*/
void postDeleteColumn(final ObserverContext ctx,
final byte [] tableName, final byte[] c) throws IOException;
/**
* Called prior to enabling a table.
- * It can't bypass the default action, e.g., ctx.bypass() won't have effect.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
*/
void preEnableTable(final ObserverContext ctx,
final byte[] tableName) throws IOException;
/**
- * Called after the enableTable operation has been requested.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
+ * Called after the table has been enabled.
*/
void postEnableTable(final ObserverContext ctx,
final byte[] tableName) throws IOException;
/**
* Called prior to disabling a table.
- * It can't bypass the default action, e.g., ctx.bypass() won't have effect.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
*/
void preDisableTable(final ObserverContext ctx,
final byte[] tableName) throws IOException;
/**
- * Called after the disableTable operation has been requested.
- * @param ctx the environment to interact with the framework and master
- * @param tableName the name of the table
+ * Called after the table has been disabled.
*/
void postDisableTable(final ObserverContext ctx,
final byte[] tableName) throws IOException;
/**
* Called prior to moving a given region from one region server to another.
- * @param ctx the environment to interact with the framework and master
- * @param region the HRegionInfo
- * @param srcServer the source ServerName
- * @param destServer the destination ServerName
*/
void preMove(final ObserverContext ctx,
- final HRegionInfo region, final ServerName srcServer,
- final ServerName destServer)
+ final HRegionInfo region, final ServerName srcServer, final ServerName destServer)
throws UnknownRegionException;
/**
* Called after the region move has been requested.
- * @param ctx the environment to interact with the framework and master
- * @param region the HRegionInfo
- * @param srcServer the source ServerName
- * @param destServer the destination ServerName
*/
void postMove(final ObserverContext ctx,
- final HRegionInfo region, final ServerName srcServer,
- final ServerName destServer)
+ final HRegionInfo region, final ServerName srcServer, final ServerName destServer)
throws UnknownRegionException;
/**
* Called prior to assigning a specific region.
- * @param ctx the environment to interact with the framework and master
- * @param regionInfo the regionInfo of the region
- * @param force whether to force assignment or not
*/
void preAssign(final ObserverContext ctx,
- final HRegionInfo regionInfo, final boolean force)
+ final byte [] regionName, final boolean force)
throws IOException;
/**
* Called after the region assignment has been requested.
- * @param ctx the environment to interact with the framework and master
- * @param regionInfo the regionInfo of the region
- * @param force whether to force assignment or not
*/
void postAssign(final ObserverContext ctx,
- final HRegionInfo regionInfo, final boolean force) throws IOException;
+ final HRegionInfo regionInfo) throws IOException;
/**
* Called prior to unassigning a given region.
- * @param ctx the environment to interact with the framework and master
- * @param regionName the name of the region
- * @param force whether to force unassignment or not
*/
void preUnassign(final ObserverContext ctx,
- final HRegionInfo regionInfo, final boolean force) throws IOException;
+ final byte [] regionName, final boolean force) throws IOException;
/**
* Called after the region unassignment has been requested.
- * @param ctx the environment to interact with the framework and master
- * @param regionName the name of the region
- * @param force whether to force unassignment or not
*/
void postUnassign(final ObserverContext ctx,
final HRegionInfo regionInfo, final boolean force) throws IOException;
@@ -241,14 +176,12 @@ public interface MasterObserver extends Coprocessor {
/**
* Called prior to requesting rebalancing of the cluster regions, though after
* the initial checks for regions in transition and the balance switch flag.
- * @param ctx the environment to interact with the framework and master
*/
void preBalance(final ObserverContext ctx)
throws IOException;
/**
* Called after the balancing plan has been submitted.
- * @param ctx the environment to interact with the framework and master
*/
void postBalance(final ObserverContext ctx)
throws IOException;
@@ -279,7 +212,7 @@ public interface MasterObserver extends Coprocessor {
/**
- * Called immediately prior to stopping this
+ * Called immediatly prior to stopping this
* {@link org.apache.hadoop.hbase.master.HMaster} process.
*/
void preStopMaster(final ObserverContext ctx)
diff --git a/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java b/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
index bdf888c..008d027 100644
--- a/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
+++ b/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
@@ -19,7 +19,6 @@ package org.apache.hadoop.hbase.coprocessor;
import java.util.List;
import java.util.Map;
-import com.google.common.collect.ImmutableList;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.KeyValue;
@@ -34,8 +33,6 @@ import org.apache.hadoop.hbase.filter.WritableByteArrayComparable;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.RegionScanner;
-import org.apache.hadoop.hbase.regionserver.Store;
-import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.regionserver.wal.HLogKey;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
@@ -72,61 +69,22 @@ public interface RegionObserver extends Coprocessor {
void postFlush(final ObserverContext c);
/**
- * Called prior to selecting the {@link StoreFile}s to compact from the list
- * of available candidates. To alter the files used for compaction, you may
- * mutate the passed in list of candidates.
+ * Called before compaction.
* @param c the environment provided by the region server
- * @param store the store where compaction is being requested
- * @param candidates the store files currently available for compaction
+ * @param willSplit true if compaction will result in a split, false
+ * otherwise
*/
- void preCompactSelection(final ObserverContext c,
- final Store store, final List candidates);
+ void preCompact(final ObserverContext c,
+ final boolean willSplit);
/**
- * Called after the {@link StoreFile}s to compact have been selected from the
- * available candidates.
+ * Called after compaction.
* @param c the environment provided by the region server
- * @param store the store being compacted
- * @param selected the store files selected to compact
- */
- void postCompactSelection(final ObserverContext c,
- final Store store, final ImmutableList selected);
-
- /**
- * Called prior to writing the {@link StoreFile}s selected for compaction into
- * a new {@code StoreFile}. To override or modify the compaction process,
- * implementing classes have two options:
- *
- *
- * Wrap the provided {@link InternalScanner} with a custom
- * implementation that is returned from this method. The custom scanner
- * can then inspect {@link KeyValue}s from the wrapped scanner, applying
- * its own policy to what gets written.
- *
- * Call {@link org.apache.hadoop.hbase.coprocessor.ObserverContext#bypass()}
- * and provide a custom implementation for writing of new
- * {@link StoreFile}s. Note: any implementations bypassing
- * core compaction using this approach must write out new store files
- * themselves or the existing data will no longer be available after
- * compaction.
- *
- * @param c the environment provided by the region server
- * @param store the store being compacted
- * @param scanner the scanner over existing data used in the store file
- * rewriting
- * @return the scanner to use during compaction. Should not be {@code null}
- * unless the implementation is writing new store files on its own.
- */
- InternalScanner preCompact(final ObserverContext c,
- final Store store, final InternalScanner scanner);
-
- /**
- * Called after compaction has completed and the new store file has been
- * moved in to place.
- * @param c the environment provided by the region server
- * @param store the store being compacted
- * @param resultFile the new store file written out during compaction
+ * @param willSplit true if compaction will result in a split, false
+ * otherwise
*/
void postCompact(final ObserverContext c,
- final Store store, StoreFile resultFile);
+ final boolean willSplit);
/**
* Called before the region is split.
diff --git a/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java b/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
index 85e784d..a012e1e 100644
--- a/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
+++ b/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java
@@ -91,18 +91,17 @@ public class HFileBlock implements Cacheable {
private static final CacheableDeserializer blockDeserializer =
new CacheableDeserializer() {
public HFileBlock deserialize(ByteBuffer buf) throws IOException{
- ByteBuffer tempCopy = buf.duplicate();
- ByteBuffer newByteBuffer = ByteBuffer.allocate(tempCopy.limit()
+ ByteBuffer newByteBuffer = ByteBuffer.allocate(buf.limit()
- HFileBlock.EXTRA_SERIALIZATION_SPACE);
- tempCopy.limit(tempCopy.limit()
+ buf.limit(buf.limit()
- HFileBlock.EXTRA_SERIALIZATION_SPACE).rewind();
- newByteBuffer.put(tempCopy);
+ newByteBuffer.put(buf);
HFileBlock ourBuffer = new HFileBlock(newByteBuffer);
- tempCopy.position(tempCopy.limit());
- tempCopy.limit(tempCopy.limit() + HFileBlock.EXTRA_SERIALIZATION_SPACE);
- ourBuffer.offset = tempCopy.getLong();
- ourBuffer.nextBlockOnDiskSizeWithHeader = tempCopy.getInt();
+ buf.position(buf.limit());
+ buf.limit(buf.limit() + HFileBlock.EXTRA_SERIALIZATION_SPACE);
+ ourBuffer.offset = buf.getLong();
+ ourBuffer.nextBlockOnDiskSizeWithHeader = buf.getInt();
return ourBuffer;
}
};
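
The deserializer revert above swaps a defensive buf.duplicate() for direct mutation of the caller's ByteBuffer. The difference matters because position and limit are per-buffer cursor state: a duplicate shares the backing bytes but carries its own cursors. A small sketch of that distinction:

import java.nio.ByteBuffer;

public final class DuplicateSketch {
  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.allocate(16);
    ByteBuffer dup = buf.duplicate(); // shared content, independent cursors
    dup.position(4);
    dup.limit(8);
    // The original buffer's cursors are untouched by work on the duplicate:
    System.out.println(buf.position() + "/" + buf.limit()); // prints: 0/16
    System.out.println(dup.position() + "/" + dup.limit()); // prints: 4/8
  }
}

So with the reverted code, deserialize() leaves the caller's buffer positioned at its end, whereas the duplicate-based version (the removed lines) left it unchanged.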
diff --git a/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java b/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java
index 18a3332..8ac54e6 100644
--- a/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java
+++ b/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java
@@ -19,12 +19,10 @@
*/
package org.apache.hadoop.hbase.io.hfile.slab;
-import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.List;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;
-import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
@@ -94,24 +92,9 @@ public class SingleSizeCache implements BlockCache {
MapEvictionListener<String, CacheablePair> listener = new MapEvictionListener<String, CacheablePair>() {
@Override
public void onEviction(String key, CacheablePair value) {
- try {
- value.evictionLock.writeLock().lock();
- timeSinceLastAccess.set(System.nanoTime()
- - value.recentlyAccessed.get());
- backingStore.free(value.serializedData);
- stats.evict();
- /**
- * We may choose to run this cache alone, without the SlabCache on
- * top, no evictionWatcher in that case
- */
- if (evictionWatcher != null) {
- evictionWatcher.onEviction(key, false);
- }
- size.addAndGet(-1 * value.heapSize());
- stats.evicted();
- } finally {
- value.evictionLock.writeLock().unlock();
- }
+ timeSinceLastAccess.set(System.nanoTime()
+ - value.recentlyAccessed.get());
+ doEviction(key, value);
}
};
@@ -121,7 +104,7 @@ public class SingleSizeCache implements BlockCache {
}
@Override
- public synchronized void cacheBlock(String blockName, Cacheable toBeCached) {
+ public void cacheBlock(String blockName, Cacheable toBeCached) {
ByteBuffer storedBlock;
/*
@@ -129,12 +112,18 @@ public class SingleSizeCache implements BlockCache {
* items than the memory we have allocated, but the Slab Allocator may still
* be empty if we have not yet completed eviction
*/
- do {
+
+ try {
storedBlock = backingStore.alloc(toBeCached.getSerializedLength());
- } while (storedBlock == null);
+ } catch (InterruptedException e) {
+ LOG.warn("SlabAllocator was interrupted while waiting for block to become available");
+ LOG.warn(e);
+ return;
+ }
CacheablePair newEntry = new CacheablePair(toBeCached.getDeserializer(),
storedBlock);
+ toBeCached.serialize(storedBlock);
CacheablePair alreadyCached = backingMap.putIfAbsent(blockName, newEntry);
@@ -142,7 +131,6 @@ public class SingleSizeCache implements BlockCache {
backingStore.free(storedBlock);
throw new RuntimeException("already cached " + blockName);
}
- toBeCached.serialize(storedBlock);
newEntry.recentlyAccessed.set(System.nanoTime());
this.size.addAndGet(newEntry.heapSize());
}
@@ -157,20 +145,21 @@ public class SingleSizeCache implements BlockCache {
stats.hit(caching);
// If lock cannot be obtained, that means we're undergoing eviction.
- if (contentBlock.evictionLock.readLock().tryLock()) {
- try {
- contentBlock.recentlyAccessed.set(System.nanoTime());
+ try {
+ contentBlock.recentlyAccessed.set(System.nanoTime());
+ synchronized (contentBlock) {
+ if (contentBlock.serializedData == null) {
+ // concurrently evicted
+ LOG.warn("Concurrent eviction of " + key);
+ return null;
+ }
return contentBlock.deserializer
- .deserialize(contentBlock.serializedData);
- } catch (IOException e) {
- e.printStackTrace();
- LOG.warn("Deserializer throwing ioexception, possibly deserializing wrong object buffer");
- return null;
- } finally {
- contentBlock.evictionLock.readLock().unlock();
+ .deserialize(contentBlock.serializedData.asReadOnlyBuffer());
}
+ } catch (Throwable t) {
+ LOG.error("Deserializer threw an exception. This may indicate a bug.", t);
+ return null;
}
- return null;
}
/**
@@ -183,23 +172,45 @@ public class SingleSizeCache implements BlockCache {
stats.evict();
CacheablePair evictedBlock = backingMap.remove(key);
if (evictedBlock != null) {
- try {
- evictedBlock.evictionLock.writeLock().lock();
- backingStore.free(evictedBlock.serializedData);
- evictionWatcher.onEviction(key, false);
- stats.evicted();
- size.addAndGet(-1 * evictedBlock.heapSize());
- } finally {
- evictedBlock.evictionLock.writeLock().unlock();
- }
+ doEviction(key, evictedBlock);
}
return evictedBlock != null;
}
+ private void doEviction(String key, CacheablePair evictedBlock) {
+ long evictedHeap = 0;
+ synchronized (evictedBlock) {
+ if (evictedBlock.serializedData == null) {
+ // someone else already freed
+ return;
+ }
+ evictedHeap = evictedBlock.heapSize();
+ ByteBuffer bb = evictedBlock.serializedData;
+ evictedBlock.serializedData = null;
+ backingStore.free(bb);
+
+ // We have to do this callback inside the synchronization here.
+ // Otherwise we can have the following interleaving:
+ // Thread A calls getBlock():
+ // SlabCache directs call to this SingleSizeCache
+ // It gets the CacheablePair object
+ // Thread B runs eviction
+ // doEviction() is called and sets serializedData = null, here.
+ // Thread A sees the null serializedData, and returns null
+ // Thread A calls cacheBlock on the same block, and gets
+ // "already cached" since the block is still in backingStore
+ if (evictionWatcher != null) {
+ evictionWatcher.onEviction(key, false);
+ }
+ }
+ stats.evicted();
+ size.addAndGet(-1 * evictedHeap);
+ }
+
public void logStats() {
- long milliseconds = (long)this.timeSinceLastAccess.get() / 1000000;
+ long milliseconds = (long) this.timeSinceLastAccess.get() / 1000000;
LOG.info("For Slab of size " + this.blockSize + ": "
+ this.getOccupiedSize() / this.blockSize
@@ -299,8 +310,7 @@ public class SingleSizeCache implements BlockCache {
/* Just a pair class, holds a reference to the parent cacheable */
private class CacheablePair implements HeapSize {
final CacheableDeserializer deserializer;
- final ByteBuffer serializedData;
- final ReentrantReadWriteLock evictionLock;
+ ByteBuffer serializedData;
AtomicLong recentlyAccessed;
private CacheablePair(CacheableDeserializer deserializer,
@@ -308,7 +318,6 @@ public class SingleSizeCache implements BlockCache {
this.recentlyAccessed = new AtomicLong();
this.deserializer = deserializer;
this.serializedData = serializedData;
- evictionLock = new ReentrantReadWriteLock();
}
/*
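
The restored doEviction() relies on a free-exactly-once idiom: the buffer field doubles as the freed flag, nulled inside the entry's monitor so two concurrent evictors cannot both hand the same buffer back to the allocator. The shape of that idiom in isolation, with a plain ByteBuffer standing in for the slab-backed serializedData:

import java.nio.ByteBuffer;

public final class EvictOnceSketch {
  static final class Entry {
    ByteBuffer data; // null marks "already freed"
    Entry(ByteBuffer data) { this.data = data; }
  }

  // Free the entry's buffer exactly once: the first caller nulls the
  // field inside the monitor; later callers see null and back off.
  static boolean evict(Entry e) {
    synchronized (e) {
      if (e.data == null) {
        return false; // another thread already freed it
      }
      ByteBuffer bb = e.data; // save before nulling, as doEviction() does
      e.data = null;          // publish "freed" while holding the monitor
      release(bb);
      return true;
    }
  }

  // Stand-in for backingStore.free(bb) in the real cache.
  static void release(ByteBuffer bb) { }

  public static void main(String[] args) {
    Entry e = new Entry(ByteBuffer.allocate(8));
    System.out.println(evict(e)); // prints: true
    System.out.println(evict(e)); // prints: false
  }
}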
diff --git a/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java b/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java
index 9811c6b..ed32980 100644
--- a/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java
+++ b/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/Slab.java
@@ -21,6 +21,8 @@ package org.apache.hadoop.hbase.io.hfile.slab;
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.util.ClassSize;
@@ -37,7 +39,7 @@ class Slab implements org.apache.hadoop.hbase.io.HeapSize {
static final Log LOG = LogFactory.getLog(Slab.class);
/** This is where our items, or blocks of the slab, are stored. */
- private ConcurrentLinkedQueue<ByteBuffer> buffers;
+ private LinkedBlockingQueue<ByteBuffer> buffers;
/** This is where our Slabs are stored */
private ConcurrentLinkedQueue<ByteBuffer> slabs;
@@ -47,7 +49,7 @@ class Slab implements org.apache.hadoop.hbase.io.HeapSize {
private long heapSize;
Slab(int blockSize, int numBlocks) {
- buffers = new ConcurrentLinkedQueue<ByteBuffer>();
+ buffers = new LinkedBlockingQueue<ByteBuffer>();
slabs = new ConcurrentLinkedQueue<ByteBuffer>();
this.blockSize = blockSize;
@@ -108,16 +110,13 @@ class Slab implements org.apache.hadoop.hbase.io.HeapSize {
}
/*
- * This returns null if empty. Throws an exception if you try to allocate a
- * bigger size than the allocator can handle.
+ * Throws an exception if you try to allocate a
+ * bigger size than the allocator can handle. Alloc will block until a buffer is available.
*/
- ByteBuffer alloc(int bufferSize) {
+ ByteBuffer alloc(int bufferSize) throws InterruptedException {
int newCapacity = Preconditions.checkPositionIndex(bufferSize, blockSize);
- ByteBuffer returnedBuffer = buffers.poll();
- if (returnedBuffer == null) {
- return null;
- }
+ ByteBuffer returnedBuffer = buffers.take();
returnedBuffer.clear().limit(newCapacity);
return returnedBuffer;
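
Switching buffers from ConcurrentLinkedQueue to LinkedBlockingQueue changes alloc() from returning null when the slab is exhausted to blocking in take() until a buffer is freed. A sketch of that producer/consumer behavior, assuming nothing beyond java.util.concurrent:

import java.nio.ByteBuffer;
import java.util.concurrent.LinkedBlockingQueue;

public final class BlockingAllocSketch {
  public static void main(String[] args) throws InterruptedException {
    final LinkedBlockingQueue<ByteBuffer> free =
        new LinkedBlockingQueue<ByteBuffer>();
    // Another thread frees a buffer a little later, as eviction does.
    new Thread(new Runnable() {
      public void run() {
        try {
          Thread.sleep(100);
          free.put(ByteBuffer.allocate(64));
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
        }
      }
    }).start();
    ByteBuffer buf = free.take(); // blocks until the put; never null
    System.out.println("got " + buf.capacity() + " bytes");
  }
}

This is why the caller in SingleSizeCache.cacheBlock() no longer needs its busy-wait do/while loop, but must now handle InterruptedException instead.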
diff --git a/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java b/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java
index 1611349..4e3d337 100644
--- a/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java
+++ b/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java
@@ -232,6 +232,7 @@ public class SlabCache implements SlabItemEvictionWatcher, BlockCache, HeapSize
public Cacheable getBlock(String key, boolean caching) {
SingleSizeCache cachedBlock = backingStore.get(key);
if (cachedBlock == null) {
+ // TODO: this is a miss, isn't it?
return null;
}
diff --git a/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java b/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
index 337da78..1c68875 100644
--- a/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
+++ b/src/main/java/org/apache/hadoop/hbase/ipc/HBaseRpcMetrics.java
@@ -77,7 +77,6 @@ public class HBaseRpcMetrics implements Updater {
public MetricsTimeVaryingRate rpcQueueTime = new MetricsTimeVaryingRate("RpcQueueTime", registry);
public MetricsTimeVaryingRate rpcProcessingTime = new MetricsTimeVaryingRate("RpcProcessingTime", registry);
- public MetricsTimeVaryingRate rpcSlowResponseTime = new MetricsTimeVaryingRate("RpcSlowResponse", registry);
//public Map metricsList = Collections.synchronizedMap(new HashMap());
@@ -130,48 +129,13 @@ public class HBaseRpcMetrics implements Updater {
* "classname.method"
*/
public void createMetrics(Class<?>[] ifaces, boolean prefixWithClass) {
- createMetrics(ifaces, prefixWithClass, null);
- }
-
- /**
- * Generate metrics entries for all the methods defined in the list of
- * interfaces. A {@link MetricsTimeVaryingRate} counter will be created for
- * each {@code Class.getMethods().getName()} entry.
- *
- *
- * If {@code prefixWithClass} is {@code true}, each metric will be named as
- * {@code [Class.getSimpleName()].[Method.getName()]}. Otherwise each metric
- * will just be named according to the method -- {@code Method.getName()}.
- *
- *
- *
- * Additionally, if {@code suffixes} is defined, additional metrics will be
- * created for each method named as the original metric concatenated with
- * the suffix.
- *
- * @param ifaces Define metrics for all methods in the given classes
- * @param prefixWithClass If {@code true}, each metric will be named as
- * "classname.method"
- * @param suffixes If not null, each method will get additional metrics ending
- * in each of the suffixes.
- */
- public void createMetrics(Class<?>[] ifaces, boolean prefixWithClass,
- String [] suffixes) {
for (Class<?> iface : ifaces) {
Method[] methods = iface.getMethods();
for (Method method : methods) {
String attrName = prefixWithClass ?
- getMetricName(iface, method.getName()) : method.getName();
+ getMetricName(iface, method.getName()) : method.getName();
if (get(attrName) == null)
create(attrName);
- if (suffixes != null) {
- // create metrics for each requested suffix
- for (String s : suffixes) {
- String metricName = attrName + s;
- if (get(metricName) == null)
- create(metricName);
- }
- }
}
}
}
@@ -204,4 +168,4 @@ public class HBaseRpcMetrics implements Updater {
if (rpcStatistics != null)
rpcStatistics.shutdown();
}
-}
+}
\ No newline at end of file
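
The simplified createMetrics() above walks each advertised interface with reflection and registers one time-varying rate per method name, optionally prefixed. A sketch of just the name derivation, assuming a simple-name prefix (the real getMetricName() helper may format differently):

import java.lang.reflect.Method;
import java.util.LinkedHashSet;
import java.util.Set;

public final class MetricNameSketch {
  static Set<String> metricNames(Class<?>[] ifaces, boolean prefixWithClass) {
    Set<String> names = new LinkedHashSet<String>();
    for (Class<?> iface : ifaces) {
      for (Method m : iface.getMethods()) {
        // One metric per method; the Set collapses overloads, matching
        // the "if (get(attrName) == null) create(attrName)" guard above.
        names.add(prefixWithClass
            ? iface.getSimpleName() + "." + m.getName()
            : m.getName());
      }
    }
    return names;
  }

  public static void main(String[] args) {
    System.out.println(metricNames(new Class<?>[] { Runnable.class }, true));
    // prints: [Runnable.run]
  }
}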
diff --git a/src/main/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java b/src/main/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java
index 3679c02..8d8908c 100644
--- a/src/main/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java
+++ b/src/main/java/org/apache/hadoop/hbase/ipc/HRegionInterface.java
@@ -44,7 +44,6 @@ import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.WritableByteArrayComparable;
import org.apache.hadoop.hbase.io.hfile.BlockCacheColumnFamilySummary;
import org.apache.hadoop.hbase.regionserver.RegionOpeningState;
-import org.apache.hadoop.hbase.regionserver.wal.FailedLogCloseException;
import org.apache.hadoop.hbase.regionserver.wal.HLog;
import org.apache.hadoop.ipc.RemoteException;
import org.apache.hadoop.hbase.ipc.VersionedProtocol;
@@ -333,7 +332,7 @@ public interface HRegionInterface extends VersionedProtocol, Stoppable, Abortabl
* @param region
* region to open
* @return RegionOpeningState
- * OPENED - if region open request was successful.
+ * OPENED - if region opened succesfully.
* ALREADY_OPENED - if the region was already opened.
* FAILED_OPENING - if region opening failed.
*
@@ -342,22 +341,6 @@ public interface HRegionInterface extends VersionedProtocol, Stoppable, Abortabl
public RegionOpeningState openRegion(final HRegionInfo region) throws IOException;
/**
- * Opens the specified region.
- * @param region
- * region to open
- * @param versionOfOfflineNode
- * the version of znode to compare when RS transitions the znode from
- * OFFLINE state.
- * @return RegionOpeningState
- * OPENED - if region open request was successful.
- * ALREADY_OPENED - if the region was already opened.
- * FAILED_OPENING - if region opening failed.
- * @throws IOException
- */
- public RegionOpeningState openRegion(HRegionInfo region, int versionOfOfflineNode)
- throws IOException;
-
- /**
* Opens the specified regions.
* @param regions regions to open
* @throws IOException
@@ -530,14 +513,4 @@ public interface HRegionInterface extends VersionedProtocol, Stoppable, Abortabl
* @throws IOException exception
*/
public List getBlockCacheColumnFamilySummaries() throws IOException;
- /**
- * Roll the log writer. That is, start writing log messages to a new file.
- *
- * @throws IOException
- * @throws FailedLogCloseException
- * @return If lots of logs, flush the returned regions so next time through
- * we can clean logs. Returns null if nothing to flush. Names are actual
- * region names as returned by {@link HRegionInfo#getEncodedName()}
- */
- public byte[][] rollHLogWriter() throws IOException, FailedLogCloseException;
}
diff --git a/src/main/java/org/apache/hadoop/hbase/ipc/WritableRpcEngine.java b/src/main/java/org/apache/hadoop/hbase/ipc/WritableRpcEngine.java
index b618429..55e0339 100644
--- a/src/main/java/org/apache/hadoop/hbase/ipc/WritableRpcEngine.java
+++ b/src/main/java/org/apache/hadoop/hbase/ipc/WritableRpcEngine.java
@@ -264,9 +264,6 @@ class WritableRpcEngine implements RpcEngine {
private static final int DEFAULT_WARN_RESPONSE_TIME = 10000; // milliseconds
private static final int DEFAULT_WARN_RESPONSE_SIZE = 100 * 1024 * 1024;
- /** Names for suffixed metrics */
- private static final String ABOVE_ONE_SEC_METRIC = ".aboveOneSec.";
-
private final int warnResponseTime;
private final int warnResponseSize;
@@ -301,8 +298,7 @@ class WritableRpcEngine implements RpcEngine {
this.ifaces = ifaces;
// create metrics for the advertised interfaces this server implements.
- String [] metricSuffixes = new String [] {ABOVE_ONE_SEC_METRIC};
- this.rpcMetrics.createMetrics(this.ifaces, false, metricSuffixes);
+ this.rpcMetrics.createMetrics(this.ifaces);
this.authorize =
conf.getBoolean(
@@ -372,14 +368,15 @@ class WritableRpcEngine implements RpcEngine {
startTime, processingTime, qTime, responseSize);
// provides a count of log-reported slow responses
if (tooSlow) {
- rpcMetrics.rpcSlowResponseTime.inc(processingTime);
+ rpcMetrics.inc(call.getMethodName() + ".slowResponse.",
+ processingTime);
}
}
if (processingTime > 1000) {
// we use a hard-coded one second period so that we can clearly
// indicate the time period we're warning about in the name of the
// metric itself
- rpcMetrics.inc(call.getMethodName() + ABOVE_ONE_SEC_METRIC,
+ rpcMetrics.inc(call.getMethodName() + ".aboveOneSec.",
processingTime);
}
@@ -443,7 +440,7 @@ class WritableRpcEngine implements RpcEngine {
} else if (params.length == 1 && instance instanceof HRegionServer &&
params[0] instanceof Operation) {
// annotate the response map with operation details
- responseInfo.putAll(((Operation) params[0]).toMap());
+ responseInfo.putAll(((Operation) params[1]).toMap());
// report to the log file
LOG.warn("(operation" + tag + "): " +
mapper.writeValueAsString(responseInfo));
diff --git a/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java b/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
index ad88b76..ff05df8 100644
--- a/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
+++ b/src/main/java/org/apache/hadoop/hbase/mapreduce/TableMapReduceUtil.java
@@ -126,10 +126,10 @@ public class TableMapReduceUtil {
if (outputValueClass != null) job.setMapOutputValueClass(outputValueClass);
if (outputKeyClass != null) job.setMapOutputKeyClass(outputKeyClass);
job.setMapperClass(mapper);
- Configuration conf = job.getConfiguration();
- HBaseConfiguration.merge(conf, HBaseConfiguration.create(conf));
- conf.set(TableInputFormat.INPUT_TABLE, table);
- conf.set(TableInputFormat.SCAN, convertScanToString(scan));
+ HBaseConfiguration.addHbaseResources(job.getConfiguration());
+ job.getConfiguration().set(TableInputFormat.INPUT_TABLE, table);
+ job.getConfiguration().set(TableInputFormat.SCAN,
+ convertScanToString(scan));
if (addDependencyJars) {
addDependencyJars(job);
}
@@ -333,8 +333,8 @@ public class TableMapReduceUtil {
Class partitioner, String quorumAddress, String serverClass,
String serverImpl, boolean addDependencyJars) throws IOException {
- Configuration conf = job.getConfiguration();
- HBaseConfiguration.merge(conf, HBaseConfiguration.create(conf));
+ Configuration conf = job.getConfiguration();
+ HBaseConfiguration.addHbaseResources(conf);
job.setOutputFormatClass(TableOutputFormat.class);
if (reducer != null) job.setReducerClass(reducer);
conf.set(TableOutputFormat.OUTPUT_TABLE, table);
diff --git a/src/main/java/org/apache/hadoop/hbase/master/AssignCallable.java b/src/main/java/org/apache/hadoop/hbase/master/AssignCallable.java
deleted file mode 100644
index b233d10..0000000
--- a/src/main/java/org/apache/hadoop/hbase/master/AssignCallable.java
+++ /dev/null
@@ -1,47 +0,0 @@
-/**
- * Copyright 2011 The Apache Software Foundation
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.master;
-
-import java.util.concurrent.Callable;
-
-import org.apache.hadoop.hbase.HRegionInfo;
-
-/**
- * A callable object that invokes the corresponding action that needs to be
- * taken for assignment of a region in transition.
- * Implementing as future callable we are able to act on the timeout
- * asynchronously.
- */
-public class AssignCallable implements Callable