From 09e2ecb9bd82be0bd7f68b1601f36a10a4124bbb Mon Sep 17 00:00:00 2001 From: Misty Stanley-Jones Date: Thu, 7 Aug 2014 15:15:45 +1000 Subject: [PATCH] HBASE-4593 Design and document official procedure for posting patches, commits, commit messages, etc --- src/main/docbkx/developer.xml | 1930 ++++++++++++++++++++++++----------------- 1 file changed, 1110 insertions(+), 820 deletions(-) diff --git src/main/docbkx/developer.xml src/main/docbkx/developer.xml index cf2ed10..b129920 100644 --- src/main/docbkx/developer.xml +++ src/main/docbkx/developer.xml @@ -33,203 +33,279 @@ just downloading the latest distribution).
- Contributing - - If you are looking to contribute to Apache HBase, look for issues in JIRA - tagged with the label 'beginner': project = HBASE AND labels in (beginner). These are issues HBase contributors have - deemed worthy but not of immediate priority and a good way to ramp on - HBase internalsSee What label is used for issues that are good on ramps for new contributors? - from the dev mailing list for background.. - + Contributing + If you are looking to contribute to Apache HBase, look for issues in JIRA tagged with + the label 'beginner': project = HBASE AND labels in (beginner). These are issues HBase + contributors have deemed worthy but not of immediate priority and a good way to ramp on + HBase internals. See What label + is used for issues that are good on ramps for new contributors? from the dev + mailing list for background.
- Apache HBase Repositories - There are two different repositories for Apache HBase: Subversion (SVN) and Git. - GIT is our repository of record for all but the Apache HBase website. - We used to be on SVN. We migrated. See Migrade Apache HBase SVN Repos to Git. - Updating hbase.apache.org still requires use of SVN (See ). - See Source Code Management - page for contributor and committer links or - seach for HBase on the Apache Git page. - + Apache HBase Repositories + There are two different repositories for Apache HBase: Subversion (SVN) and Git. Git + is our repository of record for all but the Apache HBase website. We used to be on SVN. + We migrated. See Migrate Apache HBase SVN Repos to Git. Updating hbase.apache.org still + requires use of SVN (See ). See Source Code + Management page for contributor and committer links, or search for HBase on the + Apache Git page.
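A contributor checkout can be sketched as follows. The Apache URLs in the comments are assumptions (verify them on the Apache Git and Source Code Management pages before use); the runnable part of the demo exercises the same clone step against a throwaway local repository so it works anywhere.

```shell
# Read-only clone of the repository of record (URL is an assumption; check the
# Apache Git page for the current one):
#   git clone git://git.apache.org/hbase.git
# The hbase.apache.org website still lives in SVN (URL is an assumption):
#   svn checkout https://svn.apache.org/repos/asf/hbase hbase-svn
# Local stand-in demonstrating the same clone step:
rm -rf /tmp/hbase-demo-origin.git /tmp/hbase-demo
git init -q --bare /tmp/hbase-demo-origin.git
git clone -q /tmp/hbase-demo-origin.git /tmp/hbase-demo
ls -d /tmp/hbase-demo/.git
```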
IDEs
- Eclipse + Eclipse
- Code Formatting - Under the dev-support folder, you will find hbase_eclipse_formatter.xml. - We encourage you to have this formatter in place in eclipse when editing HBase code. To load it into eclipse: - -Go to Eclipse->Preferences... -In Preferences, Go to Java->Code Style->Formatter -Import... hbase_eclipse_formatter.xml -Click Apply -Still in Preferences, Go to Java->Editor->Save Actions -Check the following: - -Perform the selected actions on save -Format source code -Format edited lines - - -Click Apply - - - In addition to the automatic formatting, make sure you follow the style guidelines explained in - Also, no @author tags - that's a rule. Quality Javadoc comments are appreciated. And include the Apache license. + Code Formatting + Under the dev-support/ folder, you will find + hbase_eclipse_formatter.xml. We encourage you to have + this formatter in place in eclipse when editing HBase code. + + Load the HBase Formatter Into Eclipse + + Open the + Eclipse + Preferences + menu item. + + + In Preferences, click the + Java + Code Style + Formatter + menu item. + + + Click Import and browse to the location of the + hbase_eclipse_formatter.xml file, which is in + the dev-support/ directory. Click + Apply. + + + Still in Preferences, click + Java Editor + Save Actions + . Be sure the following options are selected: + + Perform the selected actions on save + Format source code + Format edited lines + + Click Apply. Close all dialog boxes and return + to the main window. + + + + In addition to the automatic formatting, make sure you follow the style + guidelines explained in + Also, no @author tags - that's a rule. Quality Javadoc comments + are appreciated. And include the Apache license.
- Git Plugin - If you cloned the project via git, download and install the Git plugin (EGit). Attach to your local git repo (via the Git Repositories window) and you'll be able to see file revision history, generate patches, etc. + Eclipse Git Plugin + If you cloned the project via git, download and install the Git plugin (EGit). + Attach to your local git repo (via the Git Repositories + window) and you'll be able to see file revision history, generate patches, + etc.
- HBase Project Setup in Eclipse - The easiest way is to use the m2eclipse plugin for Eclipse. Eclipse Indigo or newer has m2eclipse built-in, or it can be found here:http://www.eclipse.org/m2e/. M2Eclipse provides Maven integration for Eclipse - it even lets you use the direct Maven commands from within Eclipse to compile and test your project. - To import the project, you merely need to go to File->Import...Maven->Existing Maven Projects and then point Eclipse at the HBase root directory; m2eclipse will automatically find all the hbase modules for you. - If you install m2eclipse and import HBase in your workspace, you will have to fix your eclipse Build Path. - Remove target folder, add target/generated-jamon - and target/generated-sources/java folders. You may also remove from your Build Path - the exclusions on the src/main/resources and src/test/resources - to avoid error message in the console 'Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project hbase: - 'An Ant BuildException has occured: Replace: source file .../target/classes/hbase-default.xml doesn't exist'. This will also - reduce the eclipse build cycles and make your life easier when developing. + HBase Project Setup in Eclipse using <code>m2eclipse</code> + The easiest way is to use the m2eclipse plugin for Eclipse. + Eclipse Indigo or newer includes m2eclipse, or you can + download it from http://www.eclipse.org/m2e/. It provides Maven integration for + Eclipse, and even lets you use the direct Maven commands from within Eclipse to + compile and test your project. + To import the project, click + File + Import + Maven + Existing Maven Projects + and select the HBase root directory. m2eclipse + locates all the hbase modules for you. + If you install m2eclipse and import HBase in your + workspace, do the following to fix your Eclipse Build Path. + + + Remove the target folder. + + + Add the target/generated-jamon and + target/generated-sources/java folders. 
+ + + Remove from your Build Path the exclusions on the + src/main/resources and + src/test/resources to avoid error message in + the console, such as the following: + Failed to execute goal +org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project hbase: +'An Ant BuildException has occured: Replace: source file .../target/classes/hbase-default.xml +doesn't exist + This will also reduce the eclipse build cycles and make your life + easier when developing. + +
- Import into eclipse with the command line - For those not inclined to use m2eclipse, you can generate the Eclipse files from the command line. First, run (you should only have to do this once): - mvn clean install -DskipTests - and then close Eclipse and execute... - mvn eclipse:eclipse - ... from your local HBase project directory in your workspace to generate some new .project - and .classpathfiles. Then reopen Eclipse, or refresh your eclipse project (F5), and import - the .project file in the HBase directory to a workspace. - + HBase Project Setup in Eclipse Using the Command Line + Instead of using m2eclipse, you can generate the Eclipse files + from the command line. + + + First, run the following command, which builds HBase. You only need to + do this once. + mvn clean install -DskipTests + + + Close Eclipse, and execute the following command from the terminal, in + your local HBase project directory, to generate new + .project and .classpath + files. + mvn eclipse:eclipse + + + Reopen Eclipse and import the .project file in + the HBase directory to a workspace. + +
- Maven Classpath Variable - The M2_REPO classpath variable needs to be set up for the project. This needs to be set to - your local Maven repository, which is usually ~/.m2/repository -If this classpath variable is not configured, you will see compile errors in Eclipse like this: - + Maven Classpath Variable + The $M2_REPO classpath variable needs to be set up for the + project. This needs to be set to your local Maven repository, which is usually + ~/.m2/repository + If this classpath variable is not configured, you will see compile errors in + Eclipse like this: + Description Resource Path Location Type The project cannot be built until build path errors are resolved hbase Unknown Java Problem Unbound classpath variable: 'M2_REPO/asm/asm/3.1/asm-3.1.jar' in project 'hbase' hbase Build path Build Path Problem Unbound classpath variable: 'M2_REPO/com/google/guava/guava/r09/guava-r09.jar' in project 'hbase' hbase Build path Build Path Problem Unbound classpath variable: 'M2_REPO/com/google/protobuf/protobuf-java/2.3.0/protobuf-java-2.3.0.jar' in project 'hbase' hbase Build path Build Path Problem Unbound classpath variable: - +
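The default local-repository location can be confirmed from a shell. The `eclipse:configure-workspace` alternative in the comment is an assumption (it depends on the maven-eclipse-plugin being available), offered only as a possible way to set the variable for a whole workspace instead of per-project:

```shell
# M2_REPO should point at your local Maven repository; the default location is:
M2_REPO="${HOME}/.m2/repository"
echo "$M2_REPO"
# Assumption: with the maven-eclipse-plugin, the variable can also be written
# into an Eclipse workspace directly (workspace path is hypothetical):
#   mvn -Declipse.workspace="$HOME/workspace" eclipse:configure-workspace
```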
- Eclipse Known Issues - Eclipse will currently complain about Bytes.java. It is not possible to turn these errors off. - + Eclipse Known Issues + Eclipse will currently complain about Bytes.java. It is + not possible to turn these errors off. + Description Resource Path Location Type Access restriction: The method arrayBaseOffset(Class) from the type Unsafe is not accessible due to restriction on required library /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Classes/classes.jar Bytes.java /hbase/src/main/java/org/apache/hadoop/hbase/util line 1061 Java Problem Access restriction: The method arrayIndexScale(Class) from the type Unsafe is not accessible due to restriction on required library /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Classes/classes.jar Bytes.java /hbase/src/main/java/org/apache/hadoop/hbase/util line 1064 Java Problem Access restriction: The method getLong(Object, long) from the type Unsafe is not accessible due to restriction on required library /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Classes/classes.jar Bytes.java /hbase/src/main/java/org/apache/hadoop/hbase/util line 1111 Java Problem - -
-
- Eclipse - More Information - For additional information on setting up Eclipse for HBase development on Windows, see - Michael Morello's blog on the topic. - -
+ +
+
+ Eclipse - More Information + For additional information on setting up Eclipse for HBase development on + Windows, see Michael Morello's blog on the topic. +
+
+
+ Other IDEs + TODO - Please contribute
- Building Apache HBase -
- Basic Compile - Thanks to maven, building HBase is pretty easy. You can read about the various maven commands in , - but the simplest command to compile HBase from its java source code is: - + Building Apache HBase +
+ Basic Compile + HBase is compiled using Maven. You can read about the various maven commands in + , but the simplest command to compile + HBase from its java source code is: + mvn package -DskipTests - Or, to clean up before compiling: - + Or, to clean up before compiling: + mvn clean package -DskipTests - With Eclipse set up as explained above in , you can also simply use the build command in Eclipse. - To create the full installable HBase package takes a little bit more work, so read on. - - - JDK Version Requirements - - Starting with HBase 1.0 you must use Java 7 or later to build from source code. See for more complete - information about supported JDK versions. - - -
-
Build Protobuf - You may need to change the protobuf definitions that reside in the hbase-protocol module or other modules. - - The protobuf files are located hbase-protocol/src/main/protobuf. - For the change to be effective, you will need to regenerate the classes. You can use maven profile compile-protobuf to do this. + With Eclipse set up as explained above in , you can also + use the Build command in Eclipse. To create the full installable + HBase package takes a little bit more work, so read on. + + JDK Version Requirements + Starting with HBase 1.0 you must use Java 7 or later to build from source + code. See for more complete information about supported + JDK versions. + +
+
+ Build Protobuf + You may need to change the protobuf definitions that reside in the + hbase-protocol module or other modules. + The protobuf files are located + hbase-protocol/src/main/protobuf. For the change to be + effective, you will need to regenerate the classes. You can use maven profile + compile-protobuf to do this. + mvn compile -Dcompile-protobuf + mvn compile -Pcompile-protobuf + You may also want to define protoc.path for the protoc binary, using the following + command: -mvn compile -Dcompile-protobuf -or -mvn compile -Pcompile-protobuf - - -You may also want to define protoc.path for the protoc binary - mvn compile -Dcompile-protobuf -Dprotoc.path=/opt/local/bin/protoc - Read the hbase-protocol/README.txt for more details. - -
+
+ Read the hbase-protocol/README.txt for more details. +
-
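Before regenerating classes, it can help to confirm which `protoc` the build will see. In this sketch, `/opt/local/bin/protoc` is only an example value for `-Dprotoc.path` and is not assumed to exist:

```shell
# Check for a protoc binary at an example path; fall back to a hint if absent.
PROTOC=/opt/local/bin/protoc
if command -v "$PROTOC" >/dev/null 2>&1; then
  "$PROTOC" --version
else
  echo "no protoc at $PROTOC; pass -Dprotoc.path=<path> or put protoc on PATH"
fi
```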
Build Gotchas - If you see Unable to find resource 'VM_global_library.vm', ignore it. - Its not an error. It is officially ugly though. - -
-
- Building in snappy compression support - Pass -Dsnappy to trigger the snappy maven profile for building - snappy native libs into hbase. See also -
+
+ Build Gotchas + If you see Unable to find resource 'VM_global_library.vm', ignore it. + It's not an error. It is officially ugly though.
+
+ Building in snappy compression support + Pass -Dsnappy to trigger the snappy maven profile for + building Google Snappy native libraries into HBase. See also +
- Releasing Apache HBase - HBase 0.96.x will run on hadoop 1.x or hadoop 2.x. HBase 0.98 will run on - both also (but HBase 0.98 deprecates use of hadoop 1). HBase 1.x will NOT - run on hadoop 1. In what follows, we make a distinction between HBase 1.x - builds and the awkward process involved building HBase 0.96/0.98 for either - hadoop 1 or hadoop 2 targets. - -
Building against HBase 0.96-0.98 - Building 0.98 and 0.96, you must choose which hadoop to build against; - we cannot make a single HBase binary that can run against both hadoop1 and - hadoop2. Since we include the Hadoop we were built - against -- so we can do standalone mode -- the set of modules included - in the tarball changes dependent on whether the hadoop1 or hadoop2 target - is chosen. You can tell which HBase you have -- whether it is for hadoop1 - or hadoop2 by looking at the version; the HBase for hadoop1 bundle will - include 'hadoop1' in its version. Ditto for hadoop2. - - Maven, our build system, natively will not let you have a single product - built against different dependencies. It is understandable. But neither could - we convince maven to change the set of included modules and write out - the correct poms w/ appropriate dependencies even though we have two - build targets; one for hadoop1 and another for hadoop2. So, there is a prestep - required. This prestep takes as input the current pom.xmls and it generates hadoop1 or - hadoop2 versions using a script in dev-tools called - generate-hadoopX-poms.sh. You then reference these generated - poms when you build. For now, just be aware of the difference between HBase 1.x - builds and those of HBase 0.96-0.98. Below we will come back to this difference - when we list out build instructions. - - -Publishing to maven requires you sign the artifacts you want to upload. To have the - build do this for you, you need to make sure you have a properly configured - settings.xml in your local repository under .m2. - Here is my ~/.m2/settings.xml. - Releasing Apache HBase +
+ Building against HBase 0.96-0.98 + HBase 0.96.x will run on Hadoop 1.x or Hadoop 2.x. HBase 0.98 still runs on both, + but HBase 0.98 deprecates use of Hadoop 1. HBase 1.x will not + run on Hadoop 1. In the following procedures, we make a distinction between HBase + 1.x builds and the awkward process involved in building HBase 0.96/0.98 for either + Hadoop 1 or Hadoop 2 targets. + You must choose which Hadoop to build against. It is not possible to build a + single HBase binary that runs against both Hadoop 1 and Hadoop 2. Hadoop is included + in the build, because it is needed to run HBase in standalone mode. Therefore, the + set of modules included in the tarball changes, depending on the build target. To + determine which HBase you have, look at the HBase version. The Hadoop version is + embedded within it. + Maven, our build system, natively does not allow a single product to be built + against different dependencies. Also, Maven cannot change the set of included + modules and write out the correct pom.xml files with + appropriate dependencies, even using two build targets, one for Hadoop 1 and another + for Hadoop 2. A prerequisite step is required, which takes as input the current + pom.xmls and generates Hadoop 1 or Hadoop 2 versions using + a script in the dev-tools/ directory, called + generate-hadoopX-poms.sh + where X is either 1 or + 2. You then reference these generated poms when you build. + For now, just be aware of the difference between HBase 1.x builds and those of HBase + 0.96-0.98. This difference is important to the build instructions. + + + + Example <filename>~/.m2/settings.xml</filename> File + Publishing to maven requires you sign the artifacts you want to upload. For + the build to sign them for you, you need a properly configured + settings.xml in your local repository under + .m2, such as the following. 
+ @@ -264,282 +340,381 @@ mvn compile -Dcompile-protobuf -Dprotoc.path=/opt/local/bin/protoc ]]> - - - You must use maven 3.0.x (Check by running mvn -version). - -
-
- Making a Release Candidate - I'll explain by running through the process. See later in this section for more detail on particular steps. -These instructions are for building HBase 1.0.x. For building earlier versions, the process is different. See this section -under the respective release documentation folders. - - If you are making a point release (for example to quickly address a critical incompatability or security - problem) off of a release branch instead of a development branch the tagging instructions are slightly different. - I'll prefix those special steps with Point Release Only. - - - I would advise before you go about making a release candidate, do a practise run by deploying a SNAPSHOT. - Also, make sure builds have been passing recently for the branch from where you are going to take your - release. You should also have tried recent branch tips out on a cluster under load running for instance - our hbase-it integration test suite for a few hours to 'burn in' the near-candidate bits. - - - Point Release Only - At this point you should tag the previous release branch (ex: 0.96.1) with - the new point release tag (e.g. 0.96.1.1 tag). Any commits with changes or mentioned below for the point release - should be appled to the new tag. - - - - - The Hadoop How To Release wiki - page informs much of the below and may have more detail on particular sections so it is worth review. - - Update CHANGES.txt with the changes since the last release. - Make sure the URL to the JIRA points to the properly location listing fixes for this release. - Adjust the version in all the poms appropriately. If you are making a release candidate, you must - remove the -SNAPSHOT from all versions. If you are running this receipe to - publish a SNAPSHOT, you must keep the -SNAPSHOT suffix on the hbase version. - The Versions Maven Plugin can be of use here. 
To - set a version in all the many poms of the hbase multi-module project, do something like this: - $ mvn clean org.codehaus.mojo:versions-maven-plugin:1.3.1:set -DnewVersion=0.96.0 - Checkin the CHANGES.txt and any version changes. - - - Update the documentation under src/main/docbkx. This usually involves copying the - latest from trunk making version-particular adjustments to suit this release candidate version. - - Now, build the src tarball. This tarball is hadoop version independent. It is just the pure src code and documentation without a particular hadoop taint, etc. - Add the -Prelease profile when building; it checks files for licenses and will fail the build if unlicensed files present. - $ MAVEN_OPTS="-Xmx2g" mvn clean install -DskipTests assembly:single -Dassembly.file=hbase-assembly/src/main/assembly/src.xml -Prelease - Undo the tarball and make sure it looks good. A good test for the src tarball being 'complete' is to see if - you can build new tarballs from this source bundle. - If the source tarball is good, save it off to a version directory, i.e a directory somewhere where you are collecting - all of the tarballs you will publish as part of the release candidate. For example if we were building a - hbase-0.96.0 release candidate, we might call the directory hbase-0.96.0RC0. Later - we will publish this directory as our release candidate up on people.apache.org/~YOU. - - Now lets build the binary tarball. - Add the -Prelease profile when building; it checks files for licenses and will fail the build if unlicensed files present. - Do it in two steps. First install into the local repository and then generate documentation and assemble the tarball - (Otherwise build complains that hbase modules are not in maven repo when we try to do it all in the one go especially on fresh repo). - It seems that you need the install goal in both steps. 
- $ MAVEN_OPTS="-Xmx3g" mvn clean install -DskipTests -Prelease -$ MAVEN_OPTS="-Xmx3g" mvn install -DskipTests site assembly:single -Prelease -Undo the generated tarball and check it out. Look at doc. and see if it runs, etc. -If good, copy the tarball to the above mentioned version directory. - -Point Release OnlyThe following step that creates a new tag can be skipped since you've already created the point release tag -I'll tag the release at this point since its looking good. If we find an issue later, we can delete the tag and start over. Release needs to be tagged when we do next step. -Now deploy hbase to the apache maven repository. -This time we use the apache-release profile instead of just release profile when doing mvn deploy; -it will invoke the apache pom referenced by our poms. It will also sign your artifacts published to mvn as long as your settings.xml in your local .m2 -repository is configured correctly (your settings.xml adds your gpg password property to the apache profile). -$ MAVEN_OPTS="-Xmx3g" mvn deploy -DskipTests -Papache-release -The last command above copies all artifacts up to a temporary staging apache mvn repo in an 'open' state. -We'll need to do more work on these maven artifacts to make them generally available. - - - The script dev-support/make_rc.sh automates alot of the above listed release steps. - It does not do the modification of the CHANGES.txt for the release, the close of the - staging repository up in apache maven (human intervention is needed here), the checking of - the produced artifacts to ensure they are 'good' -- e.g. undoing the produced tarballs, eyeballing them to make - sure they look right then starting and checking all is running properly -- and then the signing and pushing of - the tarballs to people.apache.org but it does the other stuff; it can come in handy. - - - Now lets get back to what is up in maven. Our artifacts should be up in maven repository in the staging area -in the 'open' state. 
While in this 'open' state you can check out what you've published to make sure all is good. -To do this, login at repository.apache.org -using your apache id. Find your artifacts in the staging repository. Browse the content. Make sure all artifacts made it up -and that the poms look generally good. If it checks out, 'close' the repo. This will make the artifacts publically available. -You will receive an email with the URL to give out for the temporary staging repository for others to use trying out this new -release candidate. Include it in the email that announces the release candidate. Folks will need to add this repo URL to their -local poms or to their local settings.xml file to pull the published release candidate artifacts. If the published artifacts are incomplete -or borked, just delete the 'open' staged artifacts. - - hbase-downstreamer - - See the hbase-downstreamer test for a simple - example of a project that is downstream of hbase an depends on it. - Check it out and run its simple test to make sure maven artifacts are properly deployed to the maven repository. - Be sure to edit the pom to point at the proper staging repo. Make sure you are pulling from the repo when tests run and that you are not - getting from your local repo (pass -U or delete your local repo content and check maven is pulling from remote out of the staging repo). - - - See Publishing Maven Artifacts for - some pointers on this maven staging process. - - We no longer publish using the maven release plugin. Instead we do mvn deploy. It seems to give - us a backdoor to maven release publishing. If no -SNAPSHOT on the version - string, then we are 'deployed' to the apache maven repository staging directory from which we - can publish URLs for candidates and later, if they pass, publish as release (if a - -SNAPSHOT on the version string, deploy will put the artifacts up into - apache snapshot repos). + + + + Maven Version + You must use maven 3.0.x (Check by running mvn -version). + +
+
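The Maven 3.0.x requirement above can be checked mechanically. This sketch parses version strings of the shape `mvn -version` prints in its first line; it is fed sample strings here, since Maven itself may not be installed where this runs:

```shell
# Return success if a Maven version string is 3.x or later.
check_mvn_version() {
  major=${1%%.*}          # e.g. "3" from "3.0.5"
  [ "$major" -ge 3 ]
}
# In practice you would feed it something like:
#   mvn -version | awk 'NR==1 {print $3}'
check_mvn_version "3.0.5" && echo "3.0.5: OK"
check_mvn_version "2.2.1" || echo "2.2.1: too old"
```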
+ Making a Release Candidate + + These instructions are for building HBase 1.0.x. For building earlier + versions, the process is different. See this section under the respective + release documentation folders. + + Point Releases + If you are making a point release (for example to quickly address a critical + incompatibility or security problem) off of a release branch instead of a + development branch, the tagging instructions are slightly different. I'll prefix + those special steps with Point Release Only. + + + + Before You Begin + Before you make a release candidate, do a practice run by deploying a + snapshot. Before you start, check to be sure recent builds have been passing for + the branch from where you are going to take your release. You should also have + tried recent branch tips out on a cluster under load, perhaps by running the + hbase-it integration test suite for a few hours to 'burn in' + the near-candidate bits. + + + Point Release Only + At this point you should tag the previous release branch (ex: 0.96.1) with the + new point release tag (e.g. 0.96.1.1 tag). Any commits with changes for the + point release should be applied to the new tag.
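The point-release tagging above amounts to an annotated git tag on the release branch. The version numbers and remote name below are illustrative, and the demo runs against a scratch repository so the commands can actually be executed:

```shell
# On the real release branch you would run something like:
#   git tag -a 0.96.1.1 -m "HBase 0.96.1.1 point release"
#   git push origin 0.96.1.1      # remote name "origin" is an assumption
# Scratch-repository demo of the same tagging commands:
rm -rf /tmp/hbase-tag-demo
git init -q /tmp/hbase-tag-demo && cd /tmp/hbase-tag-demo
git -c user.name=Demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "0.96.1 release branch tip"
git -c user.name=Demo -c user.email=demo@example.com \
    tag -a 0.96.1.1 -m "HBase 0.96.1.1 point release"
git tag -l
```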
If you run the script, do your checks at this - stage verifying the src and bin tarballs and checking what is up in staging using hbase-downstreamer project. Tag before you start - the build. You can always delete it if the build goes haywire. - - - If all checks out, next put the version directory up on people.apache.org. You will need to sign and fingerprint them before you - push them up. In the version directory do this: - $ for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done + + + The Hadoop How To + Release wiki page is used as a model for most of the instructions below, + and may have more detail on particular sections, so it is worth review. + + Release Procedure + The script dev-support/make_rc.sh automates many of these + steps. It does not do the modification of the CHANGES.txt + for the release, the close of the staging repository in Apache Maven (human + intervention is needed here), the checking of the produced artifacts to ensure + they are 'good' -- e.g. undoing the produced tarballs, eyeballing them to make + sure they look right then starting and checking all is running properly -- and + then the signing and pushing of the tarballs to people.apache.org but it does + the other stuff; it can come in handy. + + Update the <filename>CHANGES.txt</filename> file. + Update CHANGES.txt with the changes since the last + release. Make sure the URL to the JIRA points to the proper location which + lists fixes for this release. Adjust the version in all the POM files + appropriately. If you are making a release candidate, you must remove the + -SNAPSHOT label from all versions. If you are running + this receipe to publish a snapshot, you must keep the + -SNAPSHOT suffix on the hbase version. The Versions + Maven Plugin can be of use here. 
To set a version in all the many + poms of the hbase multi-module project, use a command like the + following: + +$ mvn clean org.codehaus.mojo:versions-maven-plugin:1.3.1:set -DnewVersion=0.96.0 + + Check in the CHANGES.txt and any version + changes. + + + Update the documentation. + Update the documentation under src/main/docbkx. This + usually involves copying the latest from trunk and making version-particular + adjustments to suit this release candidate version. + + + Build the source tarball. + Now, build the source tarball. This tarball is Hadoop-version-independent. + It is just the pure source code and documentation without a particular + hadoop taint, etc. Add the -Prelease profile when + building. It checks files for licenses and will fail the build if unlicensed + files are present. + +$ MAVEN_OPTS="-Xmx2g" mvn clean install -DskipTests assembly:single -Dassembly.file=hbase-assembly/src/main/assembly/src.xml -Prelease + + Extract the tarball and make sure it looks good. A good test for the src + tarball being 'complete' is to see if you can build new tarballs from this + source bundle. If the source tarball is good, save it off to a + version directory, a directory somewhere where you + are collecting all of the tarballs you will publish as part of the release + candidate. For example if you were building a hbase-0.96.0 release + candidate, you might call the directory + hbase-0.96.0RC0. Later you will publish this directory + as your release candidate on people.apache.org/~YOU/. + + + Build the binary tarball. + Next, build the binary tarball. Add the -Prelease + profile when building. It checks files for licenses and will fail the build + if unlicensed files are present. Do it in two steps. + + + First, install into the local repository: + +$ MAVEN_OPTS="-Xmx3g" mvn clean install -DskipTests -Prelease + + + Next, generate documentation and assemble the tarball. 
+ +$ MAVEN_OPTS="-Xmx3g" mvn install -DskipTests site assembly:single -Prelease + + + Otherwise, the build complains that hbase modules are not in the maven + repository when you try to do it at once, especially on a fresh repository. It + seems that you need the install goal in both steps. + Extract the generated tarball and check it out. Look at the documentation, + see if it runs, etc. If good, copy the tarball to the above mentioned + version directory. + + + Create a new tag. + + Point Release Only + The following step that creates a new tag can be skipped, since you've + already created the point release tag. + + Tag the release at this point since it looks good. If you find an issue + later, you can delete the tag and start over. The release needs to be tagged for + the next step. + + + Deploy to the Maven Repository. + Next, deploy HBase to the Apache Maven repository, using the + apache-release profile instead of the + release profile when running the mvn + deploy command. This profile invokes the Apache pom referenced + by our pom files, and also signs your artifacts published to Maven, as long + as the settings.xml is configured correctly, as + described in . + +$ MAVEN_OPTS="-Xmx3g" mvn deploy -DskipTests -Papache-release + This command copies all artifacts up to a temporary staging Apache mvn + repository in an 'open' state. More work needs to be done on these maven + artifacts to make them generally available. + + + Make the Release Candidate available. + The artifacts are in the maven repository in the staging area in the + 'open' state. While in this 'open' state you can check out what you've + published to make sure all is good. To do this, log in at repository.apache.org + using your Apache ID. Find your artifacts in the staging repository. Browse + the content. Make sure all artifacts made it up and that the poms look + generally good. If it checks out, 'close' the repo. This will make the + artifacts publicly available. 
You will receive an email with the URL to + give out for the temporary staging repository for others to use trying out + this new release candidate. Include it in the email that announces the + release candidate. Folks will need to add this repo URL to their local poms + or to their local settings.xml file to pull the + published release candidate artifacts. If the published artifacts are + incomplete or have problems, just delete the 'open' staged artifacts. + + hbase-downstreamer + See the hbase-downstreamer test for a simple example of a project + that is downstream of HBase an depends on it. Check it out and run its + simple test to make sure maven artifacts are properly deployed to the + maven repository. Be sure to edit the pom to point to the proper staging + repository. Make sure you are pulling from the repository when tests run + and that you are not getting from your local repository, by either + passing the -U flag or deleting your local repo content and + check maven is pulling from remote out of the staging repository. + + + See Publishing Maven Artifacts for some pointers on this maven + staging process. + + We no longer publish using the maven release plugin. Instead we do + mvn deploy. It seems to give us a backdoor to + maven release publishing. If there is no -SNAPSHOT + on the version string, then we are 'deployed' to the apache maven + repository staging directory from which we can publish URLs for + candidates and later, if they pass, publish as release (if a + -SNAPSHOT on the version string, deploy will + put the artifacts up into apache snapshot repos). + + If the HBase version ends in -SNAPSHOT, the artifacts + go elsewhere. They are put into the Apache snapshots repository directly and + are immediately available. Making a SNAPSHOT release, this is what you want + to happen. + + + If you used the <filename>make_rc.sh</filename> script instead of doing + the above manually,, do your sanity checks now. 
+ At this stage, you have two tarballs in your 'version directory' and a + set of artifacts in a staging area of the maven repository, in the 'closed' + state. These are publicly accessible in a temporary staging repository whose + URL you should have gotten in an email. The above mentioned script, + make_rc.sh does all of the above for you minus the + check of the artifacts built, the closing of the staging repository up in + maven, and the tagging of the release. If you run the script, do your checks + at this stage verifying the src and bin tarballs and checking what is up in + staging using hbase-downstreamer project. Tag before you start the build. + You can always delete it if the build goes haywire. + + + Sign and upload your version directory to <link + xlink:href="http://people.apache.org">people.apache.org</link>. + If all checks out, next put the version directory up + on people.apache.org. You + will need to sign and fingerprint them before you push them up. In the + version directory run the following commands: + + +$ for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done $ for i in *.tar.gz; do echo $i; gpg --armor --output $i.asc --detach-sig $i ; done $ cd .. # Presuming our 'version directory' is named 0.96.0RC0, now copy it up to people.apache.org. $ rsync -av 0.96.0RC0 people.apache.org:public_html - - - Make sure the people.apache.org directory is showing and that the - mvn repo urls are good. - Announce the release candidate on the mailing list and call a vote. - -
-
- Publishing a SNAPSHOT to maven - Make sure your settings.xml is set up properly (see above for how). - Make sure the hbase version includes -SNAPSHOT as a suffix. Here is how I published SNAPSHOTS of - a release that had an hbase version of 0.96.0 in its poms. - $ MAVEN_OPTS="-Xmx3g" mvn clean install -DskipTests javadoc:aggregate site assembly:single -Prelease + + Make sure the people.apache.org directory is showing and that the mvn repo + URLs are good. Announce the release candidate on the mailing list and call a + vote. + + +
+
+ Publishing a SNAPSHOT to maven + Make sure your settings.xml is set up properly, as in . Make sure the hbase version includes + -SNAPSHOT as a suffix. Following is an example of publishing + SNAPSHOTS of a release that had an hbase version of 0.96.0 in its poms. + + $ MAVEN_OPTS="-Xmx3g" mvn clean install -DskipTests javadoc:aggregate site assembly:single -Prelease $ MAVEN_OPTS="-Xmx3g" mvn -DskipTests deploy -Papache-release - -The make_rc.sh script mentioned above in the - (see ) can help you publish SNAPSHOTS. - Make sure your hbase.version has a -SNAPSHOT suffix and then run - the script. It will put a snapshot up into the apache snapshot repository for you. - -
+ The make_rc.sh script mentioned above (see ) can help you publish SNAPSHOTS. + Make sure your hbase.version has a -SNAPSHOT + suffix before running the script. It will put a snapshot up into the apache snapshot + repository for you. +
-
+
Generating the HBase Reference Guide - The manual is marked up using docbook. - We then use the docbkx maven plugin - to transform the markup to html. This plugin is run when you specify the site - goal as in when you run mvn site or you can call the plugin explicitly to - just generate the manual by doing mvn docbkx:generate-html - (TODO: It looks like you have to run mvn site first because docbkx wants to - include a transformed hbase-default.xml. Fix). - When you run mvn site, we do the document generation twice, once to generate the multipage - manual and then again for the single page manual (the single page version is easier to search). - + The manual is marked up using docbook. We then use the docbkx maven plugin to + transform the markup to html. This plugin is run when you specify the + site goal as in when you run mvn site or you + can call the plugin explicitly to just generate the manual by doing mvn + docbkx:generate-html (TODO: It looks like you have to run mvn + site first because docbkx wants to include a transformed + hbase-default.xml. Fix). When you run mvn site, we do the + document generation twice, once to generate the multipage manual and then again for the + single page manual (the single page version is easier to search). See for more information on building + the documentation.
- Updating hbase.apache.org -
- Contributing to hbase.apache.org - The Apache HBase apache web site (including this reference guide) is maintained as part of the main - Apache HBase source tree, under /src/main/docbkx and /src/main/site - Before 0.95.0, site and reference guide were at src/docbkx and src/site respectively. - The former -- docbkx -- is this reference guide as a bunch of xml marked up using docbook; - the latter is the hbase site (the navbars, the header, the layout, etc.), - and some of the documentation, legacy pages mostly that are in the process of being merged into the docbkx tree that is - converted to html by a maven plugin by the site build. - To contribute to the reference guide, edit these files under site or docbkx and submit them as a patch - (see ). Your Jira should contain a summary of the changes in each - section (see HBASE-6081 for an example). - To generate the site locally while you're working on it, run: - mvn site - Then you can load up the generated HTML files in your browser (file are under /target/site). -
-
- Publishing hbase.apache.org - As of INFRA-5680 Migrate apache hbase website, - to publish the website, build it, and then deploy it over a checkout of https://svn.apache.org/repos/asf/hbase/hbase.apache.org/trunk. - Finally, check it in. For example, if trunk is checked out out at /Users/stack/checkouts/trunk - and the hbase website, hbase.apache.org, is checked out at /Users/stack/checkouts/hbase.apache.org/trunk, to update - the site, do the following: - - # Build the site and deploy it to the checked out directory - # Getting the javadoc into site is a little tricky. You have to build it before you invoke 'site'. - $ MAVEN_OPTS=" -Xmx3g" mvn clean install -DskipTests javadoc:aggregate site site:stage -DstagingDirectory=/Users/stack/checkouts/hbase.apache.org/trunk + Updating <link xlink:href="http://hbase.apache.org">hbase.apache.org</link> +
+ Contributing to hbase.apache.org + See for more information + on contributing to the documentation or website. +
+
+ Publishing <link xlink:href="http://hbase.apache.org" + >hbase.apache.org</link> + As of INFRA-5680 Migrate apache hbase website, to publish the website, build + it, and then deploy it over a checkout of + https://svn.apache.org/repos/asf/hbase/hbase.apache.org/trunk. + Finally, check it in. For example, if trunk is checked out out at + /Users/stack/checkouts/trunk and the hbase website, + hbase.apache.org, is checked out at + /Users/stack/checkouts/hbase.apache.org/trunk, to update + the site, do the following: + +# Build the site and deploy it to the checked out directory +# Getting the javadoc into site is a little tricky. You have to build it before you invoke 'site'. +$ MAVEN_OPTS=" -Xmx3g" mvn clean install -DskipTests javadoc:aggregate site \ + site:stage -DstagingDirectory=/Users/stack/checkouts/hbase.apache.org/trunk - Now check the deployed site by viewing in a brower, browse to file:////Users/stack/checkouts/hbase.apache.org/trunk/index.html and check all is good. - If all checks out, commit it and your new build will show up immediately at http://hbase.apache.org - - $ cd /Users/stack/checkouts/hbase.apache.org/trunk - $ svn status - # Do an svn add of any new content... - $ svn add .... - $ svn commit -m 'Committing latest version of website...' + Now check the deployed site by viewing in a brower, browse to + file:////Users/stack/checkouts/hbase.apache.org/trunk/index.html and check all is + good. If all checks out, commit it and your new build will show up immediately at + http://hbase.apache.org. + +$ cd /Users/stack/checkouts/hbase.apache.org/trunk +$ svn status +# Do an svn add of any new content... +$ svn add .... +$ svn commit -m 'Committing latest version of website...' - -
-
- Voting on Release Candidates - - Everyone is encouraged to try and vote on HBase release candidates. - Only the votes of PMC members are binding. - PMC members, please read this WIP doc on policy voting for a release candidate, - Release Policy. - Before casting +1 binding votes, individuals are required to download the signed source code - package onto their own hardware, compile it as provided, and test the resulting executable on their - own platform, along with also validating cryptographic signatures and verifying that the package - meets the requirements of the ASF policy on releases. - Regards the latter, run mvn apache-rat:check to verify all files - are suitably licensed. See - HBase, mail # dev - On recent discussion clarifying ASF release policy. - for how we arrived at this process. - -
+
+
+ Voting on Release Candidates + Everyone is encouraged to try and vote on HBase release candidates. Only the + votes of PMC members are binding. PMC members, please read this WIP doc on policy + voting for a release candidate, Release Policy. Before casting +1 binding votes, individuals are + required to download the signed source code package onto their own hardware, + compile it as provided, and test the resulting executable on their own platform, + along with also validating cryptographic signatures and verifying that the + package meets the requirements of the ASF policy on releases. Regards + the latter, run mvn apache-rat:check to verify all files are + suitably licensed. See HBase, mail # dev - On recent discussion clarifying ASF release policy. + for how we arrived at this process. +
- Tests + Tests - Developers, at a minimum, should familiarize themselves with the unit test detail; unit tests in -HBase have a character not usually seen in other projects. + Developers, at a minimum, should familiarize themselves with the unit test detail; + unit tests in HBase have a character not usually seen in other projects. -
-Apache HBase Modules -As of 0.96, Apache HBase is split into multiple modules which creates "interesting" rules for - how and where tests are written. If you are writing code for +
+ Apache HBase Modules + As of 0.96, Apache HBase is split into multiple modules. This creates + "interesting" rules for how and where tests are written. If you are writing code for hbase-server, see for - how to write your tests; these tests can spin up a minicluster and will need to be + how to write your tests. These tests can spin up a minicluster and will need to be categorized. For any other module, for example hbase-common, - the tests must be strict unit tests and just test the class under test - no use of - the HBaseTestingUtility or minicluster is allowed (or even possible given the - dependency tree). -
- Running Tests in other Modules - If the module you are developing in has no other dependencies on other HBase modules, then - you can cd into that module and just run: - mvn test - which will just run the tests IN THAT MODULE. If there are other dependencies on other modules, - then you will have run the command from the ROOT HBASE DIRECTORY. This will run the tests in the other - modules, unless you specify to skip the tests in that module. For instance, to skip the tests in the hbase-server module, - you would run: - mvn clean test -PskipServerTests - from the top level directory to run all the tests in modules other than hbase-server. Note that you - can specify to skip tests in multiple modules as well as just for a single module. For example, to skip - the tests in hbase-server and hbase-common, you would run: - mvn clean test -PskipServerTests -PskipCommonTests - Also, keep in mind that if you are running tests in the hbase-server module you will need to - apply the maven profiles discussed in to get the tests to run properly. -
-
- -
-Unit Tests -Apache HBase unit tests are subdivided into four categories: small, medium, large, and -integration with corresponding JUnit categories: -SmallTests, MediumTests, -LargeTests, IntegrationTests. -JUnit categories are denoted using java annotations and look like this in your unit test code. -... + the tests must be strict unit tests and only test the class under test - no use of + the HBaseTestingUtility or minicluster is allowed (or even + possible given the dependency tree). +
+ Running Tests in other Modules + If the module you are developing in has no other dependencies on other HBase + modules, then you can cd into that module and run the mvn test command, which will only run the tests in that + module. If there are other dependencies on other modules, then + you must run the command from the root hbase directory. + This will run the tests in all modules unless you specify to skip the tests in a + given module. For instance, to skip the tests in the + hbase-server module, you would run the following + command, to run all the tests in modules other than + hbase-server: + mvn clean test -PskipServerTests + You can specify to skip tests in multiple modules as well as just for a single + module. For example, to skip the tests in hbase-server + and hbase-common, you would run the following + command: + + mvn clean test -PskipServerTests -PskipCommonTests + Also, keep in mind that if you are running tests in the + hbase-server module, you need to apply the maven + profiles discussed in to get the tests to + run properly. +
+
+ +
+ Unit Tests + Apache HBase unit tests are subdivided into four categories: + small, medium, large, + and integration with corresponding JUnit categories: + SmallTests, MediumTests, + LargeTests, IntegrationTests. + JUnit categories are denoted using java annotations and look like this in your unit + test code. + ... @Category(SmallTests.class) public class TestHRegionInfo { @Test @@ -547,108 +722,112 @@ public class TestHRegionInfo { // ... } } - The above example shows how to mark a unit test as belonging to the small category. - All unit tests in HBase have a categorization. - The first three categories, small, medium, and large are for tests run when you - type $ mvn test; i.e. these three categorizations are for HBase unit - tests. The integration category is for not for unit tests but for integration tests. - These are run when you invoke $ mvn verify. Integration tests are - described in and will not be discussed further in this section - on HBase unit tests. - Apache HBase uses a patched maven surefire plugin and maven profiles to implement + The above example shows how to mark a unit test as belonging to the + small category. All unit tests in HBase have a + categorization. + The first three categories, small, medium, + and large, are for tests run when you type $ mvn + test. In other words, these three categorizations are for HBase unit + tests. The integration category is not for unit tests, but for + integration tests. These are run when you invoke $ mvn verify. + Integration tests are described in . + HBase uses a patched maven surefire plugin and maven profiles to implement its unit test characterizations. - Read the below to figure which annotation of the set small, medium, and large to + Keep reading to figure which annotation of the set small, medium, and large to put on your new HBase unit test. -
- Small Tests<indexterm><primary>SmallTests</primary></indexterm> - - Small tests are executed in a shared JVM. We put in this - category all the tests that can be executed quickly in a shared JVM. The maximum - execution time for a small test is 15 seconds, and small tests should not use a - (mini)cluster. -
+ + Categorizing Tests + + Small TestsSmallTests + + + Small tests are executed in a shared JVM. We put in + this category all the tests that can be executed quickly in a shared + JVM. The maximum execution time for a small test is 15 seconds, and + small tests should not use a (mini)cluster. + + -
- Medium Tests<indexterm><primary>MediumTests</primary></indexterm> - Medium tests represent tests that must be executed before - proposing a patch. They are designed to run in less than 30 minutes altogether, - and are quite stable in their results. They are designed to last less than 50 - seconds individually. They can use a cluster, and each of them is executed in a - separate JVM. -
+ + Medium TestsMediumTests + + Medium tests represent tests that must be + executed before proposing a patch. They are designed to run in less than + 30 minutes altogether, and are quite stable in their results. They are + designed to last less than 50 seconds individually. They can use a + cluster, and each of them is executed in a separate JVM. + + -
- Large Tests<indexterm><primary>LargeTests</primary></indexterm> - Large tests are everything else. They are typically - large-scale tests, regression tests for specific bugs, timeout tests, - performance tests. They are executed before a commit on the pre-integration - machines. They can be run on the developer machine as well. -
-
- Integration - Tests<indexterm><primary>IntegrationTests</primary></indexterm> - Integration tests are system level tests. See for more info. -
+ + Large TestsLargeTests + + Large tests are everything else. They are + typically large-scale tests, regression tests for specific bugs, timeout + tests, performance tests. They are executed before a commit on the + pre-integration machines. They can be run on the developer machine as + well. + + + + Integration + TestsIntegrationTests + + Integration tests are system level tests. See + for more info. + + +
-
+
Running tests - Below we describe how to run the Apache HBase junit categories. -
+
Default: small and medium category tests - Running mvn test will execute all small tests - in a single JVM (no fork) and then medium tests in a separate JVM for each test - instance. Medium tests are NOT executed if there is an error in a small test. - Large tests are NOT executed. There is one report for small tests, and one - report for medium tests if they are executed. + Running mvn test will + execute all small tests in a single JVM (no fork) and then medium tests in a + separate JVM for each test instance. Medium tests are NOT executed if there is + an error in a small test. Large tests are NOT executed. There is one report for + small tests, and one report for medium tests if they are executed.
-
+
Running all tests - Running mvn test -P runAllTests will execute - small tests in a single JVM then medium and large tests in a separate JVM for - each test. Medium and large tests are NOT executed if there is an error in a - small test. Large tests are NOT executed if there is an error in a small or + Running + mvn test -P runAllTests will + execute small tests in a single JVM then medium and large tests in a separate + JVM for each test. Medium and large tests are NOT executed if there is an error + in a small test. Large tests are NOT executed if there is an error in a small or medium test. There is one report for small tests, and one report for medium and large tests if they are executed.
-
+
Running a single test or all tests in a package - To run an individual test, e.g. MyTest, do - mvn test -Dtest=MyTest You can also pass - multiple, individual tests as a comma-delimited list: - mvn test -Dtest=MyTest1,MyTest2,MyTest3 You can - also pass a package, which will run all tests under the package: - mvn test '-Dtest=org.apache.hadoop.hbase.client.*' + To run an individual test, e.g. MyTest, rum mvn test -Dtest=MyTest You can also pass multiple, + individual tests as a comma-delimited list: mvn test + -Dtest=MyTest1,MyTest2,MyTest3 You can also pass a package, which + will run all tests under the package: mvn test + '-Dtest=org.apache.hadoop.hbase.client.*' - When -Dtest is specified, localTests profile will - be used. It will use the official release of maven surefire, rather than our - custom surefire plugin, and the old connector (The HBase build uses a patched - version of the maven surefire plugin). Each junit tests is executed in a + When -Dtest is specified, the localTests profile + will be used. It will use the official release of maven surefire, rather than + our custom surefire plugin, and the old connector (The HBase build uses a + patched version of the maven surefire plugin). Each junit test is executed in a separate JVM (A fork per test class). There is no parallelization when tests are running in this mode. You will see a new message at the end of the -report: - "[INFO] Tests are skipped". It's harmless. While you need to make sure the sum - of Tests run: in the Results : section of test reports - matching the number of tests you specified because no error will be reported - when a non-existent test case is specified. + "[INFO] Tests are skipped". It's harmless. However, you + need to make sure the sum of Tests run: in the Results + : section of test reports matching the number of tests you specified + because no error will be reported when a non-existent test case is specified. +
-
+
Other test invocation permutations Running mvn test -P runSmallTests will execute "small" tests only, using a single JVM. @@ -660,8 +839,7 @@ public class TestHRegionInfo { execute both small and medium tests, using a single JVM.
-
+
Running tests faster By default, $ mvn test -P runAllTests runs 5 tests in parallel. It can be increased on a developer's machine. Allowing that you can have 2 tests @@ -681,8 +859,7 @@ sudo mount -t tmpfs -o size=2048M tmpfs /ram2G -Dtest.build.data.basedirectory=/ram2G
-
+
<command>hbasetests.sh</command> It's also possible to use the script hbasetests.sh. This script runs the medium and large tests in parallel with two maven instances, and @@ -697,8 +874,7 @@ sudo mount -t tmpfs -o size=2048M tmpfs /ram2G failed tests a second time, in a separate jvm and without parallelisation.
-
+
Test Resource Checker<indexterm><primary>Test Resource Checker</primary></indexterm> A custom Maven SureFire plugin listener checks a number of resources before @@ -723,11 +899,9 @@ ConnectionCount=1 (was 1)
-
+
Writing Tests -
+
General rules @@ -751,8 +925,7 @@ ConnectionCount=1 (was 1)
-
+
Categories and execution time @@ -775,8 +948,7 @@ ConnectionCount=1 (was 1)
-
+
Sleeps in tests Whenever possible, tests should not use Thread.sleep, but rather waiting for the real event they need. This is faster and clearer for @@ -788,8 +960,7 @@ ConnectionCount=1 (was 1) 200 ms sleep loop.
-
+
Tests using a cluster Tests using a HRegion do not have to start a cluster: A region can use the @@ -802,8 +973,7 @@ ConnectionCount=1 (was 1)
-
+
Integration Tests HBase integration/system tests are tests that are beyond HBase unit tests. They are generally long-lasting, sizeable (the test can be asked to 1M rows or 1B rows), @@ -854,8 +1024,7 @@ ConnectionCount=1 (was 1) ...\""}. The command is logged in the test logs, so you can verify it is correct for your environment. -
+
Running integration tests against mini cluster HBase 0.92 added a verify maven target. Invoking it, for example by doing mvn verify, will run all the phases up to and @@ -881,8 +1050,7 @@ mvn verify results (so don't remove the 'target' directory) for test failures and reports the results. -
+
Running a subset of Integration tests This is very similar to how you specify running a subset of unit tests (see above), but use the property it.test instead of @@ -901,14 +1069,13 @@ mvn verify
-
+
Running integration tests against distributed cluster If you have an already-setup HBase cluster, you can launch the integration tests by invoking the class IntegrationTestsDriver. You may have to run test-compile first. The configuration will be picked by the bin/hbase - script. mvn test-compile Then launch the tests - with: + script. mvn test-compile Then + launch the tests with: bin/hbase [--config config_dir] org.apache.hadoop.hbase.IntegrationTestsDriver Pass -h to get usage on this sweet tool. Running the IntegrationTestsDriver without any argument will launch tests found under @@ -919,7 +1086,8 @@ mvn verify the full class name; so, part of class name can be used. IntegrationTestsDriver uses Junit to run the tests. Currently there is no support for running integration tests against a distributed cluster using maven (see HBASE-6201). + xlink:href="https://issues.apache.org/jira/browse/HBASE-6201" + >HBASE-6201). The tests interact with the distributed cluster by using the methods in the DistributedHBaseCluster (implementing @@ -940,16 +1108,15 @@ mvn verify implemented and plugged in.
-
+
Destructive integration / system tests In 0.96, a tool named ChaosMonkey has been introduced. It is modeled after the same-named - tool by Netflix. Some of the tests use ChaosMonkey to simulate faults - in the running cluster in the way of killing random servers, disconnecting - servers, etc. ChaosMonkey can also be used as a stand-alone tool to run a - (misbehaving) policy while you are running other tests. + xlink:href="http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html" + >same-named tool by Netflix. Some of the tests use ChaosMonkey to + simulate faults in the running cluster in the way of killing random servers, + disconnecting servers, etc. ChaosMonkey can also be used as a stand-alone tool + to run a (misbehaving) policy while you are running other tests. ChaosMonkey defines Action's and Policy's. Actions are sequences of events. We have at least the following actions: @@ -1024,26 +1191,31 @@ mvn verify 12/11/19 23:24:27 INFO util.ChaosMonkey: Started region server:rs3.example.com,60020,1353367027826. Reported num of rs:6 - -As you can see from the log, ChaosMonkey started the default PeriodicRandomActionPolicy, which is configured with all the available actions, and ran RestartActiveMaster and RestartRandomRs actions. ChaosMonkey tool, if run from command line, will keep on running until the process is killed. - -
-
- Passing individual Chaos Monkey per-test Settings/Properties - - Since HBase version 1.0.0 (HBASE-11348 Make frequency and sleep times of chaos monkeys configurable), - the chaos monkeys used to run integration tests can be configured per test run. Users can create a java properties file and - and pass this to the chaos monkey with timing configurations. The properties file needs to be in the HBase classpath. - The various properties that can be configured and their default values can be found listed in the - org.apache.hadoop.hbase.chaos.factories.MonkeyConstants class. - If any chaos monkey configuration is missing from the property file, then the default values are assumed. - For example: - + As you can see from the log, ChaosMonkey started the default + PeriodicRandomActionPolicy, which is configured with all the available actions, + and ran RestartActiveMaster and RestartRandomRs actions. ChaosMonkey tool, if + run from command line, will keep on running until the process is killed. +
+
+ Passing individual Chaos Monkey per-test Settings/Properties + Since HBase version 1.0.0 (HBASE-11348 + Make frequency and sleep times of chaos monkeys configurable), the + chaos monkeys used to run integration tests can be configured per test run. + Users can create a java properties file and and pass this to the chaos monkey + with timing configurations. The properties file needs to be in the HBase + classpath. The various properties that can be configured and their default + values can be found listed in the + org.apache.hadoop.hbase.chaos.factories.MonkeyConstants + class. If any chaos monkey configuration is missing from the property file, then + the default values are assumed. For example: + $bin/hbase org.apache.hadoop.hbase.IntegrationTestIngest -m slowDeterministic -monkeyProps monkey.properties - The above command will start the integration tests and chaos monkey passing the properties file monkey.properties. - Here is an example chaos monkey file: - + The above command will start the integration tests and chaos monkey passing + the properties file monkey.properties. Here is an example + chaos monkey file: + sdm.action1.period=120000 sdm.action2.period=40000 move.regions.sleep.time=80000 @@ -1051,9 +1223,9 @@ move.regions.max.time=1000000 move.regions.sleep.time=80000 batch.restart.rs.ratio=0.4f -
-
-
+
+
+
Maven Build Commands @@ -1061,18 +1233,18 @@ batch.restart.rs.ratio=0.4f Note: use Maven 3 (Maven 2 may work but we suggest you use Maven 3). -
+ Compile mvn compile -
+ -
+ Running all or individual Unit Tests See the section above in -
+
Building against various hadoop versions. @@ -1110,7 +1282,7 @@ pecularity that is probably fixable but we've not spent the time trying to figur As Apache HBase is an Apache Software Foundation project, see for more information about how the ASF functions. -
+ Mailing Lists Sign up for the dev-list and the user-list. See the mailing lists page. @@ -1118,36 +1290,46 @@ pecularity that is probably fixable but we've not spent the time trying to figur There are varying levels of experience on both lists so patience and politeness are encouraged (and please stay on topic.) -
+
- Jira - Check for existing issues in Jira. - If it's either a new feature request, enhancement, or a bug, file a ticket. - -
Jira Priorities - The following is a guideline on setting Jira issue priorities: - - Blocker: Should only be used if the issue WILL cause data loss or cluster instability reliably. - Critical: The issue described can cause data loss or cluster instability in some cases. - Major: Important but not tragic issues, like updates to the client API that will add a lot of much-needed functionality or significant - bugs that need to be fixed but that don't cause data loss. - Minor: Useful enhancements and annoying but not damaging bugs. - Trivial: Useful enhancements but generally cosmetic. - - -
-
- Code Blocks in Jira Comments - A commonly used macro in Jira is {code}. If you do this in a Jira comment... - + Jira + Check for existing issues in Jira. If it's + either a new feature request, enhancement, or a bug, file a ticket. + + JIRA Priorities + + Blocker: Should only be used if the issue WILL cause data loss or cluster + instability reliably. + + + Critical: The issue described can cause data loss or cluster instability + in some cases. + + + Major: Important but not tragic issues, like updates to the client API + that will add a lot of much-needed functionality or significant bugs that + need to be fixed but that don't cause data loss. + + + Minor: Useful enhancements and annoying but not damaging bugs. + + + Trivial: Useful enhancements but generally cosmetic. + + + + Code Blocks in Jira Comments + A commonly used macro in Jira is {code}. Everything inside the tags is + preformatted, as in this example. + {code} code snippet {code} - - ... Jira will format the code snippet like code, instead of a regular comment. It improves readability. - -
+ +
+
@@ -1163,8 +1345,10 @@ pecularity that is probably fixable but we've not spent the time trying to figur xml:id="unit.tests"> Unit Tests The following information is from http://blog.cloudera.com/blog/2013/09/how-to-test-hbase-applications-using-popular-tools/. - The following sections discuss JUnit, Mockito, MRUnit, and HBaseTestingUtility. + xlink:href="http://blog.cloudera.com/blog/2013/09/how-to-test-hbase-applications-using-popular-tools/" + >http://blog.cloudera.com/blog/2013/09/how-to-test-hbase-applications-using-popular-tools/. + The following sections discuss JUnit, Mockito, MRUnit, and HBaseTestingUtility. +
JUnit @@ -1219,8 +1403,8 @@ public class TestMyHbaseDAOData { These tests ensure that your createPut method creates, populates, and returns a Put object with expected values. Of course, JUnit can do much more than this. For an introduction to JUnit, see https://github.com/junit-team/junit/wiki/Getting-started. - + xlink:href="https://github.com/junit-team/junit/wiki/Getting-started" + >https://github.com/junit-team/junit/wiki/Getting-started.
Submitting Patches - HBase moved to GIT from SVN. Until we develop our own documentation for how to contribute patches in our new GIT context, - caveat the fact that we have a different - branching modes and that we don't currently do the merge practice - described in the following, the accumulo doc on how to contribute and develop - after our move to GIT is worth a read. - - - If you are new to submitting patches to open source or new to submitting patches to Apache, - I'd suggest you start by reading the On Contributing Patches - page from Apache Commons Project. Its a nice overview that - applies equally to the Apache HBase Project. -
+ HBase moved to GIT from SVN. Until we develop our own documentation for how to
+ contribute patches in our new GIT context, the Accumulo doc on
+ how to contribute and develop is worth a read. Keep in mind that we use a
+ different branching model and that we do not currently follow the merge
+ practice it describes.
+
+ If you are new to submitting patches to open source or new to submitting patches to
+ Apache, start by reading the On Contributing Patches page from Apache Commons Project. It provides a
+ nice overview that applies equally to the Apache HBase Project.
+
Create Patch - See the aforementioned Apache Commons link for how to make patches against a checked out subversion - repository. Patch files can also be easily generated from Eclipse, for example by selecting "Team -> Create Patch". - Patches can also be created by git diff and svn diff. - - Make patches against the master branch. Do the master - branch first even if you want the patch in another branch altogether. We operate by - apply fixes first to the master branch and then backporting. Another reason to patch - master first is because our patch testing system only works against the master branch. - Attach your patch to your JIRA and then mark the issue 'Patch Available'. This will prompt - the HadoopQA system to run your - patch through all unit tests and report back the results to the JIRA. It may take a few hours - for your patch to be run. HadoopQA runs the most recently attached patch to the JIRA regardless. - If you want to rerun your patch against HadoopQA perhaps to verify that indeed a certain test - failure is the result of your patch, then reattach your patch and it will be run again. - Please submit one patch-file per Jira. For example, if multiple files are changed make sure the - selected resource when generating the patch is a directory. Patch files can reflect changes in multiple files. - - Generating patches using git: -$ git diff --no-prefix > HBASE_XXXX.patch - - Don't forget the 'no-prefix' option; and generate the diff from the root directory of project - - Make sure you review for code style. -
-
- Patch File Naming - The patch file should have the Apache HBase Jira ticket in the name. For example, if a patch was submitted for Foo.java, then - a patch file called Foo_HBASE_XXXX.patch would be acceptable where XXXX is the Apache HBase Jira number. - - If you generating from a branch, then including the target branch in the filename is advised, e.g., HBASE_XXXX-0.90.patch. - -
-
+ See the aforementioned Apache Commons link for how to make patches against a
+ checked out subversion repository.
+
+ Patching Workflow
+
+ Always patch against the master branch first,
+ even if you want to patch in another branch. HBase committers always apply
+ patches first to the master branch, and backport if necessary.
+
+
+ Submit one single patch for a fix. If necessary, squash your local commits
+ into a single one first (for example, with git rebase -i).
+
+
+ The patch should have the JIRA ID in the name. If you are generating from a
+ branch, include the target branch in the filename. A common naming scheme
+ for patches is:
+ HBASE-XXXX.patch
+ HBASE-XXXX-1.patch
+ HBASE-XXXX-0.90.patch
+
+
+ To submit a patch, first create it using one of the methods in . Next, attach the patch to the JIRA (one
+ patch for the whole fix), using the
+ More
+ Attach Patch
+ dialog. Next, click the Patch Available
+ button, which triggers the Hudson job that checks the patch for
+ validity.
+ Please understand that not every patch may get committed, and that
+ feedback will likely be provided on the patch.
+
+
+ If your patch is longer than a single screen, also attach a Review Board
+ request to the case. See .
+
+
+ If you need to revise your patch, leave the previous patch file(s)
+ attached to the JIRA, and upload the new one with a revision ID. Cancel the
+ Patch Available flag and then re-trigger it, by toggling the
+ Patch Available button in JIRA.
+
+
+
+ Methods to Create Patches
+
+ Eclipse
+
+ Select the
+ Team
+ Create Patch
+ menu item.
+
+
+ Git
+
+ git format-patch is preferred because it preserves commit
+ messages. Squash smaller commits into a single larger one first,
+ for example with git rebase -i.
+
+
+ git format-patch --no-prefix origin/master --stdout > HBASE-XXXX.patch
+
+
+ git diff --no-prefix origin/master > HBASE-XXXX.patch
+
+
+
+
+
+ Subversion
+ svn diff > HBASE-XXXX.patch
+
+
+ Make sure you review and for code style.
+ +
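The patch-creation flow described above can be sketched end to end. This is a minimal illustration, not the official procedure: it assumes git is installed and builds a throwaway repository (the file name, the HBASE-XXXX branch name, and the identity values are hypothetical stand-ins) rather than operating on a real HBase checkout.

```shell
#!/bin/sh
# Illustration only: work in a throwaway repository, not a real checkout.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Jane Doe"            # example identity
git config user.email "jane@example.com"
echo base > file.txt
git add file.txt
git commit -qm "initial"
base=$(git symbolic-ref --short HEAD)      # master or main, depending on git defaults
# Do the fix on a topic branch; squash multiple commits with "git rebase -i" first.
git checkout -q -b HBASE-XXXX
echo fix >> file.txt
git commit -qam "HBASE-XXXX Fix All The Things"
# format-patch preserves the commit message; --no-prefix drops the a/ b/ prefixes.
git format-patch --no-prefix "$base" --stdout > HBASE-XXXX.patch
```

The resulting HBASE-XXXX.patch is a single file carrying both the diff and the commit message, which is what gets attached to the JIRA.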
Unit Tests - Yes, please. Please try to include unit tests with every code patch (and especially new classes and large changes). - Make sure unit tests pass locally before submitting the patch. + Yes, please. Please try to include unit tests with every code patch (and + especially new classes and large changes). Make sure unit tests pass locally before + submitting the patch. Also, see . - If you are creating a new unit test class, notice how other unit test classes have classification/sizing - annotations at the top and a static method on the end. Be sure to include these in any new unit test files - you generate. See for more on how the annotations work. - -
-
- Attach Patch to Jira - The patch should be attached to the associated Jira ticket "More Actions -> Attach Files". Make sure you click the - ASF license inclusion, otherwise the patch can't be considered for inclusion. - - Once attached to the ticket, click "Submit Patch" and - the status of the ticket will change. Committers will review submitted patches for inclusion into the codebase. Please - understand that not every patch may get committed, and that feedback will likely be provided on the patch. Fear not, though, - because the Apache HBase community is helpful! - -
+ If you are creating a new unit test class, notice how other unit test classes have + classification/sizing annotations at the top and a static method on the end. Be sure + to include these in any new unit test files you generate. See for more on how the annotations work. +
- Common Patch Feedback - The following items are representative of common patch feedback. Your patch process will go faster if these are - taken into account before submission. - - - See the Java coding standards - for more information on coding conventions in Java. - -
+ Code Formatting Conventions + Please adhere to the following guidelines so that your patches can be reviewed + more quickly. These guidelines have been developed based upon common feedback on + patches from new submitters. + See the Java + coding standards for more information on coding conventions in Java. + + Space Invaders - Rather than do this... + Do not use extra spaces around brackets. Use the second style, rather than the + first. if ( foo.equals( bar ) ) { // don't do this - ... do this instead... if (foo.equals(bar)) { - - Also, rather than do this... foo = barArray[ i ]; // don't do this - ... do this instead... foo = barArray[i]; - -
-
+ + Auto Generated Code - Auto-generated code in Eclipse often looks like this... + Auto-generated code in Eclipse often uses bad variable names such as + arg0. Use more informative variable names. Use code like + the second example here. public void readFields(DataInput arg0) throws IOException { // don't do this foo = arg0.readUTF(); // don't do this - ... do this instead ... public void readFields(DataInput di) throws IOException { foo = di.readUTF(); - See the difference? 'arg0' is what Eclipse uses for arguments by default. - -
-
- Long Lines - - Keep lines less than 100 characters. - + + + Long Lines + Keep lines less than 100 characters. You can configure your IDE to do this + automatically. + Bar bar = foo.veryLongMethodWithManyArguments(argument1, argument2, argument3, argument4, argument5, argument6, argument7, argument8, argument9); // don't do this - ... do something like this instead ... - + Bar bar = foo.veryLongMethodWithManyArguments( argument1, argument2, argument3,argument4, argument5, argument6, argument7, argument8, argument9); - -
-
+ + Trailing Spaces - - This happens more than people would imagine. + Trailing spaces are a common problem. Be sure there is a line break after the + end of your code, and avoid lines with nothing but whitespace. This makes diffs + more meaningful. You can configure your IDE to help with this. -Bar bar = foo.getBar(); <--- imagine there is an extra space(s) after the semicolon instead of a line break. +Bar bar = foo.getBar(); <--- imagine there is an extra space(s) after the semicolon. - Make sure there's a line-break after the end of your code, and also avoid lines that have nothing - but whitespace. - -
-
- Implementing Writable - - Applies pre-0.96 only - In 0.96, HBase moved to protobufs. The below section on Writables - applies to 0.94.x and previous, not to 0.96 and beyond. - - - Every class returned by RegionServers must implement Writable. If you - are creating a new class that needs to implement this interface, don't forget the default constructor. - -
-
- Javadoc - This is also a very common feedback item. Don't forget Javadoc! - Javadoc warnings are checked during precommit. If the precommit tool gives you a '-1', - please fix the javadoc issue. Your patch won't be committed if it adds such warnings. - -
-
- Findbugs + +
+ Implementing Writable + + Applies pre-0.96 only + In 0.96, HBase moved to protocol buffers (protobufs). The below section on + Writables applies to 0.94.x and previous, not to 0.96 and beyond. + + Every class returned by RegionServers must implement the Writable + interface. If you are creating a new class that needs to implement this + interface, do not forget the default constructor. +
+
+ API Documentation (Javadoc) + This is also a very common feedback item. Don't forget Javadoc! + Javadoc warnings are checked during precommit. If the precommit tool gives you + a '-1', please fix the javadoc issue. Your patch won't be committed if it adds + such warnings. +
+
+ Findbugs - Findbugs is used to detect common bugs pattern. As Javadoc, it is checked during - the precommit build up on Apache's Jenkins, and as with Javadoc, please fix them. - You can run findbugs locally with 'mvn findbugs:findbugs': it will generate the - findbugs files locally. Sometimes, you may have to write code smarter than - Findbugs. You can annotate your code to tell Findbugs you know what you're - doing, by annotating your class with: - @edu.umd.cs.findbugs.annotations.SuppressWarnings( - value="HE_EQUALS_USE_HASHCODE", - justification="I know what I'm doing") - - - Note that we're using the apache licensed version of the annotations. - -
+ Findbugs is used to detect common bug patterns. It is checked
+ during the precommit build by Apache's Jenkins. If errors are found, please fix
+ them. You can run findbugs locally with mvn
+ findbugs:findbugs, which will generate the findbugs files
+ locally. Sometimes, you may have to write code smarter than
+ findbugs. You can annotate your code to tell
+ findbugs you know what you're doing, by annotating your class
+ with the following annotation:
+ @edu.umd.cs.findbugs.annotations.SuppressWarnings(
+value="HE_EQUALS_USE_HASHCODE",
+justification="I know what I'm doing")
+ It is important to use the Apache-licensed version of the annotations.
+
Javadoc - Useless Defaults - Don't just leave the @param arguments the way your IDE generated them. Don't do - this... + Don't just leave the @param arguments the way your IDE generated them.: /** * @@ -1820,9 +2030,9 @@ Bar bar = foo.getBar(); <--- imagine there is an extra space(s) after the */ public Foo getFoo(Bar bar); - ... either add something descriptive to the @param and @return lines, or just - remove them. But the preference is to add something descriptive and - useful. + Either add something descriptive to the @param and + @return lines, or just remove them. The preference is to add + something descriptive and useful.
@@ -1842,35 +2052,6 @@ Bar bar = foo.getBar(); <--- imagine there is an extra space(s) after the
-
- Submitting a patch again - Sometimes committers ask for changes for a patch. After incorporating the - suggested/requested changes, follow the following process to submit the patch again. - - - Do not delete the old patch file - - - version your new patch file using a simple scheme like this: - HBASE-{jira number}-{version}.patch - e.g: - HBASE_XXXX-v2.patch - - - 'Cancel Patch' on JIRA.. bug status will change back to Open - - - Attach new patch file (e.g. HBASE_XXXX-v2.patch) using 'Files --> - Attach' - - - Click on 'Submit Patch'. Now the bug status will say 'Patch - Available'. - - - Committers will review the patch. Rinse and repeat as many times as needed - :-) -
Submitting incremental patches @@ -1906,9 +2087,6 @@ Bar bar = foo.getBar(); <--- imagine there is an extra space(s) after the $ git diff --no-prefix > HBASE_XXXX-2.patch - - - @@ -1916,86 +2094,198 @@ Bar bar = foo.getBar(); <--- imagine there is an extra space(s) after the
ReviewBoard - Larger patches should go through ReviewBoard. - + Patches larger than one screen, or patches that will be tricky to review, should + go through ReviewBoard. + + Use ReviewBoard + + Register for an account if you don't already have one. It does not use the + credentials from issues.apache.org. Log in. + + + Click New Review Request. + + + Choose the hbase-git repository. Click Choose File to + select the diff and optionally a parent diff. Click Create Review + Request. + + + Fill in the fields as required. At the minimum, fill in the + Summary and choose hbase as the + Review Group. If you fill in the + Bugs field, the review board is attached to the + relevant JIRA automatically. The more fields you fill in, the better. Click + Publish to make your review request public. An + email will be sent to everyone in the hbase group, to + review the patch. + + + Back in your JIRA, click + More + Link + Web Link + , and paste in the URL of your ReviewBoard request. + + + To cancel the request, click + Close + Discarded + . + + For more information on how to use ReviewBoard, see the ReviewBoard documentation.
-
Guide for HBase Committers - -
New committersNew committers are encouraged to first read Apache's generic committer documentation: Apache New Committer Guide -Apache Committer FAQ
+
+ Guide for HBase Committers -
ReviewHBase committers should, as often as possible, attempt to review patches submitted by others. Ideally every submitted patch will get reviewed by a committer within a few days. If a committer reviews a patch they've not authored, and believe it to be of sufficient quality, then they can commit the patch, otherwise the patch should be cancelled with a clear explanation for why it was rejected. The list of submitted patches is in the HBase Review Queue. -This is ordered by time of last modification. Committers should scan the list from top-to-bottom, looking for patches that they feel qualified to review and possibly commit. For non-trivial changes, it is required to get another committer to review your own patches before commit. Use "Submit Patch" like other contributors, and then wait for a "+1" from another committer before committing.
+
+ New committers + New committers are encouraged to first read Apache's generic committer + documentation: + + + Apache New Committer Guide + + + + Apache + Committer FAQ + + + +
-
RejectPatches should be rejected which do not adhere to the guidelines in HowToContribute -and to the code review checklist. -Committers should always be polite to contributors and try to instruct and encourage them to contribute better patches. -If a committer wishes to improve an unacceptable patch, then it should first be rejected, and a new patch should be attached by the committer for review.
+
+
+ Review
+ HBase committers should, as often as possible, attempt to review patches
+ submitted by others. Ideally every submitted patch will get reviewed by a
+ committer within a few days. If a committer reviews a patch
+ they have not authored and believe it to be of sufficient quality, they
+ can commit it. Otherwise, the patch should be cancelled, with a clear
+ explanation of why it was rejected.
+ The list of submitted patches is in the HBase Review Queue, which is ordered by time of last modification.
+ Committers should scan the list from top to bottom, looking for patches that
+ they feel qualified to review and possibly commit.
+ For non-trivial changes, it is required to get another committer to review
+ your own patches before commit. Use the Submit Patch
+ button in JIRA, just like other contributors, and then wait for a
+ +1 response from another committer before committing.
+
-
-Commit -Committers commit patches to the Apache HBase GIT repository. - - - Before you commit!!!! - Make sure your local configuration is correct. In particular, your identity - and email. Do $ git config --list. Check what shows as your - user.email and user.name. - See this GitHub article, Set Up Git - if you need pointers. - - -When you commit a patch, please: -Include the Jira issue id in the commit message, along with a short description of the change and the name of the contributor if it is not you. -Be sure to get the issue id right, as this causes Jira to link to the change in Subversion (use the issue's "All" tab to see these). -Resolve the issue as fixed, thanking the contributor. -Always set the "Fix Version" at this point, but please only set a single fix version, the earliest release in which the change will appear. - -
- Add Amending-Author when a conflict cherrypick backporting - - We've established the practice of committing to trunk and then - cherry picking back to branches whenever possible. When there is a minor - conflict we can fix it up and just proceed with the commit. The resulting commit - retains the original author. When the amending author is different from the - original committer, add notice of this at the end of the commit message as: - Amending-Author: Author <committer&apache> - See discussion at HBase, mail # dev - [DISCUSSION] Best practice when amending commits cherry picked from master to branch. - -
+
+ Reject + Patches which do not adhere to the guidelines in HowToContribute and to the code review checklist should be rejected. Committers should always + be polite to contributors and try to instruct and encourage them to contribute + better patches. If a committer wishes to improve an unacceptable patch, then it + should first be rejected, and a new patch should be attached by the committer + for review. +
-
- Committers are responsible for making sure commits do not break the build or tests - - If a committer commits a patch it is their responsibility - to make sure it passes the test suite. It is helpful - if contributors keep an eye out that their patch - does not break the hbase build and/or tests but ultimately, - a contributor cannot be expected to be up on the - particular vagaries and interconnections that occur - in a project like hbase. A committer should. - -
-
- Patching Etiquette - In the thread HBase, mail # dev - ANNOUNCEMENT: Git Migration In Progress (WAS => Re: Git Migration), - it was agreed on the following patch flow - - Develop and commit the patch against trunk/master first. - Try to cherry-pick the patch when backporting if possible. - If this does not work, manually commit the patch to the branch. - - -
+
+ Commit + Committers commit patches to the Apache HBase GIT repository. + + Before you commit!!!! + Make sure your local configuration is correct, especially your identity + and email. Examine the output of the $ git config --list + command and be sure it is correct. See this GitHub article, Set Up + Git if you need pointers. + + When you commit a patch, please: + + + Include the Jira issue id in the commit message, along with a short + description of the change and the name of the contributor if it is not + you. Be sure to get the issue id right, as this causes Jira to link to + the change in Subversion (use the issue's "All" tab to see + these). + + + Resolve the issue as fixed, thanking the contributor. Always set the + "Fix Version" at this point, but please only set a single fix + version, the earliest release in which the change will appear. + + + +
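The identity check described above can be sketched as follows. The repository and identity values are hypothetical; repo-local config is used in a throwaway repository so nothing in ~/.gitconfig is touched.

```shell
#!/bin/sh
# Illustration only: check and set the identity recorded on commits.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
# Repository-local settings; add --global to apply to every repository instead.
git config user.name "Jane Doe"
git config user.email "jane@apache.org"
# This is the output to eyeball before committing:
git config --list | grep -E '^user\.(name|email)='
```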
+ Commit Message Format + The commit message should contain the JIRA ID and a description of what + the patch does. The preferred commit message format is: + <jira-id> <jira-title> (<contributor-name-if-not-commit-author>) + HBASE-12345 Fix All The Things (jane@example.com) + If the submitter used git format-patch to generate the + patch, their commit message is in their patch and you can use that. +
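The point above, that a git format-patch file carries the contributor's commit message, can be sketched with a hypothetical round trip in a throwaway repository (names and messages are examples, and this is an illustration rather than the official committer procedure): the patch is generated, the commit is discarded, and git am re-applies it with the original message and author intact.

```shell
#!/bin/sh
# Illustration only: round-trip a commit through a format-patch file.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "A Committer"
git config user.email "committer@apache.org"
echo base > f
git add f
git commit -qm "initial"
echo fix >> f
git commit -qam "HBASE-12345 Fix All The Things (Jane Doe)"
# The contributor would attach this file to the JIRA:
git format-patch -1 --stdout > HBASE-12345.patch
# Pretend to be a committer who does not have the commit yet:
git reset -q --hard HEAD~1
# "git am" applies the patch and keeps the commit message from it.
git am -q HBASE-12345.patch
git log -1 --pretty=%s
```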
+
+
+ Add Amending-Author when backporting a cherry-pick with conflicts
+ We've established the practice of committing to trunk and then cherry
+ picking back to branches whenever possible. When there is a minor conflict
+ we can fix it up and just proceed with the commit. The resulting commit
+ retains the original author. When the amending author is different from the
+ original committer, add notice of this at the end of the commit message as:
+ Amending-Author: Author <committer&apache>
+ See discussion at HBase, mail # dev - [DISCUSSION] Best practice when amending commits
+ cherry picked from master to branch.
-
Committing DocumentationTBS
+
+
+ Committers are responsible for making sure commits do not break the build
+ or tests
+ If a committer commits a patch, it is their responsibility to make sure
+ it passes the test suite. It is helpful if contributors keep an eye out that
+ their patch does not break the hbase build and/or tests, but ultimately, a
+ contributor cannot be expected to be aware of all the particular vagaries
+ and interconnections that occur in a project like HBase. A committer should
+ be.
+
+
+
+ Patching Etiquette
+ In the thread HBase, mail # dev - ANNOUNCEMENT: Git Migration In Progress (WAS => Re:
+ Git Migration), the following patch flow was agreed upon:
+
+ Develop and commit the patch against trunk/master
+ first.
+
+
+ Try to cherry-pick the patch when backporting if
+ possible.
+
+
+ If this does not work, manually commit the patch to the
+ branch.
+
+
+
+ +
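The commit-to-master-then-backport flow above can be sketched in a self-contained way. This assumes git is installed; the 0.98 branch name, file, and identity are hypothetical stand-ins for a real release branch and contributor.

```shell
#!/bin/sh
# Illustration only, in a throwaway repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Jane Doe"
git config user.email "jane@apache.org"
echo base > f
git add f
git commit -qm "initial"
git branch 0.98                  # stand-in release branch, forked at the initial commit
echo fix >> f
git commit -qam "HBASE-XXXX Fix All The Things"
sha=$(git rev-parse HEAD)
# Backport: -x records "(cherry picked from commit <sha>)" in the new message.
git checkout -q 0.98
git cherry-pick -x "$sha" >/dev/null
git log -1 --pretty=%B
```

On a conflict, the same flow pauses so the committer can fix the files, git add them, and run git cherry-pick --continue.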
+ Merge Commits + Avoid merge commits, as they create problems in the git history. +
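One possible way to avoid accidental merge commits when updating a local branch (a sketch of my own, not a documented HBase convention) is to make git pull rebase instead of merge:

```shell
#!/bin/sh
# Illustration in a throwaway repository: configure pull to rebase, not merge.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config pull.rebase true   # repo-local; add --global to apply everywhere
git config pull.rebase
```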
+
+ Committing Documentation + See . +
+
-
+
Dialog
Committers should hang out in the #hbase room on irc.freenode.net for real-time discussions. However, any substantive discussion (as with any off-list project-related discussion) should be reiterated in Jira or on the developer list.
-- 1.8.5.2 (Apple Git-48)