Index: src/docs/src/documentation/content/xdocs/dynpartition.xml
===================================================================
--- src/docs/src/documentation/content/xdocs/dynpartition.xml (revision 1377542)
+++ src/docs/src/documentation/content/xdocs/dynpartition.xml (working copy)
@@ -49,7 +49,8 @@
The way dynamic partitioning works is that HCatalog locates partition columns in the data passed to it and uses the data in these columns to split the rows across multiple partitions. (The data passed to HCatalog must have a schema that matches the schema of the destination table and hence should always contain partition columns.) It is important to note that partition columns can’t contain null values or the whole process will fail.
-It is also important to note that all partitions created during a single run are part of a transaction and if any part of the process fails none of the partitions will be added to the table.
+It is also important to note that all partitions created during a single run are part of one transaction;
+therefore if any part of the process fails, none of the partitions will be added to the table.
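The row-splitting described above can be sketched in plain Java. This is only an illustration of the idea, not the HCatalog implementation: the record representation (a map from column name to value) and the method name are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DynPartitionSketch {
    // Group rows by the value of a partition column, mirroring how dynamic
    // partitioning routes each row to the partition its own data selects.
    // A null partition value aborts the whole run, as described above.
    static Map<String, List<Map<String, String>>> splitByPartition(
            List<Map<String, String>> rows, String partitionColumn) {
        Map<String, List<Map<String, String>>> partitions = new HashMap<>();
        for (Map<String, String> row : rows) {
            String value = row.get(partitionColumn);
            if (value == null) {
                throw new IllegalArgumentException(
                        "null value in partition column " + partitionColumn);
            }
            partitions.computeIfAbsent(value, k -> new ArrayList<>()).add(row);
        }
        return partitions;
    }
}
```

With two rows carrying distinct values in the partition column, two partitions result; a row with a null in that column raises an exception instead of silently creating a partition.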
@@ -101,7 +102,7 @@
As with Pig, the only change in dynamic partitioning that a MapReduce programmer sees is that they don't have to specify all the partition key/value combinations.
-A current code example for writing out a specific partition for (a=1,b=1) would go something like this:
+A current code example for writing out a specific partition for (a=1, b=1) would go something like this:
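The listing itself is not reproduced in this excerpt. As a stand-in, the sketch below shows only how the (a=1, b=1) partition specification is assembled and how such a partition is laid out on disk as key=value directories; the HCatOutputFormat call mentioned in the comment is an assumption about this release's API and is not invoked here.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StaticPartitionExample {
    // Render partition key/value pairs in Hive's key=value directory layout.
    static String partitionPath(Map<String, String> partitionValues) {
        StringBuilder path = new StringBuilder();
        for (Map.Entry<String, String> e : partitionValues.entrySet()) {
            if (path.length() > 0) path.append('/');
            path.append(e.getKey()).append('=').append(e.getValue());
        }
        return path.toString();
    }

    public static void main(String[] args) {
        // Partition key/value pairs for the partition (a=1, b=1).
        // In a real job this map would be handed to HCatOutputFormat
        // (for example via OutputJobInfo.create(dbName, tableName,
        // partitionValues); the exact factory method varies by release).
        Map<String, String> partitionValues = new LinkedHashMap<>();
        partitionValues.put("a", "1");
        partitionValues.put("b", "1");
        System.out.println(partitionPath(partitionValues)); // prints a=1/b=1
    }
}
```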
Prerequisites
-You can now procede to starting the server.
+You can now proceed to starting the server.
To start your server, HCatalog needs to know where Hive is installed.
This is communicated by setting the environment variable HIVE_HOME
to the location where you installed Hive. Start the HCatalog server by switching directories to
-root and invoking "HIVE_HOME=hive_home sbin/hcat_server.sh start".
+root and invoking:
+HIVE_HOME=hive_home sbin/hcat_server.sh start
To stop the HCatalog server, change directories to the root
-directory and invoking "HIVE_HOME=hive_home sbin/hcat_server.sh stop".
+directory and invoking:
+HIVE_HOME=hive_home sbin/hcat_server.sh stop
Index: src/docs/src/documentation/content/xdocs/cli.xml
===================================================================
--- src/docs/src/documentation/content/xdocs/cli.xml (revision 1377542)
+++ src/docs/src/documentation/content/xdocs/cli.xml (working copy)
@@ -41,31 +41,62 @@
The HCatalog CLI supports these command line options:
+| Option | Usage | Description |
+|---|---|---|
+| -g | -g mygroup | Tells HCatalog that the table which needs to be created must have group "mygroup". |
+| -p | -p rwxr-xr-x | Tells HCatalog that the table which needs to be created must have permissions "rwxr-xr-x". |
+| -f | -f myscript.hcatalog | Tells HCatalog that myscript.hcatalog is a file containing DDL commands to execute. |
+| -e | -e 'create table mytable(a int);' | Tells HCatalog to treat the following string as a DDL command and execute it. |
+| -D | -Dkey=value | Passes the key-value pair to HCatalog as a Java System Property. |
+| (none) |  | Prints a usage message. |
Note the following:
If no option is provided, then a usage message is printed:
Assumptions
-When using the HCatalog CLI, you cannot specify a permission string without read permissions for owner, such as -wxrwxr-x. If such a permission setting is desired, you can use the octal version instead, which in this case would be 375. Also, any other kind of permission string where the owner has read permissions (for example r-x------ or r--r--r--) will work fine.
+Owner Permissions
+When using the HCatalog CLI, you cannot specify a permission string without read permissions for owner, such as -wxrwxr-x, because the string begins with "-". If such a permission setting is desired, you can use the octal version instead, which in this case would be 375. Also, any other kind of permission string where the owner has read permissions (for example r-x------ or r--r--r--) will work fine.
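The symbolic-to-octal conversion mentioned above can be sketched as a standalone helper (this is not part of the HCatalog CLI; it only illustrates how "-wxrwxr-x" maps to 375):

```java
public class PermOctal {
    // Convert a 9-character symbolic permission string (e.g. "-wxrwxr-x")
    // to its octal representation (e.g. "375"): r=4, w=2, x=1 per triad.
    static String toOctal(String symbolic) {
        if (symbolic.length() != 9) {
            throw new IllegalArgumentException("expected 9 characters");
        }
        StringBuilder octal = new StringBuilder();
        for (int i = 0; i < 9; i += 3) {
            int v = 0;
            if (symbolic.charAt(i) == 'r') v += 4;
            if (symbolic.charAt(i + 1) == 'w') v += 2;
            if (symbolic.charAt(i + 2) == 'x') v += 1;
            octal.append(v);
        }
        return octal.toString();
    }

    public static void main(String[] args) {
        // "-wxrwxr-x" cannot be passed as a string because it begins with "-",
        // but its octal form can:
        System.out.println(toOctal("-wxrwxr-x")); // prints 375
        System.out.println(toOctal("rwxr-xr-x")); // prints 755
    }
}
```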
@@ -113,7 +144,7 @@
-Note: Pig and MapReduce coannot read from or write to views.
+Note: Pig and MapReduce cannot read from or write to views.
CREATE VIEW
Supported. Behavior same as Hive.
@@ -162,7 +193,7 @@
Supported. Behavior same as Hive.
Authentication
If a failure results in a message like "2010-11-03 16:17:28,225 WARN hive.metastore ... - Unable to connect metastore with URI thrift://..." in /tmp/<username>/hive.log, then make sure you have run "kinit <username>@FOO.COM" to get a Kerberos ticket and to be able to authenticate to the HCatalog server.
Error Log
+If other errors occur while using the HCatalog CLI, more detailed messages are written to /tmp/<username>/hive.log.
+HCatalog is a table and storage management layer for Hadoop that enables users with different data processing tools – Pig, MapReduce, and Hive – to more easily read and write data on the grid. HCatalog’s table abstraction presents users with a relational view of data in the Hadoop distributed file system (HDFS) and ensures that users need not worry about where or in what format their data is stored – RCFile format, text files, or SequenceFiles.
-HCatalog supports reading and writing files in any format for which a SerDe can be written. By default, HCatalog supports RCFile, CSV, JSON, and SequenceFile formats. To use a custom format, you must provide the InputFormat, OutputFormat, and SerDe.
+HCatalog supports reading and writing files in any format for which a SerDe (serializer-deserializer) can be written. By default, HCatalog supports RCFile, CSV, JSON, and SequenceFile formats. To use a custom format, you must provide the InputFormat, OutputFormat, and SerDe.
-The HCatalog interface for Pig consists of HCatLoader and HCatStorer, which implement the Pig load and store interfaces respectively. HCatLoader accepts a table to read data from; you can indicate which partitions to scan by immediately following the load statement with a partition filter statement. HCatStorer accepts a table to write to and optionally a specification of partition keys to create a new partition. You can write to a single partition by specifying the partition key(s) and value(s) in the STORE clause; and you can write to multiple partitions if the partition key(s) are columns in the data being stored. HCatLoader is implemented on top of HCatInputFormat and HCatStorer is implemented on top of HCatOutputFormat (see HCatalog Load and Store).
+The HCatalog interface for Pig consists of HCatLoader and HCatStorer, which implement the Pig load and store interfaces respectively. HCatLoader accepts a table to read data from; you can indicate which partitions to scan by immediately following the load statement with a partition filter statement. HCatStorer accepts a table to write to and optionally a specification of partition keys to create a new partition. You can write to a single partition by specifying the partition key(s) and value(s) in the STORE clause; and you can write to multiple partitions if the partition key(s) are columns in the data being stored. HCatLoader is implemented on top of HCatInputFormat and HCatStorer is implemented on top of HCatOutputFormat.
+(See Load and Store Interfaces.)
-The HCatalog interface for MapReduce – HCatInputFormat and HCatOutputFormat – is an implementation of Hadoop InputFormat and OutputFormat. HCatInputFormat accepts a table to read data from and optionally a selection predicate to indicate which partitions to scan. HCatOutputFormat accepts a table to write to and optionally a specification of partition keys to create a new partition. You can write to a single partition by specifying the partition key(s) and value(s) in the setOutput method; and you can write to multiple partitions if the partition key(s) are columns in the data being stored. (See HCatalog Input and Output.)
+The HCatalog interface for MapReduce – HCatInputFormat and HCatOutputFormat – is an implementation of Hadoop InputFormat and OutputFormat. HCatInputFormat accepts a table to read data from and optionally a selection predicate to indicate which partitions to scan. HCatOutputFormat accepts a table to write to and optionally a specification of partition keys to create a new partition. You can write to a single partition by specifying the partition key(s) and value(s) in the setOutput method; and you can write to multiple partitions if the partition key(s) are columns in the data being stored.
+(See Input and Output Interfaces.)
-Note: There is no Hive-specific interface. Since HCatalog uses Hive's metastore, Hive can read data in HCatalog directly.
+Note: There is no Hive-specific interface. Since HCatalog uses Hive's metastore, Hive can read data in HCatalog directly.
-Data is defined using HCatalog's command line interface (CLI). The HCatalog CLI supports all Hive DDL that does not require MapReduce to execute, allowing users to create, alter, drop tables, etc. (Unsupported Hive DDL includes import/export, CREATE TABLE AS SELECT, ALTER TABLE options REBUILD and CONCATENATE, and ANALYZE TABLE ... COMPUTE STATISTICS.) The CLI also supports the data exploration part of the Hive command line, such as SHOW TABLES, DESCRIBE TABLE, etc. (see the HCatalog Command Line Interface).
+Data is defined using HCatalog's command line interface (CLI). The HCatalog CLI supports all Hive DDL that does not require MapReduce to execute, allowing users to create, alter, drop tables, etc. The CLI also supports the data exploration part of the Hive command line, such as SHOW TABLES, DESCRIBE TABLE, and so on.
+Unsupported Hive DDL includes import/export, the REBUILD and CONCATENATE options of ALTER TABLE, CREATE TABLE AS SELECT, and ANALYZE TABLE ... COMPUTE STATISTICS.
+(See Command Line Interface.)
HCatalog presents a relational view of data. Data is stored in tables and these tables can be placed in databases. Tables can also be hash partitioned on one or more keys; that is, for a given value of a key (or set of keys) there will be one partition that contains all rows with that value (or set of values). For example, if a table is partitioned on date and there are three days of data in the table, there will be three partitions in the table. New partitions can be added to a table, and partitions can be dropped from a table. Partitioned tables have no partitions at create time. Unpartitioned tables effectively have one default partition that must be created at table creation time. There is no guaranteed read consistency when a partition is dropped.
-Partitions contain records. Once a partition is created records cannot be added to it, removed from it, or updated in it. Partitions are multi-dimensional and not hierarchical. Records are divided into columns. Columns have a name and a datatype. HCatalog supports the same datatypes as Hive (see HCatalog Load and Store).
+Partitions contain records. Once a partition is created records cannot be added to it, removed from it, or updated in it. Partitions are multi-dimensional and not hierarchical. Records are divided into columns. Columns have a name and a datatype. HCatalog supports the same datatypes as Hive.
+See Load and Store Interfaces for more information about datatypes.
-Since HCatalog 0.2 provides notifications for certain events happening in the system. This way applications such as Oozie can wait for those events and schedule the work that depends on them. The current version of HCatalog supports two kinds of events:
+Since version 0.2, HCatalog provides notifications for certain events happening in the system. This way applications such as Oozie can wait for those events and schedule the work that depends on them. The current version of HCatalog supports two kinds of events:
No additional work is required to send a notification when a new partition is added: the existing addPartition call will send the notification message.
+To receive notification that a new partition has been added, you need to follow these three steps.
-1. To start receiving messages, create a connection to a message bus as shown here:
-2. Subscribe to a topic you are interested in. When subscribing on a message bus, you need to subscribe to a particular topic to receive the messages that are being delivered on that topic.
+The topic name corresponding to a particular table is stored in table properties and can be retrieved using the following piece of code:
-Use the topic name to subscribe to a topic as follows:
-3. To start receiving messages you need to implement the JMS interface MessageListener, which, in turn, will make you implement the method onMessage(Message msg). This method will be called whenever a new message arrives on the message bus. The message contains a partition object representing the corresponding partition, which can be retrieved as shown here:
+You need to have a JMS jar in your classpath to make this work. Additionally, you need to have a JMS provider’s jar in your classpath. HCatalog is tested with ActiveMQ as a JMS provider, although any JMS provider can be used. ActiveMQ can be obtained from: http://activemq.apache.org/activemq-550-release.html .
+public void onMessage(Message msg) {
+    // We are interested in only add_partition events on this table.
+    // So, check message type first.
+    if (msg.getStringProperty(HCatConstants.HCAT_EVENT).equals(HCatConstants.HCAT_ADD_PARTITION_EVENT)) {
+        Object obj = ((ObjectMessage) msg).getObject();
+    }
+}
-Sometimes a user wants to wait until a collection of partitions is finished. For example, you may want to start processing after all partitions for a day are done. However, HCatalog has no notion of collections or hierarchies of partitions. To support this, HCatalog allows data writers to signal when they are finished writing a collection of partitions. Data readers may wait for this signal before beginning to read.
+Sometimes you need to wait until a collection of partitions is finished before proceeding with another operation. For example, you may want to start processing after all partitions for a day are done. However, HCatalog has no notion of collections or hierarchies of partitions. To support this, HCatalog allows data writers to signal when they are finished writing a collection of partitions. Data readers may wait for this signal before beginning to read.
The example code below illustrates how to send a notification when a set of partitions has been added.
@@ -154,17 +160,19 @@ System.out.println("Message: "+msg);
To enable notification, you need to configure the server (see below).
To disable notification, you need to leave hive.metastore.event.listeners blank or remove it from hive-site.xml.
Enable JMS Notifications
-You need to make (add/modify) the following changes to the hive-site.xml file of your HCatalog server to turn on notifications.
+You need to make (add/modify) the following changes to the hive-site.xml file of your HCatalog server to turn on notifications.
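As a sketch, the relevant hive-site.xml entry would look like the following; the listener class name is taken from the HCatalog codebase of this era and should be verified against your release:

```xml
<property>
  <name>hive.metastore.event.listeners</name>
  <value>org.apache.hcatalog.listener.NotificationListener</value>
</property>
```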
+For the server to start with support for notifications, the following must be in the classpath:
-Then, follow these steps:
+(a) activemq jar
+(b) jndi.properties file with properties suitably configured for notifications
+Then, follow these guidelines to set up your environment:
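A minimal jndi.properties for ActiveMQ might look like the following; the factory class is ActiveMQ's standard JNDI entry point, while the host and port are assumptions for a default broker install and should be adjusted for your environment:

```properties
java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url = tcp://localhost:61616
```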
Topic Names
-If tables are created while the server is configured for notifications, a default topic name is automatically set as table property. To use notifications with tables created previously (previous HCatalog installations or created prior to enabling notifications), you will have to manually set a topic name, an example will be:
+If tables are created while the server is configured for notifications, a default topic name is automatically set as a table property. To use notifications with tables created previously (either in other HCatalog installations or prior to enabling notifications in the current installation) you will have to manually set a topic name. For example:
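One way to set the topic name manually is through a table property. The property name hcat.msgbus.topic.name below is an assumption based on HCatalog's notification documentation, and mydb.mytable is a placeholder; verify both against your installation:

```sql
ALTER TABLE mytable SET TBLPROPERTIES ('hcat.msgbus.topic.name' = 'mydb.mytable');
```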
-You then need to configure your activemq Consumer(s) to listen for messages on the topic you gave in $TOPIC_NAME. A good default policy for TOPIC_NAME = "$database.$table" (that is a literal dot)
+You then need to configure your ActiveMQ Consumer(s) to listen for messages on the topic you gave in $TOPIC_NAME. A good default policy is TOPIC_NAME = "$database.$table" (that is a literal dot).