Index: src/docs/src/documentation/content/xdocs/inputoutput.xml
===================================================================
--- src/docs/src/documentation/content/xdocs/inputoutput.xml (revision 1332447)
+++ src/docs/src/documentation/content/xdocs/inputoutput.xml (working copy)
@@ -149,28 +149,21 @@
export HADOOP_HOME=<path_to_hadoop_install>
export HCAT_HOME=<path_to_hcat_install>
export LIB_JARS=$HCAT_HOME/share/hcatalog/hcatalog-0.4.0.jar,
-$HCAT_HOME/share/hcatalog/lib/hive-metastore-0.8.1.jar,$HCAT_HOME/share/hcatalog/lib/libthrift-0.7.0.jar,
-$HCAT_HOME/share/hcatalog/lib/hive-exec-0.8.1.jar,$HCAT_HOME/share/hcatalog/lib/libfb303-0.7.0.jar,
-$HCAT_HOME/share/hcatalog/lib/jdo2-api-2.3-ec.jar,$HCAT_HOME/share/hcatalog/lib/slf4j-api-1.6.1.jar,
-$HCAT_HOME/share/hcatalog/lib/antlr-runtime-3.0.1.jar,
-$HCAT_HOME/share/hcatalog/lib/datanucleus-connectionpool-2.0.3.jar,
-$HCAT_HOME/share/hcatalog/lib/datanucleus-core-2.0.3.jar,
-$HCAT_HOME/share/hcatalog/lib/datanucleus-enhancer-2.0.3.jar,
-$HCAT_HOME/share/hcatalog/lib/datanucleus-rdbms-2.0.3.jar,
-$HCAT_HOME/share/hcatalog/lib/commons-dbcp-1.4.jar,
-$HCAT_HOME/share/hcatalog/lib/commons-pool-1.5.4.jar
+$HIVE_HOME/lib/hive-metastore-0.9.0.jar,
+$HIVE_HOME/lib/libthrift-0.7.0.jar,
+$HIVE_HOME/lib/hive-exec-0.9.0.jar,
+$HIVE_HOME/lib/libfb303-0.7.0.jar,
+$HIVE_HOME/lib/jdo2-api-2.3-ec.jar,
+$HIVE_HOME/lib/slf4j-api-1.6.1.jar
+
export HADOOP_CLASSPATH=$HCAT_HOME/share/hcatalog/hcatalog-0.4.0.jar:
-$HCAT_HOME/share/hcatalog/lib/hive-metastore-0.8.1.jar:$HCAT_HOME/share/hcatalog/lib/libthrift-0.7.0.jar:
-$HCAT_HOME/share/hcatalog/lib/hive-exec-0.8.1.jar:$HCAT_HOME/share/hcatalog/lib/libfb303-0.7.0.jar:
-$HCAT_HOME/share/hcatalog/lib/jdo2-api-2.3-ec.jar:$HCAT_HOME/share/hcatalog/lib/slf4j-api-1.6.1.jar:
-$HCAT_HOME/share/hcatalog/lib/antlr-runtime-3.0.1.jar:
-$HCAT_HOME/share/hcatalog/lib/datanucleus-connectionpool-2.0.3.jar:
-$HCAT_HOME/share/hcatalog/lib/datanucleus-core-2.0.3.jar:
-$HCAT_HOME/share/hcatalog/lib/datanucleus-enhancer-2.0.3.jar:
-$HCAT_HOME/share/hcatalog/lib/datanucleus-rdbms-2.0.3.jar:
-$HCAT_HOME/share/hcatalog/lib/commons-dbcp-1.4.jar:
-$HCAT_HOME/share/hcatalog/lib/commons-pool-1.5.4.jar:
-$HCAT_HOME/etc/hcatalog
+$HIVE_HOME/lib/hive-metastore-0.9.0.jar:
+$HIVE_HOME/lib/libthrift-0.7.0.jar:
+$HIVE_HOME/lib/hive-exec-0.9.0.jar:
+$HIVE_HOME/lib/libfb303-0.7.0.jar:
+$HIVE_HOME/lib/jdo2-api-2.3-ec.jar:
+$HIVE_HOME/conf:$HADOOP_HOME/conf:
+$HIVE_HOME/lib/slf4j-api-1.6.1.jar
$HADOOP_HOME/bin/hadoop --config $HADOOP_HOME/conf jar <path_to_jar>
<main_class> -libjars $LIB_JARS <program_arguments>
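
For orientation, the <main_class> passed to the command above is an ordinary MapReduce driver. Below is a minimal sketch, assuming a Tool-based driver (the class and job names are hypothetical); ToolRunner's GenericOptionsParser is what consumes the -libjars option, so the HCatalog and Hive jars listed in $LIB_JARS are shipped with the job:

// Hypothetical sketch of a <main_class> suitable for the launch command above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyHCatJob extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already reflects --config and the jars added via -libjars.
        Job job = new Job(getConf(), "my-hcat-job");
        job.setJarByClass(MyHCatJob.class);
        // Input/output wiring (HCatInputFormat / HCatOutputFormat) goes here;
        // see the sketch after the MapReduce interface description below.
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyHCatJob(), args));
    }
}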
Index: src/docs/src/documentation/content/xdocs/install.xml
===================================================================
--- src/docs/src/documentation/content/xdocs/install.xml (revision 1332447)
+++ src/docs/src/documentation/content/xdocs/install.xml (working copy)
@@ -113,7 +113,7 @@
where you have installed Hive. If you are using Hive rpms, then this will
be /usr/lib/hive.
-mysql -u hive -D hivemetastoredb -hhivedb.acme.com -p < hive_homescripts/metastore/upgrade/mysql/hive-schema-0.9.0.mysql.sql
+mysql -u hive -D hivemetastoredb -hhivedb.acme.com -p < hive_home/scripts/metastore/upgrade/mysql/hive-schema-0.9.0.mysql.sql
Thrift Server Setup
Index: src/docs/src/documentation/content/xdocs/cli.xml
===================================================================
--- src/docs/src/documentation/content/xdocs/cli.xml (revision 1332447)
+++ src/docs/src/documentation/content/xdocs/cli.xml (working copy)
@@ -27,7 +27,7 @@
The HCatalog command line interface (CLI) can be invoked as
-HIVE_HOME=hive_home hcat_homebin/hcat
+HIVE_HOME=hive_home hcat_home/bin/hcat
where hive_home is the directory where Hive has been installed and
hcat_home is the directory where HCatalog has been installed.
Supported. Behavior same as Hive.
+Any command not listed above is NOT supported and throws an exception with the message "Operation Not Supported".
The HCatalog interface for Pig – HCatLoader and HCatStorer – is an implementation of the Pig load and store interfaces. HCatLoader accepts a table to read data from; you can indicate which partitions to scan by immediately following the load statement with a partition filter statement. HCatStorer accepts a table to write to and optionally a specification of partition keys to create a new partition. You can write to a single partition by specifying the partition key(s) and value(s) in the STORE clause; and you can write to multiple partitions if the partition key(s) are columns in the data being stored. HCatLoader and HCatStorer are implemented on top of HCatInputFormat and HCatOutputFormat, respectively (see HCatalog Load and Store).
-The HCatalog interface for MapReduce – HCatInputFormat and HCatOutputFormat – is an implementation of Hadoop InputFormat and OutputFormat. HCatInputFormat accepts a table to read data from and optionally a selection predicate to indicate which partitions to scan. HCatOutputFormat accepts a table to write to and optionally a specification of partition keys to create a new partition. You can write to a single partition by specifying the partition key(s) and value(s) in the STORE clause; and you can write to multiple partitions if the partition key(s) are columns in the data being stored. (See HCatalog Input and Output.)
+The HCatalog interface for MapReduce – HCatInputFormat and HCatOutputFormat – is an implementation of Hadoop InputFormat and OutputFormat. HCatInputFormat accepts a table to read data from and optionally a selection predicate to indicate which partitions to scan. HCatOutputFormat accepts a table to write to and optionally a specification of partition keys to create a new partition. You can write to a single partition by specifying the partition key(s) and value(s) in the setOutput method; and you can write to multiple partitions if the partition key(s) are columns in the data being stored. (See HCatalog Input and Output.)
Note: There is no Hive-specific interface. Since HCatalog uses Hive's metastore, Hive can read data in HCatalog directly.
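
To make the MapReduce interface described above concrete, here is a sketch of the wiring a driver might do. It assumes the HCatalog 0.4 API in org.apache.hcatalog.mapreduce; the database, table, partition, and filter values are made up for illustration:

import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.mapreduce.Job;
import org.apache.hcatalog.mapreduce.HCatInputFormat;
import org.apache.hcatalog.mapreduce.HCatOutputFormat;
import org.apache.hcatalog.mapreduce.InputJobInfo;
import org.apache.hcatalog.mapreduce.OutputJobInfo;

public class HCatWiringSketch {
    static void wire(Job job) throws Exception {
        // Read: a table plus an optional partition filter (the "selection predicate").
        HCatInputFormat.setInput(job,
                InputJobInfo.create("mydb", "rawevents", "ds=\"20110924\""));
        job.setInputFormatClass(HCatInputFormat.class);

        // Write: a single partition, named explicitly here. Passing null instead of
        // the map, and carrying the partition columns in the data itself, allows
        // writing to multiple partitions.
        Map<String, String> partition = new HashMap<String, String>();
        partition.put("ds", "20110925");
        HCatOutputFormat.setOutput(job,
                OutputJobInfo.create("mydb", "processedevents", partition));
        // Reuse the table schema for the records being written.
        HCatOutputFormat.setSchema(job, HCatOutputFormat.getTableSchema(job));
        job.setOutputFormatClass(HCatOutputFormat.class);
    }
}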
@@ -82,7 +82,7 @@
With HCatalog, HCatalog will send a JMS message that data is available. The Pig job can then be started.
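
As a sketch of how a downstream process could consume such a notification, assuming HCatalog is configured to publish to an ActiveMQ broker; the broker URL and topic name below are illustrative assumptions, not values defined by HCatalog:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

public class PartitionNotificationListener implements MessageListener {

    public void onMessage(Message message) {
        // A real listener would inspect the message and launch the Pig job here.
        System.out.println("Data available, starting Pig job...");
    }

    public static void main(String[] args) throws Exception {
        // Broker URL and topic name are assumptions made for this example.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://msgbroker.acme.com:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("hcat.mydb.rawevents");
        MessageConsumer consumer = session.createConsumer(topic);
        consumer.setMessageListener(new PartitionNotificationListener());
        connection.start();   // messages are now delivered to onMessage()
    }
}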
Authentication