Testsuite: org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat
Tests run: 3, Failures: 1, Errors: 0, Time elapsed: 188.392 sec
------------- Standard Output ---------------
Cluster work directory: /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806
Generating rack names for tasktrackers
Generating host names for tasktrackers
Trying to cleanup: /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806
------------- ---------------- ---------------
------------- Standard Error -----------------
12/03/16 19:36:06 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
12/03/16 19:36:06 INFO mapred.JobTracker: Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
12/03/16 19:36:06 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
12/03/16 19:36:06 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
12/03/16 19:36:06 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
12/03/16 19:36:06 INFO mapred.JobTracker: Starting jobtracker with owner as hortonal
12/03/16 19:36:06 INFO metrics.RpcMetrics: Initializing RPC Metrics with hostName=JobTracker, port=40861
12/03/16 19:36:06 INFO metrics.RpcDetailedMetrics: Initializing RPC Metrics with hostName=JobTracker, port=40861
12/03/16 19:36:06 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
12/03/16 19:36:06 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/03/16 19:36:06 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
12/03/16 19:36:06 INFO http.HttpServer: listener.getLocalPort() returned 55124 webServer.getConnectors()[0].getLocalPort() returned 55124
12/03/16 19:36:06 INFO http.HttpServer: Jetty bound to port 55124
12/03/16 19:36:06 INFO mortbay.log: jetty-6.1.14
12/03/16 19:36:06 INFO mortbay.log: Extract jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-core-0.20.3-CDH3-SNAPSHOT.jar!/webapps/job to /tmp/Jetty_localhost_55124_job____.p78bc5/webapp
12/03/16 19:36:06 INFO mortbay.log: Started SelectChannelConnector@localhost:55124
12/03/16 19:36:06 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/03/16 19:36:06 INFO mapred.JobTracker: JobTracker up at: 40861
12/03/16 19:36:06 INFO mapred.JobTracker: JobTracker webserver: 55124
12/03/16 19:36:06 INFO mapred.JobTracker: Cleaning up the system directory
12/03/16 19:36:06 INFO mapred.JobHistory: Creating DONE folder at file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/done
12/03/16 19:36:06 INFO mapred.CompletedJobStatusStore: Completed job store is inactive
12/03/16 19:36:06 INFO mapred.JobTracker: Refreshing hosts information
12/03/16 19:36:06 INFO util.HostsFileReader: Setting the includes file to
12/03/16 19:36:06 INFO util.HostsFileReader: Setting the excludes file to
12/03/16 19:36:06 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
12/03/16 19:36:06 INFO mapred.JobTracker: Decommissioning 0 nodes
12/03/16 19:36:06 INFO ipc.Server: IPC Server Responder: starting
12/03/16 19:36:06 INFO ipc.Server: IPC Server listener on 40861: starting
12/03/16 19:36:06 INFO ipc.Server: IPC Server handler 0 on 40861: starting
12/03/16 19:36:06 INFO ipc.Server: IPC Server handler 1 on 40861: starting
12/03/16 19:36:06 INFO ipc.Server: IPC Server handler 2 on 40861: starting
12/03/16 19:36:06 INFO ipc.Server: IPC Server handler 3 on 40861: starting
12/03/16 19:36:06 INFO ipc.Server: IPC Server handler 4 on 40861: starting
12/03/16 19:36:06 INFO ipc.Server: IPC Server handler 5 on 40861: starting
12/03/16 19:36:06 INFO ipc.Server: IPC Server handler 6 on 40861: starting
12/03/16 19:36:06 INFO ipc.Server: IPC Server handler 7 on 40861: starting
12/03/16 19:36:06 INFO ipc.Server: IPC Server handler 8 on 40861: starting
12/03/16 19:36:06 INFO mapred.JobTracker: Starting RUNNING
12/03/16 19:36:06 INFO ipc.Server: IPC Server handler 9 on 40861: starting
12/03/16 19:36:07 INFO mapred.MiniMRCluster: mapred.local.dir is /tmp/hadoop-hortonal/mapred/local/0_0
12/03/16 19:36:07 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/03/16 19:36:07 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
12/03/16 19:36:07 INFO http.HttpServer: listener.getLocalPort() returned 59933 webServer.getConnectors()[0].getLocalPort() returned 59933
12/03/16 19:36:07 INFO http.HttpServer: Jetty bound to port 59933
12/03/16 19:36:07 INFO mortbay.log: jetty-6.1.14
12/03/16 19:36:07 INFO mortbay.log: Extract jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-core-0.20.3-CDH3-SNAPSHOT.jar!/webapps/task to /tmp/Jetty_0_0_0_0_59933_task____o3tkyb/webapp
12/03/16 19:36:07 INFO mortbay.log: Started SelectChannelConnector@0.0.0.0:59933
12/03/16 19:36:07 INFO mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
12/03/16 19:36:07 INFO mapred.TaskTracker: Starting tasktracker with owner as hortonal
12/03/16 19:36:08 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=TaskTracker, sessionId= - already initialized
12/03/16 19:36:08 INFO metrics.RpcMetrics: Initializing RPC Metrics with hostName=TaskTracker, port=52380
12/03/16 19:36:08 INFO metrics.RpcDetailedMetrics: Initializing RPC Metrics with hostName=TaskTracker, port=52380
12/03/16 19:36:08 INFO ipc.Server: IPC Server Responder: starting
12/03/16 19:36:08 INFO ipc.Server: IPC Server listener on 52380: starting
12/03/16 19:36:08 INFO ipc.Server: IPC Server handler 1 on 52380: starting
12/03/16 19:36:08 INFO ipc.Server: IPC Server handler 2 on 52380: starting
12/03/16 19:36:08 INFO mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:52380
12/03/16 19:36:08 INFO mapred.TaskTracker: Starting tracker tracker_host0.foo.com:localhost/127.0.0.1:52380
12/03/16 19:36:08 INFO ipc.Server: IPC Server handler 3 on 52380: starting
12/03/16 19:36:08 INFO ipc.Server: IPC Server handler 0 on 52380: starting
12/03/16 19:36:08 WARN util.MRAsyncDiskService: Failure in deletion of toBeDeleted/2012-03-16_19-36-07.998_1 on /tmp/hadoop-hortonal/mapred/local/0_0 with original name userlogs
12/03/16 19:36:08 INFO mapred.TaskTracker: Starting thread: Map-events fetcher for all reduce tasks on tracker_host0.foo.com:localhost/127.0.0.1:52380
12/03/16 19:36:08 INFO mapred.TaskTracker: Using MemoryCalculatorPlugin : org.apache.hadoop.util.LinuxMemoryCalculatorPlugin@9b688e
12/03/16 19:36:08 INFO util.ProcessTree: setsid exited with exit code 0
12/03/16 19:36:08 WARN mapred.TaskTracker: TaskTracker's totalMemoryAllottedForTasks is -1. TaskMemoryManager is disabled.
12/03/16 19:36:08 INFO mapred.IndexCache: IndexCache created with max memory = 10485760
12/03/16 19:36:08 INFO net.NetworkTopology: Adding a new node: /default-rack/host0.foo.com
12/03/16 19:36:08 INFO mapred.JobTracker: Adding tracker tracker_host0.foo.com:localhost/127.0.0.1:52380 to host host0.foo.com
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:host.name=hrt8n25.cc1.ygridcore.net
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:java.version=1.6.0_05
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:java.vendor=Sun Microsystems Inc.
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:java.home=/usr/java/jdk1.6.0_05/jre
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:java.class.path=/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/test/classes:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/classes:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/conf:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/fairscheduler/lib/jsp-2.1-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/fairscheduler/lib/jsp-api-2.1-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/gridmix/lib/jsp-2.1-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/gridmix/lib/jsp-api-2.1-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/index/lib/lucene-core-2
.3.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-test-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/aspectjrt-1.6.5.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/aspectjtools-1.6.5.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-cli-1.2.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-codec-1.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-daemon-1.0.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-el-1.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-httpclient-3.0.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-logging-1.0.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-logging-api-1.0.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-net-1.4.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/core-3.1.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/hadoop-fairscheduler-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/hsqldb-1.8.0.10.jar:/home/hortonal/src/hcat/branche
s/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jackson-core-asl-1.0.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jackson-mapper-asl-1.0.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jasper-compiler-5.5.12.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jasper-runtime-5.5.12.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jets3t-0.6.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jetty-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jetty-util-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/junit-4.5.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/kfs-0.2.2.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/log4j-1.2.15.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/mockito-all-1.8.2.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/mysql-connector-java-5.0.8-bin.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/oro-2.0.8.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/servlet-api-2.5-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/li
b/slf4j-api-1.4.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/slf4j-log4j12-1.4.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/xmlenc-0.52.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/contrib/cloud/lib/pyAntTasks-1.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/contrib/thriftfs/lib/hadoopthriftapi.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/contrib/thriftfs/lib/libthrift.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/test/lib/ftplet-api-1.0.0-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/test/lib/ftpserver-core-1.0.0-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/test/lib/ftpserver-server-1.0.0-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/test/lib/mina-core-2.0.0-M2-20080407.124109-12.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT-tests.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/high-scale-lib-1.1.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/h
base/build/ivy/lib/hbase-storage-handler/jetty-6.1.26.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3-tests.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-server-extensions-0.4.0-dev.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/testlibs/ant-contrib-1.0b3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/testlibs/junit-4.10.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/lib/javaewah-0.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/lib/log4j-1.2.15.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/builtins/hive-builtins-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/cli/hive-cli-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/common/hive-common-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/serde/hive-serde-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/capacity-scheduler/hadoop-capacity-scheduler-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/datajoin/hadoop-datajoin-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/failmon/hadoop-failmon-0.20.3-CDH3-S
NAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/fairscheduler/hadoop-fairscheduler-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/gridmix/hadoop-gridmix-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/index/hadoop-index-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/mrunit/hadoop-mrunit-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/streaming/hadoop-streaming-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/thriftfs/hadoop-thriftfs-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/vaidya/hadoop-vaidya-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-ant-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-core-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-examples-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-tools-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/activemq-all-5.5.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/commons-cli-1.2.jar:/home/hortonal/src/h
cat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/hadoop-tools-0.20.205.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/jackson-core-asl-1.7.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/jackson-mapper-asl-1.7.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/jdeb-0.8.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/jms-1.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/management-api-1.1-rev-1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/pig-0.8.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/antlr-3.0.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-lang-2.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-logging-1.0.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-logging-api-1.0.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/datanucleus-enhancer-2.0.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/datanucleus-core-2.0.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/jdo2-api-2.3-ec.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/antlr-2.7.7.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/antlr-runtime-3.0.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/asm-3.1.jar:/home/hortonal/src
/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-cli-1.2.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-codec-1.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-collections-3.2.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-dbcp-1.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-pool-1.5.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/datanucleus-connectionpool-2.0.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/datanucleus-rdbms-2.0.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/derby-10.4.2.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/jline-0.9.94.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/json-20090211.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libthrift-0.7.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/log4j-1.2.16.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/mockito-all-1.8.2.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/stringtemplate-3.1-b1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/velocity-1.5.jar:/homes/hortonal/tools/apache-ant-1.8.2/lib/ant-launcher.jar:/homes/hortonal/tools/apache-ant-1.8.2/lib/ant.jar:/homes/hortonal/tools/apache-ant-1.8.2/lib/ant-junit.jar:/homes/hortonal/tools/apache-ant-1.8.2/lib/ant-junit4.jar 12/03/16 19:36:08 INFO server.ZooKeeperServer: Server 
environment:java.library.path=/usr/java/jdk1.6.0_05/jre/lib/i386/server:/usr/java/jdk1.6.0_05/jre/lib/i386:/usr/java/jdk1.6.0_05/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:java.io.tmpdir=/tmp
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:java.compiler=
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:os.name=Linux
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:os.arch=i386
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:os.version=2.6.18-238.1.1.el5.YAHOO.20110221
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:user.name=hortonal
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:user.home=/homes/hortonal
12/03/16 19:36:08 INFO server.ZooKeeperServer: Server environment:user.dir=/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase
12/03/16 19:36:08 INFO server.ZooKeeperServer: Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/zk/zookeeper_0/version-2 snapdir /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/zk/zookeeper_0/version-2
12/03/16 19:36:08 INFO server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:55343
12/03/16 19:36:08 INFO persistence.FileTxnSnapLog: Snapshotting: 0x0 to /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/zk/zookeeper_0/version-2/snapshot.0
12/03/16 19:36:08 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40266
12/03/16 19:36:08 INFO server.NIOServerCnxn: Processing stat command from /127.0.0.1:40266
12/03/16 19:36:08 INFO server.NIOServerCnxn: Stat command output
12/03/16 19:36:08 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40266 (no session established for client)
12/03/16 19:36:08 INFO zookeeper.MiniZooKeeperCluster: Started MiniZK Cluster and connect 1 ZK server on client port: 55343
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-45
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-45
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-45
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-45
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-45
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-45
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-45
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-45
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-45
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting IPC Server listener on 55642
12/03/16 19:36:08 INFO ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=55642
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:host.name=hrt8n25.cc1.ygridcore.net
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_05
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.6.0_05/jre 12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/test/classes:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/classes:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/conf:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/fairscheduler/lib/jsp-2.1-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/fairscheduler/lib/jsp-api-2.1-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/gridmix/lib/jsp-2.1-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/gridmix/lib/jsp-api-2.1-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/index/lib/lucene-core-2.3.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-test-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/aspectjrt-1.6.5.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/aspectjtools-1.6.5.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-cli-1.2.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-codec-1.4.jar:/home/hortonal/src/hcat/branc
hes/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-daemon-1.0.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-el-1.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-httpclient-3.0.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-logging-1.0.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-logging-api-1.0.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/commons-net-1.4.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/core-3.1.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/hadoop-fairscheduler-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/hsqldb-1.8.0.10.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jackson-core-asl-1.0.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jackson-mapper-asl-1.0.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jasper-compiler-5.5.12.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jasper-runtime-5.5.12.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jets3t-0.6.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/exte
rnal/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jetty-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/jetty-util-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/junit-4.5.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/kfs-0.2.2.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/log4j-1.2.15.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/mockito-all-1.8.2.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/mysql-connector-java-5.0.8-bin.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/oro-2.0.8.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/servlet-api-2.5-6.1.14.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/slf4j-api-1.4.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/slf4j-log4j12-1.4.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/lib/xmlenc-0.52.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/contrib/cloud/lib/pyAntTasks-1.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/contrib/thriftfs/lib/hadoopthriftapi.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/contrib/thriftfs
/lib/libthrift.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/test/lib/ftplet-api-1.0.0-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/test/lib/ftpserver-core-1.0.0-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/test/lib/ftpserver-server-1.0.0-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/src/test/lib/mina-core-2.0.0-M2-20080407.124109-12.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT-tests.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/high-scale-lib-1.1.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/jetty-6.1.26.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3-tests.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-server-extensions-0.4.0-dev.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/testlibs/ant-contrib-1.0b3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/testlibs/junit-4.10.jar:/home/hortonal/src/h
cat/branches/0.4/302/branch-0.4/hive/external/lib/javaewah-0.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/lib/log4j-1.2.15.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/builtins/hive-builtins-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/cli/hive-cli-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/common/hive-common-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/serde/hive-serde-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/capacity-scheduler/hadoop-capacity-scheduler-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/datajoin/hadoop-datajoin-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/failmon/hadoop-failmon-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/fairscheduler/hadoop-fairscheduler-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/gridmix/hadoop-gridmix-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/index/hadoop-index-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/mrunit/hadoop-mrunit-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/bran
ch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/streaming/hadoop-streaming-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/thriftfs/hadoop-thriftfs-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/contrib/vaidya/hadoop-vaidya-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-ant-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-core-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-examples-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hadoopcore/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-tools-0.20.3-CDH3-SNAPSHOT.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/activemq-all-5.5.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/commons-cli-1.2.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/hadoop-tools-0.20.205.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/jackson-core-asl-1.7.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/jackson-mapper-asl-1.7.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/jdeb-0.8.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/jms-1.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/management-api-1.1-rev-1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/ivy/lib/hcatalog/pig-0.8.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/iv
y/lib/default/antlr-3.0.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-lang-2.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-logging-1.0.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-logging-api-1.0.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/datanucleus-enhancer-2.0.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/datanucleus-core-2.0.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/jdo2-api-2.3-ec.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/antlr-2.7.7.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/antlr-runtime-3.0.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/asm-3.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-cli-1.2.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-codec-1.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-collections-3.2.1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-dbcp-1.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/commons-pool-1.5.4.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/datanucleus-connectionpool-2.0.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/bu
ild/ivy/lib/default/datanucleus-rdbms-2.0.3.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/derby-10.4.2.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/jline-0.9.94.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/json-20090211.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libthrift-0.7.0.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/log4j-1.2.16.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/mockito-all-1.8.2.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/stringtemplate-3.1-b1.jar:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/velocity-1.5.jar:/homes/hortonal/tools/apache-ant-1.8.2/lib/ant-launcher.jar:/homes/hortonal/tools/apache-ant-1.8.2/lib/ant.jar:/homes/hortonal/tools/apache-ant-1.8.2/lib/ant-junit.jar:/homes/hortonal/tools/apache-ant-1.8.2/lib/ant-junit4.jar
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/jdk1.6.0_05/jre/lib/i386/server:/usr/java/jdk1.6.0_05/jre/lib/i386:/usr/java/jdk1.6.0_05/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:java.compiler=
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.18-238.1.1.el5.YAHOO.20110221
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:user.name=hortonal
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:user.home=/homes/hortonal
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=180000 watcher=master:55642
12/03/16 19:36:08 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343
12/03/16 19:36:08 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/03/16 19:36:08 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40267
12/03/16 19:36:08 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session
12/03/16 19:36:08 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 13124@hrt8n25.cc1.ygridcore.net
12/03/16 19:36:08 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40267
12/03/16 19:36:08 INFO persistence.FileTxnLog: Creating new log file: log.1
12/03/16 19:36:08 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0000 with negotiated timeout 40000 for client /127.0.0.1:40267
12/03/16 19:36:08 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb0000, negotiated timeout = 40000
12/03/16 19:36:08 INFO ipc.HBaseServer: IPC Server Responder: starting
12/03/16 19:36:08 INFO ipc.HBaseServer: IPC Server listener on 55642: starting
12/03/16 19:36:08 INFO ipc.HBaseServer: IPC Server handler 0 on 55642: starting
12/03/16 19:36:08 INFO ipc.HBaseServer: IPC Server handler 1 on 55642: starting
12/03/16 19:36:08 INFO ipc.HBaseServer: IPC Server handler 2 on 55642: starting
12/03/16 19:36:08 INFO ipc.HBaseServer: IPC Server handler 3 on 55642: starting
12/03/16 19:36:08 INFO ipc.HBaseServer: IPC Server handler 4 on 55642: starting
12/03/16 19:36:08 INFO ipc.HBaseServer: IPC Server handler 5 on 55642: starting
12/03/16 19:36:08 INFO ipc.HBaseServer: IPC Server handler 6 on 55642: starting
12/03/16 19:36:08 INFO ipc.HBaseServer: IPC Server handler 7 on 55642: starting
12/03/16 19:36:08 INFO ipc.HBaseServer: IPC Server handler 8 on 55642: starting
12/03/16 19:36:08 INFO ipc.HBaseServer: IPC Server handler 9 on 55642: starting
12/03/16 19:36:08 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=Master, sessionId=hrt8n25.cc1.ygridcore.net,55642,1331926568427 - already initialized
12/03/16 19:36:08 INFO hbase.metrics: MetricsString added: revision
12/03/16 19:36:08 INFO hbase.metrics: MetricsString added: hdfsUser
12/03/16 19:36:08 INFO hbase.metrics: MetricsString added: hdfsDate
12/03/16 19:36:08 INFO hbase.metrics: MetricsString added: hdfsUrl
12/03/16 19:36:08 INFO hbase.metrics: MetricsString added: date
12/03/16 19:36:08 INFO hbase.metrics: MetricsString added: hdfsRevision
12/03/16 19:36:08 INFO hbase.metrics: MetricsString added: user
12/03/16 19:36:08 INFO hbase.metrics: MetricsString added: hdfsVersion
12/03/16 19:36:08 INFO hbase.metrics: MetricsString added: url
12/03/16 19:36:08 INFO hbase.metrics: MetricsString added: version
12/03/16 19:36:08 INFO hbase.metrics: new MBeanInfo
12/03/16 19:36:08 INFO hbase.metrics: new MBeanInfo
12/03/16 19:36:08 INFO metrics.MasterMetrics: Initialized
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-57
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-57
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-57
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-57
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-57
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-57
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-57
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-57
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting Thread-57
12/03/16 19:36:08 INFO ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=MiniHBaseCluster$MiniHBaseClusterRegionServer, port=48635
12/03/16 19:36:08 INFO ipc.HBaseServer: Starting IPC Server listener on 48635
12/03/16 19:36:08 INFO hfile.CacheConfig: Allocating LruBlockCache with maximum size 123.3m
12/03/16 19:36:08 INFO regionserver.ShutdownHook: Installed shutdown hook thread: Shutdownhook:RegionServer:0;hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=180000 watcher=regionserver:48635
12/03/16 19:36:08 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343
12/03/16 19:36:08 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/03/16 19:36:08 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session
12/03/16 19:36:08 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40268
12/03/16 19:36:08 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 13124@hrt8n25.cc1.ygridcore.net
12/03/16 19:36:08 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40268
12/03/16 19:36:08 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0001 with negotiated timeout 40000 for client /127.0.0.1:40268
12/03/16 19:36:08 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb0001, negotiated timeout = 40000
12/03/16 19:36:08 INFO master.ActiveMasterManager: Master=hrt8n25.cc1.ygridcore.net,55642,1331926568427
12/03/16 19:36:08 INFO master.SplitLogManager: found 0 orphan tasks and 0 rescan nodes
12/03/16 19:36:08 INFO master.MasterFileSystem: BOOTSTRAP: creating ROOT and first META regions
12/03/16 19:36:08 INFO regionserver.HRegion: creating HRegion -ROOT- HTD == {NAME => '-ROOT-', IS_ROOT => 'true', IS_META => 'true', FAMILIES => [{NAME => 'info', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => '2147483647', MIN_VERSIONS => '0', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCACHE => 'false'}]} RootDir = file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase Table name == -ROOT-
12/03/16 19:36:08 INFO wal.HLog: HLog configuration: blocksize=32 MB, rollsize=30.4 MB, enabled=true, optionallogflushinternal=1000ms
12/03/16 19:36:08 INFO wal.HLog: for /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/-ROOT-/70236052/.logs/hlog.1331926568708
12/03/16 19:36:08 INFO wal.HLog: getNumCurrentReplicas--HDFS-826 not available; hdfs_out=org.apache.hadoop.fs.FSDataOutputStream@14bc4e6, exception=org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.getNumCurrentReplicas()
12/03/16 19:36:08 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
12/03/16 19:36:08 INFO regionserver.HRegion: Onlined -ROOT-,,0.70236052; next sequenceid=1
12/03/16 19:36:08 INFO regionserver.HRegion: creating HRegion .META. HTD == {NAME => '.META.', IS_META => 'true', FAMILIES => [{NAME => 'info', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '10', TTL => '2147483647', MIN_VERSIONS => '0', BLOCKSIZE => '8192', IN_MEMORY => 'false', BLOCKCACHE => 'false'}]} RootDir = file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase Table name == .META.
12/03/16 19:36:08 INFO wal.HLog: HLog configuration: blocksize=32 MB, rollsize=30.4 MB, enabled=true, optionallogflushinternal=1000ms
12/03/16 19:36:08 INFO wal.HLog: for /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/.META./1028785192/.logs/hlog.1331926568786
12/03/16 19:36:08 INFO wal.HLog: getNumCurrentReplicas--HDFS-826 not available; hdfs_out=org.apache.hadoop.fs.FSDataOutputStream@f052d5, exception=org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.getNumCurrentReplicas()
12/03/16 19:36:08 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
12/03/16 19:36:08 INFO regionserver.HRegion: Onlined .META.,,1.1028785192; next sequenceid=1
12/03/16 19:36:08 INFO regionserver.Store: Added file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/-ROOT-/70236052/info/a1a14b451e1e40efb90a252a0220d8b7, entries=2, sequenceid=2, memsize=360.0, filesize=714.0
12/03/16 19:36:08 INFO regionserver.HRegion: Finished memstore flush of ~360.0/360, currentsize=0.0/0 for region -ROOT-,,0.70236052 in 49ms, sequenceid=2, compaction requested=false
12/03/16 19:36:08 INFO regionserver.HRegion: Closed -ROOT-,,0.70236052
12/03/16 19:36:08 INFO wal.HLog: Master:0;hrt8n25.cc1.ygridcore.net,55642,1331926568427.logSyncer exiting
12/03/16 19:36:08 INFO regionserver.HRegion: Closed .META.,,1.1028785192
12/03/16 19:36:08 INFO wal.HLog: Master:0;hrt8n25.cc1.ygridcore.net,55642,1331926568427.logSyncer exiting
12/03/16 19:36:08 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=180000 watcher=hconnection
12/03/16 19:36:08 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343
12/03/16 19:36:08 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/03/16 19:36:08 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session
12/03/16 19:36:08 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40269
12/03/16 19:36:08 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40269
12/03/16 19:36:08 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 13124@hrt8n25.cc1.ygridcore.net
12/03/16 19:36:08 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0002 with negotiated timeout 40000 for client /127.0.0.1:40269
12/03/16 19:36:08 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb0002, negotiated timeout = 40000
12/03/16 19:36:08 INFO master.HMaster: Server active/primary master; hrt8n25.cc1.ygridcore.net,55642,1331926568427, sessionid=0x1361d028cfb0000, cluster-up flag was=false
12/03/16 19:36:08 INFO regionserver.MemStoreFlusher: globalMemStoreLimit=197.2m, globalMemStoreLimitLowMark=172.6m, maxHeap=493.1m
12/03/16 19:36:08 INFO regionserver.HRegionServer: Runs every 2hrs, 46mins, 40sec
12/03/16 19:36:09 INFO regionserver.HRegionServer: Attempting connect to Master server at hrt8n25.cc1.ygridcore.net,55642,1331926568427
12/03/16 19:36:09 INFO regionserver.HRegionServer: Connected to master at hrt8n25.cc1.ygridcore.net/98.137.233.229:48635
12/03/16 19:36:09 INFO regionserver.HRegionServer: Telling master at hrt8n25.cc1.ygridcore.net,55642,1331926568427 that we are up with port=48635, startcode=1331926568651
12/03/16 19:36:09 INFO master.ServerManager: Registering server=hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:36:09 INFO regionserver.HRegionServer: Master passed us hostname to use. Was=hrt8n25.cc1.ygridcore.net, Now=hrt8n25.cc1.ygridcore.net
12/03/16 19:36:09 INFO wal.HLog: HLog configuration: blocksize=32 MB, rollsize=30.4 MB, enabled=true, optionallogflushinternal=1000ms
12/03/16 19:36:09 INFO wal.HLog: for /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/.logs/hrt8n25.cc1.ygridcore.net,48635,1331926568651/hrt8n25.cc1.ygridcore.net%2C48635%2C1331926568651.1331926569077
12/03/16 19:36:09 INFO wal.HLog: getNumCurrentReplicas--HDFS-826 not available; hdfs_out=org.apache.hadoop.fs.FSDataOutputStream@974600, exception=org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.getNumCurrentReplicas()
12/03/16 19:36:09 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=RegionServer, sessionId=RegionServer:0;hrt8n25.cc1.ygridcore.net,48635,1331926568651 - already initialized
12/03/16 19:36:09 INFO hbase.metrics: new MBeanInfo
12/03/16 19:36:09 INFO metrics.RegionServerMetrics: Initialized
12/03/16 19:36:09 INFO ipc.HBaseServer: IPC Server Responder: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: IPC Server listener on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: IPC Server handler 0 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: IPC Server handler 1 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: IPC Server handler 2 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: IPC Server handler 3 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: IPC Server handler 4 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: IPC Server handler 5 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: IPC Server handler 6 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: IPC Server handler 7 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: IPC Server handler 8 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: IPC Server handler 9 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: PRI IPC Server handler 0 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: PRI IPC Server handler 1 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: PRI IPC Server handler 2 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: PRI IPC Server handler 3 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: PRI IPC Server handler 4 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: PRI IPC Server handler 5 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: PRI IPC Server handler 6 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: PRI IPC Server handler 7 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: PRI IPC Server handler 8 on 48635: starting
12/03/16 19:36:09 INFO ipc.HBaseServer: PRI IPC Server handler 9 on 48635: starting
12/03/16 19:36:09 INFO regionserver.HRegionServer: Serving as hrt8n25.cc1.ygridcore.net,48635,1331926568651, RPC listening on hrt8n25.cc1.ygridcore.net/98.137.233.229:48635, sessionid=0x1361d028cfb0001
12/03/16 19:36:09 INFO regionserver.SplitLogWorker: SplitLogWorker hrt8n25.cc1.ygridcore.net,48635,1331926568651 starting
12/03/16 19:36:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=180000 watcher=hconnection
12/03/16 19:36:09 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343
12/03/16 19:36:09 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/03/16 19:36:09 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session
12/03/16 19:36:09 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40274
12/03/16 19:36:09 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40274
12/03/16 19:36:09 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 13124@hrt8n25.cc1.ygridcore.net
12/03/16 19:36:09 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0003 with negotiated timeout 40000 for client /127.0.0.1:40274
12/03/16 19:36:09 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb0003, negotiated timeout = 40000
12/03/16 19:36:10 INFO master.ServerManager: Waiting on regionserver(s) count to settle; currently=1
12/03/16 19:36:11 INFO master.ServerManager: Waiting on regionserver(s) count to settle; currently=1
12/03/16 19:36:13 INFO master.ServerManager: Finished waiting for regionserver count to settle; count=1, sleptFor=4500
12/03/16 19:36:13 INFO master.MasterFileSystem: Log folder file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/.logs/hrt8n25.cc1.ygridcore.net,48635,1331926568651 belongs to an existing region server
12/03/16 19:36:13 INFO master.MasterFileSystem: No logs to split
12/03/16 19:36:14 INFO catalog.RootLocationEditor: Unsetting ROOT region location in ZooKeeper
12/03/16 19:36:14 INFO server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x1361d028cfb0000 type:delete cxid:0x21 zxid:0x10 txntype:-1 reqpath:n/a Error Path:/hbase/root-region-server Error:KeeperErrorCode = NoNode for /hbase/root-region-server
12/03/16 19:36:14 WARN zookeeper.RecoverableZooKeeper: Node /hbase/root-region-server already deleted, and this is not a retry
12/03/16 19:36:14 INFO regionserver.HRegionServer: Received request to open region: -ROOT-,,0.70236052
12/03/16 19:36:14 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
12/03/16 19:36:14 INFO regionserver.HRegion: Onlined -ROOT-,,0.70236052; next sequenceid=3
12/03/16 19:36:14 INFO regionserver.HRegionServer: Post open deploy tasks for region=-ROOT-,,0.70236052, daughter=false
12/03/16 19:36:14 INFO catalog.RootLocationEditor: Setting ROOT region location in ZooKeeper as hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:36:14 INFO regionserver.HRegionServer: Done with post open deploy task for region=-ROOT-,,0.70236052, daughter=false
12/03/16 19:36:14 INFO handler.OpenedRegionHandler: Handling OPENED event for -ROOT-,,0.70236052 from hrt8n25.cc1.ygridcore.net,48635,1331926568651; deleting unassigned node
12/03/16 19:36:14 INFO master.AssignmentManager: The master has opened the region -ROOT-,,0.70236052 that was online on hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:36:14 INFO master.HMaster: -ROOT- assigned=1, rit=false, location=hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:36:15 INFO regionserver.HRegionServer: Received request to open region: .META.,,1.1028785192
12/03/16 19:36:15 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
12/03/16 19:36:15 INFO regionserver.HRegion: Onlined .META.,,1.1028785192; next sequenceid=1
12/03/16 19:36:15 INFO regionserver.HRegionServer: Post open deploy tasks for region=.META.,,1.1028785192, daughter=false
12/03/16 19:36:15 INFO catalog.MetaEditor: Updated row .META.,,1.1028785192 with server=hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:36:15 INFO regionserver.HRegionServer: Done with post open deploy task for region=.META.,,1.1028785192, daughter=false
12/03/16 19:36:15 INFO handler.OpenedRegionHandler: Handling OPENED event for .META.,,1.1028785192 from hrt8n25.cc1.ygridcore.net,48635,1331926568651; deleting unassigned node
12/03/16 19:36:15 INFO master.AssignmentManager: The master has opened the region .META.,,1.1028785192 that was online on hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:36:15 INFO master.HMaster: .META. assigned=2, rit=false, location=hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:36:15 INFO catalog.MetaMigrationRemovingHTD: Meta version=0; migrated=true
12/03/16 19:36:15 INFO catalog.MetaMigrationRemovingHTD: ROOT/Meta already up-to date with new HRI.
12/03/16 19:36:15 INFO master.AssignmentManager: Clean cluster startup. Assigning userregions
12/03/16 19:36:15 INFO master.HMaster: Master has completed initialization
12/03/16 19:36:16 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
12/03/16 19:36:16 INFO metastore.ObjectStore: ObjectStore, initialize called
12/03/16 19:36:16 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.resources" but it cannot be resolved.
12/03/16 19:36:16 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core" requires "org.eclipse.core.runtime" but it cannot be resolved.
12/03/16 19:36:16 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core" requires "org.eclipse.text" but it cannot be resolved.
12/03/16 19:36:16 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
12/03/16 19:36:16 INFO DataNucleus.Persistence: Property javax.jdo.option.NonTransactionalRead unknown - will be ignored
12/03/16 19:36:16 INFO DataNucleus.Persistence: ================= Persistence Configuration ===============
12/03/16 19:36:16 INFO DataNucleus.Persistence: DataNucleus Persistence Factory - Vendor: "DataNucleus" Version: "2.0.3"
12/03/16 19:36:16 INFO DataNucleus.Persistence: DataNucleus Persistence Factory initialised for datastore URL="jdbc:derby:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/metastore_db;create=true" driver="org.apache.derby.jdbc.EmbeddedDriver" userName="APP"
12/03/16 19:36:16 INFO DataNucleus.Persistence: ===========================================================
12/03/16 19:36:19 INFO Datastore.Schema: Initialising Catalog "", Schema "APP" using "None" auto-start option
12/03/16 19:36:19 INFO Datastore.Schema: Catalog "", Schema "APP" initialised - managing 0 classes
12/03/16 19:36:19 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
12/03/16 19:36:19 INFO DataNucleus.MetaData: Registering listener for metadata initialisation
12/03/16 19:36:19 INFO metastore.ObjectStore: Initialized ObjectStore
12/03/16 19:36:19 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar!/package.jdo" at line 11, column 6 : cvc-elt.1: Cannot find the declaration of element 'jdo'. - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
12/03/16 19:36:19 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar!/package.jdo" at line 321, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
12/03/16 19:36:19 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar!/package.jdo" at line 368, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
12/03/16 19:36:19 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar!/package.jdo" at line 390, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
12/03/16 19:36:19 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar!/package.jdo" at line 425, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
12/03/16 19:36:19 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar!/package.jdo" at line 462, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
12/03/16 19:36:19 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar!/package.jdo" at line 503, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
12/03/16 19:36:19 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar!/package.jdo" at line 544, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
12/03/16 19:36:19 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar!/package.jdo" at line 585, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
12/03/16 19:36:19 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar!/package.jdo" at line 630, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
12/03/16 19:36:19 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar!/package.jdo" at line 675, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
12/03/16 19:36:19 WARN DataNucleus.MetaData: MetaData Parser encountered an error in file "jar:file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar!/package.jdo" at line 703, column 13 : The content of element type "class" must match "(extension*,implements*,datastore-identity?,primary-key?,inheritance?,version?,join*,foreign-key*,index*,unique*,column*,field*,property*,query*,fetch-group*,extension*)". - Please check your specification of DTD and the validity of the MetaData XML that you have specified.
12/03/16 19:36:19 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MDatabase [Table : DBS, InheritanceStrategy : new-table]
12/03/16 19:36:19 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MDatabase.parameters [Table : DATABASE_PARAMS]
12/03/16 19:36:19 INFO Datastore.Schema: Creating table DBS
12/03/16 19:36:19 INFO Datastore.Schema: Creating table DATABASE_PARAMS
12/03/16 19:36:19 INFO Datastore.Schema: Creating index "UNIQUE_DATABASE" in catalog "" schema ""
12/03/16 19:36:19 INFO Datastore.Schema: Creating foreign key constraint : "DATABASE_PARAMS_FK1" in catalog "" schema ""
12/03/16 19:36:19 INFO Datastore.Schema: Creating index "DATABASE_PARAMS_N49" in catalog "" schema ""
12/03/16 19:36:19 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193619920.
java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193619920.
    at org.apache.derby.iapi.error.StandardException.newWarningCommon(Unknown Source)
    at org.apache.derby.iapi.error.StandardException.newWarning(Unknown Source)
    at org.apache.derby.impl.sql.execute.CreateIndexConstantAction.executeConstantAction(Unknown Source)
    at org.apache.derby.impl.sql.execute.MiscResultSet.open(Unknown Source)
    at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
    at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
    at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
    at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:730)
    at org.datanucleus.store.rdbms.table.TableImpl.createIndices(TableImpl.java:652)
    at org.datanucleus.store.rdbms.table.TableImpl.createConstraints(TableImpl.java:434)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:2768)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:2503)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2148)
    at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:113)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:986)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:952)
    at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:919)
    at org.datanucleus.store.mapped.MappedStoreManager.getDatastoreClass(MappedStoreManager.java:356)
    at org.datanucleus.store.rdbms.query.legacy.ExtentHelper.getExtent(ExtentHelper.java:48)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.getExtent(RDBMSStoreManager.java:1332)
    at org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4149)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411)
    at org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1628)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:389)
    at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:408)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:485)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$300(HiveMetaStore.java:141)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$5.run(HiveMetaStore.java:507)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$5.run(HiveMetaStore.java:504)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:360)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:504)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:266)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:228)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:114)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:98)
    at org.apache.hcatalog.hbase.ManyMiniCluster.setUpMetastore(ManyMiniCluster.java:295)
    at org.apache.hcatalog.hbase.ManyMiniCluster.start(ManyMiniCluster.java:115)
    at org.apache.hcatalog.hbase.SkeletonHBaseTest$Context.start(SkeletonHBaseTest.java:177)
    at org.apache.hcatalog.hbase.SkeletonHBaseTest.setup(SkeletonHBaseTest.java:87)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
12/03/16 19:36:20 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MDatabase
12/03/16 19:36:20 INFO Datastore.Schema: Creating table SEQUENCE_TABLE
Hive history file=/tmp/hortonal/hive_job_log_hortonal_201203161936_1508205069.txt
12/03/16 19:36:20 INFO exec.HiveHistory: Hive history file=/tmp/hortonal/hive_job_log_hortonal_201203161936_1508205069.txt
12/03/16 19:36:20 INFO handler.CreateTableHandler: Attemping to create the table directoutputformattest_8162747622161253787
12/03/16 19:36:20 INFO regionserver.HRegion: creating HRegion directoutputformattest_8162747622161253787 HTD == {NAME => 'directoutputformattest_8162747622161253787', FAMILIES => [{NAME => 'my_family', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]} RootDir = file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase Table name == directoutputformattest_8162747622161253787
12/03/16 19:36:20 INFO wal.HLog: HLog configuration: blocksize=32 MB, rollsize=30.4 MB, enabled=true, optionallogflushinternal=1000ms
12/03/16 19:36:20 INFO wal.HLog: for /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/directoutputformattest_8162747622161253787/de3fa48cf8026e93eabf414ade76a454/.logs/hlog.1331926580607
12/03/16 19:36:20 INFO wal.HLog: getNumCurrentReplicas--HDFS-826 not available; hdfs_out=org.apache.hadoop.fs.FSDataOutputStream@7510e7, exception=org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.getNumCurrentReplicas()
12/03/16 19:36:20 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
12/03/16 19:36:20 INFO regionserver.HRegion: Onlined directoutputformattest_8162747622161253787,,1331926580559.de3fa48cf8026e93eabf414ade76a454.; next sequenceid=1
12/03/16 19:36:20 INFO catalog.MetaEditor: Added 1 regions in META
12/03/16 19:36:20 INFO regionserver.HRegion: Closed directoutputformattest_8162747622161253787,,1331926580559.de3fa48cf8026e93eabf414ade76a454.
12/03/16 19:36:20 INFO wal.HLog: MASTER_TABLE_OPERATIONS-hrt8n25.cc1.ygridcore.net,55642,1331926568427-0.logSyncer exiting
12/03/16 19:36:20 INFO master.AssignmentManager: Bulk assigning 1 region(s) round-robin across 1 server(s)
12/03/16 19:36:20 INFO master.AssignmentManager: Bulk assigning done
12/03/16 19:36:20 INFO master.AssignmentManager: hrt8n25.cc1.ygridcore.net,48635,1331926568651 unassigned znodes=1 of total=1
12/03/16 19:36:20 INFO regionserver.HRegionServer: Received request to open 1 region(s)
12/03/16 19:36:20 INFO regionserver.HRegionServer: Received request to open region: directoutputformattest_8162747622161253787,,1331926580559.de3fa48cf8026e93eabf414ade76a454.
12/03/16 19:36:20 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
12/03/16 19:36:20 INFO regionserver.HRegion: Onlined directoutputformattest_8162747622161253787,,1331926580559.de3fa48cf8026e93eabf414ade76a454.; next sequenceid=1
12/03/16 19:36:20 INFO regionserver.HRegionServer: Post open deploy tasks for region=directoutputformattest_8162747622161253787,,1331926580559.de3fa48cf8026e93eabf414ade76a454., daughter=false
12/03/16 19:36:20 INFO catalog.MetaEditor: Updated row directoutputformattest_8162747622161253787,,1331926580559.de3fa48cf8026e93eabf414ade76a454. with server=hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:36:20 INFO regionserver.HRegionServer: Done with post open deploy task for region=directoutputformattest_8162747622161253787,,1331926580559.de3fa48cf8026e93eabf414ade76a454., daughter=false
12/03/16 19:36:20 INFO master.AssignmentManager: The master has opened the region directoutputformattest_8162747622161253787,,1331926580559.de3fa48cf8026e93eabf414ade76a454. that was online on hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:36:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=1000000 watcher=org.apache.hcatalog.hbase.snapshot.ZKUtil$ZKWatcher@195da41
12/03/16 19:36:21 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343
12/03/16 19:36:21 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/03/16 19:36:21 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session
12/03/16 19:36:21 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40284
12/03/16 19:36:21 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40284
12/03/16 19:36:21 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0004 with negotiated timeout 40000 for client /127.0.0.1:40284
12/03/16 19:36:21 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb0004, negotiated timeout = 40000
12/03/16 19:36:21 INFO snapshot.ZKBasedRevisionManager: Created root znodes for revision manager.
12/03/16 19:36:21 INFO snapshot.ZKUtil: Added transaction : revision: 1 ts: 1331940981680
12/03/16 19:36:21 INFO snapshot.ZKUtil: Transaction list stored at /revision-management/data/directoutputformattest_8162747622161253787/my_family/runningTxns.
12/03/16 19:36:21 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb0004
12/03/16 19:36:21 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40284 which had sessionid 0x1361d028cfb0004
12/03/16 19:36:21 INFO zookeeper.ZooKeeper: Session: 0x1361d028cfb0004 closed
12/03/16 19:36:21 INFO snapshot.ZKUtil: Disconnected to ZooKeeper
12/03/16 19:36:21 INFO zookeeper.ClientCnxn: EventThread shut down
12/03/16 19:36:21 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/03/16 19:36:21 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/03/16 19:36:21 INFO mapred.FileInputFormat: Total input paths to process : 1
12/03/16 19:36:21 INFO mapred.JobTracker: Job job_20120316193605670_0001 added successfully for user 'hortonal' to queue 'default'
12/03/16 19:36:21 INFO mapred.JobTracker: Initializing job_20120316193605670_0001
12/03/16 19:36:21 INFO mapred.JobInProgress: Initializing job_20120316193605670_0001
12/03/16 19:36:21 INFO mapred.AuditLogger: USER=hortonal IP=127.0.0.1 OPERATION=SUBMIT_JOB TARGET=job_20120316193605670_0001 RESULT=SUCCESS
12/03/16 19:36:21 INFO mapred.JobClient: Running job: job_20120316193605670_0001
12/03/16 19:36:21 INFO mapred.JobInProgress: jobToken generated and stored with users keys in /tmp/hadoop-hortonal/mapred/system/job_20120316193605670_0001/jobToken
12/03/16 19:36:21 INFO mapred.JobInProgress: Input size for job job_20120316193605670_0001 = 79. Number of splits = 2
12/03/16 19:36:21 INFO net.NetworkTopology: Adding a new node: /default-rack/localhost
12/03/16 19:36:21 INFO mapred.JobInProgress: tip:task_20120316193605670_0001_m_000000 has split on node:/default-rack/localhost
12/03/16 19:36:21 INFO mapred.JobInProgress: tip:task_20120316193605670_0001_m_000001 has split on node:/default-rack/localhost
12/03/16 19:36:21 INFO mapred.JobInProgress: Job job_20120316193605670_0001 initialized successfully with 2 map tasks and 0 reduce tasks.
12/03/16 19:36:22 INFO mapred.JobClient: map 0% reduce 0%
12/03/16 19:36:23 INFO mapred.JobTracker: Adding task (JOB_SETUP) 'attempt_20120316193605670_0001_m_000003_0' to tip task_20120316193605670_0001_m_000003, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:36:23 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0001_m_000003_0 task's state:UNASSIGNED
12/03/16 19:36:23 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0001_m_000003_0 which needs 1 slots
12/03/16 19:36:23 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20120316193605670_0001_m_000003_0 which needs 1 slots
12/03/16 19:36:23 INFO tasktracker.Localizer: Initializing user hortonal on this TT.
12/03/16 19:36:23 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0001_m_2008615856
12/03/16 19:36:23 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0001_m_2008615856 spawned.
12/03/16 19:36:23 INFO mapred.TaskTracker: JVM with ID: jvm_20120316193605670_0001_m_2008615856 given task: attempt_20120316193605670_0001_m_000003_0
12/03/16 19:36:23 INFO mapred.TaskTracker: attempt_20120316193605670_0001_m_000003_0 0.0% setup
12/03/16 19:36:23 INFO mapred.TaskTracker: Task attempt_20120316193605670_0001_m_000003_0 is done.
12/03/16 19:36:23 INFO mapred.TaskTracker: reported output size for attempt_20120316193605670_0001_m_000003_0 was -1
12/03/16 19:36:23 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2
12/03/16 19:36:24 WARN util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -13485: No such process
12/03/16 19:36:24 INFO util.ProcessTree: Killing all processes in the process group 13485 with SIGTERM. Exit code 1
12/03/16 19:36:26 INFO mapred.JobInProgress: Task 'attempt_20120316193605670_0001_m_000003_0' has completed task_20120316193605670_0001_m_000003 successfully.
12/03/16 19:36:26 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0001_m_000000_0' to tip task_20120316193605670_0001_m_000000, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:36:26 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0001_m_000000
12/03/16 19:36:26 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0001_m_000001_0' to tip task_20120316193605670_0001_m_000001, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:36:26 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0001_m_000001
12/03/16 19:36:26 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0001_m_000000_0 task's state:UNASSIGNED
12/03/16 19:36:26 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0001_m_000001_0 task's state:UNASSIGNED
12/03/16 19:36:26 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0001_m_000000_0 which needs 1 slots
12/03/16 19:36:26 INFO mapred.TaskTracker: Received KillTaskAction for task: attempt_20120316193605670_0001_m_000003_0
12/03/16 19:36:26 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20120316193605670_0001_m_000000_0 which needs 1 slots
12/03/16 19:36:26 INFO mapred.TaskTracker: About to purge task: attempt_20120316193605670_0001_m_000003_0
12/03/16 19:36:26 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:36:26 INFO mapred.TaskRunner: attempt_20120316193605670_0001_m_000003_0 done; removing files.
12/03/16 19:36:26 INFO mapred.IndexCache: Map ID attempt_20120316193605670_0001_m_000003_0 not found in cache
12/03/16 19:36:26 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0001_m_000001_0 which needs 1 slots
12/03/16 19:36:26 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0001_m_000001_0 which needs 1 slots
12/03/16 19:36:26 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:36:26 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0001_m_-665852803
12/03/16 19:36:26 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0001_m_-665852803 spawned.
12/03/16 19:36:26 INFO mapred.JvmManager: Killing JVM: jvm_20120316193605670_0001_m_2008615856
12/03/16 19:36:26 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0001_m_000003_0, Status : SUCCEEDED
12/03/16 19:36:29 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0001_m_776785856
12/03/16 19:36:29 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0001_m_776785856 spawned.
12/03/16 19:36:29 INFO mapred.TaskTracker: JVM with ID: jvm_20120316193605670_0001_m_-665852803 given task: attempt_20120316193605670_0001_m_000000_0
12/03/16 19:36:29 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0001_m_2008615856 exited with exit code 0. Number of tasks it ran: 1
12/03/16 19:36:29 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40297
12/03/16 19:36:29 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40297
12/03/16 19:36:29 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0005 with negotiated timeout 40000 for client /127.0.0.1:40297
12/03/16 19:36:29 INFO mapred.TaskTracker: JVM with ID: jvm_20120316193605670_0001_m_776785856 given task: attempt_20120316193605670_0001_m_000001_0
12/03/16 19:36:29 INFO mapred.TaskTracker: attempt_20120316193605670_0001_m_000000_0 1.0% file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/directOutputFormatTest/mr_input/inputFile.txt:39+40
12/03/16 19:36:29 INFO mapred.TaskTracker: Task attempt_20120316193605670_0001_m_000000_0 is done.
12/03/16 19:36:29 INFO mapred.TaskTracker: reported output size for attempt_20120316193605670_0001_m_000000_0 was -1
12/03/16 19:36:29 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:36:30 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40301
12/03/16 19:36:30 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40301
12/03/16 19:36:30 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0006 with negotiated timeout 40000 for client /127.0.0.1:40301
12/03/16 19:36:30 WARN server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x1361d028cfb0005, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
    at java.lang.Thread.run(Thread.java:619)
12/03/16 19:36:30 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40297 which had sessionid 0x1361d028cfb0005
12/03/16 19:36:30 INFO mapred.TaskTracker: attempt_20120316193605670_0001_m_000001_0 1.0% file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/directOutputFormatTest/mr_input/inputFile.txt:0+39
12/03/16 19:36:30 INFO mapred.TaskTracker: Task attempt_20120316193605670_0001_m_000001_0 is done.
12/03/16 19:36:30 INFO mapred.TaskTracker: reported output size for attempt_20120316193605670_0001_m_000001_0 was -1
12/03/16 19:36:30 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2
12/03/16 19:36:30 WARN util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -13552: No such process
12/03/16 19:36:30 INFO util.ProcessTree: Killing all processes in the process group 13552 with SIGTERM. Exit code 1
12/03/16 19:36:30 WARN server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x1361d028cfb0006, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
    at java.lang.Thread.run(Thread.java:619)
12/03/16 19:36:30 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40301 which had sessionid 0x1361d028cfb0006
12/03/16 19:36:30 WARN util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -13593: No such process
12/03/16 19:36:30 INFO util.ProcessTree: Killing all processes in the process group 13593 with SIGTERM. Exit code 1
12/03/16 19:36:32 INFO mapred.JobInProgress: Task 'attempt_20120316193605670_0001_m_000000_0' has completed task_20120316193605670_0001_m_000000 successfully.
12/03/16 19:36:32 INFO mapred.JobInProgress: Task 'attempt_20120316193605670_0001_m_000001_0' has completed task_20120316193605670_0001_m_000001 successfully.
12/03/16 19:36:32 INFO mapred.JobTracker: Adding task (JOB_CLEANUP) 'attempt_20120316193605670_0001_m_000002_0' to tip task_20120316193605670_0001_m_000002, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:36:32 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0001_m_000002_0 task's state:UNASSIGNED
12/03/16 19:36:32 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0001_m_000002_0 which needs 1 slots
12/03/16 19:36:32 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20120316193605670_0001_m_000002_0 which needs 1 slots
12/03/16 19:36:32 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:36:32 INFO mapred.JvmManager: Killing JVM: jvm_20120316193605670_0001_m_776785856
12/03/16 19:36:32 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0001_m_000000_0, Status : SUCCEEDED
12/03/16 19:36:32 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0001_m_000001_0, Status : SUCCEEDED
12/03/16 19:36:33 INFO mapred.JobClient: map 100% reduce 0%
12/03/16 19:36:35 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0001_m_1834237451
12/03/16 19:36:35 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0001_m_1834237451 spawned.
12/03/16 19:36:35 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0001_m_776785856 exited with exit code 0. Number of tasks it ran: 1
12/03/16 19:36:35 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0001_m_-665852803 exited with exit code 0. Number of tasks it ran: 1
12/03/16 19:36:36 INFO mapred.TaskTracker: JVM with ID: jvm_20120316193605670_0001_m_1834237451 given task: attempt_20120316193605670_0001_m_000002_0
12/03/16 19:36:36 INFO mapred.TaskTracker: attempt_20120316193605670_0001_m_000002_0 0.0%
12/03/16 19:36:36 INFO mapred.TaskTracker: attempt_20120316193605670_0001_m_000002_0 0.0% cleanup
12/03/16 19:36:36 INFO mapred.TaskTracker: Task attempt_20120316193605670_0001_m_000002_0 is done.
12/03/16 19:36:36 INFO mapred.TaskTracker: reported output size for attempt_20120316193605670_0001_m_000002_0 was -1
12/03/16 19:36:36 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2
12/03/16 19:36:36 WARN util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -13715: No such process
12/03/16 19:36:36 INFO util.ProcessTree: Killing all processes in the process group 13715 with SIGTERM. Exit code 1
12/03/16 19:36:38 INFO mapred.JobInProgress: Task 'attempt_20120316193605670_0001_m_000002_0' has completed task_20120316193605670_0001_m_000002 successfully.
12/03/16 19:36:38 INFO mapred.JobInProgress: Job job_20120316193605670_0001 has completed successfully.
12/03/16 19:36:38 INFO mapred.JobInProgress$JobSummary: jobId=job_20120316193605670_0001,submitTime=1331926581827,launchTime=1331926581906,finishTime=1331926598235,numMaps=2,numSlotsPerMap=1,numReduces=0,numSlotsPerReduce=1,user=hortonal,queue=default,status=SUCCEEDED,mapSlotSeconds=12,reduceSlotsSeconds=0,clusterMapCapacity=2,clusterReduceCapacity=2 12/03/16 19:36:38 INFO mapred.JobHistory: Moving file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/localhost_1331926566105_job_20120316193605670_0001_hortonal_directOutputFormatTest to file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/done 12/03/16 19:36:38 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0001_m_000000_0' 12/03/16 19:36:38 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0001_m_000001_0' 12/03/16 19:36:38 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0001_m_000002_0' 12/03/16 19:36:38 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0001_m_000003_0' 12/03/16 19:36:38 INFO mapred.TaskTracker: Received 'KillJobAction' for job: job_20120316193605670_0001 12/03/16 19:36:38 INFO mapred.TaskRunner: attempt_20120316193605670_0001_m_000000_0 done; removing files. 12/03/16 19:36:38 INFO mapred.IndexCache: Map ID attempt_20120316193605670_0001_m_000000_0 not found in cache 12/03/16 19:36:38 INFO mapred.TaskRunner: attempt_20120316193605670_0001_m_000001_0 done; removing files. 12/03/16 19:36:38 INFO mapred.IndexCache: Map ID attempt_20120316193605670_0001_m_000001_0 not found in cache 12/03/16 19:36:38 INFO mapred.TaskRunner: attempt_20120316193605670_0001_m_000002_0 done; removing files. 
12/03/16 19:36:38 INFO mapred.JobHistory: Moving file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/localhost_1331926566105_job_20120316193605670_0001_conf.xml to file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/done 12/03/16 19:36:38 INFO mapred.IndexCache: Map ID attempt_20120316193605670_0001_m_000002_0 not found in cache 12/03/16 19:36:38 INFO mapred.UserLogCleaner: Adding job_20120316193605670_0001 for user-log deletion with retainTimeStamp:1332012998262 12/03/16 19:36:39 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0001_m_000002_0, Status : SUCCEEDED 12/03/16 19:36:39 INFO mapred.JobClient: Job complete: job_20120316193605670_0001 12/03/16 19:36:39 INFO mapred.JobClient: Counters: 12 12/03/16 19:36:39 INFO mapred.JobClient: Job Counters 12/03/16 19:36:39 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=12625 12/03/16 19:36:39 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0 12/03/16 19:36:39 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0 12/03/16 19:36:39 INFO mapred.JobClient: Rack-local map tasks=2 12/03/16 19:36:39 INFO mapred.JobClient: Launched map tasks=2 12/03/16 19:36:39 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0 12/03/16 19:36:39 INFO mapred.JobClient: FileSystemCounters 12/03/16 19:36:39 INFO mapred.JobClient: FILE_BYTES_READ=1064 12/03/16 19:36:39 INFO mapred.JobClient: Map-Reduce Framework 12/03/16 19:36:39 INFO mapred.JobClient: Map input records=3 12/03/16 19:36:39 INFO mapred.JobClient: Spilled Records=0 12/03/16 19:36:39 INFO mapred.JobClient: Map input bytes=79 12/03/16 19:36:39 INFO mapred.JobClient: Map output records=3 12/03/16 19:36:39 INFO mapred.JobClient: SPLIT_RAW_BYTES=422 Hive history file=/tmp/hortonal/hive_job_log_hortonal_201203161936_1687770510.txt 12/03/16 19:36:39 INFO 
exec.HiveHistory: Hive history file=/tmp/hortonal/hive_job_log_hortonal_201203161936_1687770510.txt 12/03/16 19:36:39 INFO ql.Driver: 12/03/16 19:36:39 INFO parse.ParseDriver: Parsing command: CREATE DATABASE IF NOT EXISTS directhcatoutputformattest LOCATION '/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/directHCatOutputFormatTest/DB_directHCatOutputFormatTest' 12/03/16 19:36:39 INFO parse.ParseDriver: Parse Completed 12/03/16 19:36:39 INFO metastore.HiveMetaStore: 0: get_databases: directhcatoutputformattest 12/03/16 19:36:39 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 12/03/16 19:36:39 INFO metastore.ObjectStore: ObjectStore, initialize called 12/03/16 19:36:39 INFO metastore.ObjectStore: Initialized ObjectStore 12/03/16 19:36:39 INFO ql.Driver: Semantic Analysis Completed 12/03/16 19:36:39 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null) 12/03/16 19:36:39 INFO ql.Driver: 12/03/16 19:36:39 INFO ql.Driver: 12/03/16 19:36:39 INFO ql.Driver: Starting command: CREATE DATABASE IF NOT EXISTS directhcatoutputformattest LOCATION '/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/directHCatOutputFormatTest/DB_directHCatOutputFormatTest' 12/03/16 19:36:39 INFO metastore.HiveMetaStore: 0: create_database: directhcatoutputformattest /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/directHCatOutputFormatTest/DB_directHCatOutputFormatTest null 12/03/16 19:36:39 INFO metastore.HiveMetaStore: 0: get_database: directhcatoutputformattest 12/03/16 19:36:40 INFO ql.Driver: OK 12/03/16 19:36:40 INFO ql.Driver: OK 12/03/16 19:36:40 INFO ql.Driver: 12/03/16 19:36:40 INFO ql.Driver: 12/03/16 19:36:40 INFO ql.Driver: 12/03/16 19:36:40 INFO parse.ParseDriver: Parsing command: CREATE TABLE 
directhcatoutputformattest.directhcatoutputformattest_4039773820211212643(key int, english string, spanish string) STORED BY 'org.apache.hcatalog.hbase.HBaseHCatStorageHandler'TBLPROPERTIES ('hbase.columns.mapping'=':key,my_family:english,my_family:spanish') 12/03/16 19:36:40 INFO parse.ParseDriver: Parse Completed 12/03/16 19:36:40 INFO parse.SemanticAnalyzer: Starting Semantic Analysis 12/03/16 19:36:40 INFO parse.SemanticAnalyzer: Creating table directhcatoutputformattest.directhcatoutputformattest_4039773820211212643 position=13 12/03/16 19:36:40 INFO ql.Driver: Semantic Analysis Completed 12/03/16 19:36:40 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null) 12/03/16 19:36:40 INFO ql.Driver: 12/03/16 19:36:40 INFO ql.Driver: 12/03/16 19:36:40 INFO ql.Driver: Starting command: CREATE TABLE directhcatoutputformattest.directhcatoutputformattest_4039773820211212643(key int, english string, spanish string) STORED BY 'org.apache.hcatalog.hbase.HBaseHCatStorageHandler'TBLPROPERTIES ('hbase.columns.mapping'=':key,my_family:english,my_family:spanish') 12/03/16 19:36:40 INFO exec.DDLTask: Use StorageHandler-supplied org.apache.hadoop.hive.hbase.HBaseSerDe for table directhcatoutputformattest.directhcatoutputformattest_4039773820211212643 12/03/16 19:36:40 INFO hive.log: DDL: struct directhcatoutputformattest_4039773820211212643 { i32 key, string english, string spanish} 12/03/16 19:36:40 INFO hive.log: DDL: struct directhcatoutputformattest_4039773820211212643 { i32 key, string english, string spanish} 12/03/16 19:36:40 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=180000 watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@10dd4c8 12/03/16 19:36:40 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343 12/03/16 19:36:40 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS 
configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration. 12/03/16 19:36:40 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session 12/03/16 19:36:40 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40336 12/03/16 19:36:40 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40336 12/03/16 19:36:40 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 13124@hrt8n25.cc1.ygridcore.net 12/03/16 19:36:40 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0007 with negotiated timeout 40000 for client /127.0.0.1:40336 12/03/16 19:36:40 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb0007, negotiated timeout = 40000 12/03/16 19:36:40 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb0007 12/03/16 19:36:40 INFO zookeeper.ZooKeeper: Session: 0x1361d028cfb0007 closed 12/03/16 19:36:40 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40336 which had sessionid 0x1361d028cfb0007 12/03/16 19:36:40 INFO zookeeper.ClientCnxn: EventThread shut down 12/03/16 19:36:40 INFO handler.CreateTableHandler: Attemping to create the table directhcatoutputformattest.directhcatoutputformattest_4039773820211212643 12/03/16 19:36:40 INFO regionserver.HRegion: creating HRegion directhcatoutputformattest.directhcatoutputformattest_4039773820211212643 HTD == {NAME => 'directhcatoutputformattest.directhcatoutputformattest_4039773820211212643', FAMILIES => [{NAME => 'my_family', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '2147483647', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]} RootDir = 
file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase Table name == directhcatoutputformattest.directhcatoutputformattest_4039773820211212643 12/03/16 19:36:40 INFO wal.HLog: HLog configuration: blocksize=32 MB, rollsize=30.4 MB, enabled=true, optionallogflushinternal=1000ms 12/03/16 19:36:40 INFO wal.HLog: for /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/directhcatoutputformattest.directhcatoutputformattest_4039773820211212643/5c27c681b665d178890076e4158076f1/.logs/hlog.1331926600214 12/03/16 19:36:40 INFO wal.HLog: getNumCurrentReplicas--HDFS-826 not available; hdfs_out=org.apache.hadoop.fs.FSDataOutputStream@aea981, exception=org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.getNumCurrentReplicas() 12/03/16 19:36:40 INFO regionserver.HRegion: Setting up tabledescriptor config now ... 12/03/16 19:36:40 INFO regionserver.HRegion: Onlined directhcatoutputformattest.directhcatoutputformattest_4039773820211212643,,1331926600188.5c27c681b665d178890076e4158076f1.; next sequenceid=1 12/03/16 19:36:40 INFO catalog.MetaEditor: Added 1 regions in META 12/03/16 19:36:40 INFO regionserver.HRegion: Closed directhcatoutputformattest.directhcatoutputformattest_4039773820211212643,,1331926600188.5c27c681b665d178890076e4158076f1. 
12/03/16 19:36:40 INFO wal.HLog: MASTER_TABLE_OPERATIONS-hrt8n25.cc1.ygridcore.net,55642,1331926568427-0.logSyncer exiting
12/03/16 19:36:40 INFO master.AssignmentManager: Bulk assigning 1 region(s) round-robin across 1 server(s)
12/03/16 19:36:40 INFO master.AssignmentManager: Bulk assigning done
12/03/16 19:36:40 INFO master.AssignmentManager: hrt8n25.cc1.ygridcore.net,48635,1331926568651 unassigned znodes=1 of total=1
12/03/16 19:36:40 INFO regionserver.HRegionServer: Received request to open 1 region(s)
12/03/16 19:36:40 INFO regionserver.HRegionServer: Received request to open region: directhcatoutputformattest.directhcatoutputformattest_4039773820211212643,,1331926600188.5c27c681b665d178890076e4158076f1.
12/03/16 19:36:40 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
12/03/16 19:36:40 INFO regionserver.HRegion: Onlined directhcatoutputformattest.directhcatoutputformattest_4039773820211212643,,1331926600188.5c27c681b665d178890076e4158076f1.; next sequenceid=1
12/03/16 19:36:40 INFO regionserver.HRegionServer: Post open deploy tasks for region=directhcatoutputformattest.directhcatoutputformattest_4039773820211212643,,1331926600188.5c27c681b665d178890076e4158076f1., daughter=false
12/03/16 19:36:40 INFO catalog.MetaEditor: Updated row directhcatoutputformattest.directhcatoutputformattest_4039773820211212643,,1331926600188.5c27c681b665d178890076e4158076f1. with server=hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:36:40 INFO regionserver.HRegionServer: Done with post open deploy task for region=directhcatoutputformattest.directhcatoutputformattest_4039773820211212643,,1331926600188.5c27c681b665d178890076e4158076f1., daughter=false
12/03/16 19:36:40 INFO master.AssignmentManager: The master has opened the region directhcatoutputformattest.directhcatoutputformattest_4039773820211212643,,1331926600188.5c27c681b665d178890076e4158076f1. that was online on hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:36:41 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=1000000 watcher=org.apache.hcatalog.hbase.snapshot.ZKUtil$ZKWatcher@132967d
12/03/16 19:36:41 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343
12/03/16 19:36:41 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/03/16 19:36:41 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session
12/03/16 19:36:41 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40338
12/03/16 19:36:41 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40338
12/03/16 19:36:41 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0001_m_1834237451 exited with exit code 0. Number of tasks it ran: 1
12/03/16 19:36:42 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0008 with negotiated timeout 40000 for client /127.0.0.1:40338
12/03/16 19:36:42 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb0008, negotiated timeout = 40000
12/03/16 19:36:42 INFO snapshot.ZKBasedRevisionManager: Created root znodes for revision manager.
12/03/16 19:36:42 INFO metastore.HiveMetaStore: 0: create_table: db=directhcatoutputformattest tbl=directhcatoutputformattest_4039773820211212643
12/03/16 19:36:42 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
12/03/16 19:36:42 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MColumnDescriptor [Table : CDS, InheritanceStrategy : new-table] 12/03/16 19:36:42 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MSerDeInfo [Table : SERDES, InheritanceStrategy : new-table] 12/03/16 19:36:42 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 12/03/16 19:36:42 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MStorageDescriptor [Table : SDS, InheritanceStrategy : new-table] 12/03/16 19:36:42 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MTable [Table : TBLS, InheritanceStrategy : new-table] 12/03/16 19:36:42 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MSerDeInfo.parameters [Table : SERDE_PARAMS] 12/03/16 19:36:42 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MTable.parameters [Table : TABLE_PARAMS] 12/03/16 19:36:42 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MTable.partitionKeys [Table : PARTITION_KEYS] 12/03/16 19:36:42 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.bucketCols [Table : BUCKETING_COLS] 12/03/16 19:36:42 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.parameters [Table : SD_PARAMS] 12/03/16 19:36:42 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.sortCols [Table : SORT_COLS] 12/03/16 19:36:42 INFO DataNucleus.Persistence: Managing Persistence of Field : 
org.apache.hadoop.hive.metastore.model.MColumnDescriptor.cols [Table : COLUMNS_V2] 12/03/16 19:36:42 INFO Datastore.Schema: Creating table SERDES 12/03/16 19:36:42 INFO Datastore.Schema: Creating table TBLS 12/03/16 19:36:42 INFO Datastore.Schema: Creating table SDS 12/03/16 19:36:42 INFO Datastore.Schema: Creating table CDS 12/03/16 19:36:42 INFO Datastore.Schema: Creating table PARTITION_KEYS 12/03/16 19:36:42 INFO Datastore.Schema: Creating table SERDE_PARAMS 12/03/16 19:36:42 INFO Datastore.Schema: Creating table SORT_COLS 12/03/16 19:36:42 INFO Datastore.Schema: Creating table COLUMNS_V2 12/03/16 19:36:42 INFO Datastore.Schema: Creating table BUCKETING_COLS 12/03/16 19:36:42 INFO Datastore.Schema: Creating table TABLE_PARAMS 12/03/16 19:36:42 INFO Datastore.Schema: Creating table SD_PARAMS 12/03/16 19:36:42 INFO Datastore.Schema: Creating foreign key constraint : "TBLS_FK2" in catalog "" schema "" 12/03/16 19:36:42 INFO Datastore.Schema: Creating foreign key constraint : "TBLS_FK1" in catalog "" schema "" 12/03/16 19:36:42 INFO Datastore.Schema: Creating index "TBLS_N50" in catalog "" schema "" 12/03/16 19:36:42 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193642800. java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193642800. 
at org.apache.derby.iapi.error.StandardException.newWarningCommon(Unknown Source) at org.apache.derby.iapi.error.StandardException.newWarning(Unknown Source) at org.apache.derby.impl.sql.execute.CreateIndexConstantAction.executeConstantAction(Unknown Source) at org.apache.derby.impl.sql.execute.MiscResultSet.open(Unknown Source) at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source) at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source) at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source) at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source) at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264) at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264) at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:730) at org.datanucleus.store.rdbms.table.TableImpl.createIndices(TableImpl.java:652) at org.datanucleus.store.rdbms.table.TableImpl.createConstraints(TableImpl.java:434) at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:2768) at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:2503) at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2148) at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:113) at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:986) at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:952) at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:919) at org.datanucleus.store.mapped.MappedStoreManager.getDatastoreClass(MappedStoreManager.java:356) at org.datanucleus.store.rdbms.query.legacy.ExtentHelper.getExtent(ExtentHelper.java:48) at 
org.datanucleus.store.rdbms.RDBMSStoreManager.getExtent(RDBMSStoreManager.java:1332) at org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4149) at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411) at org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312) at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225) at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175) at org.datanucleus.store.query.Query.executeQuery(Query.java:1628) at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245) at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499) at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:266) at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:799) at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:733) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.is_table_exists(HiveMetaStore.java:1101) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1017) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$700(HiveMetaStore.java:141) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:1079) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:1076) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:360) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table(HiveMetaStore.java:1076) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:400) at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:540) at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3479) at 
org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:225) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931) at org.apache.hcatalog.cli.HCatDriver.run(HCatDriver.java:42) at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat.directHCatOutputFormatTest(TestHBaseDirectOutputFormat.java:210) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41) at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.ParentRunner.run(ParentRunner.java:220) at 
junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 12/03/16 19:36:42 INFO Datastore.Schema: Creating index "UniqueTable" in catalog "" schema "" 12/03/16 19:36:42 INFO Datastore.Schema: Creating index "TBLS_N49" in catalog "" schema "" 12/03/16 19:36:42 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193642780. java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193642780. at org.apache.derby.iapi.error.StandardException.newWarningCommon(Unknown Source) at org.apache.derby.iapi.error.StandardException.newWarning(Unknown Source) at org.apache.derby.impl.sql.execute.CreateIndexConstantAction.executeConstantAction(Unknown Source) at org.apache.derby.impl.sql.execute.MiscResultSet.open(Unknown Source) at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source) at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source) at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source) at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source) at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264) at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264) at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:730) at org.datanucleus.store.rdbms.table.TableImpl.createIndices(TableImpl.java:652) at org.datanucleus.store.rdbms.table.TableImpl.createConstraints(TableImpl.java:434) at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:2768) at 
org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:2503) at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2148) at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:113) at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:986) at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:952) at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:919) at org.datanucleus.store.mapped.MappedStoreManager.getDatastoreClass(MappedStoreManager.java:356) at org.datanucleus.store.rdbms.query.legacy.ExtentHelper.getExtent(ExtentHelper.java:48) at org.datanucleus.store.rdbms.RDBMSStoreManager.getExtent(RDBMSStoreManager.java:1332) at org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4149) at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411) at org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312) at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225) at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175) at org.datanucleus.store.query.Query.executeQuery(Query.java:1628) at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245) at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499) at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:266) at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:799) at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:733) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.is_table_exists(HiveMetaStore.java:1101) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1017) at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$700(HiveMetaStore.java:141) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:1079) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:1076) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:360) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table(HiveMetaStore.java:1076) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:400) at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:540) at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3479) at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:225) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931) at org.apache.hcatalog.cli.HCatDriver.run(HCatDriver.java:42) at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat.directHCatOutputFormatTest(TestHBaseDirectOutputFormat.java:210) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41) at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.ParentRunner.run(ParentRunner.java:220) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 12/03/16 19:36:42 INFO Datastore.Schema: Creating foreign key constraint : "SDS_FK1" in catalog "" schema "" 12/03/16 19:36:42 INFO Datastore.Schema: Creating foreign key constraint : "SDS_FK2" in catalog "" schema "" 12/03/16 19:36:42 INFO Datastore.Schema: Creating index "SDS_N50" in catalog "" schema "" 12/03/16 19:36:42 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193642850. java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193642850. 
    at org.apache.derby.iapi.error.StandardException.newWarningCommon(Unknown Source)
    at org.apache.derby.iapi.error.StandardException.newWarning(Unknown Source)
    at org.apache.derby.impl.sql.execute.CreateIndexConstantAction.executeConstantAction(Unknown Source)
    at org.apache.derby.impl.sql.execute.MiscResultSet.open(Unknown Source)
    at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
    at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
    at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
    at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:730)
    at org.datanucleus.store.rdbms.table.TableImpl.createIndices(TableImpl.java:652)
    at org.datanucleus.store.rdbms.table.TableImpl.createConstraints(TableImpl.java:434)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:2768)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:2503)
    at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2148)
    at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:113)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:986)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:952)
    at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:919)
    at org.datanucleus.store.mapped.MappedStoreManager.getDatastoreClass(MappedStoreManager.java:356)
    at org.datanucleus.store.rdbms.query.legacy.ExtentHelper.getExtent(ExtentHelper.java:48)
    at org.datanucleus.store.rdbms.RDBMSStoreManager.getExtent(RDBMSStoreManager.java:1332)
    at org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4149)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411)
    at org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1628)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:266)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:799)
    at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:733)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.is_table_exists(HiveMetaStore.java:1101)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1017)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$700(HiveMetaStore.java:141)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:1079)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:1076)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:360)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table(HiveMetaStore.java:1076)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:400)
    at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:540)
    at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3479)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:225)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
    at org.apache.hcatalog.cli.HCatDriver.run(HCatDriver.java:42)
    at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat.directHCatOutputFormatTest(TestHBaseDirectOutputFormat.java:210)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
    at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
    at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
    at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
12/03/16 19:36:42 INFO Datastore.Schema: Creating index "SDS_N49" in catalog "" schema ""
12/03/16 19:36:42 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193642870.
java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193642870.
    [stack trace identical to the one above omitted]
12/03/16 19:36:42 INFO Datastore.Schema: Creating foreign key constraint : "PARTITION_KEYS_FK1" in catalog "" schema ""
12/03/16 19:36:42 INFO Datastore.Schema: Creating index "PARTITION_KEYS_N49" in catalog "" schema ""
12/03/16 19:36:42 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193642900.
java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193642900.
    [stack trace identical to the one above omitted]
12/03/16 19:36:42 INFO Datastore.Schema: Creating foreign key constraint : "SERDE_PARAMS_FK1" in catalog "" schema ""
12/03/16 19:36:42 INFO Datastore.Schema: Creating index "SERDE_PARAMS_N49" in catalog "" schema ""
12/03/16 19:36:42 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193642930.
java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193642930.
    [stack trace identical to the one above omitted]
12/03/16 19:36:42 INFO Datastore.Schema: Creating foreign key constraint : "SORT_COLS_FK1" in catalog "" schema ""
12/03/16 19:36:42 INFO Datastore.Schema: Creating index "SORT_COLS_N49" in catalog "" schema ""
12/03/16 19:36:42 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193642950.
java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193642950.
    [stack trace identical to the one above omitted]
12/03/16 19:36:42 INFO Datastore.Schema: Creating foreign key constraint : "COLUMNS_V2_FK1" in catalog "" schema ""
12/03/16 19:36:43 INFO Datastore.Schema: Creating index "COLUMNS_V2_N49" in catalog "" schema ""
12/03/16 19:36:43 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193642980.
java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193642980.
    [stack trace identical to the one above omitted]
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41) at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.ParentRunner.run(ParentRunner.java:220) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906) 12/03/16 19:36:43 INFO Datastore.Schema: Creating foreign key constraint : "BUCKETING_COLS_FK1" in catalog "" schema "" 12/03/16 19:36:43 INFO Datastore.Schema: Creating index "BUCKETING_COLS_N49" in catalog "" schema "" 12/03/16 19:36:43 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193643020. java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193643020. 
12/03/16 19:36:43 INFO Datastore.Schema: Creating foreign key constraint : "TABLE_PARAMS_FK1" in catalog "" schema ""
12/03/16 19:36:43 INFO Datastore.Schema: Creating index "TABLE_PARAMS_N49" in catalog "" schema ""
12/03/16 19:36:43 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193643050.
java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193643050.
12/03/16 19:36:43 INFO Datastore.Schema: Creating foreign key constraint : "SD_PARAMS_FK1" in catalog "" schema ""
12/03/16 19:36:43 INFO Datastore.Schema: Creating index "SD_PARAMS_N49" in catalog "" schema ""
12/03/16 19:36:43 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193643080.
java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193643080.
12/03/16 19:36:43 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MColumnDescriptor
12/03/16 19:36:43 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MSerDeInfo
12/03/16 19:36:43 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MStorageDescriptor
12/03/16 19:36:43 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MTable
12/03/16 19:36:43 INFO DataNucleus.MetaData: Listener found initialisation for persistable class org.apache.hadoop.hive.metastore.model.MFieldSchema
12/03/16 19:36:43 INFO ql.Driver: OK
12/03/16 19:36:43 INFO ql.Driver: OK
12/03/16 19:36:43 INFO ql.Driver:
12/03/16 19:36:43 INFO ql.Driver:
12/03/16 19:36:43 INFO metastore.HiveMetaStore: 0: get_table : db=directhcatoutputformattest tbl=directhcatoutputformattest_4039773820211212643
12/03/16 19:36:43 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
12/03/16 19:36:43 INFO metastore.ObjectStore: ObjectStore, initialize called
12/03/16 19:36:43 INFO metastore.ObjectStore: Initialized ObjectStore
12/03/16 19:36:43 INFO metastore.HiveMetaStore: 0: get_index_names : db=directhcatoutputformattest tbl=directhcatoutputformattest_4039773820211212643
12/03/16 19:36:43 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
12/03/16 19:36:43 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
12/03/16 19:36:43 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MIndex [Table : IDXS, InheritanceStrategy : new-table]
12/03/16 19:36:43 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MIndex.parameters [Table : INDEX_PARAMS]
12/03/16 19:36:43 INFO Datastore.Schema: Creating table IDXS
12/03/16 19:36:43 INFO Datastore.Schema: Creating table INDEX_PARAMS
12/03/16 19:36:43 INFO Datastore.Schema: Creating foreign key constraint : "IDXS_FK1" in catalog "" schema ""
12/03/16 19:36:43 INFO Datastore.Schema: Creating foreign key constraint : "IDXS_FK2" in catalog "" schema ""
12/03/16 19:36:43 INFO Datastore.Schema: Creating foreign key constraint : "IDXS_FK3" in catalog "" schema ""
12/03/16 19:36:43 INFO Datastore.Schema: Creating index "UniqueINDEX" in catalog "" schema ""
12/03/16 19:36:43 INFO Datastore.Schema: Creating index "IDXS_N51" in catalog "" schema ""
12/03/16 19:36:43 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193643520.
java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193643520.
	at org.apache.derby.iapi.error.StandardException.newWarningCommon(Unknown Source)
	at org.apache.derby.iapi.error.StandardException.newWarning(Unknown Source)
	at org.apache.derby.impl.sql.execute.CreateIndexConstantAction.executeConstantAction(Unknown Source)
	at org.apache.derby.impl.sql.execute.MiscResultSet.open(Unknown Source)
	at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
	at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
	at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
	at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:730)
	at org.datanucleus.store.rdbms.table.TableImpl.createIndices(TableImpl.java:652)
	at org.datanucleus.store.rdbms.table.TableImpl.createConstraints(TableImpl.java:434)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:2768)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:2503)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2148)
	at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:113)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:986)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:952)
	at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:919)
	at org.datanucleus.store.mapped.MappedStoreManager.getDatastoreClass(MappedStoreManager.java:356)
	at org.datanucleus.store.rdbms.query.legacy.ExtentHelper.getExtent(ExtentHelper.java:48)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.getExtent(RDBMSStoreManager.java:1332)
	at org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4149)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411)
	at org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175)
	at org.datanucleus.store.query.Query.executeQuery(Query.java:1628)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245)
	at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499)
	at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:266)
	at org.apache.hadoop.hive.metastore.ObjectStore.listIndexNames(ObjectStore.java:2233)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$42.run(HiveMetaStore.java:2779)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$42.run(HiveMetaStore.java:2776)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:360)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_index_names(HiveMetaStore.java:2776)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listIndexNames(HiveMetaStoreClient.java:899)
	at org.apache.hcatalog.mapreduce.HCatOutputFormat.setOutput(HCatOutputFormat.java:82)
	at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat.configureJob(TestHBaseDirectOutputFormat.java:381)
	at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat.directHCatOutputFormatTest(TestHBaseDirectOutputFormat.java:230)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
	at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
12/03/16 19:36:43 INFO Datastore.Schema: Creating index "IDXS_N50" in catalog "" schema ""
12/03/16 19:36:43 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193643570.
java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193643570.
	at org.apache.derby.iapi.error.StandardException.newWarningCommon(Unknown Source)
	at org.apache.derby.iapi.error.StandardException.newWarning(Unknown Source)
	at org.apache.derby.impl.sql.execute.CreateIndexConstantAction.executeConstantAction(Unknown Source)
	at org.apache.derby.impl.sql.execute.MiscResultSet.open(Unknown Source)
	at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
	at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
	at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
	at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:730)
	at org.datanucleus.store.rdbms.table.TableImpl.createIndices(TableImpl.java:652)
	at org.datanucleus.store.rdbms.table.TableImpl.createConstraints(TableImpl.java:434)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:2768)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:2503)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2148)
	at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:113)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:986)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:952)
	at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:919)
	at org.datanucleus.store.mapped.MappedStoreManager.getDatastoreClass(MappedStoreManager.java:356)
	at org.datanucleus.store.rdbms.query.legacy.ExtentHelper.getExtent(ExtentHelper.java:48)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.getExtent(RDBMSStoreManager.java:1332)
	at org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4149)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411)
	at org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175)
	at org.datanucleus.store.query.Query.executeQuery(Query.java:1628)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245)
	at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499)
	at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:266)
	at org.apache.hadoop.hive.metastore.ObjectStore.listIndexNames(ObjectStore.java:2233)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$42.run(HiveMetaStore.java:2779)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$42.run(HiveMetaStore.java:2776)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:360)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_index_names(HiveMetaStore.java:2776)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listIndexNames(HiveMetaStoreClient.java:899)
	at org.apache.hcatalog.mapreduce.HCatOutputFormat.setOutput(HCatOutputFormat.java:82)
	at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat.configureJob(TestHBaseDirectOutputFormat.java:381)
	at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat.directHCatOutputFormatTest(TestHBaseDirectOutputFormat.java:230)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
	at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
12/03/16 19:36:43 INFO Datastore.Schema: Creating index "IDXS_N49" in catalog "" schema ""
12/03/16 19:36:43 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193643540.
java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193643540.
	at org.apache.derby.iapi.error.StandardException.newWarningCommon(Unknown Source)
	at org.apache.derby.iapi.error.StandardException.newWarning(Unknown Source)
	at org.apache.derby.impl.sql.execute.CreateIndexConstantAction.executeConstantAction(Unknown Source)
	at org.apache.derby.impl.sql.execute.MiscResultSet.open(Unknown Source)
	at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
	at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
	at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
	at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:730)
	at org.datanucleus.store.rdbms.table.TableImpl.createIndices(TableImpl.java:652)
	at org.datanucleus.store.rdbms.table.TableImpl.createConstraints(TableImpl.java:434)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:2768)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:2503)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2148)
	at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:113)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:986)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:952)
	at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:919)
	at org.datanucleus.store.mapped.MappedStoreManager.getDatastoreClass(MappedStoreManager.java:356)
	at org.datanucleus.store.rdbms.query.legacy.ExtentHelper.getExtent(ExtentHelper.java:48)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.getExtent(RDBMSStoreManager.java:1332)
	at org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4149)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411)
	at org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175)
	at org.datanucleus.store.query.Query.executeQuery(Query.java:1628)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245)
	at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499)
	at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:266)
	at org.apache.hadoop.hive.metastore.ObjectStore.listIndexNames(ObjectStore.java:2233)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$42.run(HiveMetaStore.java:2779)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$42.run(HiveMetaStore.java:2776)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:360)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_index_names(HiveMetaStore.java:2776)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listIndexNames(HiveMetaStoreClient.java:899)
	at org.apache.hcatalog.mapreduce.HCatOutputFormat.setOutput(HCatOutputFormat.java:82)
	at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat.configureJob(TestHBaseDirectOutputFormat.java:381)
	at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat.directHCatOutputFormatTest(TestHBaseDirectOutputFormat.java:230)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
	at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
12/03/16 19:36:43 INFO Datastore.Schema: Creating foreign key constraint : "INDEX_PARAMS_FK1" in catalog "" schema ""
12/03/16 19:36:43 INFO Datastore.Schema: Creating index "INDEX_PARAMS_N49" in catalog "" schema ""
12/03/16 19:36:43 WARN DataNucleus.Datastore: SQL Warning : The new index is a duplicate of an existing index: SQL120316193643710.
java.sql.SQLWarning: The new index is a duplicate of an existing index: SQL120316193643710.
	at org.apache.derby.iapi.error.StandardException.newWarningCommon(Unknown Source)
	at org.apache.derby.iapi.error.StandardException.newWarning(Unknown Source)
	at org.apache.derby.impl.sql.execute.CreateIndexConstantAction.executeConstantAction(Unknown Source)
	at org.apache.derby.impl.sql.execute.MiscResultSet.open(Unknown Source)
	at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedStatement.execute(Unknown Source)
	at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
	at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
	at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:730)
	at org.datanucleus.store.rdbms.table.TableImpl.createIndices(TableImpl.java:652)
	at org.datanucleus.store.rdbms.table.TableImpl.createConstraints(TableImpl.java:434)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:2768)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:2503)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2148)
	at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:113)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:986)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:952)
	at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:919)
	at org.datanucleus.store.mapped.MappedStoreManager.getDatastoreClass(MappedStoreManager.java:356)
	at org.datanucleus.store.rdbms.query.legacy.ExtentHelper.getExtent(ExtentHelper.java:48)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.getExtent(RDBMSStoreManager.java:1332)
	at org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4149)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411)
	at org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175)
	at org.datanucleus.store.query.Query.executeQuery(Query.java:1628)
	at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245)
	at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499)
	at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:266)
	at org.apache.hadoop.hive.metastore.ObjectStore.listIndexNames(ObjectStore.java:2233)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$42.run(HiveMetaStore.java:2779)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$42.run(HiveMetaStore.java:2776)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:360)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_index_names(HiveMetaStore.java:2776)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listIndexNames(HiveMetaStoreClient.java:899)
	at org.apache.hcatalog.mapreduce.HCatOutputFormat.setOutput(HCatOutputFormat.java:82)
	at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat.configureJob(TestHBaseDirectOutputFormat.java:381)
	at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat.directHCatOutputFormatTest(TestHBaseDirectOutputFormat.java:230)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
	at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
12/03/16 19:36:43 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=1000000 watcher=org.apache.hcatalog.hbase.snapshot.ZKUtil$ZKWatcher@1c818c4
12/03/16 19:36:43 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343
12/03/16 19:36:43 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found.
If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/03/16 19:36:43 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session
12/03/16 19:36:43 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40344
12/03/16 19:36:43 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40344
12/03/16 19:36:43 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0009 with negotiated timeout 40000 for client /127.0.0.1:40344
12/03/16 19:36:43 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb0009, negotiated timeout = 40000
12/03/16 19:36:43 INFO snapshot.ZKBasedRevisionManager: Created root znodes for revision manager.
12/03/16 19:36:43 INFO snapshot.ZKUtil: Added transaction : revision: 1 ts: 1331941003851
12/03/16 19:36:43 INFO snapshot.ZKUtil: Transaction list stored at /revision-management/data/directhcatoutputformattest.directhcatoutputformattest_4039773820211212643/my_family/runningTxns.
12/03/16 19:36:43 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb0009
12/03/16 19:36:43 INFO zookeeper.ZooKeeper: Session: 0x1361d028cfb0009 closed
12/03/16 19:36:43 INFO zookeeper.ClientCnxn: EventThread shut down
12/03/16 19:36:43 INFO snapshot.ZKUtil: Disconnected to ZooKeeper
12/03/16 19:36:43 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40344 which had sessionid 0x1361d028cfb0009
12/03/16 19:36:43 WARN mapreduce.TableMapReduceUtil: Could not find jar for class class org.apache.hcatalog.hbase.HBaseHCatStorageHandler in order to ship it to the cluster.
12/03/16 19:36:43 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
12/03/16 19:36:43 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
12/03/16 19:36:43 INFO metastore.HiveMetaStore: 0: get_table : db=directhcatoutputformattest tbl=directhcatoutputformattest_4039773820211212643
12/03/16 19:36:43 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
12/03/16 19:36:43 INFO metastore.ObjectStore: ObjectStore, initialize called
12/03/16 19:36:43 INFO metastore.ObjectStore: Initialized ObjectStore
12/03/16 19:36:44 INFO metastore.HiveMetaStore: 0: get_index_names : db=directhcatoutputformattest tbl=directhcatoutputformattest_4039773820211212643
12/03/16 19:36:44 WARN mapreduce.TableMapReduceUtil: Could not find jar for class class org.apache.hcatalog.hbase.HBaseHCatStorageHandler in order to ship it to the cluster.
12/03/16 19:36:44 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
12/03/16 19:36:44 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
12/03/16 19:36:44 INFO metastore.HiveMetaStore: 0: get_table : db=directhcatoutputformattest tbl=directhcatoutputformattest_4039773820211212643
12/03/16 19:36:44 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
12/03/16 19:36:44 INFO metastore.ObjectStore: ObjectStore, initialize called
12/03/16 19:36:44 INFO metastore.ObjectStore: Initialized ObjectStore
12/03/16 19:36:44 INFO metastore.HiveMetaStore: 0: get_index_names : db=directhcatoutputformattest tbl=directhcatoutputformattest_4039773820211212643
12/03/16 19:36:44 WARN mapreduce.TableMapReduceUtil: Could not find jar for class class org.apache.hcatalog.hbase.HBaseHCatStorageHandler in order to ship it to the cluster.
12/03/16 19:36:44 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
12/03/16 19:36:44 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
12/03/16 19:36:44 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/03/16 19:36:47 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/03/16 19:36:47 WARN mapreduce.TableMapReduceUtil: Could not find jar for class class org.apache.hcatalog.hbase.HBaseHCatStorageHandler in order to ship it to the cluster.
12/03/16 19:36:47 INFO input.FileInputFormat: Total input paths to process : 3
12/03/16 19:36:47 INFO mapred.JobTracker: Job job_20120316193605670_0002 added successfully for user 'hortonal' to queue 'default'
12/03/16 19:36:47 INFO mapred.AuditLogger: USER=hortonal IP=127.0.0.1 OPERATION=SUBMIT_JOB TARGET=job_20120316193605670_0002 RESULT=SUCCESS
12/03/16 19:36:47 INFO mapred.JobTracker: Initializing job_20120316193605670_0002
12/03/16 19:36:47 INFO mapred.JobInProgress: Initializing job_20120316193605670_0002
12/03/16 19:36:47 INFO mapred.JobClient: Running job: job_20120316193605670_0002
12/03/16 19:36:47 INFO mapred.JobInProgress: jobToken generated and stored with users keys in /tmp/hadoop-hortonal/mapred/system/job_20120316193605670_0002/jobToken
12/03/16 19:36:47 INFO mapred.JobInProgress: Input size for job job_20120316193605670_0002 = 79. Number of splits = 3
12/03/16 19:36:47 INFO mapred.JobInProgress: tip:task_20120316193605670_0002_m_000000 has split on node:/default-rack/localhost
12/03/16 19:36:47 INFO mapred.JobInProgress: tip:task_20120316193605670_0002_m_000001 has split on node:/default-rack/localhost
12/03/16 19:36:47 INFO mapred.JobInProgress: tip:task_20120316193605670_0002_m_000002 has split on node:/default-rack/localhost
12/03/16 19:36:47 INFO mapred.JobInProgress: Job job_20120316193605670_0002 initialized successfully with 3 map tasks and 0 reduce tasks.
12/03/16 19:36:48 INFO mapred.JobClient: map 0% reduce 0%
12/03/16 19:36:50 INFO mapred.JobTracker: Adding task (JOB_SETUP) 'attempt_20120316193605670_0002_m_000004_0' to tip task_20120316193605670_0002_m_000004, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:36:50 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0002_m_000004_0 task's state:UNASSIGNED
12/03/16 19:36:50 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0002_m_000004_0 which needs 1 slots
12/03/16 19:36:50 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20120316193605670_0002_m_000004_0 which needs 1 slots
12/03/16 19:36:50 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Doing chmod on localdir :/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar as /tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Doing chmod on localdir :/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar as /tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Doing chmod on localdir :/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar as /tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Doing chmod on localdir :/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar as /tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Doing chmod on localdir :/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar as /tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Doing chmod on localdir :/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar as /tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Doing chmod on localdir :/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar as /tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Doing chmod on localdir :/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860
12/03/16 19:36:50 INFO filecache.TrackerDistributedCacheManager: Cached file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar as /tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:36:50 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0002_m_1527033851
12/03/16 19:36:50 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0002_m_1527033851 spawned.
12/03/16 19:36:51 INFO mapred.TaskTracker: JVM with ID: jvm_20120316193605670_0002_m_1527033851 given task: attempt_20120316193605670_0002_m_000004_0
12/03/16 19:36:51 INFO mapred.TaskTracker: attempt_20120316193605670_0002_m_000004_0 0.0% setup
12/03/16 19:36:51 INFO mapred.TaskTracker: Task attempt_20120316193605670_0002_m_000004_0 is done.
12/03/16 19:36:51 INFO mapred.TaskTracker: reported output size for attempt_20120316193605670_0002_m_000004_0 was -1
12/03/16 19:36:51 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2
12/03/16 19:36:51 WARN util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -14379: No such process
12/03/16 19:36:51 INFO util.ProcessTree: Killing all processes in the process group 14379 with SIGTERM. Exit code 1
12/03/16 19:36:53 INFO mapred.JobInProgress: Task 'attempt_20120316193605670_0002_m_000004_0' has completed task_20120316193605670_0002_m_000004 successfully.
12/03/16 19:36:53 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0002_m_000000_0' to tip task_20120316193605670_0002_m_000000, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:36:53 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0002_m_000000
12/03/16 19:36:53 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0002_m_000001_0' to tip task_20120316193605670_0002_m_000001, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:36:53 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0002_m_000001
12/03/16 19:36:53 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0002_m_000000_0 task's state:UNASSIGNED
12/03/16 19:36:53 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0002_m_000000_0 which needs 1 slots
12/03/16 19:36:53 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20120316193605670_0002_m_000000_0 which needs 1 slots
12/03/16 19:36:53 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:36:53 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0002_m_000001_0 task's state:UNASSIGNED
12/03/16 19:36:53 INFO mapred.TaskTracker: Received KillTaskAction for task: attempt_20120316193605670_0002_m_000004_0
12/03/16 19:36:53 INFO mapred.TaskTracker: About to purge task: attempt_20120316193605670_0002_m_000004_0
12/03/16 19:36:53 INFO mapred.TaskRunner: attempt_20120316193605670_0002_m_000004_0 done; removing files.
12/03/16 19:36:53 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0002_m_000001_0 which needs 1 slots
12/03/16 19:36:53 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0002_m_000001_0 which needs 1 slots
12/03/16 19:36:53 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:36:53 INFO mapred.IndexCache: Map ID attempt_20120316193605670_0002_m_000004_0 not found in cache
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:36:53 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:36:53 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0002_m_1422237759
12/03/16 19:36:53 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0002_m_1422237759 spawned.
12/03/16 19:36:53 INFO mapred.JvmManager: Killing JVM: jvm_20120316193605670_0002_m_1527033851
12/03/16 19:36:53 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0002_m_000004_0, Status : SUCCEEDED
12/03/16 19:36:56 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0002_m_-491070637
12/03/16 19:36:56 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0002_m_-491070637 spawned.
12/03/16 19:36:56 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0002_m_1527033851 exited with exit code 0. Number of tasks it ran: 1
12/03/16 19:36:56 INFO mapred.TaskTracker: JVM with ID: jvm_20120316193605670_0002_m_1422237759 given task: attempt_20120316193605670_0002_m_000000_0
12/03/16 19:36:57 INFO mapred.TaskTracker: JVM with ID: jvm_20120316193605670_0002_m_-491070637 given task: attempt_20120316193605670_0002_m_000001_0
12/03/16 19:36:57 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40356
12/03/16 19:36:57 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40356
12/03/16 19:36:57 INFO server.ZooKeeperServer: Established session 0x1361d028cfb000a with negotiated timeout 40000 for client /127.0.0.1:40356
12/03/16 19:36:57 INFO mapred.TaskTracker: attempt_20120316193605670_0002_m_000000_0 1.0%
12/03/16 19:36:57 INFO mapred.TaskTracker: Task attempt_20120316193605670_0002_m_000000_0 is done.
12/03/16 19:36:57 INFO mapred.TaskTracker: reported output size for attempt_20120316193605670_0002_m_000000_0 was -1
12/03/16 19:36:57 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:36:58 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40358
12/03/16 19:36:58 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40358
12/03/16 19:36:58 INFO server.ZooKeeperServer: Established session 0x1361d028cfb000b with negotiated timeout 40000 for client /127.0.0.1:40358
12/03/16 19:36:58 WARN server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x1361d028cfb000a, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
    at java.lang.Thread.run(Thread.java:619)
12/03/16 19:36:58 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40356 which had sessionid 0x1361d028cfb000a
12/03/16 19:36:58 WARN util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -14423: No such process
12/03/16 19:36:58 INFO util.ProcessTree: Killing all processes in the process group 14423 with SIGTERM. Exit code 1
12/03/16 19:36:58 INFO mapred.TaskTracker: attempt_20120316193605670_0002_m_000001_0 1.0%
12/03/16 19:36:58 INFO mapred.TaskTracker: Task attempt_20120316193605670_0002_m_000001_0 is done.
12/03/16 19:36:58 INFO mapred.TaskTracker: reported output size for attempt_20120316193605670_0002_m_000001_0 was -1
12/03/16 19:36:58 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2
12/03/16 19:36:58 WARN server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x1361d028cfb000b, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
    at java.lang.Thread.run(Thread.java:619)
12/03/16 19:36:58 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40358 which had sessionid 0x1361d028cfb000b
12/03/16 19:36:58 WARN util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -14463: No such process
12/03/16 19:36:58 INFO util.ProcessTree: Killing all processes in the process group 14463 with SIGTERM. Exit code 1
12/03/16 19:36:59 INFO mapred.JobInProgress: Task 'attempt_20120316193605670_0002_m_000000_0' has completed task_20120316193605670_0002_m_000000 successfully.
12/03/16 19:36:59 INFO mapred.JobInProgress: Task 'attempt_20120316193605670_0002_m_000001_0' has completed task_20120316193605670_0002_m_000001 successfully.
12/03/16 19:36:59 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0002_m_000002_0' to tip task_20120316193605670_0002_m_000002, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:36:59 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0002_m_000002
12/03/16 19:36:59 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0002_m_000002_0 task's state:UNASSIGNED
12/03/16 19:36:59 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0002_m_000002_0 which needs 1 slots
12/03/16 19:36:59 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20120316193605670_0002_m_000002_0 which needs 1 slots
12/03/16 19:36:59 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:36:59 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:36:59 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:36:59 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:36:59 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:36:59 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:36:59 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:36:59 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:36:59 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:36:59 INFO mapred.JvmManager: Killing JVM: jvm_20120316193605670_0002_m_1422237759
12/03/16 19:37:00 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0002_m_000000_0, Status : SUCCEEDED
12/03/16 19:37:00 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0002_m_000001_0, Status : SUCCEEDED
12/03/16 19:37:01 INFO mapred.JobClient: map 66% reduce 0%
12/03/16 19:37:03 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0002_m_-922457805
12/03/16 19:37:03 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0002_m_-922457805 spawned.
12/03/16 19:37:03 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0002_m_1422237759 exited with exit code 0. Number of tasks it ran: 1
12/03/16 19:37:03 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0002_m_-491070637 exited with exit code 0. Number of tasks it ran: 1
12/03/16 19:37:03 INFO mapred.TaskTracker: JVM with ID: jvm_20120316193605670_0002_m_-922457805 given task: attempt_20120316193605670_0002_m_000002_0
12/03/16 19:37:04 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40366
12/03/16 19:37:04 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40366
12/03/16 19:37:04 INFO server.ZooKeeperServer: Established session 0x1361d028cfb000c with negotiated timeout 40000 for client /127.0.0.1:40366
12/03/16 19:37:04 INFO mapred.TaskTracker: attempt_20120316193605670_0002_m_000002_0 1.0%
12/03/16 19:37:04 INFO mapred.TaskTracker: Task attempt_20120316193605670_0002_m_000002_0 is done.
12/03/16 19:37:04 INFO mapred.TaskTracker: reported output size for attempt_20120316193605670_0002_m_000002_0 was -1
12/03/16 19:37:04 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2
12/03/16 19:37:05 WARN server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x1361d028cfb000c, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
    at java.lang.Thread.run(Thread.java:619)
12/03/16 19:37:05 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40366 which had sessionid 0x1361d028cfb000c
12/03/16 19:37:05 WARN util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -14592: No such process
12/03/16 19:37:05 INFO util.ProcessTree: Killing all processes in the process group 14592 with SIGTERM. Exit code 1
12/03/16 19:37:05 INFO mapred.JobInProgress: Task 'attempt_20120316193605670_0002_m_000002_0' has completed task_20120316193605670_0002_m_000002 successfully.
12/03/16 19:37:05 INFO mapred.JobTracker: Adding task (JOB_CLEANUP) 'attempt_20120316193605670_0002_m_000003_0' to tip task_20120316193605670_0002_m_000003, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:37:05 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0002_m_000003_0 task's state:UNASSIGNED
12/03/16 19:37:05 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0002_m_000003_0 which needs 1 slots
12/03/16 19:37:05 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20120316193605670_0002_m_000003_0 which needs 1 slots
12/03/16 19:37:05 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:37:05 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:37:05 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:37:05 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:37:05 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:37:05 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:37:05 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:37:05 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:37:05 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:37:05 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0002_m_204405682
12/03/16 19:37:05 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0002_m_204405682 spawned.
12/03/16 19:37:06 INFO mapred.TaskTracker: JVM with ID: jvm_20120316193605670_0002_m_204405682 given task: attempt_20120316193605670_0002_m_000003_0
12/03/16 19:37:06 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0002_m_000002_0, Status : SUCCEEDED
12/03/16 19:37:06 INFO mapred.TaskTracker: attempt_20120316193605670_0002_m_000003_0 0.0%
12/03/16 19:37:06 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40376
12/03/16 19:37:06 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40376
12/03/16 19:37:06 INFO server.ZooKeeperServer: Established session 0x1361d028cfb000d with negotiated timeout 40000 for client /127.0.0.1:40376
12/03/16 19:37:06 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb000d
12/03/16 19:37:06 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40376 which had sessionid 0x1361d028cfb000d
12/03/16 19:37:07 INFO mapred.JobClient: map 100% reduce 0%
12/03/16 19:37:09 INFO mapred.TaskTracker: attempt_20120316193605670_0002_m_000003_0 0.0% cleanup
12/03/16 19:37:09 INFO mapred.TaskTracker: Task attempt_20120316193605670_0002_m_000003_0 is done.
12/03/16 19:37:09 INFO mapred.TaskTracker: reported output size for attempt_20120316193605670_0002_m_000003_0 was -1
12/03/16 19:37:09 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2
12/03/16 19:37:09 WARN util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -14661: No such process
12/03/16 19:37:09 INFO util.ProcessTree: Killing all processes in the process group 14661 with SIGTERM. Exit code 1
12/03/16 19:37:10 INFO server.ZooKeeperServer: Expiring session 0x1361d028cfb0005, timeout of 40000ms exceeded
12/03/16 19:37:10 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb0005
12/03/16 19:37:10 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0002_m_-922457805 exited with exit code 0. Number of tasks it ran: 1
12/03/16 19:37:11 INFO mapred.JobInProgress: Task 'attempt_20120316193605670_0002_m_000003_0' has completed task_20120316193605670_0002_m_000003 successfully.
12/03/16 19:37:11 INFO mapred.JobInProgress: Job job_20120316193605670_0002 has completed successfully.
12/03/16 19:37:11 INFO mapred.JobInProgress$JobSummary: jobId=job_20120316193605670_0002,submitTime=1331926607525,launchTime=1331926607562,finishTime=1331926631882,numMaps=3,numSlotsPerMap=1,numReduces=0,numSlotsPerReduce=1,user=hortonal,queue=default,status=SUCCEEDED,mapSlotSeconds=19,reduceSlotsSeconds=0,clusterMapCapacity=2,clusterReduceCapacity=2
12/03/16 19:37:11 INFO mapred.JobHistory: Moving file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/localhost_1331926566105_job_20120316193605670_0002_hortonal_directHCatOutputFormatTest to file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/done
12/03/16 19:37:11 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0002_m_000000_0'
12/03/16 19:37:11 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0002_m_000001_0'
12/03/16 19:37:11 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0002_m_000002_0'
12/03/16 19:37:11 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0002_m_000003_0'
12/03/16 19:37:11 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0002_m_000004_0'
12/03/16 19:37:11 INFO mapred.TaskTracker: Received 'KillJobAction' for job: job_20120316193605670_0002
12/03/16 19:37:11 INFO mapred.TaskRunner: attempt_20120316193605670_0002_m_000001_0 done; removing files.
12/03/16 19:37:11 INFO mapred.IndexCache: Map ID attempt_20120316193605670_0002_m_000001_0 not found in cache
12/03/16 19:37:11 INFO mapred.TaskRunner: attempt_20120316193605670_0002_m_000002_0 done; removing files.
12/03/16 19:37:11 INFO mapred.IndexCache: Map ID attempt_20120316193605670_0002_m_000002_0 not found in cache
12/03/16 19:37:11 INFO mapred.JobHistory: Moving file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/localhost_1331926566105_job_20120316193605670_0002_conf.xml to file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/done
12/03/16 19:37:11 INFO mapred.TaskRunner: attempt_20120316193605670_0002_m_000003_0 done; removing files.
12/03/16 19:37:11 INFO mapred.IndexCache: Map ID attempt_20120316193605670_0002_m_000003_0 not found in cache
12/03/16 19:37:11 INFO mapred.TaskRunner: attempt_20120316193605670_0002_m_000000_0 done; removing files.
12/03/16 19:37:11 INFO mapred.IndexCache: Map ID attempt_20120316193605670_0002_m_000000_0 not found in cache
12/03/16 19:37:11 INFO mapred.UserLogCleaner: Adding job_20120316193605670_0002 for user-log deletion with retainTimeStamp:1332013031903
12/03/16 19:37:12 INFO server.ZooKeeperServer: Expiring session 0x1361d028cfb0006, timeout of 40000ms exceeded
12/03/16 19:37:12 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb0006
12/03/16 19:37:12 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0002_m_000003_0, Status : SUCCEEDED
12/03/16 19:37:12 INFO mapred.JobClient: Job complete: job_20120316193605670_0002
12/03/16 19:37:12 INFO mapred.JobClient: Counters: 11
12/03/16 19:37:12 INFO mapred.JobClient:   Job Counters
12/03/16 19:37:12 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=19138
12/03/16 19:37:12 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/03/16 19:37:12 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/03/16 19:37:12 INFO mapred.JobClient:     Rack-local map tasks=3
12/03/16 19:37:12 INFO mapred.JobClient:     Launched map tasks=3
12/03/16 19:37:12 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/03/16 19:37:12 INFO mapred.JobClient:   FileSystemCounters
12/03/16 19:37:12 INFO mapred.JobClient:     FILE_BYTES_READ=2245
12/03/16 19:37:12 INFO mapred.JobClient:   Map-Reduce Framework
12/03/16 19:37:12 INFO mapred.JobClient:     Map input records=3
12/03/16 19:37:12 INFO mapred.JobClient:     Spilled Records=0
12/03/16 19:37:12 INFO mapred.JobClient:     Map output records=3
12/03/16 19:37:12 INFO mapred.JobClient:     SPLIT_RAW_BYTES=687
12/03/16 19:37:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=1000000 watcher=org.apache.hcatalog.hbase.snapshot.ZKUtil$ZKWatcher@113ea1a
12/03/16 19:37:12 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343
12/03/16 19:37:12 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/03/16 19:37:12 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40379
12/03/16 19:37:12 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session
12/03/16 19:37:12 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40379
12/03/16 19:37:12 INFO server.ZooKeeperServer: Established session 0x1361d028cfb000e with negotiated timeout 40000 for client /127.0.0.1:40379
12/03/16 19:37:12 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb000e, negotiated timeout = 40000
12/03/16 19:37:12 INFO snapshot.ZKBasedRevisionManager: Created root znodes for revision manager.
12/03/16 19:37:12 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb000e
12/03/16 19:37:12 INFO zookeeper.ZooKeeper: Session: 0x1361d028cfb000e closed
12/03/16 19:37:12 INFO snapshot.ZKUtil: Disconnected to ZooKeeper
12/03/16 19:37:12 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40379 which had sessionid 0x1361d028cfb000e
12/03/16 19:37:12 INFO zookeeper.ClientCnxn: EventThread shut down
Hive history file=/tmp/hortonal/hive_job_log_hortonal_201203161936_1622788896.txt
12/03/16 19:37:12 INFO exec.HiveHistory: Hive history file=/tmp/hortonal/hive_job_log_hortonal_201203161936_1622788896.txt
12/03/16 19:37:12 INFO ql.Driver:
12/03/16 19:37:12 INFO parse.ParseDriver: Parsing command: CREATE DATABASE IF NOT EXISTS directmodeaborttest LOCATION '/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/directModeAbortTest/DB_directModeAbortTest'
12/03/16 19:37:12 INFO parse.ParseDriver: Parse Completed
12/03/16 19:37:12 INFO metastore.HiveMetaStore: 0: get_databases: directmodeaborttest
12/03/16 19:37:12 INFO ql.Driver: Semantic Analysis Completed
12/03/16 19:37:12 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
12/03/16 19:37:12 INFO ql.Driver:
12/03/16 19:37:12 INFO ql.Driver:
12/03/16 19:37:12 INFO ql.Driver: Starting command: CREATE DATABASE IF NOT EXISTS directmodeaborttest LOCATION '/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/directModeAbortTest/DB_directModeAbortTest'
12/03/16 19:37:12 INFO metastore.HiveMetaStore: 0: create_database: directmodeaborttest /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/directModeAbortTest/DB_directModeAbortTest null
12/03/16 19:37:12 INFO metastore.HiveMetaStore: 0: get_database: directmodeaborttest
12/03/16 19:37:12 INFO ql.Driver: OK
12/03/16 19:37:12 INFO ql.Driver: OK
12/03/16 19:37:12 INFO ql.Driver:
12/03/16 19:37:12 INFO ql.Driver:
12/03/16 19:37:12 INFO ql.Driver:
12/03/16 19:37:12 INFO parse.ParseDriver: Parsing command: CREATE TABLE directmodeaborttest.directmodeaborttest_718236726961197219(key int, english string, spanish string) STORED BY 'org.apache.hcatalog.hbase.HBaseHCatStorageHandler'TBLPROPERTIES ('hbase.columns.mapping'=':key,my_family:english,my_family:spanish')
12/03/16 19:37:12 INFO parse.ParseDriver: Parse Completed
12/03/16 19:37:12 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
12/03/16 19:37:12 INFO parse.SemanticAnalyzer: Creating table directmodeaborttest.directmodeaborttest_718236726961197219 position=13
12/03/16 19:37:12 INFO ql.Driver: Semantic Analysis Completed
12/03/16 19:37:12 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
12/03/16 19:37:12 INFO ql.Driver:
12/03/16 19:37:12 INFO ql.Driver:
12/03/16 19:37:12 INFO ql.Driver: Starting command: CREATE TABLE directmodeaborttest.directmodeaborttest_718236726961197219(key int, english string, spanish string) STORED BY 'org.apache.hcatalog.hbase.HBaseHCatStorageHandler'TBLPROPERTIES ('hbase.columns.mapping'=':key,my_family:english,my_family:spanish')
12/03/16 19:37:12 INFO exec.DDLTask: Use StorageHandler-supplied org.apache.hadoop.hive.hbase.HBaseSerDe for table directmodeaborttest.directmodeaborttest_718236726961197219
12/03/16 19:37:12 INFO hive.log: DDL: struct directmodeaborttest_718236726961197219 { i32 key, string english, string spanish}
12/03/16 19:37:12 INFO hive.log: DDL: struct directmodeaborttest_718236726961197219 { i32 key, string english, string spanish}
12/03/16 19:37:12 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=180000 watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@10dd4c8
12/03/16 19:37:12 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343
12/03/16 19:37:12 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/03/16 19:37:12 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40382
12/03/16 19:37:12 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session
12/03/16 19:37:12 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40382
12/03/16 19:37:12 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 13124@hrt8n25.cc1.ygridcore.net
12/03/16 19:37:12 INFO server.ZooKeeperServer: Established session 0x1361d028cfb000f with negotiated timeout 40000 for client /127.0.0.1:40382
12/03/16 19:37:12 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb000f, negotiated timeout = 40000
12/03/16 19:37:12 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb000f
12/03/16 19:37:12 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40382 which had sessionid 0x1361d028cfb000f
12/03/16 19:37:12 INFO zookeeper.ZooKeeper: Session: 0x1361d028cfb000f closed
12/03/16 19:37:12 INFO zookeeper.ClientCnxn: EventThread shut down
12/03/16 19:37:12 INFO handler.CreateTableHandler: Attemping to create the table directmodeaborttest.directmodeaborttest_718236726961197219
12/03/16 19:37:12 INFO regionserver.HRegion: creating HRegion directmodeaborttest.directmodeaborttest_718236726961197219 HTD == {NAME => 'directmodeaborttest.directmodeaborttest_718236726961197219', FAMILIES => [{NAME => 'my_family', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '2147483647', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]} RootDir = file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase Table name == directmodeaborttest.directmodeaborttest_718236726961197219
12/03/16 19:37:12 INFO wal.HLog: HLog configuration: blocksize=32 MB, rollsize=30.4 MB, enabled=true, optionallogflushinternal=1000ms
12/03/16 19:37:12 INFO wal.HLog: for /home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/directmodeaborttest.directmodeaborttest_718236726961197219/9982529655840a96c1394f5007843e79/.logs/hlog.1331926632831
12/03/16 19:37:12 INFO wal.HLog: getNumCurrentReplicas--HDFS-826 not available; hdfs_out=org.apache.hadoop.fs.FSDataOutputStream@177e3d4, exception=org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.getNumCurrentReplicas()
12/03/16 19:37:12 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
12/03/16 19:37:12 INFO regionserver.HRegion: Onlined directmodeaborttest.directmodeaborttest_718236726961197219,,1331926632813.9982529655840a96c1394f5007843e79.; next sequenceid=1
12/03/16 19:37:12 INFO catalog.MetaEditor: Added 1 regions in META
12/03/16 19:37:12 INFO regionserver.HRegion: Closed directmodeaborttest.directmodeaborttest_718236726961197219,,1331926632813.9982529655840a96c1394f5007843e79.
12/03/16 19:37:12 INFO wal.HLog: MASTER_TABLE_OPERATIONS-hrt8n25.cc1.ygridcore.net,55642,1331926568427-0.logSyncer exiting
12/03/16 19:37:12 INFO master.AssignmentManager: Bulk assigning 1 region(s) round-robin across 1 server(s)
12/03/16 19:37:12 INFO master.AssignmentManager: Bulk assigning done
12/03/16 19:37:12 INFO master.AssignmentManager: hrt8n25.cc1.ygridcore.net,48635,1331926568651 unassigned znodes=1 of total=1
12/03/16 19:37:12 INFO regionserver.HRegionServer: Received request to open 1 region(s)
12/03/16 19:37:12 INFO regionserver.HRegionServer: Received request to open region: directmodeaborttest.directmodeaborttest_718236726961197219,,1331926632813.9982529655840a96c1394f5007843e79.
12/03/16 19:37:12 INFO regionserver.HRegion: Setting up tabledescriptor config now ...
12/03/16 19:37:12 INFO regionserver.HRegion: Onlined directmodeaborttest.directmodeaborttest_718236726961197219,,1331926632813.9982529655840a96c1394f5007843e79.; next sequenceid=1
12/03/16 19:37:12 INFO regionserver.HRegionServer: Post open deploy tasks for region=directmodeaborttest.directmodeaborttest_718236726961197219,,1331926632813.9982529655840a96c1394f5007843e79., daughter=false
12/03/16 19:37:12 INFO catalog.MetaEditor: Updated row directmodeaborttest.directmodeaborttest_718236726961197219,,1331926632813.9982529655840a96c1394f5007843e79. with server=hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:37:12 INFO regionserver.HRegionServer: Done with post open deploy task for region=directmodeaborttest.directmodeaborttest_718236726961197219,,1331926632813.9982529655840a96c1394f5007843e79., daughter=false
12/03/16 19:37:12 INFO master.AssignmentManager: The master has opened the region directmodeaborttest.directmodeaborttest_718236726961197219,,1331926632813.9982529655840a96c1394f5007843e79. that was online on hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:37:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=1000000 watcher=org.apache.hcatalog.hbase.snapshot.ZKUtil$ZKWatcher@da88c2
12/03/16 19:37:13 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343
12/03/16 19:37:13 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/03/16 19:37:13 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session
12/03/16 19:37:13 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40384
12/03/16 19:37:13 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40384
12/03/16 19:37:13 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0010 with negotiated timeout 40000 for client /127.0.0.1:40384
12/03/16 19:37:13 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb0010, negotiated timeout = 40000
12/03/16 19:37:13 INFO snapshot.ZKBasedRevisionManager: Created root znodes for revision manager.
12/03/16 19:37:13 INFO metastore.HiveMetaStore: 0: create_table: db=directmodeaborttest tbl=directmodeaborttest_718236726961197219
12/03/16 19:37:13 INFO ql.Driver: OK
12/03/16 19:37:13 INFO ql.Driver: OK
12/03/16 19:37:13 INFO ql.Driver:
12/03/16 19:37:13 INFO ql.Driver:
12/03/16 19:37:13 INFO metastore.HiveMetaStore: 0: get_table : db=directmodeaborttest tbl=directmodeaborttest_718236726961197219
12/03/16 19:37:13 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
12/03/16 19:37:13 INFO metastore.ObjectStore: ObjectStore, initialize called
12/03/16 19:37:13 INFO metastore.ObjectStore: Initialized ObjectStore
12/03/16 19:37:13 INFO metastore.HiveMetaStore: 0: get_index_names : db=directmodeaborttest tbl=directmodeaborttest_718236726961197219
12/03/16 19:37:13 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=1000000 watcher=org.apache.hcatalog.hbase.snapshot.ZKUtil$ZKWatcher@2a519b
12/03/16 19:37:13 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343
12/03/16 19:37:13 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/03/16 19:37:13 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session
12/03/16 19:37:13 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40385
12/03/16 19:37:13 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40385
12/03/16 19:37:14 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0011 with negotiated timeout 40000 for client /127.0.0.1:40385
12/03/16 19:37:14 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb0011, negotiated timeout = 40000
12/03/16 19:37:14 INFO snapshot.ZKBasedRevisionManager: Created root znodes for revision manager.
12/03/16 19:37:14 INFO snapshot.ZKUtil: Added transaction : revision: 1 ts: 1331941034033
12/03/16 19:37:14 INFO snapshot.ZKUtil: Transaction list stored at /revision-management/data/directmodeaborttest.directmodeaborttest_718236726961197219/my_family/runningTxns.
12/03/16 19:37:14 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb0011
12/03/16 19:37:14 INFO zookeeper.ZooKeeper: Session: 0x1361d028cfb0011 closed
12/03/16 19:37:14 INFO snapshot.ZKUtil: Disconnected to ZooKeeper
12/03/16 19:37:14 WARN server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x1361d028cfb0011, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
        at java.lang.Thread.run(Thread.java:619)
12/03/16 19:37:14 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40385 which had sessionid 0x1361d028cfb0011
12/03/16 19:37:14 INFO zookeeper.ClientCnxn: EventThread shut down
12/03/16 19:37:14 WARN mapreduce.TableMapReduceUtil: Could not find jar for class class org.apache.hcatalog.hbase.HBaseHCatStorageHandler in order to ship it to the cluster.
12/03/16 19:37:14 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
12/03/16 19:37:14 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
12/03/16 19:37:14 INFO metastore.HiveMetaStore: 0: get_table : db=directmodeaborttest tbl=directmodeaborttest_718236726961197219
12/03/16 19:37:14 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
12/03/16 19:37:14 INFO metastore.ObjectStore: ObjectStore, initialize called
12/03/16 19:37:14 INFO metastore.ObjectStore: Initialized ObjectStore
12/03/16 19:37:14 INFO metastore.HiveMetaStore: 0: get_index_names : db=directmodeaborttest tbl=directmodeaborttest_718236726961197219
12/03/16 19:37:14 WARN mapreduce.TableMapReduceUtil: Could not find jar for class class org.apache.hcatalog.hbase.HBaseHCatStorageHandler in order to ship it to the cluster.
12/03/16 19:37:14 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
12/03/16 19:37:14 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
12/03/16 19:37:14 INFO metastore.HiveMetaStore: 0: get_table : db=directmodeaborttest tbl=directmodeaborttest_718236726961197219
12/03/16 19:37:14 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
12/03/16 19:37:14 INFO metastore.ObjectStore: ObjectStore, initialize called
12/03/16 19:37:14 INFO metastore.ObjectStore: Initialized ObjectStore
12/03/16 19:37:14 INFO metastore.HiveMetaStore: 0: get_index_names : db=directmodeaborttest tbl=directmodeaborttest_718236726961197219
12/03/16 19:37:14 WARN mapreduce.TableMapReduceUtil: Could not find jar for class class org.apache.hcatalog.hbase.HBaseHCatStorageHandler in order to ship it to the cluster.
12/03/16 19:37:14 INFO metastore.HiveMetaStore: 0: Shutting down the object store...
12/03/16 19:37:14 INFO metastore.HiveMetaStore: 0: Metastore shutdown complete.
12/03/16 19:37:14 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/03/16 19:37:14 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0002_m_204405682 exited with exit code 0. Number of tasks it ran: 1
12/03/16 19:37:17 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/03/16 19:37:17 WARN mapreduce.TableMapReduceUtil: Could not find jar for class class org.apache.hcatalog.hbase.HBaseHCatStorageHandler in order to ship it to the cluster.
12/03/16 19:37:17 INFO input.FileInputFormat: Total input paths to process : 3
12/03/16 19:37:17 INFO mapred.JobTracker: Job job_20120316193605670_0003 added successfully for user 'hortonal' to queue 'default'
12/03/16 19:37:17 INFO mapred.AuditLogger: USER=hortonal IP=127.0.0.1 OPERATION=SUBMIT_JOB TARGET=job_20120316193605670_0003 RESULT=SUCCESS
12/03/16 19:37:17 INFO mapred.JobTracker: Initializing job_20120316193605670_0003
12/03/16 19:37:17 INFO mapred.JobInProgress: Initializing job_20120316193605670_0003
12/03/16 19:37:17 INFO mapred.JobClient: Running job: job_20120316193605670_0003
12/03/16 19:37:17 INFO mapred.JobInProgress: jobToken generated and stored with users keys in /tmp/hadoop-hortonal/mapred/system/job_20120316193605670_0003/jobToken
12/03/16 19:37:17 INFO mapred.JobInProgress: Input size for job job_20120316193605670_0003 = 81. Number of splits = 3
12/03/16 19:37:17 INFO mapred.JobInProgress: tip:task_20120316193605670_0003_m_000000 has split on node:/default-rack/localhost
12/03/16 19:37:17 INFO mapred.JobInProgress: tip:task_20120316193605670_0003_m_000001 has split on node:/default-rack/localhost
12/03/16 19:37:17 INFO mapred.JobInProgress: tip:task_20120316193605670_0003_m_000002 has split on node:/default-rack/localhost
12/03/16 19:37:17 INFO mapred.JobInProgress: Job job_20120316193605670_0003 initialized successfully with 3 map tasks and 0 reduce tasks.
12/03/16 19:37:17 INFO mapred.JobTracker: Adding task (JOB_SETUP) 'attempt_20120316193605670_0003_m_000004_0' to tip task_20120316193605670_0003_m_000004, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:37:17 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000004_0 task's state:UNASSIGNED
12/03/16 19:37:17 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000004_0 which needs 1 slots
12/03/16 19:37:17 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20120316193605670_0003_m_000004_0 which needs 1 slots
12/03/16 19:37:17 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:37:17 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:37:17 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:37:17 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:37:17 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:37:17 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:37:17 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:37:17 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:37:17 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:37:17 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_1950769915
12/03/16 19:37:17 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_1950769915 spawned.
12/03/16 19:37:18 INFO mapred.TaskTracker: JVM with ID: jvm_20120316193605670_0003_m_1950769915 given task: attempt_20120316193605670_0003_m_000004_0
12/03/16 19:37:18 INFO mapred.JobClient: map 0% reduce 0%
12/03/16 19:37:18 INFO mapred.TaskTracker: attempt_20120316193605670_0003_m_000004_0 0.0% setup
12/03/16 19:37:18 INFO mapred.TaskTracker: Task attempt_20120316193605670_0003_m_000004_0 is done.
12/03/16 19:37:18 INFO mapred.TaskTracker: reported output size for attempt_20120316193605670_0003_m_000004_0 was -1
12/03/16 19:37:18 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2
12/03/16 19:37:18 WARN util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -15184: No such process
12/03/16 19:37:18 INFO util.ProcessTree: Killing all processes in the process group 15184 with SIGTERM.
Exit code 1
12/03/16 19:37:20 INFO mapred.JobInProgress: Task 'attempt_20120316193605670_0003_m_000004_0' has completed task_20120316193605670_0003_m_000004 successfully.
12/03/16 19:37:20 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0003_m_000000_0' to tip task_20120316193605670_0003_m_000000, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:37:20 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0003_m_000000
12/03/16 19:37:20 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0003_m_000001_0' to tip task_20120316193605670_0003_m_000001, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:37:20 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0003_m_000001
12/03/16 19:37:20 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000000_0 task's state:UNASSIGNED
12/03/16 19:37:20 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000001_0 task's state:UNASSIGNED
12/03/16 19:37:20 INFO mapred.TaskTracker: Received KillTaskAction for task: attempt_20120316193605670_0003_m_000004_0
12/03/16 19:37:20 INFO mapred.TaskTracker: About to purge task: attempt_20120316193605670_0003_m_000004_0
12/03/16 19:37:20 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000004_0 done; removing files.
12/03/16 19:37:20 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000000_0 which needs 1 slots
12/03/16 19:37:20 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20120316193605670_0003_m_000000_0 which needs 1 slots
12/03/16 19:37:20 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:37:20 INFO mapred.IndexCache: Map ID attempt_20120316193605670_0003_m_000004_0 not found in cache
12/03/16 19:37:20 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000001_0 which needs 1 slots
12/03/16 19:37:20 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000001_0 which needs 1 slots
12/03/16 19:37:20 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:37:20 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:37:20 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_-508602961
12/03/16 19:37:20 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_-508602961 spawned.
12/03/16 19:37:20 INFO mapred.JvmManager: Killing JVM: jvm_20120316193605670_0003_m_1950769915
12/03/16 19:37:21 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000004_0, Status : SUCCEEDED
12/03/16 19:37:23 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_-733613634
12/03/16 19:37:23 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_-733613634 spawned.
12/03/16 19:37:23 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_1950769915 exited with exit code 0. Number of tasks it ran: 1
12/03/16 19:37:23 INFO mapred.TaskTracker: JVM with ID: jvm_20120316193605670_0003_m_-508602961 given task: attempt_20120316193605670_0003_m_000000_0
12/03/16 19:37:24 INFO mapred.TaskTracker: JVM with ID: jvm_20120316193605670_0003_m_-733613634 given task: attempt_20120316193605670_0003_m_000001_0
12/03/16 19:37:24 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40396
12/03/16 19:37:24 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40396
12/03/16 19:37:24 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0012 with negotiated timeout 40000 for client /127.0.0.1:40396
12/03/16 19:37:24 INFO mapred.TaskTracker: attempt_20120316193605670_0003_m_000000_0 0.0%
12/03/16 19:37:25 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:40402
12/03/16 19:37:25 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:40402
12/03/16 19:37:25 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0013 with negotiated timeout 40000 for client /127.0.0.1:40402
12/03/16 19:37:25 WARN server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x1361d028cfb0012, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
        at java.lang.Thread.run(Thread.java:619)
12/03/16 19:37:25 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40396 which had sessionid 0x1361d028cfb0012
12/03/16 19:37:25 WARN util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -15259: No such process
12/03/16 19:37:25 INFO util.ProcessTree: Killing all processes in the process group 15259 with SIGTERM.
Exit code 1
12/03/16 19:37:25 INFO mapred.TaskTracker: attempt_20120316193605670_0003_m_000001_0 1.0%
12/03/16 19:37:25 INFO mapred.TaskTracker: Task attempt_20120316193605670_0003_m_000001_0 is done.
12/03/16 19:37:25 INFO mapred.TaskTracker: reported output size for attempt_20120316193605670_0003_m_000001_0 was -1
12/03/16 19:37:25 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:37:26 WARN server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x1361d028cfb0013, likely client has closed socket
        at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:220)
        at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:224)
        at java.lang.Thread.run(Thread.java:619)
12/03/16 19:37:26 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40402 which had sessionid 0x1361d028cfb0013
12/03/16 19:37:26 WARN util.ProcessTree: Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill -15437: No such process
12/03/16 19:37:26 INFO util.ProcessTree: Killing all processes in the process group 15437 with SIGTERM.
Exit code 1
12/03/16 19:37:27 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000000_0: java.io.IOException: Failing map to test abort
        at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat$MapWriteAbortTransaction.map(TestHBaseDirectOutputFormat.java:455)
        at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat$MapWriteAbortTransaction.map(TestHBaseDirectOutputFormat.java:445)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:638)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:314)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1062)
        at org.apache.hadoop.mapred.Child.main(Child.java:211)
12/03/16 19:37:27 INFO mapred.JobInProgress: Task 'attempt_20120316193605670_0003_m_000001_0' has completed task_20120316193605670_0003_m_000001 successfully.
12/03/16 19:37:27 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0003_m_000002_0' to tip task_20120316193605670_0003_m_000002, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:37:27 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0003_m_000002
12/03/16 19:37:27 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000002_0 task's state:UNASSIGNED
12/03/16 19:37:27 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000002_0 which needs 1 slots
12/03/16 19:37:27 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000002_0 which needs 1 slots
12/03/16 19:37:27 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:37:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:37:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:37:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:37:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:37:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:37:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:37:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:37:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:37:27 INFO mapred.JvmManager: Killing JVM: jvm_20120316193605670_0003_m_-733613634
12/03/16 19:37:27 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000001_0, Status : SUCCEEDED
12/03/16 19:37:28 INFO mapred.JobClient: map 33% reduce 0%
12/03/16 19:37:31 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_1934615116
12/03/16 19:37:31 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_1934615116 spawned.
12/03/16 19:37:31 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_-733613634 exited with exit code 0. Number of tasks it ran: 1
12/03/16 19:37:31 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_-508602961 exited with exit code 0. Number of tasks it ran: 0
12/03/16 19:37:34 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_0 done; removing files.
12/03/16 19:37:34 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:37:36 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0003_m_000000_1' to tip task_20120316193605670_0003_m_000000, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:37:36 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0003_m_000000
12/03/16 19:37:36 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000000_0'
12/03/16 19:37:36 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000000_1 task's state:UNASSIGNED
12/03/16 19:37:36 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000000_1 which needs 1 slots
12/03/16 19:37:36 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000000_1 which needs 1 slots
12/03/16 19:37:36 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:37:36 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_391433626
12/03/16 19:37:36 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_391433626 spawned.
12/03/16 19:37:36 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_1934615116 exited with exit code 1. Number of tasks it ran: 0
12/03/16 19:37:36 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_0 : Child Error
java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:37:36 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000000_0, Status : FAILED
12/03/16 19:37:38 INFO server.ZooKeeperServer: Expiring session 0x1361d028cfb000a, timeout of 40000ms exceeded
12/03/16 19:37:38 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb000a
12/03/16 19:37:39 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000002_0: java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:37:39 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_0 done; removing files.
12/03/16 19:37:39 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:37:40 INFO server.ZooKeeperServer: Expiring session 0x1361d028cfb000b, timeout of 40000ms exceeded
12/03/16 19:37:40 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb000b
12/03/16 19:37:41 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_391433626 exited with exit code 1. Number of tasks it ran: 0
12/03/16 19:37:41 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_1 : Child Error
java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:37:42 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000002_0'
12/03/16 19:37:42 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000000_1: java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:37:42 INFO mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_20120316193605670_0003_m_000002_0' to tip task_20120316193605670_0003_m_000002, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:37:42 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000002_0 task's state:FAILED_UNCLEAN
12/03/16 19:37:42 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000002_0 which needs 1 slots
12/03/16 19:37:42 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000002_0 which needs 1 slots
12/03/16 19:37:42 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:37:42 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_-696168343
12/03/16 19:37:42 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_-696168343 spawned.
12/03/16 19:37:44 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_1 done; removing files.
12/03/16 19:37:44 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:37:45 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000000_1'
12/03/16 19:37:45 INFO mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_20120316193605670_0003_m_000000_1' to tip task_20120316193605670_0003_m_000000, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:37:45 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000000_1 task's state:FAILED_UNCLEAN
12/03/16 19:37:45 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000000_1 which needs 1 slots
12/03/16 19:37:45 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000000_1 which needs 1 slots
12/03/16 19:37:45 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:37:45 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_-627480334
12/03/16 19:37:45 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_-627480334 spawned.
12/03/16 19:37:46 INFO server.ZooKeeperServer: Expiring session 0x1361d028cfb000c, timeout of 40000ms exceeded
12/03/16 19:37:46 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb000c
12/03/16 19:37:47 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_-696168343 exited with exit code 1. Number of tasks it ran: 0
12/03/16 19:37:47 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_0 : Child Error
java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:37:48 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000002_0: java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:37:50 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_-627480334 exited with exit code 1. Number of tasks it ran: 0
12/03/16 19:37:50 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_1 : Child Error
java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:37:50 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_0 done; removing files.
12/03/16 19:37:50 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:37:51 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000000_1: java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:37:51 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0003_m_000002_1' to tip task_20120316193605670_0003_m_000002, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:37:51 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0003_m_000002
12/03/16 19:37:51 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000002_0'
12/03/16 19:37:51 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000002_1 task's state:UNASSIGNED
12/03/16 19:37:51 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000002_1 which needs 1 slots
12/03/16 19:37:51 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000002_1 which needs 1 slots
12/03/16 19:37:51 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:37:51 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_-412420154
12/03/16 19:37:51 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_-412420154 spawned.
12/03/16 19:37:51 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000002_0, Status : FAILED
12/03/16 19:37:51 WARN mortbay.log: /tasklog: java.io.IOException: Closed
12/03/16 19:37:51 WARN mapred.JobClient: Error reading task output http://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000002_0&filter=stdout
12/03/16 19:37:51 WARN mapred.JobClient: Error reading task output http://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000002_0&filter=stderr
12/03/16 19:37:51 WARN mortbay.log: /tasklog: java.io.IOException: Closed
12/03/16 19:37:53 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_1 done; removing files.
12/03/16 19:37:53 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:37:54 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0003_m_000000_2' to tip task_20120316193605670_0003_m_000000, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:37:54 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0003_m_000000
12/03/16 19:37:54 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000000_1'
12/03/16 19:37:54 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000000_2 task's state:UNASSIGNED
12/03/16 19:37:54 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000000_2 which needs 1 slots
12/03/16 19:37:54 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000000_2 which needs 1 slots
12/03/16 19:37:54 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:37:54 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_-194467116
12/03/16 19:37:54 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_-194467116 spawned.
12/03/16 19:37:54 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000000_1, Status : FAILED
12/03/16 19:37:54 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000000_1&filter=stdout
12/03/16 19:37:54 WARN mortbay.log: /tasklog: java.io.IOException: Closed
12/03/16 19:37:54 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000000_1&filter=stderr
12/03/16 19:37:54 WARN mortbay.log: /tasklog: java.io.IOException: Closed
12/03/16 19:37:56 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_-412420154 exited with exit code 1. Number of tasks it ran: 0
12/03/16 19:37:56 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_1 : Child Error
java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:37:57 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000002_1: java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:37:59 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_1 done; removing files.
12/03/16 19:37:59 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:37:59 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_2 : Child Error
java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:37:59 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_-194467116 exited with exit code 1. Number of tasks it ran: 0
12/03/16 19:38:00 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000002_1'
12/03/16 19:38:00 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000000_2: java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:38:00 INFO mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_20120316193605670_0003_m_000002_1' to tip task_20120316193605670_0003_m_000002, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:38:00 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000002_1 task's state:FAILED_UNCLEAN
12/03/16 19:38:00 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000002_1 which needs 1 slots
12/03/16 19:38:00 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000002_1 which needs 1 slots
12/03/16 19:38:00 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:38:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:38:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:38:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:38:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:38:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:38:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:38:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:38:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:38:00 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_1309269461
12/03/16 19:38:00 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_1309269461 spawned.
12/03/16 19:38:02 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_2 done; removing files.
12/03/16 19:38:02 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:38:03 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000000_2'
12/03/16 19:38:03 INFO mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_20120316193605670_0003_m_000000_2' to tip task_20120316193605670_0003_m_000000, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:38:03 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000000_2 task's state:FAILED_UNCLEAN
12/03/16 19:38:03 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000000_2 which needs 1 slots
12/03/16 19:38:03 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000000_2 which needs 1 slots
12/03/16 19:38:03 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:38:03 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:38:03 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:38:03 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:38:03 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:38:03 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:38:03 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:38:03 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:38:03 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:38:03 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_-479748458
12/03/16 19:38:03 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_-479748458 spawned.
12/03/16 19:38:05 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_1309269461 exited with exit code 1. Number of tasks it ran: 0
12/03/16 19:38:05 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_1 : Child Error
java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:38:06 INFO server.ZooKeeperServer: Expiring session 0x1361d028cfb0012, timeout of 40000ms exceeded
12/03/16 19:38:06 INFO server.ZooKeeperServer: Expiring session 0x1361d028cfb0013, timeout of 40000ms exceeded
12/03/16 19:38:06 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb0012
12/03/16 19:38:06 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb0013
12/03/16 19:38:06 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000002_1: java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:38:08 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_2 : Child Error
java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:38:08 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_-479748458 exited with exit code 1. Number of tasks it ran: 0
12/03/16 19:38:08 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_1 done; removing files.
12/03/16 19:38:08 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:38:09 INFO mapred.JobInProgress: TaskTracker at 'host0.foo.com' turned 'flaky'
12/03/16 19:38:09 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000000_2: java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:38:09 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0003_m_000002_2' to tip task_20120316193605670_0003_m_000002, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:38:09 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0003_m_000002
12/03/16 19:38:09 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000002_1'
12/03/16 19:38:09 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000002_2 task's state:UNASSIGNED
12/03/16 19:38:09 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000002_2 which needs 1 slots
12/03/16 19:38:09 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000002_2 which needs 1 slots
12/03/16 19:38:09 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:38:09 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:38:09 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:38:09 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:38:09 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:38:09 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:38:09 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:38:09 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:38:09 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:38:09 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_1580567717
12/03/16 19:38:09 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_1580567717 spawned.
12/03/16 19:38:09 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000002_1, Status : FAILED
12/03/16 19:38:09 WARN mortbay.log: /tasklog: java.io.IOException: Closed
12/03/16 19:38:09 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000002_1&filter=stdout
12/03/16 19:38:09 WARN mortbay.log: /tasklog: java.io.IOException: Closed
12/03/16 19:38:09 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000002_1&filter=stderr
12/03/16 19:38:11 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_2 done; removing files.
12/03/16 19:38:11 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:38:12 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0003_m_000000_3' to tip task_20120316193605670_0003_m_000000, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:38:12 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0003_m_000000
12/03/16 19:38:12 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000000_2'
12/03/16 19:38:12 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000000_3 task's state:UNASSIGNED
12/03/16 19:38:12 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000000_3 which needs 1 slots
12/03/16 19:38:12 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000000_3 which needs 1 slots
12/03/16 19:38:12 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:38:12 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:38:12 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:38:12 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:38:12 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:38:12 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:38:12 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:38:12 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:38:12 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:38:12 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_941913314
12/03/16 19:38:12 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_941913314 spawned.
12/03/16 19:38:12 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000000_2, Status : FAILED
12/03/16 19:38:12 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000000_2&filter=stdout
12/03/16 19:38:12 WARN mortbay.log: /tasklog: java.io.IOException: Closed
12/03/16 19:38:12 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000000_2&filter=stderr
12/03/16 19:38:12 WARN mortbay.log: /tasklog: java.io.IOException: Closed
12/03/16 19:38:14 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_1580567717 exited with exit code 1. Number of tasks it ran: 0
12/03/16 19:38:14 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_2 : Child Error
java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:38:15 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000002_2: java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:38:17 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_941913314 exited with exit code 1. Number of tasks it ran: 0
12/03/16 19:38:17 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_3 : Child Error
java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:38:17 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_2 done; removing files.
12/03/16 19:38:17 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1
12/03/16 19:38:18 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000002_2'
12/03/16 19:38:18 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000000_3: java.lang.Throwable: Child Error
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:38:18 INFO mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_20120316193605670_0003_m_000002_2' to tip task_20120316193605670_0003_m_000002, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:38:18 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000002_2 task's state:FAILED_UNCLEAN
12/03/16 19:38:18 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000002_2 which needs 1 slots
12/03/16 19:38:18 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000002_2 which needs 1 slots
12/03/16 19:38:18 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:38:18 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:38:18 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:38:18 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:38:18 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:38:18 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:38:18 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:38:18 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:38:18 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:38:18 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_1568519851
12/03/16 19:38:18 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_1568519851 spawned.
12/03/16 19:38:20 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_3 done; removing files.
12/03/16 19:38:20 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1 12/03/16 19:38:21 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000000_3' 12/03/16 19:38:21 INFO mapred.JobTracker: Adding task (TASK_CLEANUP) 'attempt_20120316193605670_0003_m_000000_3' to tip task_20120316193605670_0003_m_000000, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380' 12/03/16 19:38:21 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000000_3 task's state:FAILED_UNCLEAN 12/03/16 19:38:21 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000000_3 which needs 1 slots 12/03/16 19:38:21 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000000_3 which needs 1 slots 12/03/16 19:38:21 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything. 12/03/16 19:38:21 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar 12/03/16 19:38:21 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar 12/03/16 19:38:21 INFO filecache.TrackerDistributedCacheManager: Using existing cache of 
file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar 12/03/16 19:38:21 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar 12/03/16 19:38:21 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar 12/03/16 19:38:21 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar 12/03/16 19:38:21 INFO filecache.TrackerDistributedCacheManager: Using existing cache of 
file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar 12/03/16 19:38:21 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar 12/03/16 19:38:21 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_850296772 12/03/16 19:38:21 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_850296772 spawned. 12/03/16 19:38:23 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_1568519851 exited with exit code 1. Number of tasks it ran: 0 12/03/16 19:38:23 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_2 : Child Error java.io.IOException: Task process exit with nonzero status of 1. at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215) 12/03/16 19:38:24 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000002_2: java.lang.Throwable: Child Error at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228) Caused by: java.io.IOException: Task process exit with nonzero status of 1. at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215) 12/03/16 19:38:26 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_2 done; removing files. 12/03/16 19:38:26 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1 12/03/16 19:38:26 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_850296772 exited with exit code 1. 
Number of tasks it ran: 0 12/03/16 19:38:26 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_3 : Child Error java.io.IOException: Task process exit with nonzero status of 1. at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215) 12/03/16 19:38:27 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000000_3: java.lang.Throwable: Child Error at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228) Caused by: java.io.IOException: Task process exit with nonzero status of 1. at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215) 12/03/16 19:38:27 INFO mapred.JobTracker: Adding task (MAP) 'attempt_20120316193605670_0003_m_000002_3' to tip task_20120316193605670_0003_m_000002, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380' 12/03/16 19:38:27 INFO mapred.JobInProgress: Choosing rack-local task task_20120316193605670_0003_m_000002 12/03/16 19:38:27 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000002_2' 12/03/16 19:38:27 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000002_3 task's state:UNASSIGNED 12/03/16 19:38:27 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000002_3 which needs 1 slots 12/03/16 19:38:27 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000002_3 which needs 1 slots 12/03/16 19:38:27 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything. 
12/03/16 19:38:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar 12/03/16 19:38:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar 12/03/16 19:38:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar 12/03/16 19:38:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar 12/03/16 19:38:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of 
file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar 12/03/16 19:38:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar 12/03/16 19:38:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar 12/03/16 19:38:27 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar 12/03/16 19:38:27 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_-917240261 12/03/16 19:38:27 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_-917240261 spawned. 
12/03/16 19:38:27 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000002_2, Status : FAILED 12/03/16 19:38:27 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000002_2&filter=stdout 12/03/16 19:38:27 WARN mortbay.log: /tasklog: java.io.IOException: Closed 12/03/16 19:38:27 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000002_2&filter=stderr 12/03/16 19:38:27 WARN mortbay.log: /tasklog: java.io.IOException: Closed 12/03/16 19:38:29 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000000_3 done; removing files. 12/03/16 19:38:29 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1 12/03/16 19:38:30 INFO mapred.TaskInProgress: TaskInProgress task_20120316193605670_0003_m_000000 has failed 4 times. 12/03/16 19:38:30 INFO mapred.JobInProgress: Aborting job job_20120316193605670_0003 12/03/16 19:38:30 INFO mapred.JobInProgress: Killing job 'job_20120316193605670_0003' 12/03/16 19:38:30 INFO mapred.JobTracker: Adding task (JOB_CLEANUP) 'attempt_20120316193605670_0003_m_000003_0' to tip task_20120316193605670_0003_m_000003, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380' 12/03/16 19:38:30 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000000_3' 12/03/16 19:38:30 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000003_0 task's state:UNASSIGNED 12/03/16 19:38:30 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000003_0 which needs 1 slots 12/03/16 19:38:30 INFO mapred.TaskTracker: Received KillTaskAction for task: attempt_20120316193605670_0003_m_000000_3 12/03/16 19:38:30 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_20120316193605670_0003_m_000003_0 which needs 1 slots 12/03/16 19:38:30 INFO 
mapred.TaskTracker: Received KillTaskAction for task: attempt_20120316193605670_0003_m_000002_3 12/03/16 19:38:30 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything. 12/03/16 19:38:30 INFO mapred.TaskTracker: About to purge task: attempt_20120316193605670_0003_m_000002_3 12/03/16 19:38:30 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar 12/03/16 19:38:30 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar 12/03/16 19:38:30 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar 12/03/16 19:38:30 INFO filecache.TrackerDistributedCacheManager: Using existing cache of 
file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar 12/03/16 19:38:30 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar 12/03/16 19:38:30 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar 12/03/16 19:38:30 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar 12/03/16 19:38:30 INFO filecache.TrackerDistributedCacheManager: Using existing cache of 
file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar 12/03/16 19:38:30 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000000_3, Status : TIPFAILED 12/03/16 19:38:30 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000000_3&filter=stdout 12/03/16 19:38:30 WARN mortbay.log: /tasklog: java.io.IOException: Closed 12/03/16 19:38:30 WARN mortbay.log: /tasklog: java.io.IOException: Closed 12/03/16 19:38:30 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000000_3&filter=stderr 12/03/16 19:38:32 INFO mapred.TaskTracker: addFreeSlot : current free slots : 1 12/03/16 19:38:32 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_-917240261 exited with exit code 1. Number of tasks it ran: 0 12/03/16 19:38:32 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_3 done; removing files. 12/03/16 19:38:32 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_-1401879945 12/03/16 19:38:32 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_-1401879945 spawned. 
12/03/16 19:38:32 INFO mapred.IndexCache: Map ID attempt_20120316193605670_0003_m_000002_3 not found in cache 12/03/16 19:38:33 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000002_3' 12/03/16 19:38:33 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000002_3, Status : TIPFAILED 12/03/16 19:38:33 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000002_3&filter=stdout 12/03/16 19:38:33 WARN mortbay.log: /tasklog: java.io.IOException: Closed 12/03/16 19:38:33 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000002_3&filter=stderr 12/03/16 19:38:33 WARN mortbay.log: /tasklog: java.io.IOException: Closed 12/03/16 19:38:35 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000002_3 done; removing files. 12/03/16 19:38:38 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_-1401879945 exited with exit code 1. Number of tasks it ran: 0 12/03/16 19:38:38 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000003_0 : Child Error java.io.IOException: Task process exit with nonzero status of 1. at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215) 12/03/16 19:38:39 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000003_0: java.lang.Throwable: Child Error at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228) Caused by: java.io.IOException: Task process exit with nonzero status of 1. at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215) 12/03/16 19:38:41 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000003_0 done; removing files. 
12/03/16 19:38:41 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2 12/03/16 19:38:42 INFO mapred.JobTracker: Adding task (JOB_CLEANUP) 'attempt_20120316193605670_0003_m_000003_1' to tip task_20120316193605670_0003_m_000003, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380' 12/03/16 19:38:42 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000003_0' 12/03/16 19:38:42 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000003_1 task's state:UNASSIGNED 12/03/16 19:38:42 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000003_1 which needs 1 slots 12/03/16 19:38:42 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20120316193605670_0003_m_000003_1 which needs 1 slots 12/03/16 19:38:42 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything. 12/03/16 19:38:42 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar 12/03/16 19:38:42 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar 12/03/16 19:38:42 INFO filecache.TrackerDistributedCacheManager: Using existing cache of 
file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar 12/03/16 19:38:42 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar 12/03/16 19:38:42 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar 12/03/16 19:38:42 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar 12/03/16 19:38:42 INFO filecache.TrackerDistributedCacheManager: Using existing cache of 
file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar 12/03/16 19:38:42 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar 12/03/16 19:38:42 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_797513192 12/03/16 19:38:42 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_797513192 spawned. 12/03/16 19:38:42 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000003_0, Status : FAILED 12/03/16 19:38:42 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000003_0&filter=stdout 12/03/16 19:38:42 WARN mortbay.log: /tasklog: java.io.IOException: Closed 12/03/16 19:38:42 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000003_0&filter=stderr 12/03/16 19:38:42 WARN mortbay.log: /tasklog: java.io.IOException: Closed 12/03/16 19:38:47 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_797513192 exited with exit code 1. Number of tasks it ran: 0 12/03/16 19:38:47 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000003_1 : Child Error java.io.IOException: Task process exit with nonzero status of 1. 
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215) 12/03/16 19:38:48 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000003_1: java.lang.Throwable: Child Error at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228) Caused by: java.io.IOException: Task process exit with nonzero status of 1. at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215) 12/03/16 19:38:50 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000003_1 done; removing files. 12/03/16 19:38:50 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2 12/03/16 19:38:51 INFO mapred.JobTracker: Adding task (JOB_CLEANUP) 'attempt_20120316193605670_0003_m_000003_2' to tip task_20120316193605670_0003_m_000003, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380' 12/03/16 19:38:51 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000003_1' 12/03/16 19:38:51 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000003_2 task's state:UNASSIGNED 12/03/16 19:38:51 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000003_2 which needs 1 slots 12/03/16 19:38:51 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20120316193605670_0003_m_000003_2 which needs 1 slots 12/03/16 19:38:51 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything. 
12/03/16 19:38:51 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar 12/03/16 19:38:51 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar 12/03/16 19:38:51 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar 12/03/16 19:38:51 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar 12/03/16 19:38:51 INFO filecache.TrackerDistributedCacheManager: Using existing cache of 
file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar 12/03/16 19:38:51 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar 12/03/16 19:38:51 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar 12/03/16 19:38:51 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar 12/03/16 19:38:51 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_-744714493 12/03/16 19:38:51 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_-744714493 spawned. 
12/03/16 19:38:52 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000003_1, Status : FAILED 12/03/16 19:38:52 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000003_1&filter=stdout 12/03/16 19:38:52 WARN mortbay.log: /tasklog: java.io.IOException: Closed 12/03/16 19:38:52 WARN mortbay.log: /tasklog: java.io.IOException: Closed 12/03/16 19:38:52 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000003_1&filter=stderr 12/03/16 19:38:56 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_-744714493 exited with exit code 1. Number of tasks it ran: 0 12/03/16 19:38:56 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000003_2 : Child Error java.io.IOException: Task process exit with nonzero status of 1. at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215) 12/03/16 19:38:57 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000003_2: java.lang.Throwable: Child Error at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228) Caused by: java.io.IOException: Task process exit with nonzero status of 1. at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215) 12/03/16 19:38:59 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000003_2 done; removing files. 
12/03/16 19:38:59 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2
12/03/16 19:39:00 INFO mapred.JobTracker: Adding task (JOB_CLEANUP) 'attempt_20120316193605670_0003_m_000003_3' to tip task_20120316193605670_0003_m_000003, for tracker 'tracker_host0.foo.com:localhost/127.0.0.1:52380'
12/03/16 19:39:00 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000003_2'
12/03/16 19:39:00 INFO mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_20120316193605670_0003_m_000003_3 task's state:UNASSIGNED
12/03/16 19:39:00 INFO mapred.TaskTracker: Trying to launch : attempt_20120316193605670_0003_m_000003_3 which needs 1 slots
12/03/16 19:39:00 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20120316193605670_0003_m_000003_3 which needs 1 slots
12/03/16 19:39:00 INFO tasktracker.Localizer: User-directories for the user hortonal are already initialized on this TT. Not doing anything.
12/03/16 19:39:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/6195076953059477397_-844550846_403331036/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ivy/lib/default/libfb303-0.7.0.jar
12/03/16 19:39:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/689420435899933364_-1259474216_2047401154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/zookeeper-3.4.3.jar
12/03/16 19:39:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-4037416361959646795_997223141_484656860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/build/hcatalog/hcatalog-0.4.0-dev.jar
12/03/16 19:39:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7040085976925331976_-1008753114_484646860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/hbase-handler/hive-hbase-handler-0.8.1.jar
12/03/16 19:39:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/9162017560593005236_-706506138_484628860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/metastore/hive-metastore-0.8.1.jar
12/03/16 19:39:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-2246713213602092165_-1280022664_1392357448/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/guava-11.0.jar
12/03/16 19:39:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/-6504864768753063746_-803894434_1698515154/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/build/ivy/lib/hbase-storage-handler/hbase-0.92.1-SNAPSHOT.jar
12/03/16 19:39:00 INFO filecache.TrackerDistributedCacheManager: Using existing cache of file:///home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar->/tmp/hadoop-hortonal/mapred/local/0_0/taskTracker/distcache/7255632108140423909_-1364211090_484637860/file/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/hive/external/build/ql/hive-exec-0.8.1.jar
12/03/16 19:39:00 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20120316193605670_0003_m_-757205208
12/03/16 19:39:00 INFO mapred.JvmManager: JVM Runner jvm_20120316193605670_0003_m_-757205208 spawned.
12/03/16 19:39:01 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000003_2, Status : FAILED
12/03/16 19:39:01 WARN mortbay.log: /tasklog: java.io.IOException: Closed
12/03/16 19:39:01 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000003_2&filter=stdout
12/03/16 19:39:01 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000003_2&filter=stderr
12/03/16 19:39:01 WARN mortbay.log: /tasklog: java.io.IOException: Closed
12/03/16 19:39:05 INFO mapred.JvmManager: JVM : jvm_20120316193605670_0003_m_-757205208 exited with exit code 1. Number of tasks it ran: 0
12/03/16 19:39:05 WARN mapred.TaskRunner: attempt_20120316193605670_0003_m_000003_3 : Child Error
java.io.IOException: Task process exit with nonzero status of 1.
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:39:06 INFO mapred.TaskInProgress: Error from attempt_20120316193605670_0003_m_000003_3: java.lang.Throwable: Child Error
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:215)
12/03/16 19:39:08 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000003_3 done; removing files.
12/03/16 19:39:08 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2
12/03/16 19:39:09 INFO mapred.TaskInProgress: TaskInProgress task_20120316193605670_0003_m_000003 has failed 4 times.
12/03/16 19:39:09 INFO mapred.JobInProgress: Aborting job job_20120316193605670_0003
12/03/16 19:39:09 INFO mapred.JobInProgress$JobSummary: jobId=job_20120316193605670_0003,submitTime=1331926637615,launchTime=1331926637651,finishTime=1331926749186,numMaps=3,numSlotsPerMap=1,numReduces=0,numSlotsPerReduce=1,user=hortonal,queue=default,status=FAILED,mapSlotSeconds=112,reduceSlotsSeconds=0,clusterMapCapacity=2,clusterReduceCapacity=2
12/03/16 19:39:09 INFO mapred.JobHistory: Moving file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/localhost_1331926566105_job_20120316193605670_0003_hortonal_directModeAbortTest to file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/done
12/03/16 19:39:09 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000001_0'
12/03/16 19:39:09 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000003_3'
12/03/16 19:39:09 INFO mapred.JobTracker: Removing task 'attempt_20120316193605670_0003_m_000004_0'
12/03/16 19:39:09 INFO mapred.TaskTracker: Received 'KillJobAction' for job: job_20120316193605670_0003
12/03/16 19:39:09 INFO mapred.TaskRunner: attempt_20120316193605670_0003_m_000001_0 done; removing files.
12/03/16 19:39:09 INFO mapred.IndexCache: Map ID attempt_20120316193605670_0003_m_000001_0 not found in cache
12/03/16 19:39:09 INFO mapred.UserLogCleaner: Adding job_20120316193605670_0003 for user-log deletion with retainTimeStamp:1332013149190
12/03/16 19:39:09 INFO mapred.JobHistory: Moving file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/localhost_1331926566105_job_20120316193605670_0003_conf.xml to file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/logs/history/done
12/03/16 19:39:10 INFO mapred.JobClient: Task Id : attempt_20120316193605670_0003_m_000003_3, Status : TIPFAILED
12/03/16 19:39:10 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000003_3&filter=stdout
12/03/16 19:39:10 WARN mortbay.log: /tasklog: java.io.IOException: Closed
12/03/16 19:39:10 WARN mapred.JobClient: Error reading task outputhttp://localhost:59933/tasklog?plaintext=true&attemptid=attempt_20120316193605670_0003_m_000003_3&filter=stderr
12/03/16 19:39:10 WARN mortbay.log: /tasklog: java.io.IOException: Closed
12/03/16 19:39:10 INFO mapred.JobClient: Job complete: job_20120316193605670_0003
12/03/16 19:39:10 INFO mapred.JobClient: Counters: 12
12/03/16 19:39:10 INFO mapred.JobClient: Job Counters
12/03/16 19:39:10 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=112739
12/03/16 19:39:10 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/03/16 19:39:10 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/03/16 19:39:10 INFO mapred.JobClient: Rack-local map tasks=9
12/03/16 19:39:10 INFO mapred.JobClient: Launched map tasks=9
12/03/16 19:39:10 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
12/03/16 19:39:10 INFO mapred.JobClient: Failed map tasks=1
12/03/16 19:39:10 INFO mapred.JobClient: FileSystemCounters
12/03/16 19:39:10 INFO mapred.JobClient: FILE_BYTES_READ=727
12/03/16 19:39:10 INFO mapred.JobClient: Map-Reduce Framework
12/03/16 19:39:10 INFO mapred.JobClient: Map input records=1
12/03/16 19:39:10 INFO mapred.JobClient: Spilled Records=0
12/03/16 19:39:10 INFO mapred.JobClient: Map output records=1
12/03/16 19:39:10 INFO mapred.JobClient: SPLIT_RAW_BYTES=222
12/03/16 19:39:10 INFO mapred.JobClient: Running job: job_20120316193605670_0003
12/03/16 19:39:10 INFO mapred.JobClient: Job complete: job_20120316193605670_0003
12/03/16 19:39:10 INFO mapred.JobClient: Counters: 12
12/03/16 19:39:10 INFO mapred.JobClient: Job Counters
12/03/16 19:39:10 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=112739
12/03/16 19:39:10 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/03/16 19:39:10 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/03/16 19:39:10 INFO mapred.JobClient: Rack-local map tasks=9
12/03/16 19:39:10 INFO mapred.JobClient: Launched map tasks=9
12/03/16 19:39:10 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
12/03/16 19:39:10 INFO mapred.JobClient: Failed map tasks=1
12/03/16 19:39:10 INFO mapred.JobClient: FileSystemCounters
12/03/16 19:39:10 INFO mapred.JobClient: FILE_BYTES_READ=727
12/03/16 19:39:10 INFO mapred.JobClient: Map-Reduce Framework
12/03/16 19:39:10 INFO mapred.JobClient: Map input records=1
12/03/16 19:39:10 INFO mapred.JobClient: Spilled Records=0
12/03/16 19:39:10 INFO mapred.JobClient: Map output records=1
12/03/16 19:39:10 INFO mapred.JobClient: SPLIT_RAW_BYTES=222
12/03/16 19:39:10 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=127.0.0.1:55343 sessionTimeout=1000000 watcher=org.apache.hcatalog.hbase.snapshot.ZKUtil$ZKWatcher@d03877
12/03/16 19:39:10 INFO zookeeper.ClientCnxn: Opening socket connection to server /127.0.0.1:55343
12/03/16 19:39:10 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/03/16 19:39:10 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:55343, initiating session
12/03/16 19:39:10 INFO server.NIOServerCnxnFactory: Accepted socket connection from /127.0.0.1:53626
12/03/16 19:39:10 INFO server.ZooKeeperServer: Client attempting to establish new session at /127.0.0.1:53626
12/03/16 19:39:10 INFO server.ZooKeeperServer: Established session 0x1361d028cfb0014 with negotiated timeout 40000 for client /127.0.0.1:53626
12/03/16 19:39:10 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:55343, sessionid = 0x1361d028cfb0014, negotiated timeout = 40000
12/03/16 19:39:10 INFO snapshot.ZKBasedRevisionManager: Created root znodes for revision manager.
12/03/16 19:39:11 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb0014
12/03/16 19:39:11 INFO zookeeper.ZooKeeper: Session: 0x1361d028cfb0014 closed
12/03/16 19:39:11 INFO zookeeper.ClientCnxn: EventThread shut down
12/03/16 19:39:11 INFO snapshot.ZKUtil: Disconnected to ZooKeeper
12/03/16 19:39:11 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:53626 which had sessionid 0x1361d028cfb0014
12/03/16 19:39:11 INFO master.HMaster: Cluster shutdown requested
12/03/16 19:39:11 INFO regionserver.HRegionServer: STOPPED: Shutdown requested
12/03/16 19:39:11 INFO master.SplitLogManager$TimeoutMonitor: hrt8n25.cc1.ygridcore.net,55642,1331926568427.splitLogManagerTimeoutMonitor exiting
12/03/16 19:39:12 INFO master.ServerManager: Waiting on regionserver(s) to go down hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:39:12 INFO master.HMaster$2: hrt8n25.cc1.ygridcore.net,55642,1331926568427-BalancerChore exiting
12/03/16 19:39:12 INFO master.CatalogJanitor: hrt8n25.cc1.ygridcore.net,55642,1331926568427-CatalogJanitor exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: Stopping server on 48635
12/03/16 19:39:12 INFO ipc.HBaseServer: IPC Server handler 0 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: PRI IPC Server handler 0 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: PRI IPC Server handler 3 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: Stopping IPC Server listener on 48635
12/03/16 19:39:12 INFO ipc.HBaseServer: IPC Server handler 6 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: PRI IPC Server handler 2 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: PRI IPC Server handler 4 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: PRI IPC Server handler 5 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: IPC Server handler 5 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: PRI IPC Server handler 1 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: IPC Server handler 7 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: PRI IPC Server handler 8 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: IPC Server handler 8 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: PRI IPC Server handler 9 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: IPC Server handler 1 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: IPC Server handler 9 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: PRI IPC Server handler 6 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: IPC Server handler 2 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: IPC Server handler 3 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: PRI IPC Server handler 7 on 48635: exiting
12/03/16 19:39:12 INFO ipc.HBaseServer: IPC Server handler 4 on 48635: exiting
12/03/16 19:39:12 INFO regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
12/03/16 19:39:12 INFO ipc.HBaseServer: Stopping IPC Server Responder
12/03/16 19:39:12 INFO ipc.HBaseServer: Stopping IPC Server Responder
12/03/16 19:39:12 INFO regionserver.SplitLogWorker: SplitLogWorker interrupted while waiting for task, exiting: java.lang.InterruptedException
12/03/16 19:39:12 INFO regionserver.SplitLogWorker: SplitLogWorker hrt8n25.cc1.ygridcore.net,48635,1331926568651 exiting
12/03/16 19:39:12 INFO regionserver.MemStoreFlusher: RegionServer:0;hrt8n25.cc1.ygridcore.net,48635,1331926568651.cacheFlusher exiting
12/03/16 19:39:12 INFO regionserver.LogRoller: LogRoller exiting.
12/03/16 19:39:12 INFO regionserver.HRegionServer$CompactionChecker: RegionServer:0;hrt8n25.cc1.ygridcore.net,48635,1331926568651.compactionChecker exiting
12/03/16 19:39:12 WARN regionserver.HRegionServer: Received close for region we are already opening or closing; 70236052
12/03/16 19:39:12 INFO regionserver.HRegionServer: stopping server hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:39:12 INFO regionserver.HRegionServer: Waiting on 5 regions to close
12/03/16 19:39:12 INFO regionserver.Store: Added file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/directmodeaborttest.directmodeaborttest_718236726961197219/9982529655840a96c1394f5007843e79/my_family/b66ced5146ec4a63b15cbc1f44dba95f, entries=2, sequenceid=17, memsize=320.0, filesize=678.0
12/03/16 19:39:12 INFO regionserver.HRegion: Finished memstore flush of ~320.0/320, currentsize=0.0/0 for region directmodeaborttest.directmodeaborttest_718236726961197219,,1331926632813.9982529655840a96c1394f5007843e79. in 9ms, sequenceid=17, compaction requested=false
12/03/16 19:39:12 INFO regionserver.HRegion: Closed directmodeaborttest.directmodeaborttest_718236726961197219,,1331926632813.9982529655840a96c1394f5007843e79.
12/03/16 19:39:12 INFO regionserver.Store: Added file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/-ROOT-/70236052/info/4e46eee9b4684933be0d9513ea31b1eb, entries=2, sequenceid=18, memsize=368.0, filesize=737.0
12/03/16 19:39:12 INFO regionserver.HRegion: Finished memstore flush of ~368.0/368, currentsize=0.0/0 for region -ROOT-,,0.70236052 in 11ms, sequenceid=18, compaction requested=false
12/03/16 19:39:12 INFO regionserver.HRegion: Closed -ROOT-,,0.70236052
12/03/16 19:39:12 INFO regionserver.Store: Added file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/directhcatoutputformattest.directhcatoutputformattest_4039773820211212643/5c27c681b665d178890076e4158076f1/my_family/8e861d63a8d94ae7a215922814498002, entries=6, sequenceid=19, memsize=968.0, filesize=843.0
12/03/16 19:39:12 INFO regionserver.HRegion: Finished memstore flush of ~968.0/968, currentsize=0.0/0 for region directhcatoutputformattest.directhcatoutputformattest_4039773820211212643,,1331926600188.5c27c681b665d178890076e4158076f1. in 19ms, sequenceid=19, compaction requested=false
12/03/16 19:39:12 INFO regionserver.HRegion: Closed directhcatoutputformattest.directhcatoutputformattest_4039773820211212643,,1331926600188.5c27c681b665d178890076e4158076f1.
12/03/16 19:39:12 INFO regionserver.Store: Added file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/.META./1028785192/info/5dd526df66fd422389b7f6a410cdca24, entries=9, sequenceid=20, memsize=3.0k, filesize=2.7k
12/03/16 19:39:12 INFO regionserver.HRegion: Finished memstore flush of ~3.0k/3048, currentsize=0.0/0 for region .META.,,1.1028785192 in 26ms, sequenceid=20, compaction requested=false
12/03/16 19:39:12 INFO regionserver.HRegion: Closed .META.,,1.1028785192
12/03/16 19:39:12 INFO regionserver.Store: Added file:/home/hortonal/src/hcat/branches/0.4/302/branch-0.4/storage-handlers/hbase/test_default_7863155576483800806/hbase/directoutputformattest_8162747622161253787/de3fa48cf8026e93eabf414ade76a454/my_family/2aba4642b3634997be3cfc54e59a042d, entries=6, sequenceid=21, memsize=968.0, filesize=843.0
12/03/16 19:39:12 INFO regionserver.HRegion: Finished memstore flush of ~968.0/968, currentsize=0.0/0 for region directoutputformattest_8162747622161253787,,1331926580559.de3fa48cf8026e93eabf414ade76a454. in 30ms, sequenceid=21, compaction requested=false
12/03/16 19:39:12 INFO regionserver.HRegion: Closed directoutputformattest_8162747622161253787,,1331926580559.de3fa48cf8026e93eabf414ade76a454.
12/03/16 19:39:13 INFO master.ServerManager: Waiting on regionserver(s) to go down hrt8n25.cc1.ygridcore.net,48635,1331926568651
12/03/16 19:39:13 INFO regionserver.HRegionServer: stopping server hrt8n25.cc1.ygridcore.net,48635,1331926568651; all regions closed.
12/03/16 19:39:13 INFO wal.HLog: RegionServer:0;hrt8n25.cc1.ygridcore.net,48635,1331926568651.logSyncer exiting
12/03/16 19:39:13 INFO regionserver.Leases: RegionServer:0;hrt8n25.cc1.ygridcore.net,48635,1331926568651 closing leases
12/03/16 19:39:13 INFO regionserver.Leases: RegionServer:0;hrt8n25.cc1.ygridcore.net,48635,1331926568651 closed leases
12/03/16 19:39:13 INFO zookeeper.RegionServerTracker: RegionServer ephemeral node deleted, processing expiration [hrt8n25.cc1.ygridcore.net,48635,1331926568651]
12/03/16 19:39:13 INFO master.ServerManager: Cluster shutdown set; hrt8n25.cc1.ygridcore.net,48635,1331926568651 expired; onlineServers=0
12/03/16 19:39:13 INFO server.PrepRequestProcessor: Processed session termination for sessionid: 0x1361d028cfb0001
12/03/16 19:39:13 INFO master.HMaster: Cluster shutdown set; onlineServer=0
12/03/16 19:39:13 INFO zookeeper.ZooKeeper: Session: 0x1361d028cfb0001 closed
12/03/16 19:39:13 INFO zookeeper.ClientCnxn: EventThread shut down
12/03/16 19:39:13 INFO regionserver.HRegionServer: stopping server hrt8n25.cc1.ygridcore.net,48635,1331926568651; zookeeper connection closed.
12/03/16 19:39:13 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40268 which had sessionid 0x1361d028cfb0001
12/03/16 19:39:13 INFO regionserver.HRegionServer: RegionServer:0;hrt8n25.cc1.ygridcore.net,48635,1331926568651 exiting
12/03/16 19:39:13 INFO hbase.MiniHBaseCluster: Hook closing fs=org.apache.hadoop.fs.LocalFileSystem@91a4fb
12/03/16 19:39:13 INFO fs.FileSystem: Could not cancel cleanup thread, though no FileSystems are open
12/03/16 19:39:13 INFO util.JVMClusterUtil: Shutdown of 1 master(s) and 1 regionserver(s) complete
12/03/16 19:39:13 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40267 which had sessionid 0x1361d028cfb0000
12/03/16 19:39:13 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x1361d028cfb0000, likely server has closed socket, closing socket connection and attempting reconnect
12/03/16 19:39:13 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40269 which had sessionid 0x1361d028cfb0002
12/03/16 19:39:13 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x1361d028cfb0002, likely server has closed socket, closing socket connection and attempting reconnect
12/03/16 19:39:13 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40384 which had sessionid 0x1361d028cfb0010
12/03/16 19:39:13 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40338 which had sessionid 0x1361d028cfb0008
12/03/16 19:39:13 INFO server.NIOServerCnxn: Closed socket connection for client /127.0.0.1:40274 which had sessionid 0x1361d028cfb0003
12/03/16 19:39:13 INFO server.NIOServerCnxnFactory: NIOServerCnxn factory exited run method
12/03/16 19:39:13 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x1361d028cfb0003, likely server has closed socket, closing socket connection and attempting reconnect
12/03/16 19:39:13 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x1361d028cfb0010, likely server has closed socket, closing socket connection and attempting reconnect
12/03/16 19:39:13 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x1361d028cfb0008, likely server has closed socket, closing socket connection and attempting reconnect
12/03/16 19:39:13 INFO server.ZooKeeperServer: shutting down
12/03/16 19:39:13 INFO server.SessionTrackerImpl: Shutting down
12/03/16 19:39:13 INFO server.PrepRequestProcessor: Shutting down
12/03/16 19:39:13 INFO server.SyncRequestProcessor: Shutting down
12/03/16 19:39:13 INFO server.PrepRequestProcessor: PrepRequestProcessor exited loop!
12/03/16 19:39:13 INFO server.SyncRequestProcessor: SyncRequestProcessor exited!
12/03/16 19:39:13 INFO server.FinalRequestProcessor: shutdown of request processor complete
12/03/16 19:39:13 INFO zookeeper.MiniZooKeeperCluster: Shutdown MiniZK cluster with all ZK servers
12/03/16 19:39:13 INFO util.AsyncDiskService: Shutting down all AsyncDiskService threads...
12/03/16 19:39:13 INFO util.AsyncDiskService: All AsyncDiskService threads are terminated.
12/03/16 19:39:13 INFO util.MRAsyncDiskService: Deleting toBeDeleted directory.
12/03/16 19:39:13 INFO mapred.TaskTracker: Shutting down: Map-events fetcher for all reduce tasks on tracker_host0.foo.com:localhost/127.0.0.1:52380
12/03/16 19:39:13 INFO ipc.Server: Stopping server on 52380
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 0 on 52380: exiting
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 3 on 52380: exiting
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 1 on 52380: exiting
12/03/16 19:39:13 INFO ipc.Server: Stopping IPC Server listener on 52380
12/03/16 19:39:13 INFO mapred.TaskTracker: Shutting down StatusHttpServer
12/03/16 19:39:13 INFO ipc.Server: Stopping IPC Server Responder
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 2 on 52380: exiting
12/03/16 19:39:13 INFO mapred.TaskTracker: Interrupted. Closing down.
12/03/16 19:39:13 INFO util.AsyncDiskService: Shutting down all AsyncDiskService threads...
12/03/16 19:39:13 INFO util.AsyncDiskService: All AsyncDiskService threads are terminated.
12/03/16 19:39:13 INFO util.MRAsyncDiskService: Deleting toBeDeleted directory.
12/03/16 19:39:13 INFO mapred.JobTracker: Stopping pluginDispatcher
12/03/16 19:39:13 INFO mapred.JobTracker: Stopping infoServer
12/03/16 19:39:13 INFO mapred.JobTracker: Stopping interTrackerServer
12/03/16 19:39:13 INFO ipc.Server: Stopping server on 40861
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 2 on 40861: exiting
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 4 on 40861: exiting
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 5 on 40861: exiting
12/03/16 19:39:13 INFO mapred.JobTracker: Stopping expireTrackers
12/03/16 19:39:13 INFO ipc.Server: Stopping IPC Server listener on 40861
12/03/16 19:39:13 INFO mapred.JobTracker: Stopped interTrackerServer
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 6 on 40861: exiting
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 3 on 40861: exiting
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 0 on 40861: exiting
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 7 on 40861: exiting
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 1 on 40861: exiting
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 9 on 40861: exiting
12/03/16 19:39:13 INFO ipc.Server: IPC Server handler 8 on 40861: exiting
12/03/16 19:39:13 INFO ipc.Server: Stopping IPC Server Responder
12/03/16 19:39:13 INFO mapred.JobTracker: Stopping retirer
12/03/16 19:39:13 INFO mapred.EagerTaskInitializationListener: Stopping Job Init Manager thread
12/03/16 19:39:13 INFO mapred.EagerTaskInitializationListener: JobInitManagerThread interrupted.
12/03/16 19:39:13 INFO mapred.EagerTaskInitializationListener: Shutting down thread pool
12/03/16 19:39:13 INFO mapred.JobTracker: Stopping expireLaunchingTasks
12/03/16 19:39:13 INFO mapred.JobTracker: stopped all jobtracker services
------------- ---------------- ---------------

Testcase: directOutputFormatTest took 18.759 sec
Testcase: directHCatOutputFormatTest took 33.678 sec
Testcase: directModeAbortTest took 118.342 sec
	FAILED
expected:<1> but was:<0>
junit.framework.AssertionFailedError: expected:<1> but was:<0>
	at org.apache.hcatalog.hbase.TestHBaseDirectOutputFormat.directModeAbortTest(TestHBaseDirectOutputFormat.java:316)
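The failure above is a standard JUnit 3 equality-assertion message: `directModeAbortTest` asserted that some value would be 1 but observed 0 (the log does not say which value; the relevant assertion is at TestHBaseDirectOutputFormat.java:316). As a minimal sketch, with a hypothetical class name and values that are not from the HCatalog source, this is the message layout `junit.framework.Assert.assertEquals` produces on failure:

```java
// Hypothetical demo class: reproduces the "expected:<1> but was:<0>" message
// layout that junit.framework (JUnit 3) uses for a failed assertEquals.
public class AssertionMessageDemo {

    // Mirrors JUnit 3's failure-message layout for an equality assertion
    // with no user-supplied message prefix.
    static String format(Object expected, Object actual) {
        return "expected:<" + expected + "> but was:<" + actual + ">";
    }

    public static void main(String[] args) {
        // The directModeAbortTest failure corresponds to expected=1, actual=0.
        System.out.println(format(1, 0)); // prints: expected:<1> but was:<0>
    }
}
```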