2012-10-31 09:46:43,146 INFO  [main] hbase.HBaseTestingUtility(294): Created new mini-cluster data directory: /Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0
2012-10-31 09:46:43,150 INFO  [main] hbase.HBaseTestingUtility(527): Setting test.cache.data to /Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/cache_data in system properties and HBase conf
2012-10-31 09:46:43,151 INFO  [main] hbase.HBaseTestingUtility(527): Setting hadoop.tmp.dir to /Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/hadoop_tmp in system properties and HBase conf
2012-10-31 09:46:43,151 INFO  [main] hbase.HBaseTestingUtility(527): Setting hadoop.log.dir to /Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/hadoop_logs in system properties and HBase conf
2012-10-31 09:46:43,153 INFO  [main] hbase.HBaseTestingUtility(527): Setting mapred.output.dir to /Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/mapred_output in system properties and HBase conf
2012-10-31 09:46:43,154 INFO  [main] hbase.HBaseTestingUtility(527): Setting mapred.local.dir to /Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/mapred_local in system properties and HBase conf
2012-10-31 09:46:43,155 INFO  [main] hbase.HBaseTestingUtility(527): Setting mapred.system.dir to /Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/mapred_system in system properties and HBase conf
2012-10-31 09:46:43,155 INFO  [main] hbase.HBaseTestingUtility(527): Setting mapred.temp.dir to /Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/mapred_temp in system properties and HBase conf
2012-10-31 09:46:43,155 INFO  [main] hbase.HBaseTestingUtility(510): read short circuit is ON for user zhihyu
2012-10-31 09:46:43,477 WARN  [main] namenode.FSNamesystem(550): The dfs.support.append option is in your configuration, however append is not supported. This configuration option is no longer required to enable sync.
2012-10-31 09:46:43,699 WARN  [main] impl.MetricsSystemImpl(137): Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-namenode.properties, hadoop-metrics2.properties
2012-10-31 09:46:43,768 WARN  [main] namenode.FSNamesystem(550): The dfs.support.append option is in your configuration, however append is not supported. This configuration option is no longer required to enable sync.
2012-10-31 09:46:43,930 INFO  [main] log.Slf4jLog(67): Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2012-10-31 09:46:44,002 INFO  [main] log.Slf4jLog(67): jetty-6.1.26
2012-10-31 09:46:44,036 INFO  [main] log.Slf4jLog(67): Extract jar:file:/Users/zhihyu/.m2/repository/org/apache/hadoop/hadoop-core/1.1.0/hadoop-core-1.1.0.jar!/webapps/hdfs to /var/folders/ml/mkn4bk996wjgqmbtsxd0zndh38zm41/T/Jetty_localhost_59703_hdfs____.pp27sw/webapp
2012-10-31 09:46:44,296 INFO  [main] log.Slf4jLog(67): Started SelectChannelConnector@localhost:59703
Starting DataNode 0 with dfs.data.dir: /Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/dfs/data/data1,/Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/dfs/data/data2
2012-10-31 09:46:44,359 WARN  [main] impl.MetricsSystemImpl(137): Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-datanode.properties, hadoop-metrics2.properties
2012-10-31 09:46:44,360 WARN  [main] util.MBeans(59): Hadoop:service=DataNode,name=MetricsSystem,sub=Control
javax.management.InstanceAlreadyExistsException: MXBean already registered with name Hadoop:service=NameNode,name=MetricsSystem,sub=Control
    at com.sun.jmx.mbeanserver.MXBeanLookup.addReference(MXBeanLookup.java:120)
    at com.sun.jmx.mbeanserver.MXBeanSupport.register(MXBeanSupport.java:143)
    at com.sun.jmx.mbeanserver.MBeanSupport.preRegister2(MBeanSupport.java:183)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:941)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
    at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:56)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.initSystemMBean(MetricsSystemImpl.java:500)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:140)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:40)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1582)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1558)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:420)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:283)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:432)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:397)
    at org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit.setUpBeforeClass(TestHLogSplit.java:126)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:226)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:133)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:114)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:188)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:166)
    at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:86)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:101)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
2012-10-31 09:46:45,002 INFO  [main] log.Slf4jLog(67): jetty-6.1.26
2012-10-31 09:46:45,016 INFO  [main] log.Slf4jLog(67): Extract jar:file:/Users/zhihyu/.m2/repository/org/apache/hadoop/hadoop-core/1.1.0/hadoop-core-1.1.0.jar!/webapps/datanode to /var/folders/ml/mkn4bk996wjgqmbtsxd0zndh38zm41/T/Jetty_localhost_59709_datanode____jcr0oz/webapp
2012-10-31 09:46:45,176 INFO  [main] log.Slf4jLog(67): Started SelectChannelConnector@localhost:59709
Starting DataNode 1 with dfs.data.dir: /Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/dfs/data/data3,/Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/dfs/data/data4
2012-10-31 09:46:45,194 WARN  [main] impl.MetricsSystemImpl(137): Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-datanode.properties, hadoop-metrics2.properties
2012-10-31 09:46:45,194 WARN  [main] util.MBeans(59): Hadoop:service=DataNode,name=MetricsSystem,sub=Control
javax.management.InstanceAlreadyExistsException: MXBean already registered with name Hadoop:service=NameNode,name=MetricsSystem,sub=Control
    at com.sun.jmx.mbeanserver.MXBeanLookup.addReference(MXBeanLookup.java:120)
    at com.sun.jmx.mbeanserver.MXBeanSupport.register(MXBeanSupport.java:143)
    at com.sun.jmx.mbeanserver.MBeanSupport.preRegister2(MBeanSupport.java:183)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:941)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
    at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:56)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.initSystemMBean(MetricsSystemImpl.java:500)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:140)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:40)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1582)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1558)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:420)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:283)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:432)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:397)
    at org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit.setUpBeforeClass(TestHLogSplit.java:126)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:226)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:133)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:114)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:188)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:166)
    at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:86)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:101)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
2012-10-31 09:46:45,321 WARN  [main] util.MBeans(59): Hadoop:service=DataNode,name=DataNodeInfo
javax.management.InstanceAlreadyExistsException: Hadoop:service=DataNode,name=DataNodeInfo
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
    at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:56)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.registerMXBean(DataNode.java:547)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:405)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:307)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1644)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1583)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1558)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:420)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:283)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:432)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:397)
    at org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit.setUpBeforeClass(TestHLogSplit.java:126)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:226)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:133)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:114)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:188)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:166)
    at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:86)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:101)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
2012-10-31 09:46:45,326 INFO  [main] log.Slf4jLog(67): jetty-6.1.26
2012-10-31 09:46:45,346 INFO  [main] log.Slf4jLog(67): Extract jar:file:/Users/zhihyu/.m2/repository/org/apache/hadoop/hadoop-core/1.1.0/hadoop-core-1.1.0.jar!/webapps/datanode to /var/folders/ml/mkn4bk996wjgqmbtsxd0zndh38zm41/T/Jetty_localhost_59712_datanode____.q2kecl/webapp
2012-10-31 09:46:45,490 INFO  [main] log.Slf4jLog(67): Started SelectChannelConnector@localhost:59712
Cluster is active
2012-10-31 09:46:57,438 INFO  [main] hbase.ResourceChecker(139): before: regionserver.wal.TestHLogSplit#testLogCannotBeWrittenOnceParsed Thread=57, OpenFileDescriptor=161, MaxFileDescriptor=10240, ConnectionCount=0
Cleaning up cluster for new test
--------------------------
Num entries in /:0
Creating dir for region bbb
Creating dir for region ccc
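The stack traces above show where this mini cluster comes from: TestHLogSplit.setUpBeforeClass (TestHLogSplit.java:126) calls HBaseTestingUtility.startMiniDFSCluster, and tearDownAfterClass later calls shutdownMiniDFSCluster. A minimal sketch of that harness, reconstructed from the stack traces rather than the actual test source (the two-DataNode count is inferred from the "Starting DataNode 0/1" lines above):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class TestHLogSplit {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    // Brings up a MiniDFSCluster with two DataNodes in this JVM; this produces
    // the NameNode/DataNode startup noise above, including the benign
    // InstanceAlreadyExistsException warnings from re-registering metrics
    // MBeans that the NameNode already registered in the same process.
    TEST_UTIL.startMiniDFSCluster(2);
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    // Produces the "Shutting down the Mini HDFS Cluster" sequence at the end
    // of this log, including the InstanceNotFoundException unregister warnings.
    TEST_UTIL.shutdownMiniDFSCluster();
  }
}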
2012-10-31 09:46:57,502 DEBUG [main] wal.SequenceFileLogWriter(189): using new createWriter -- HADOOP-6840
2012-10-31 09:46:57,503 DEBUG [main] wal.SequenceFileLogWriter(193): Path=/hbase/hlog/hlog.dat.0, compression=false
Closing writer 0
2012-10-31 09:46:57,981 DEBUG [main] wal.SequenceFileLogWriter(189): using new createWriter -- HADOOP-6840
2012-10-31 09:46:57,981 DEBUG [main] wal.SequenceFileLogWriter(193): Path=/hbase/hlog/hlog.dat.1, compression=false
Closing writer 1
2012-10-31 09:46:58,412 DEBUG [main] wal.SequenceFileLogWriter(189): using new createWriter -- HADOOP-6840
2012-10-31 09:46:58,412 DEBUG [main] wal.SequenceFileLogWriter(193): Path=/hbase/hlog/hlog.dat.2, compression=false
Closing writer 2
2012-10-31 09:46:58,437 DEBUG [main] wal.SequenceFileLogWriter(189): using new createWriter -- HADOOP-6840
2012-10-31 09:46:58,437 DEBUG [main] wal.SequenceFileLogWriter(193): Path=/hbase/hlog/hlog.dat.3, compression=false
Closing writer 3
2012-10-31 09:46:59,777 DEBUG [main] wal.SequenceFileLogWriter(189): using new createWriter -- HADOOP-6840
2012-10-31 09:46:59,777 DEBUG [main] wal.SequenceFileLogWriter(193): Path=/hbase/hlog/hlog.dat.4, compression=false
Closing writer 4
2012-10-31 09:47:00,701 DEBUG [main] wal.SequenceFileLogWriter(189): using new createWriter -- HADOOP-6840
2012-10-31 09:47:00,701 DEBUG [main] wal.SequenceFileLogWriter(193): Path=/hbase/hlog/hlog.dat.5, compression=false
Closing writer 5
2012-10-31 09:47:00,726 DEBUG [main] wal.SequenceFileLogWriter(189): using new createWriter -- HADOOP-6840
2012-10-31 09:47:00,726 DEBUG [main] wal.SequenceFileLogWriter(193): Path=/hbase/hlog/hlog.dat.6, compression=false
Closing writer 6
2012-10-31 09:47:00,752 DEBUG [main] wal.SequenceFileLogWriter(189): using new createWriter -- HADOOP-6840
2012-10-31 09:47:00,752 DEBUG [main] wal.SequenceFileLogWriter(193): Path=/hbase/hlog/hlog.dat.7, compression=false
Closing writer 7
2012-10-31 09:47:00,776 DEBUG [main] wal.SequenceFileLogWriter(189): using new createWriter -- HADOOP-6840
2012-10-31 09:47:00,776 DEBUG [main] wal.SequenceFileLogWriter(193): Path=/hbase/hlog/hlog.dat.8, compression=false
Closing writer 8
2012-10-31 09:47:00,798 DEBUG [main] wal.SequenceFileLogWriter(189): using new createWriter -- HADOOP-6840
2012-10-31 09:47:00,799 DEBUG [main] wal.SequenceFileLogWriter(193): Path=/hbase/hlog/hlog.dat.9, compression=false
starting
2012-10-31 09:47:00,859 INFO  [main] wal.HLogSplitter(231): Splitting 10 hlog(s) in /hbase/hlog
2012-10-31 09:47:00,860 DEBUG [WriterThread-0] wal.HLogSplitter$WriterThread(957): Writer thread Thread[WriterThread-0,5,main]: starting
2012-10-31 09:47:00,862 DEBUG [WriterThread-2] wal.HLogSplitter$WriterThread(957): Writer thread Thread[WriterThread-2,5,main]: starting
2012-10-31 09:47:00,862 INFO  [main] wal.HLogSplitter(231): Splitting hlog 1 of 10: hdfs://localhost:59702/hbase/hlog/hlog.dat.0, length=1754
2012-10-31 09:47:00,860 DEBUG [WriterThread-1] wal.HLogSplitter$WriterThread(957): Writer thread Thread[WriterThread-1,5,main]: starting
2012-10-31 09:47:00,866 INFO  [main] util.FSHDFSUtils(70): Recovering file hdfs://localhost:59702/hbase/hlog/hlog.dat.0
2012-10-31 09:47:01,868 INFO  [main] util.FSHDFSUtils(120): Finished lease recover attempt for hdfs://localhost:59702/hbase/hlog/hlog.dat.0
2012-10-31 09:47:01,894 DEBUG [main] wal.HLogSplitter(683): Pushed=20 entries from hdfs://localhost:59702/hbase/hlog/hlog.dat.0
2012-10-31 09:47:01,894 INFO  [main] wal.HLogSplitter(231): Splitting hlog 2 of 10: hdfs://localhost:59702/hbase/hlog/hlog.dat.1, length=1754
2012-10-31 09:47:01,894 INFO  [main] util.FSHDFSUtils(70): Recovering file hdfs://localhost:59702/hbase/hlog/hlog.dat.1
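Each "Recovering file" / "Finished lease recover attempt" pair above is util.FSHDFSUtils forcing HDFS lease recovery so the splitter can safely read a WAL whose writer may have died holding the lease. A hedged sketch of the underlying call, assuming Hadoop 1.x's DistributedFileSystem#recoverLease(Path); the retry loop and 1 s sleep are illustrative only (they match the roughly one-second gap between the paired log lines, not HBase's actual retry policy):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LeaseRecoverySketch {
  public static void recover(Configuration conf, Path wal) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(wal.toUri(), conf);
    // Ask the NameNode to revoke the previous writer's lease; returns true
    // once the file is closed and safe to read. (Assumption: the
    // boolean-returning recoverLease; some Hadoop 1.x builds differ.)
    while (!dfs.recoverLease(wal)) {
      Thread.sleep(1000); // illustrative backoff between recovery attempts
    }
  }
}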
2012-10-31 09:47:01,948 DEBUG [WriterThread-2] wal.SequenceFileLogWriter(189): using new createWriter -- HADOOP-6840
2012-10-31 09:47:01,949 DEBUG [WriterThread-2] wal.SequenceFileLogWriter(193): Path=/hbase/t1/bbb/recovered.edits/0000000000000000001.temp, compression=false
2012-10-31 09:47:01,949 DEBUG [WriterThread-2] wal.HLogSplitter(1046): Creating writer path=/hbase/t1/bbb/recovered.edits/0000000000000000001.temp region=bbb
2012-10-31 09:47:01,949 DEBUG [WriterThread-0] wal.SequenceFileLogWriter(189): using new createWriter -- HADOOP-6840
2012-10-31 09:47:01,949 DEBUG [WriterThread-0] wal.SequenceFileLogWriter(193): Path=/hbase/t1/ccc/recovered.edits/0000000000000000001.temp, compression=false
2012-10-31 09:47:01,950 DEBUG [WriterThread-0] wal.HLogSplitter(1046): Creating writer path=/hbase/t1/ccc/recovered.edits/0000000000000000001.temp region=ccc
2012-10-31 09:47:02,895 INFO  [main] util.FSHDFSUtils(120): Finished lease recover attempt for hdfs://localhost:59702/hbase/hlog/hlog.dat.1
2012-10-31 09:47:02,905 DEBUG [main] wal.HLogSplitter(683): Pushed=20 entries from hdfs://localhost:59702/hbase/hlog/hlog.dat.1
2012-10-31 09:47:02,905 INFO  [main] wal.HLogSplitter(231): Splitting hlog 3 of 10: hdfs://localhost:59702/hbase/hlog/hlog.dat.2, length=1754
2012-10-31 09:47:02,905 INFO  [main] util.FSHDFSUtils(70): Recovering file hdfs://localhost:59702/hbase/hlog/hlog.dat.2
2012-10-31 09:47:03,907 INFO  [main] util.FSHDFSUtils(120): Finished lease recover attempt for hdfs://localhost:59702/hbase/hlog/hlog.dat.2
2012-10-31 09:47:03,913 DEBUG [main] wal.HLogSplitter(683): Pushed=20 entries from hdfs://localhost:59702/hbase/hlog/hlog.dat.2
2012-10-31 09:47:03,913 INFO  [main] wal.HLogSplitter(231): Splitting hlog 4 of 10: hdfs://localhost:59702/hbase/hlog/hlog.dat.3, length=1754
2012-10-31 09:47:03,913 INFO  [main] util.FSHDFSUtils(70): Recovering file hdfs://localhost:59702/hbase/hlog/hlog.dat.3
2012-10-31 09:47:04,914 INFO  [main] util.FSHDFSUtils(120): Finished lease recover attempt for hdfs://localhost:59702/hbase/hlog/hlog.dat.3
2012-10-31 09:47:04,919 DEBUG [main] wal.HLogSplitter(683): Pushed=20 entries from hdfs://localhost:59702/hbase/hlog/hlog.dat.3
2012-10-31 09:47:04,919 INFO  [main] wal.HLogSplitter(231): Splitting hlog 5 of 10: hdfs://localhost:59702/hbase/hlog/hlog.dat.4, length=1754
2012-10-31 09:47:04,919 INFO  [main] util.FSHDFSUtils(70): Recovering file hdfs://localhost:59702/hbase/hlog/hlog.dat.4
2012-10-31 09:47:05,920 INFO  [main] util.FSHDFSUtils(120): Finished lease recover attempt for hdfs://localhost:59702/hbase/hlog/hlog.dat.4
2012-10-31 09:47:05,924 DEBUG [main] wal.HLogSplitter(683): Pushed=20 entries from hdfs://localhost:59702/hbase/hlog/hlog.dat.4
2012-10-31 09:47:05,924 INFO  [main] wal.HLogSplitter(231): Splitting hlog 6 of 10: hdfs://localhost:59702/hbase/hlog/hlog.dat.5, length=1754
2012-10-31 09:47:05,924 INFO  [main] util.FSHDFSUtils(70): Recovering file hdfs://localhost:59702/hbase/hlog/hlog.dat.5
2012-10-31 09:47:06,925 INFO  [main] util.FSHDFSUtils(120): Finished lease recover attempt for hdfs://localhost:59702/hbase/hlog/hlog.dat.5
2012-10-31 09:47:06,930 DEBUG [main] wal.HLogSplitter(683): Pushed=20 entries from hdfs://localhost:59702/hbase/hlog/hlog.dat.5
2012-10-31 09:47:06,930 INFO  [main] wal.HLogSplitter(231): Splitting hlog 7 of 10: hdfs://localhost:59702/hbase/hlog/hlog.dat.6, length=1754
2012-10-31 09:47:06,931 INFO  [main] util.FSHDFSUtils(70): Recovering file hdfs://localhost:59702/hbase/hlog/hlog.dat.6
2012-10-31 09:47:07,931 INFO  [main] util.FSHDFSUtils(120): Finished lease recover attempt for hdfs://localhost:59702/hbase/hlog/hlog.dat.6
2012-10-31 09:47:07,935 DEBUG [main] wal.HLogSplitter(683): Pushed=20 entries from hdfs://localhost:59702/hbase/hlog/hlog.dat.6
2012-10-31 09:47:07,935 INFO  [main] wal.HLogSplitter(231): Splitting hlog 8 of 10: hdfs://localhost:59702/hbase/hlog/hlog.dat.7, length=1754
2012-10-31 09:47:07,935 INFO  [main] util.FSHDFSUtils(70): Recovering file hdfs://localhost:59702/hbase/hlog/hlog.dat.7
2012-10-31 09:47:08,937 INFO  [main] util.FSHDFSUtils(120): Finished lease recover attempt for hdfs://localhost:59702/hbase/hlog/hlog.dat.7
2012-10-31 09:47:08,941 DEBUG [main] wal.HLogSplitter(683): Pushed=20 entries from hdfs://localhost:59702/hbase/hlog/hlog.dat.7
2012-10-31 09:47:08,941 INFO  [main] wal.HLogSplitter(231): Splitting hlog 9 of 10: hdfs://localhost:59702/hbase/hlog/hlog.dat.8, length=1754
2012-10-31 09:47:08,941 INFO  [main] util.FSHDFSUtils(70): Recovering file hdfs://localhost:59702/hbase/hlog/hlog.dat.8
2012-10-31 09:47:09,942 INFO  [main] util.FSHDFSUtils(120): Finished lease recover attempt for hdfs://localhost:59702/hbase/hlog/hlog.dat.8
2012-10-31 09:47:09,946 DEBUG [main] wal.HLogSplitter(683): Pushed=20 entries from hdfs://localhost:59702/hbase/hlog/hlog.dat.8
2012-10-31 09:47:09,946 INFO  [main] wal.HLogSplitter(231): Splitting hlog 10 of 10: hdfs://localhost:59702/hbase/hlog/hlog.dat.9, length=0
2012-10-31 09:47:09,946 WARN  [main] wal.HLogSplitter(709): File hdfs://localhost:59702/hbase/hlog/hlog.dat.9 might be still open, length is 0
2012-10-31 09:47:09,947 INFO  [main] util.FSHDFSUtils(70): Recovering file hdfs://localhost:59702/hbase/hlog/hlog.dat.9
2012-10-31 09:47:10,949 INFO  [main] util.FSHDFSUtils(120): Finished lease recover attempt for hdfs://localhost:59702/hbase/hlog/hlog.dat.9
2012-10-31 09:47:11,305 DEBUG [main] wal.HLogSplitter(683): Pushed=5575 entries from hdfs://localhost:59702/hbase/hlog/hlog.dat.9
2012-10-31 09:47:11,307 INFO  [main] wal.HLogSplitter$OutputSink(1180): Waiting for split writer threads to finish
2012-10-31 09:47:11,308 INFO  [main] wal.HLogSplitter$OutputSink(1199): Split writers finished
2012-10-31 09:47:11,319 INFO  [split-log-closeStream-1] wal.HLogSplitter$OutputSink$2(1243): Closed path /hbase/t1/bbb/recovered.edits/0000000000000000001.temp (wrote 5655 edits in 149ms)
2012-10-31 09:47:11,324 DEBUG [split-log-closeStream-1] wal.HLogSplitter$OutputSink$2(1265): Rename /hbase/t1/bbb/recovered.edits/0000000000000000001.temp to /hbase/t1/bbb/recovered.edits/0000000000000000001
2012-10-31 09:47:11,728 INFO  [split-log-closeStream-2] wal.HLogSplitter$OutputSink$2(1243): Closed path /hbase/t1/ccc/recovered.edits/0000000000000000001.temp (wrote 100 edits in 63ms)
2012-10-31 09:47:11,731 DEBUG [split-log-closeStream-2] wal.HLogSplitter$OutputSink$2(1265): Rename /hbase/t1/ccc/recovered.edits/0000000000000000001.temp to /hbase/t1/ccc/recovered.edits/0000000000000000001
2012-10-31 09:47:11,738 DEBUG [main] wal.HLogSplitter(570): Archived processed log hdfs://localhost:59702/hbase/hlog/hlog.dat.0 to /hbase/hlog.old/hlog.dat.0
2012-10-31 09:47:11,740 DEBUG [main] wal.HLogSplitter(570): Archived processed log hdfs://localhost:59702/hbase/hlog/hlog.dat.1 to /hbase/hlog.old/hlog.dat.1
2012-10-31 09:47:11,742 DEBUG [main] wal.HLogSplitter(570): Archived processed log hdfs://localhost:59702/hbase/hlog/hlog.dat.2 to /hbase/hlog.old/hlog.dat.2
2012-10-31 09:47:11,744 DEBUG [main] wal.HLogSplitter(570): Archived processed log hdfs://localhost:59702/hbase/hlog/hlog.dat.3 to /hbase/hlog.old/hlog.dat.3
2012-10-31 09:47:11,746 DEBUG [main] wal.HLogSplitter(570): Archived processed log hdfs://localhost:59702/hbase/hlog/hlog.dat.4 to /hbase/hlog.old/hlog.dat.4
2012-10-31 09:47:11,748 DEBUG [main] wal.HLogSplitter(570): Archived processed log hdfs://localhost:59702/hbase/hlog/hlog.dat.5 to /hbase/hlog.old/hlog.dat.5
2012-10-31 09:47:11,750 DEBUG [main] wal.HLogSplitter(570): Archived processed log hdfs://localhost:59702/hbase/hlog/hlog.dat.6 to /hbase/hlog.old/hlog.dat.6
2012-10-31 09:47:11,752 DEBUG [main] wal.HLogSplitter(570): Archived processed log hdfs://localhost:59702/hbase/hlog/hlog.dat.7 to /hbase/hlog.old/hlog.dat.7
2012-10-31 09:47:11,753 DEBUG [main] wal.HLogSplitter(570): Archived processed log hdfs://localhost:59702/hbase/hlog/hlog.dat.8 to /hbase/hlog.old/hlog.dat.8
2012-10-31 09:47:11,755 DEBUG [main] wal.HLogSplitter(570): Archived processed log hdfs://localhost:59702/hbase/hlog/hlog.dat.9 to /hbase/hlog.old/hlog.dat.9
2012-10-31 09:47:11,756 INFO  [main] wal.HLogSplitter(225): hlog file splitting completed in 10905 ms for /hbase/hlog
2012-10-31 09:47:11,885 INFO  [main] hbase.ResourceChecker(157): after: regionserver.wal.TestHLogSplit#testLogCannotBeWrittenOnceParsed Thread=70 (was 57) - Thread LEAK? -, OpenFileDescriptor=196 (was 161) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=10240 (was 10240), ConnectionCount=0 (was 0)
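Once "Split writers finished", each .temp file above is closed and renamed into place: region bbb ends up with /hbase/t1/bbb/recovered.edits/0000000000000000001 holding 5655 edits, region ccc with 100. A small sketch of how one could inspect that split output, assuming only the standard Hadoop FileSystem API plus the paths and port shown in this log (a hypothetical check, not part of the test itself):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListRecoveredEdits {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // fs.default.name of the mini cluster, taken from the URIs logged above.
    conf.set("fs.default.name", "hdfs://localhost:59702");
    FileSystem fs = FileSystem.get(conf);
    for (String region : new String[] { "bbb", "ccc" }) {
      // recovered.edits files are named by the first sequence id they contain,
      // e.g. 0000000000000000001, after the .temp suffix is renamed away.
      Path edits = new Path("/hbase/t1/" + region + "/recovered.edits");
      for (FileStatus f : fs.listStatus(edits)) {
        System.out.println(f.getPath() + " (" + f.getLen() + " bytes)");
      }
    }
  }
}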
Shutting down the Mini HDFS Cluster
Shutting down DataNode 1
2012-10-31 09:47:11,889 INFO  [main] log.Slf4jLog(67): Stopped SelectChannelConnector@localhost:0
2012-10-31 09:47:11,994 WARN  [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@6dcd2197] datanode.DataXceiverServer(138): DatanodeRegistration(127.0.0.1:59711, storageID=DS-1973354951-10.246.204.25-59711-1351702005493, infoPort=59712, ipcPort=59713):DataXceiveServer:java.nio.channels.AsynchronousCloseException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:157)
    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
    at java.lang.Thread.run(Thread.java:680)
2012-10-31 09:47:11,994 WARN  [ResponseProcessor for block blk_1804084299426336625_1010] hdfs.DFSClient$DFSOutputStream$ResponseProcessor(3180): DFSOutputStream ResponseProcessor exception for block blk_1804084299426336625_1010
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
    at sun.nio.ch.IOUtil.read(IOUtil.java:171)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:245)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.DataInputStream.readFully(DataInputStream.java:178)
    at java.io.DataInputStream.readLong(DataInputStream.java:399)
    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:124)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:3132)
2012-10-31 09:47:11,994 ERROR [org.apache.hadoop.hdfs.server.datanode.DataXceiver@45a81bd5] datanode.DataXceiver(136): DatanodeRegistration(127.0.0.1:59711, storageID=DS-1973354951-10.246.204.25-59711-1351702005493, infoPort=59712, ipcPort=59713):DataXceiver
java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 0 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:284)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:331)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:395)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:573)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:406)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
    at java.lang.Thread.run(Thread.java:680)
2012-10-31 09:47:11,995 WARN  [DataStreamer for file /hbase/hlog/hlog.dat.9 block blk_1804084299426336625_1010] hdfs.DFSClient$DFSOutputStream(3216): Error Recovery for block blk_1804084299426336625_1010 bad datanode[0] 127.0.0.1:59711
2012-10-31 09:47:11,995 ERROR [org.apache.hadoop.hdfs.server.datanode.DataXceiver@1547a16f] datanode.DataXceiver(136): DatanodeRegistration(127.0.0.1:59708, storageID=DS-241817021-10.246.204.25-59708-1351702005182, infoPort=59709, ipcPort=59710):DataXceiver
java.io.EOFException: while trying to read 644 bytes
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:287)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:331)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:395)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:573)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:406)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
    at java.lang.Thread.run(Thread.java:680)
2012-10-31 09:47:11,995 WARN  [DataStreamer for file /hbase/hlog/hlog.dat.9 block blk_1804084299426336625_1010] hdfs.DFSClient$DFSOutputStream(3267): Error Recovery for block blk_1804084299426336625_1010 in pipeline 127.0.0.1:59711, 127.0.0.1:59708: bad datanode 127.0.0.1:59711
2012-10-31 09:47:11,999 ERROR [IPC Server handler 8 on 59702] security.UserGroupInformation(1139): PriviledgedActionException as:zhihyu cause:java.io.IOException: blk_1804084299426336625_1010is being recovered by NameNode, ignoring the request from a client
2012-10-31 09:47:12,000 ERROR [IPC Server handler 0 on 59710] security.UserGroupInformation(1139): PriviledgedActionException as:blk_1804084299426336625_1010 cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException: blk_1804084299426336625_1010is being recovered by NameNode, ignoring the request from a client
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5484)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:781)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
2012-10-31 09:47:12,001 WARN  [DataStreamer for file /hbase/hlog/hlog.dat.9 block blk_1804084299426336625_1010] hdfs.DFSClient$DFSOutputStream(3292): Failed recovery attempt #0 from primary datanode 127.0.0.1:59708
org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.ipc.RemoteException: java.io.IOException: blk_1804084299426336625_1010is being recovered by NameNode, ignoring the request from a client
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5484)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:781)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
    at org.apache.hadoop.ipc.Client.call(Client.java:1092)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at $Proxy12.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:2059)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:2027)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:2107)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
    at org.apache.hadoop.ipc.Client.call(Client.java:1092)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
    at $Proxy17.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3290)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2300(DFSClient.java:2754)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2958)
2012-10-31 09:47:12,001 WARN  [DataStreamer for file /hbase/hlog/hlog.dat.9 block blk_1804084299426336625_1010] hdfs.DFSClient$DFSOutputStream(3331): Error Recovery for block blk_1804084299426336625_1010 failed because recovery from primary datanode 127.0.0.1:59708 failed 1 times. Pipeline was 127.0.0.1:59711, 127.0.0.1:59708. Will retry...
2012-10-31 09:47:12,995 WARN  [main] util.MBeans(73): Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-513119224
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-513119224
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:415)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:403)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:506)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:2066)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:860)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:569)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:553)
    at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniDFSCluster(HBaseTestingUtility.java:539)
    at org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit.tearDownAfterClass(TestHLogSplit.java:131)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:36)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:226)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:133)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:114)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:188)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:166)
    at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:86)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:101)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
2012-10-31 09:47:12,996 WARN  [main] datanode.FSDatasetAsyncDiskService(121): AsyncDiskService has already shut down.
Shutting down DataNode 0
2012-10-31 09:47:12,996 INFO  [main] log.Slf4jLog(67): Stopped SelectChannelConnector@localhost:0
2012-10-31 09:47:13,099 WARN  [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@182153fe] datanode.DataXceiverServer(138): DatanodeRegistration(127.0.0.1:59708, storageID=DS-241817021-10.246.204.25-59708-1351702005182, infoPort=59709, ipcPort=59710):DataXceiveServer:java.nio.channels.AsynchronousCloseException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:157)
    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
    at java.lang.Thread.run(Thread.java:680)
2012-10-31 09:47:14,100 WARN  [DataNode: [/Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/dfs/data/data1,/Users/zhihyu/trunk-hbase/hbase-server/target/test-data/59fb2d94-1d21-44d2-9508-19eabc11b5ce/dfscluster_6d7d9887-aecb-4216-b07b-ab6153ac83c0/dfs/data/data2]] util.MBeans(73): Hadoop:service=DataNode,name=DataNodeInfo
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=DataNodeInfo
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:415)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:403)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:506)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.unRegisterMXBean(DataNode.java:552)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:798)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1533)
    at java.lang.Thread.run(Thread.java:680)
2012-10-31 09:47:14,101 WARN  [main] util.MBeans(73): Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-752510089
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-752510089
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:415)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:403)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:506)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:2066)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:860)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:569)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:553)
    at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniDFSCluster(HBaseTestingUtility.java:539)
    at org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit.tearDownAfterClass(TestHLogSplit.java:131)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:36)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:226)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:133)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:114)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:188)
    at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:166)
    at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:86)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:101)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
2012-10-31 09:47:14,103 WARN  [main] datanode.FSDatasetAsyncDiskService(121): AsyncDiskService has already shut down.
2012-10-31 09:47:14,103 INFO  [main] log.Slf4jLog(67): Stopped SelectChannelConnector@localhost:0
2012-10-31 09:47:14,206 WARN  [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$ReplicationMonitor@2afb6c5f] namenode.FSNamesystem$ReplicationMonitor(2779): ReplicationMonitor thread received InterruptedException.
java.lang.InterruptedException: sleep interrupted