Description
IntegrationTestTableSnapshotInputFormat fails, first with:
2015-01-15 03:56:36,175 INFO [main] mapreduce.Job: Task Id : attempt_1420685782128_0080_m_000014_2, Status : FAILED
Error: java.io.IOException: java.lang.NoClassDefFoundError: com/yammer/metrics/core/MetricsRegistry
    at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:858)
    at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:756)
    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:729)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4885)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4851)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4822)
    at org.apache.hadoop.hbase.client.ClientSideRegionScanner.<init>(ClientSideRegionScanner.java:60)
    at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl$RecordReader.initialize(TableSnapshotInputFormatImpl.java:190)
    at org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormat$TableSnapshotRegionRecordReader.initialize(TableSnapshotInputFormat.java:139)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:545)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:783)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
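The NoClassDefFoundError suggests the yammer metrics-core jar is not on the map task classpath. Below is a minimal sketch of how a job driver could ship that jar with the job; it assumes the TableMapReduceUtil.addDependencyJars API of the 0.98/1.0 line and is only illustrative, not the actual fix applied here.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;
import com.yammer.metrics.core.MetricsRegistry;

public class ShipMetricsJarExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "TableSnapshotInputFormat verification");
    // Ship the standard HBase dependency jars with the job.
    TableMapReduceUtil.addDependencyJars(job);
    // Also ship the jar that contains com.yammer.metrics.core.MetricsRegistry,
    // so the class resolves inside the YARN task classloader.
    TableMapReduceUtil.addDependencyJars(job.getConfiguration(), MetricsRegistry.class);
  }
}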
and then, once that is fixed, with:
Error: java.io.IOException: java.lang.IllegalStateException: bucketCacheSize <= 0; Check hbase.bucketcache.size setting and/or server java heap size
    at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:858)
    at org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:756)
    at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:729)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4885)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4851)
    at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4822)
    at org.apache.hadoop.hbase.client.ClientSideRegionScanner.<init>(ClientSideRegionScanner.java:60)
    at org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:491)
    at org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:536)
    at org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:186)
    at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:250)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:3762)
    at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:832)
    at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:829)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
ndimiduk, do you know about the second failure? We could try setting the block cache size to 0.
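A minimal sketch of what "setting the block cache size to 0" could look like in the configuration the task-side ClientSideRegionScanner picks up. It assumes that zeroing hfile.block.cache.size and clearing the bucket cache settings is enough to keep CacheConfig from sizing a bucket cache against the small task heap; this is not verified against any committed fix.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;

public class SnapshotScanCacheConfig {
  // Build a configuration for the MR task side, where no block cache is needed.
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // hfile.block.cache.size = 0: no on-heap LRU block cache for the scan.
    conf.setFloat(HConstants.HFILE_BLOCK_CACHE_SIZE_KEY, 0f);
    // Drop any inherited bucket cache settings; it is the bucket cache sizing
    // (hbase.bucketcache.size vs. the task heap) that triggers the
    // IllegalStateException above.
    conf.unset("hbase.bucketcache.ioengine");
    conf.setFloat("hbase.bucketcache.size", 0f);
    return conf;
  }
}

Either knob alone may be sufficient; the point is that a client-side snapshot scan running inside a map task has no use for a region-server-sized cache.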