RegionServer status for cvp328.sjc.aristanetworks.com,16201,1646451103340 as of Tue Mar 08 08:36:41 UTC 2022


Version Info:
===========================================================
HBase 2.4.8
Source code repository git://buildkitsandbox/hbase-src revision=cb19adc2e3254e1e67bc5509e436fb407959232f
Compiled by root on Sat Nov 6 00:59:03 UTC 2021
From source with checksum 6bd029f558ce15696daeaa252cc84fc34376feb5ae5f3cfeb21ba8313b6eae624a3d13f771c2e661025d17c04a89fbbb05c36baa9680b429ab9e268ce3d05366
Hadoop 3.1.4
Source code repository Unknown revision=Unknown
Compiled by root on 2021-10-12T10:22Z


Tasks:
===========================================================
Task: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=16201
Status: WAITING:Waiting for a call
Running for 277484s

Task: RpcServer.default.FPBQ.Fifo.handler=15,queue=1,port=16201
Status: RUNNING:Servicing call from 172.30.41.118:40998: Multi
Running for 277483s

Task: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=16201
Status: WAITING:Waiting for a call
Running for 277314s

Task: RpcServer.priority.RWQ.Fifo.read.handler=4,queue=1,port=16201
Status: WAITING:Waiting for a call
Running for 277305s

Task: RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16201
Status: WAITING:Waiting for a call
Running for 277304s

Task: RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16201
Status: WAITING:Waiting for a call
Running for 277304s

Task: RpcServer.priority.RWQ.Fifo.read.handler=6,queue=1,port=16201
Status: WAITING:Waiting for a call
Running for 277304s

Task: RpcServer.priority.RWQ.Fifo.read.handler=7,queue=1,port=16201
Status: WAITING:Waiting for a call
Running for 277294s

Task: RpcServer.priority.RWQ.Fifo.read.handler=8,queue=1,port=16201
Status: WAITING:Waiting for a call
Running for 277294s

Task: RpcServer.default.FPBQ.Fifo.handler=14,queue=0,port=16201
Status: RUNNING:Servicing call from 172.30.41.155:38540: Multi
Running for 277290s

Task: RpcServer.default.FPBQ.Fifo.handler=13,queue=1,port=16201
Status: RUNNING:Servicing call from 172.30.41.118:40944: Multi
Running for 277236s

Task: RpcServer.default.FPBQ.Fifo.handler=12,queue=0,port=16201
Status: RUNNING:Servicing call from 172.31.0.219:57522: Multi
Running for 277236s

Task: RpcServer.default.FPBQ.Fifo.handler=11,queue=1,port=16201
Status: RUNNING:Servicing call from 172.31.0.219:57526: Multi
Running for 277236s

Task: RpcServer.default.FPBQ.Fifo.handler=10,queue=0,port=16201
Status: RUNNING:Servicing call from 172.30.41.155:38522: Multi
Running for 277236s

Task: RpcServer.default.FPBQ.Fifo.handler=9,queue=1,port=16201
Status: RUNNING:Servicing call from 172.30.41.155:38564: Multi
Running for 277233s

Task: RpcServer.default.FPBQ.Fifo.handler=6,queue=0,port=16201
Status: RUNNING:Servicing call from 172.30.41.118:40998: Multi
Running for 277233s

Task: RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=16201
Status: RUNNING:Servicing call from 172.30.41.118:40944: Multi
Running for 277233s

Task: RpcServer.default.FPBQ.Fifo.handler=7,queue=1,port=16201
Status: RUNNING:Servicing call from 172.31.0.219:57562: Multi
Running for 277233s

Task: RpcServer.default.FPBQ.Fifo.handler=8,queue=0,port=16201
Status: RUNNING:Servicing call from 172.30.41.155:38538: Multi
Running for 277233s

Task: RpcServer.default.FPBQ.Fifo.handler=5,queue=1,port=16201
Status: RUNNING:Servicing call from 172.30.41.118:40992: Multi
Running for 277233s

Task: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=16201
Status: RUNNING:Servicing call from 172.30.41.118:40952: Multi
Running for 277232s

Task: RpcServer.default.FPBQ.Fifo.handler=1,queue=1,port=16201
Status: RUNNING:Servicing call from 172.30.41.118:40992: Multi
Running for 277232s

Task: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=16201
Status: RUNNING:Servicing call from 172.30.41.118:41050: Multi
Running for 277232s

Task: RpcServer.default.FPBQ.Fifo.handler=3,queue=1,port=16201
Status: RUNNING:Servicing call from 172.31.0.219:57562: Multi
Running for 277232s

Task: RpcServer.priority.RWQ.Fifo.read.handler=9,queue=1,port=16201
Status: WAITING:Waiting for a call
Running for 277186s


RowLocks:
===========================================================
aeris_v2,9b121c22e18472616debd0996ee2a4cb,RowLockContext{row=\x09\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x15tag-device-bgp-status, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@3b82aa4e[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=1,queue=1,port=16201}
aeris_v2,9b121c22e18472616debd0996ee2a4cb,RowLockContext{row=\x09\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x16tag-device-ospf-status, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@2273cb1d[Write locks = 0, Read locks = 2], count=2, threadName=RpcServer.default.FPBQ.Fifo.handler=1,queue=1,port=16201}
aeris_v2,9b121c22e18472616debd0996ee2a4cb,RowLockContext{row=\x09\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4&event-cusum-stats-connectivity-monitor, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@7dbcfb41[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=1,queue=1,port=16201}
aeris_v2,9b121c22e18472616debd0996ee2a4cb,RowLockContext{row=\x09\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x1Bversion-connectivitymonitor, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@67f36848[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=11,queue=1,port=16201}
aeris_v2,9b121c22e18472616debd0996ee2a4cb,RowLockContext{row=\x09\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x17aggregate-disk-usage-1m, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@490d7e22[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=1,queue=1,port=16201}
aeris_v2,9b121c22e18472616debd0996ee2a4cb,RowLockContext{row=\x09\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x19event-vxlan-config-sanity, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@751a6f3d[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=11,queue=1,port=16201}
aeris_v2,9b121c22e18472616debd0996ee2a4cb,RowLockContext{row=\x09\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x0Ftag-device-mpls, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@6fe6e42c[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=1,queue=1,port=16201}
aeris_v2,9b121c22e18472616debd0996ee2a4cb,RowLockContext{row=\x09\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x16version-lldp-neighbors, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@132b70b0[Write locks = 0, Read locks = 2], count=2, threadName=RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=16201}
aeris_v2,9b121c22e18472616debd0996ee2a4cb,RowLockContext{row=\x09\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4 event-threshold-analytics-errors, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@3175e099[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=1,queue=1,port=16201}
aeris_v2,80a1ffe1dc2d8f258a6367d204dc64df,RowLockContext{row=\x12\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x1Eevent-out-of-config-compliance, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@1730120d[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=9,queue=1,port=16201}
aeris_v2,80a1ffe1dc2d8f258a6367d204dc64df,RowLockContext{row=\x12\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x13version-xcvr-status, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@4faf4343[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=9,queue=1,port=16201}
aeris_v2,7534b3b1812808d2d3c16342fbf347a5,RowLockContext{row=\x01\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x12version-lldp-state, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@74904f6[Write locks = 0, Read locks = 2], count=2, threadName=RpcServer.default.FPBQ.Fifo.handler=10,queue=0,port=16201}
aeris_v2,7534b3b1812808d2d3c16342fbf347a5,RowLockContext{row=\x01\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x1Eevent-link-administrative-down, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@19264dae[Write locks = 0, Read locks = 2], count=2, threadName=RpcServer.default.FPBQ.Fifo.handler=6,queue=0,port=16201}
aeris_v2,7534b3b1812808d2d3c16342fbf347a5,RowLockContext{row=\x01\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x1Aversion-environment-status, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@e9184fd[Write locks = 0, Read locks = 2], count=2, threadName=RpcServer.default.FPBQ.Fifo.handler=10,queue=0,port=16201}
aeris_v2,7534b3b1812808d2d3c16342fbf347a5,RowLockContext{row=\x01\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x19network-combined-topology, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@3b917b37[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=7,queue=1,port=16201}
aeris_v2,7534b3b1812808d2d3c16342fbf347a5,RowLockContext{row=\x01\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x0Fcount-intf-role, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@7430a607[Write locks = 0, Read locks = 3], count=3, threadName=RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=16201}
aeris_v2,7534b3b1812808d2d3c16342fbf347a5,RowLockContext{row=\x01\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x1Bevent-threshold-outdiscards, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@5fd0ce12[Write locks = 0, Read locks = 3], count=3, threadName=RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=16201}
aeris_v2,7d1b53259f1d8280a47f7704f98f2c9f,RowLockContext{row=\x02\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x1Daggregate-total-cpu-usage-15m, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@5d641f79[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=12,queue=0,port=16201}
aeris_v2,7d1b53259f1d8280a47f7704f98f2c9f,RowLockContext{row=\x02\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x1Dtag-device-terminattr-version, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@6d9c9f21[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=16201}
aeris_v2,7d1b53259f1d8280a47f7704f98f2c9f,RowLockContext{row=\x02\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4#event-threshold-intf-in-utilization, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@1ced1c0b[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=12,queue=0,port=16201}
aeris_v2,7d1b53259f1d8280a47f7704f98f2c9f,RowLockContext{row=\x02\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x12version-intf-rates, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@4594f1da[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=12,queue=0,port=16201}
aeris_v2,7d1b53259f1d8280a47f7704f98f2c9f,RowLockContext{row=\x02\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x13event-cpu-threshold, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@3876b1c6[Write locks = 0, Read locks = 2], count=2, threadName=RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=16201}
aeris_v2,7d1b53259f1d8280a47f7704f98f2c9f,RowLockContext{row=\x02\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x0Bbugexposure, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@311a447e[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=16201}
aeris_v2,ccdacf4b414cb8d1bc548efc77c663cb,RowLockContext{row=\x0C\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x12bug-alerts-devices, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@45e30b12[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=3,queue=1,port=16201}
aeris_v2,ccdacf4b414cb8d1bc548efc77c663cb,RowLockContext{row=\x0C\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x1Faggregate-per-core-cpu-usage-1m, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@271b2855[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=3,queue=1,port=16201}
aeris_v2,ccdacf4b414cb8d1bc548efc77c663cb,RowLockContext{row=\x0C\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4\x15version-nexthop-group, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@2a6e55f1[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=3,queue=1,port=16201}
aeris_v2,ccdacf4b414cb8d1bc548efc77c663cb,RowLockContext{row=\x0C\xCFb"\xDAX\x00\x00\x00\x01\x93\xC4\x08Turbines\xC4\x06status\xC4#event-threshold-ptp-mean-path-delay, readWriteLock=java.util.concurrent.locks.ReentrantReadWriteLock@49b4cd76[Write locks = 0, Read locks = 1], count=1, threadName=RpcServer.default.FPBQ.Fifo.handler=8,queue=0,port=16201}


Executors:
===========================================================
Status for executor: Executor-7-RS_COMPACTED_FILES_DISCHARGER-regionserver/cvp328:16201
=======================================
0 events queued, 0 running

Status for executor: Executor-10-RS_SWITCH_RPC_THROTTLE-regionserver/cvp328:16201
=======================================
0 events queued, 0 running

Status for executor: Executor-8-RS_REGION_REPLICA_FLUSH_OPS-regionserver/cvp328:16201
=======================================
0 events queued, 0 running

Status for executor: Executor-3-RS_OPEN_PRIORITY_REGION-regionserver/cvp328:16201
=======================================
0 events queued, 0 running

Status for executor: Executor-4-RS_CLOSE_REGION-regionserver/cvp328:16201
=======================================
0 events queued, 0 running

Status for executor: Executor-5-RS_CLOSE_META-regionserver/cvp328:16201
=======================================
0 events queued, 0 running

Status for executor: Executor-6-RS_LOG_REPLAY_OPS-regionserver/cvp328:16201
=======================================
0 events queued, 0 running

Status for executor: Executor-2-RS_OPEN_META-regionserver/cvp328:16201
=======================================
0 events queued, 0 running

Status for executor: Executor-1-RS_OPEN_REGION-regionserver/cvp328:16201
=======================================
0 events queued, 0 running

Status for executor: Executor-9-RS_REFRESH_PEER-regionserver/cvp328:16201
=======================================
0 events queued, 0 running


Process Thread Dump:
125 active threads
Thread 11792 (qtp1526296937-11792):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 3
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:973)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1023)
    java.lang.Thread.run(Thread.java:748)
Thread 11791 (qtp1526296937-11791):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 2
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:973)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1023)
    java.lang.Thread.run(Thread.java:748)
Thread 11783 (Timer for 'HBase' metrics system):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 25
  Stack:
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Thread 11356 (IPC Parameter Sending Thread #3):
  State: TIMED_WAITING
  Blocked count: 42
  Waited count: 399
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 10508 (Session-Scheduler-a33b4e3-1):
  State: WAITING
  Blocked count: 0
  Waited count: 4
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@52223312
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 10480 (Connector-Scheduler-289778cd-1):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 70
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 587 (regionserver/cvp328:16201-shortCompactions-0):
  State: WAITING
  Blocked count: 2618
  Waited count: 5107
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@f7be27d
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 435 (Log-Archiver-0):
  State: WAITING
  Blocked count: 370
  Waited count: 780
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@185835f8
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 343 (regionserver/cvp328:16201.Chore.3):
  State: WAITING
  Blocked count: 12
  Waited count: 50894
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@10f2b20f
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 342 (regionserver/cvp328:16201.Chore.2):
  State: TIMED_WAITING
  Blocked count: 8
  Waited count: 50981
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 261 (Log-Archiver-0):
  State: WAITING
  Blocked count: 442
  Waited count: 971
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3703a348
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 250 (pool-1-thread-5):
  State: WAITING
  Blocked count: 0
  Waited count: 923
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@71d15c4e
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 245 (pool-1-thread-4):
  State: WAITING
  Blocked count: 0
  Waited count: 924
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@71d15c4e
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 242 (pool-1-thread-3):
  State: WAITING
  Blocked count: 0
  Waited count: 924
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@71d15c4e
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 239 (pool-1-thread-2):
  State: WAITING
  Blocked count: 0
  Waited count: 924
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@71d15c4e
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 236 (pool-1-thread-1):
  State: WAITING
  Blocked count: 0
  Waited count: 924
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@71d15c4e
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 234 (ShortCircuitCache_SlotReleaser):
  State: WAITING
  Blocked count: 0
  Waited count: 88
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@6e5b9f8c
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 232 (RS_COMPACTED_FILES_DISCHARGER-regionserver/cvp328:16201-9):
  State: WAITING
  Blocked count: 113
  Waited count: 2665
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@392c7985
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 231 (RS_COMPACTED_FILES_DISCHARGER-regionserver/cvp328:16201-8):
  State: WAITING
  Blocked count: 106
  Waited count: 2656
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@392c7985
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 225 (AsyncFSWAL-0-hdfs://mycluster/hbase):
  State: WAITING
  Blocked count: 0
  Waited count: 369
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1fe96086
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 224 (RS_OPEN_META-regionserver/cvp328:16201-0):
  State: WAITING
  Blocked count: 27
  Waited count: 56
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1820054b
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 220 (RS_COMPACTED_FILES_DISCHARGER-regionserver/cvp328:16201-7):
  State: WAITING
  Blocked count: 72
  Waited count: 2587
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@392c7985
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 219 (RS_COMPACTED_FILES_DISCHARGER-regionserver/cvp328:16201-6):
  State: WAITING
  Blocked count: 119
  Waited count: 2668
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@392c7985
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 218 (RS_COMPACTED_FILES_DISCHARGER-regionserver/cvp328:16201-5):
  State: WAITING
  Blocked count: 118
  Waited count: 2681
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@392c7985
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 217 (RS_COMPACTED_FILES_DISCHARGER-regionserver/cvp328:16201-4):
  State: WAITING
  Blocked count: 134
  Waited count: 2692
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@392c7985
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 216 (RS_COMPACTED_FILES_DISCHARGER-regionserver/cvp328:16201-3):
  State: WAITING
  Blocked count: 60
  Waited count: 2576
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@392c7985
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 215 (RS_COMPACTED_FILES_DISCHARGER-regionserver/cvp328:16201-2):
  State: WAITING
  Blocked count: 114
  Waited count: 2656
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@392c7985
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 214 (RS_COMPACTED_FILES_DISCHARGER-regionserver/cvp328:16201-1):
  State: WAITING
  Blocked count: 78
  Waited count: 2588
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@392c7985
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 144 (RS_CLOSE_REGION-regionserver/cvp328:16201-2):
  State: WAITING
  Blocked count: 239
  Waited count: 161
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@e30d88a
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748) Thread 143 (RS_CLOSE_REGION-regionserver/cvp328:16201-1): State: WAITING Blocked count: 200 Waited count: 177 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@e30d88a Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 142 (RS_CLOSE_REGION-regionserver/cvp328:16201-0): State: WAITING Blocked count: 193 Waited count: 102 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@e30d88a Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 128 (RS_OPEN_REGION-regionserver/cvp328:16201-2): State: WAITING Blocked count: 131 Waited count: 172 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@10d3ac01 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 127 (RS_OPEN_REGION-regionserver/cvp328:16201-1): State: WAITING Blocked count: 163 Waited count: 244 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@10d3ac01 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 123 (RS_COMPACTED_FILES_DISCHARGER-regionserver/cvp328:16201-0): State: WAITING Blocked count: 107 Waited count: 2649 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@392c7985 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:748) Thread 120 (RS-EventLoopGroup-1-16): State: RUNNABLE Blocked count: 21 Waited count: 11 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 119 (RS-EventLoopGroup-1-15): State: RUNNABLE Blocked count: 27 Waited count: 15 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 117 (RS-EventLoopGroup-1-14): State: RUNNABLE Blocked count: 24 Waited count: 14 Stack: 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 116 (RS-EventLoopGroup-1-13): State: RUNNABLE Blocked count: 13 Waited count: 9 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 115 (RS-EventLoopGroup-1-12): State: RUNNABLE Blocked count: 16 Waited count: 9 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 113 (RS-EventLoopGroup-1-11): State: RUNNABLE Blocked count: 20 Waited count: 4 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 112 (RS-EventLoopGroup-1-10): State: RUNNABLE Blocked count: 19 Waited count: 4 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 111 (RS-EventLoopGroup-1-9): State: RUNNABLE Blocked count: 15 Waited count: 7 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 108 (RS_OPEN_REGION-regionserver/cvp328:16201-0): State: WAITING Blocked count: 149 Waited count: 238 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@10d3ac01 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 107 (ShortCircuitCache_Cleaner): State: TIMED_WAITING Blocked count: 0 Waited count: 3700 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 106 (org.apache.hadoop.hdfs.PeerCache@2f0cf5cf): State: TIMED_WAITING Blocked count: 0 Waited count: 91759 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:748) Thread 105 (Monitor thread for TaskMonitor): State: TIMED_WAITING Blocked count: 0 Waited count: 27658 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:302) java.lang.Thread.run(Thread.java:748) Thread 104 (RS-EventLoopGroup-1-8): State: RUNNABLE 
Blocked count: 12 Waited count: 8 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 103 (AsyncFSWAL-0-hdfs://mycluster/hbase): State: WAITING Blocked count: 41 Waited count: 341034 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@278d97bc Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 102 (RS-EventLoopGroup-1-7): State: RUNNABLE Blocked count: 48 Waited count: 9 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 101 (RS-EventLoopGroup-1-6): State: RUNNABLE Blocked count: 45 Waited count: 10 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 100 (RS-EventLoopGroup-1-5): State: RUNNABLE Blocked count: 46 Waited count: 10 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 99 (LeaseRenewer:cvp@mycluster): State: TIMED_WAITING Blocked count: 9549 Waited count: 291364 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:412) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:308) java.lang.Thread.run(Thread.java:748) Thread 98 (RS-EventLoopGroup-1-4): State: RUNNABLE Blocked count: 12 Waited count: 9 Stack: org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:748) Thread 97 (cvp328:16201Replication Statistics #0): State: TIMED_WAITING Blocked count: 7 Waited count: 925 Stack: sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 96 (ReplicationExecutor-0): State: WAITING Blocked count: 1 Waited count: 2 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@536bdc5e Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 95 (regionserver/cvp328:16201-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 0 Waited count: 925 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 93 (regionserver/cvp328:16201-MemStoreChunkPool Statistics): State: TIMED_WAITING Blocked count: 0 Waited count: 925 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 85 (regionserver/cvp328:16201.leaseChecker): State: TIMED_WAITING Blocked count: 0 Waited count: 27657 Stack: java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.regionserver.LeaseManager.run(LeaseManager.java:90) Thread 87 (regionserver/cvp328:16201.procedureResultReporter): State: WAITING Blocked count: 0 Waited count: 1 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@c952902 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.regionserver.RemoteProcedureResultReporter.run(RemoteProcedureResultReporter.java:77) Thread 90 (MemStoreFlusher.1): State: TIMED_WAITING Blocked count: 1900 Waited count: 33192 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) java.util.concurrent.DelayQueue.poll(DelayQueue.java:70) org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:334) Thread 88 (MemStoreFlusher.0): State: TIMED_WAITING Blocked count: 1945 Waited count: 33339 Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) java.util.concurrent.DelayQueue.poll(DelayQueue.java:70) org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:334) Thread 86 (regionserver/cvp328:16201.logRoller): State: WAITING Blocked count: 2054 Waited count: 12669 Waiting on java.util.concurrent.CompletableFuture$Signaller@40442c6b Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707) java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323) java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742) java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.write(AsyncProtobufLogWriter.java:189) org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.writeMagicAndWALHeader(AsyncProtobufLogWriter.java:202) 
org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:170) org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:113) org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:669) org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:130) org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:841) org.apache.hadoop.hbase.wal.AbstractWALRoller$RollController.rollWal(AbstractWALRoller.java:268) org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:187) Thread 84 (regionserver/cvp328:16201.Chore.1): State: WAITING Blocked count: 11 Waited count: 51996 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@10f2b20f Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:748) Thread 83 (regionserver/cvp328:16201-longCompactions-0): State: WAITING Blocked count: 348 Waited count: 793 Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@350a442a Stack: sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
    org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:106)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 82 (JvmPauseMonitor):
  State: TIMED_WAITING
  Blocked count: 46
  Waited count: 543673
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hadoop.hbase.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:154)
    java.lang.Thread.run(Thread.java:748)
Thread 76 (RpcClient-timer-pool-0):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 26805030
  Stack:
    java.lang.Thread.sleep(Native Method)
    org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:566)
    org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:462)
    java.lang.Thread.run(Thread.java:748)
Thread 79 (RS-EventLoopGroup-1-3):
  State: RUNNABLE
  Blocked count: 22
  Waited count: 13
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:748)
Thread 77 (Idle-Rpc-Conn-Sweeper-pool-0):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 4543
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 73 (ReadOnlyZKClient-cvp328.sjc.aristanetworks.com:2181,cvp365.sjc.aristanetworks.com:2181,cvp90.sjc.aristanetworks.com:2181@0x7dafa7e3):
  State: TIMED_WAITING
  Blocked count: 2
  Waited count: 4626
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:323)
    org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$63/1842805498.run(Unknown Source)
    java.lang.Thread.run(Thread.java:748)
Thread 17 (regionserver/cvp328:16201):
  State: TIMED_WAITING
  Blocked count: 55070
  Waited count: 110127
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:84)
    org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:67)
    org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1096)
Thread 72 (Session-HouseKeeper-7e1f584d-1):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 421
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 71 (qtp1526296937-71):
  State: TIMED_WAITING
  Blocked count: 62
  Waited count: 4739
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:973)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1023)
    java.lang.Thread.run(Thread.java:748)
Thread 70 (qtp1526296937-70):
  State: TIMED_WAITING
  Blocked count: 6
  Waited count: 4761
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:973)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1023)
    java.lang.Thread.run(Thread.java:748)
Thread 68 (qtp1526296937-68):
  State: RUNNABLE
  Blocked count: 3
  Waited count: 4657
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606)
    org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:360)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:184)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:383)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:882)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1036)
    java.lang.Thread.run(Thread.java:748)
Thread 67 (qtp1526296937-67):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 4636
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:312)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:374)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:882)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1036)
    java.lang.Thread.run(Thread.java:748)
Thread 66 (qtp1526296937-66):
  State: TIMED_WAITING
  Blocked count: 23
  Waited count: 4709
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:973)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1023)
    java.lang.Thread.run(Thread.java:748)
Thread 65 (qtp1526296937-65-acceptor-0@7a3ccc0d-ServerConnector@289778cd{HTTP/1.1, (http/1.1)}{0.0.0.0:16301}):
  State: RUNNABLE
  Blocked count: 1
  Waited count: 2
  Stack:
    sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:419)
    sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:247)
    org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388)
    org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:702)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:882)
    org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1036)
    java.lang.Thread.run(Thread.java:748)
Thread 64 (qtp1526296937-64):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 574
  Stack:
    sun.management.ThreadImpl.getThreadInfo1(Native Method)
    sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:178)
    sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:139)
    org.apache.hadoop.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:169)
    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    java.lang.reflect.Method.invoke(Method.java:498)
    org.apache.hadoop.hbase.util.Threads$PrintThreadInfoLazyHolder$1.printThreadInfo(Threads.java:220)
    org.apache.hadoop.hbase.util.Threads.printThreadInfo(Threads.java:267)
    org.apache.hadoop.hbase.regionserver.RSDumpServlet.doGet(RSDumpServlet.java:82)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
    javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
    org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626)
    org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
    org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
    org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
    org.apache.hadoop.hbase.http.SecurityHeadersFilter.doFilter(SecurityHeadersFilter.java:66)
    org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
Thread 63 (RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=16201):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@10208c81
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:109)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 62 (RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=16201):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@5aac0b45
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:109)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 61 (RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16201):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@a4a4081
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:109)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 60 (RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=16201):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.Semaphore$NonfairSync@5dfa596c
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
    java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
    java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
    org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:109)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 59 (RpcServer.priority.RWQ.Fifo.read.handler=9,queue=1,port=16201):
  State: WAITING
  Blocked count: 85
  Waited count: 26643
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@492a82d6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:325)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 58 (RpcServer.priority.RWQ.Fifo.read.handler=8,queue=1,port=16201):
  State: WAITING
  Blocked count: 109
  Waited count: 26664
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@492a82d6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:325)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 57 (RpcServer.priority.RWQ.Fifo.read.handler=7,queue=1,port=16201):
  State: WAITING
  Blocked count: 120
  Waited count: 26714
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@492a82d6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:325)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 56 (RpcServer.priority.RWQ.Fifo.read.handler=6,queue=1,port=16201):
  State: WAITING
  Blocked count: 84
  Waited count: 26546
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@492a82d6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:325)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 55 (RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16201):
  State: WAITING
  Blocked count: 96
  Waited count: 26646
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@492a82d6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:325)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 54 (RpcServer.priority.RWQ.Fifo.read.handler=4,queue=1,port=16201):
  State: WAITING
  Blocked count: 81
  Waited count: 26760
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@492a82d6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:325)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 53 (RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16201):
  State: WAITING
  Blocked count: 103
  Waited count: 26766
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@492a82d6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:325)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 52 (RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=16201):
  State: WAITING
  Blocked count: 90
  Waited count: 26746
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@492a82d6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:325)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 51 (RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=16201):
  State: WAITING
  Blocked count: 83
  Waited count: 26824
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@492a82d6
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:325)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 50 (RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=16201):
  State: WAITING
  Blocked count: 0
  Waited count: 1
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@73665e95
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:325)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 49 (RpcServer.default.FPBQ.Fifo.handler=15,queue=1,port=16201):
  State: BLOCKED
  Blocked count: 159
  Waited count: 8764
  Blocked on java.util.LinkedList@4d5f750
  Blocked by 36 (RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=16201)
  Stack:
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.complete(MultiVersionConcurrencyControl.java:187)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8706)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:392)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:354)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 48 (RpcServer.default.FPBQ.Fifo.handler=14,queue=0,port=16201):
  State: BLOCKED
  Blocked count: 189
  Waited count: 16257
  Blocked on java.util.LinkedList@73838ccb
  Blocked by 44 (RpcServer.default.FPBQ.Fifo.handler=10,queue=0,port=16201)
  Stack:
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.complete(MultiVersionConcurrencyControl.java:187)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8706)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:392)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:354)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 47 (RpcServer.default.FPBQ.Fifo.handler=13,queue=1,port=16201):
  State: BLOCKED
  Blocked count: 146
  Waited count: 9528
  Blocked on java.util.LinkedList@1fd108ec
  Blocked by 45 (RpcServer.default.FPBQ.Fifo.handler=11,queue=1,port=16201)
  Stack:
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.complete(MultiVersionConcurrencyControl.java:187)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8706)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:392)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:354)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 46 (RpcServer.default.FPBQ.Fifo.handler=12,queue=0,port=16201):
  State: BLOCKED
  Blocked count: 276
  Waited count: 55284
  Blocked on java.util.LinkedList@4d5f750
  Blocked by 36 (RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=16201)
  Stack:
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.complete(MultiVersionConcurrencyControl.java:187)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8706)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:392)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:354)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 45 (RpcServer.default.FPBQ.Fifo.handler=11,queue=1,port=16201):
  State: TIMED_WAITING
  Blocked count: 258
  Waited count: 551374199
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
    com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
    com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
    com.lmax.disruptor.RingBuffer.next(RingBuffer.java:263)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.lambda$stampSequenceIdAndPublishToRingBuffer$2(AbstractFSWAL.java:1086)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$$Lambda$182/206585124.run(Unknown Source)
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.begin(MultiVersionConcurrencyControl.java:145)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1085)
    org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.append(AsyncFSWAL.java:607)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendData(AbstractFSWAL.java:1139)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8698)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
Thread 44 (RpcServer.default.FPBQ.Fifo.handler=10,queue=0,port=16201):
  State: RUNNABLE
  Blocked count: 253
  Waited count: 550912646
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
    com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
    com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
    com.lmax.disruptor.RingBuffer.next(RingBuffer.java:263)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.lambda$stampSequenceIdAndPublishToRingBuffer$2(AbstractFSWAL.java:1086)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$$Lambda$182/206585124.run(Unknown Source)
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.begin(MultiVersionConcurrencyControl.java:145)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1085)
    org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.append(AsyncFSWAL.java:607)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendData(AbstractFSWAL.java:1139)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8698)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
Thread 43 (RpcServer.default.FPBQ.Fifo.handler=9,queue=1,port=16201):
  State: TIMED_WAITING
  Blocked count: 178
  Waited count: 546654326
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
    com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
    com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
    com.lmax.disruptor.RingBuffer.next(RingBuffer.java:263)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.lambda$stampSequenceIdAndPublishToRingBuffer$2(AbstractFSWAL.java:1086)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$$Lambda$182/206585124.run(Unknown Source)
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.begin(MultiVersionConcurrencyControl.java:145)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1085)
    org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.append(AsyncFSWAL.java:607)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendData(AbstractFSWAL.java:1139)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8698)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
Thread 42 (RpcServer.default.FPBQ.Fifo.handler=8,queue=0,port=16201):
  State: BLOCKED
  Blocked count: 185
  Waited count: 17304
  Blocked on java.util.LinkedList@335790bb
  Blocked by 37 (RpcServer.default.FPBQ.Fifo.handler=3,queue=1,port=16201)
  Stack:
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.complete(MultiVersionConcurrencyControl.java:187)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8706)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:392)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:354)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 41 (RpcServer.default.FPBQ.Fifo.handler=7,queue=1,port=16201):
  State: BLOCKED
  Blocked count: 185
  Waited count: 16309
  Blocked on java.util.LinkedList@73838ccb
  Blocked by 44 (RpcServer.default.FPBQ.Fifo.handler=10,queue=0,port=16201)
  Stack:
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.complete(MultiVersionConcurrencyControl.java:187)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8706)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:392)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:354)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 40 (RpcServer.default.FPBQ.Fifo.handler=6,queue=0,port=16201):
  State: BLOCKED
  Blocked count: 205
  Waited count: 16722
  Blocked on java.util.LinkedList@73838ccb
  Blocked by 44 (RpcServer.default.FPBQ.Fifo.handler=10,queue=0,port=16201)
  Stack:
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.complete(MultiVersionConcurrencyControl.java:187)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8706)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:392)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:354)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 39 (RpcServer.default.FPBQ.Fifo.handler=5,queue=1,port=16201):
  State: BLOCKED
  Blocked count: 233
  Waited count: 59279
  Blocked on java.util.LinkedList@73838ccb
  Blocked by 44 (RpcServer.default.FPBQ.Fifo.handler=10,queue=0,port=16201)
  Stack:
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.complete(MultiVersionConcurrencyControl.java:187)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8706)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:392) org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:354) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334) Thread 38 (RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=16201): State: BLOCKED Blocked count: 168 Waited count: 10980 Blocked on java.util.LinkedList@73838ccb Blocked by 44 (RpcServer.default.FPBQ.Fifo.handler=10,queue=0,port=16201) Stack: org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.begin(MultiVersionConcurrencyControl.java:142) org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1085) org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.append(AsyncFSWAL.java:607) org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendData(AbstractFSWAL.java:1139) org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8698) org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687) org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612) org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541) org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020) org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933) org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897) org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899) org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265) org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:392) org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:354) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334) 
Thread 37 (RpcServer.default.FPBQ.Fifo.handler=3,queue=1,port=16201):
  State: TIMED_WAITING
  Blocked count: 162
  Waited count: 551515253
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
    com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
    com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
    com.lmax.disruptor.RingBuffer.next(RingBuffer.java:263)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.lambda$stampSequenceIdAndPublishToRingBuffer$2(AbstractFSWAL.java:1086)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$$Lambda$182/206585124.run(Unknown Source)
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.begin(MultiVersionConcurrencyControl.java:145)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1085)
    org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.append(AsyncFSWAL.java:607)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendData(AbstractFSWAL.java:1139)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8698)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
Thread 36 (RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=16201):
  State: TIMED_WAITING
  Blocked count: 189
  Waited count: 551728673
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:338)
    com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:136)
    com.lmax.disruptor.MultiProducerSequencer.next(MultiProducerSequencer.java:105)
    com.lmax.disruptor.RingBuffer.next(RingBuffer.java:263)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.lambda$stampSequenceIdAndPublishToRingBuffer$2(AbstractFSWAL.java:1086)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL$$Lambda$182/206585124.run(Unknown Source)
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.begin(MultiVersionConcurrencyControl.java:145)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1085)
    org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.append(AsyncFSWAL.java:607)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendData(AbstractFSWAL.java:1139)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8698)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
Thread 35 (RpcServer.default.FPBQ.Fifo.handler=1,queue=1,port=16201):
  State: BLOCKED
  Blocked count: 176
  Waited count: 13642
  Blocked on java.util.LinkedList@1fd108ec
  Blocked by 45 (RpcServer.default.FPBQ.Fifo.handler=11,queue=1,port=16201)
  Stack:
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.complete(MultiVersionConcurrencyControl.java:187)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8706)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:392)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:354)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 34 (RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=16201):
  State: BLOCKED
  Blocked count: 194
  Waited count: 13006
  Blocked on java.util.LinkedList@1fd108ec
  Blocked by 45 (RpcServer.default.FPBQ.Fifo.handler=11,queue=1,port=16201)
  Stack:
    org.apache.hadoop.hbase.regionserver.MultiVersionConcurrencyControl.begin(MultiVersionConcurrencyControl.java:142)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.stampSequenceIdAndPublishToRingBuffer(AbstractFSWAL.java:1085)
    org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.append(AsyncFSWAL.java:607)
    org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.appendData(AbstractFSWAL.java:1139)
    org.apache.hadoop.hbase.regionserver.HRegion.doWALAppend(HRegion.java:8698)
    org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4687)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4612)
    org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4541)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1020)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:933)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:897)
    org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2899)
    org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45265)
    org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:392)
    org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:354)
    org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:334)
Thread 33 (zk-event-processor-pool-0):
  State: WAITING
  Blocked count: 2
  Waited count: 5
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@2cb7c459
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 32 (main-EventThread):
  State: WAITING
  Blocked count: 0
  Waited count: 4
  Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@7b243fcf
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506)
Thread 31 (main-SendThread(cvp365.sjc.aristanetworks.com:2181)):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345)
    org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223)
Thread 28 (client DomainSocketWatcher):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
    org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
    org.apache.hadoop.net.unix.DomainSocketWatcher.access$900(DomainSocketWatcher.java:52)
    org.apache.hadoop.net.unix.DomainSocketWatcher$2.run(DomainSocketWatcher.java:503)
    java.lang.Thread.run(Thread.java:748)
Thread 27 (org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner):
  State: WAITING
  Blocked count: 2
  Waited count: 3
  Waiting on java.lang.ref.ReferenceQueue$Lock@35560464
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165)
    org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3760)
    java.lang.Thread.run(Thread.java:748)
Thread 26 (MobFileCache #0):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 78
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 24 (LruBlockCacheStatsExecutor):
  State: TIMED_WAITING
  Blocked count: 4
  Waited count: 925
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 22 (main.LruBlockCache.EvictionThread):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 27659
  Stack:
    java.lang.Object.wait(Native Method)
    org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:942)
Thread 21 (RS-EventLoopGroup-1-2):
  State: RUNNABLE
  Blocked count: 14
  Waited count: 5
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:748)
Thread 20 (RS-EventLoopGroup-1-1):
  State: RUNNABLE
  Blocked count: 37
  Waited count: 5
  Stack:
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:192)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:185)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290)
    org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.lang.Thread.run(Thread.java:748)
Thread 19 (HBase-Metrics2-1):
  State: TIMED_WAITING
  Blocked count: 0
  Waited count: 137067
  Stack:
    sun.misc.Unsafe.park(Native Method)
    java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    java.lang.Thread.run(Thread.java:748)
Thread 9 (Thread-2):
  State: RUNNABLE
  Blocked count: 3
  Waited count: 0
  Stack:
    sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
    sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
    sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
    sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
    sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
    sun.net.httpserver.ServerImpl$Dispatcher.run(ServerImpl.java:352)
    java.lang.Thread.run(Thread.java:748)
Thread 7 (server-timer):
  State: TIMED_WAITING
  Blocked count: 1
  Waited count: 27654
  Stack:
    java.lang.Object.wait(Native Method)
    java.util.TimerThread.mainLoop(Timer.java:552)
    java.util.TimerThread.run(Timer.java:505)
Thread 5 (Signal Dispatcher):
  State: RUNNABLE
  Blocked count: 0
  Waited count: 0
  Stack:
Thread 3 (Finalizer):
  State: WAITING
  Blocked count: 1087
  Waited count: 774
  Waiting on java.lang.ref.ReferenceQueue$Lock@4c012563
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165)
    java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:216)
Thread 2 (Reference Handler):
  State: WAITING
  Blocked count: 839
  Waited count: 802
  Waiting on java.lang.ref.Reference$Lock@14a50707
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:502)
    java.lang.ref.Reference.tryHandlePending(Reference.java:191)
    java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153)
Thread 1 (main):
  State: WAITING
  Blocked count: 5
  Waited count: 8
  Waiting on org.apache.hadoop.hbase.regionserver.HRegionServer@58fe4ad4
  Stack:
    java.lang.Object.wait(Native Method)
    java.lang.Thread.join(Thread.java:1252)
    java.lang.Thread.join(Thread.java:1326)
    org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:65)
    org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
    org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
    org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:3184)

Stacks:
===========================================================

RS Configuration:
===========================================================
mapreduce.jobhistory.jhist.format = binary [final=false, mapred-default.xml]
fs.s3a.retry.interval = 500ms [final=false, core-default.xml]
dfs.block.access.token.lifetime = 600 [final=false, hdfs-default.xml]
mapreduce.job.heap.memory-mb.ratio = 0.8 [final=false, mapred-default.xml]
mapreduce.map.log.level = INFO [final=false, mapred-default.xml]
dfs.namenode.lazypersist.file.scrub.interval.sec = 300 [final=false, hdfs-default.xml]
file.bytes-per-checksum = 512 [final=false, core-default.xml]
mapreduce.client.completion.pollinterval = 5000 [final=false, mapred-default.xml]
fs.azure.secure.mode = false [final=false, core-default.xml]
yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage = false [final=false, yarn-default.xml]
yarn.log-aggregation-enable = false [final=false, yarn-default.xml]
hbase.client.pause = 100 [final=false, hbase-default.xml]
yarn.nodemanager.aux-services.mapreduce_shuffle.class = org.apache.hadoop.mapred.ShuffleHandler [final=false, yarn-default.xml]
dfs.namenode.edit.log.autoroll.check.interval.ms = 300000 [final=false, hdfs-default.xml]
mapreduce.job.speculative.retry-after-speculate = 15000 [final=false, mapred-default.xml]
ipc.client.fallback-to-simple-auth-allowed = false [final=false, core-default.xml]
dfs.client.failover.connection.retries = 0 [final=false, hdfs-default.xml]
yarn.scheduler.minimum-allocation-mb = 1024 [final=false, yarn-default.xml]
mapreduce.task.profile.map.params = ${mapreduce.task.profile.params} [final=false, mapred-default.xml]
dfs.qjm.operations.timeout = 60s [final=false, hdfs-default.xml]
mapreduce.map.memory.mb = -1 [final=false, mapred-default.xml]
hbase.mob.compaction.chore.period = 604800 [final=false, hbase-default.xml]
hbase.normalizer.merge.min_region_size.mb = 1 [final=false, hbase-default.xml]
dfs.datanode.transfer.socket.recv.buffer.size = 0 [final=false, hdfs-default.xml]
dfs.datanode.failed.volumes.tolerated = 0 [final=false, hdfs-default.xml]
yarn.dispatcher.print-events-info.threshold = 5000 [final=false, yarn-default.xml]
dfs.namenode.metrics.logger.period.seconds = 600 [final=false, hdfs-default.xml]
dfs.client.slow.io.warning.threshold.ms = 30000 [final=false, hdfs-default.xml]
yarn.resourcemanager.reservation-system.enable = false [final=false, yarn-default.xml]
hadoop.security.groups.cache.secs = 300 [final=false, core-default.xml]
yarn.resourcemanager.webapp.xfs-filter.xframe-options = SAMEORIGIN [final=false, yarn-default.xml]
yarn.nodemanager.env-whitelist = JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ [final=false, yarn-default.xml]
dfs.namenode.top.window.num.buckets = 10 [final=false, hdfs-default.xml]
dfs.client.hedged.read.threshold.millis = 500 [final=false, hdfs-default.xml]
map.sort.class = org.apache.hadoop.util.QuickSort [final=false, mapred-default.xml]
dfs.namenode.safemode.threshold-pct = 0.0f [final=false, hdfs-site.xml]
dfs.short.circuit.shared.memory.watcher.interrupt.check.ms = 60000 [final=false, hdfs-default.xml]
fs.AbstractFileSystem.s3a.impl = org.apache.hadoop.fs.s3a.S3A [final=false, core-default.xml]
hadoop.caller.context.enabled = false [final=false, core-default.xml]
hbase.mob.compaction.mergeable.threshold = 1342177280 [final=false, hbase-default.xml]
hbase.hregion.split.overallfiles = false [final=false, hbase-default.xml]
dfs.provided.aliasmap.load.retries = 0 [final=false, hdfs-default.xml]
yarn.resourcemanager.client.thread-count = 50 [final=false, yarn-default.xml]
dfs.balancer.moverThreads = 1000 [final=false, hdfs-default.xml]
dfs.client.read.shortcircuit = true [final=false, hdfs-site.xml]
mapreduce.job.end-notification.max.retry.interval = 5000 [final=true, mapred-default.xml]
hadoop.security.authentication = simple [final=false, core-default.xml]
dfs.client.mmap.retry.timeout.ms = 300000 [final=false, hdfs-default.xml]
dfs.datanode.readahead.bytes = 4194304 [final=false, hdfs-default.xml]
mapreduce.jobhistory.max-age-ms = 604800000 [final=false, mapred-default.xml]
yarn.app.mapreduce.client-am.ipc.max-retries = 3 [final=false, mapred-default.xml]
yarn.nodemanager.sleep-delay-before-sigkill.ms = 250 [final=false, yarn-default.xml]
yarn.system-metrics-publisher.enabled = false [final=false, yarn-default.xml]
hadoop.shell.missing.defaultFs.warning = false [final=false, core-default.xml]
hbase.rootdir.perms = 700 [final=false, hbase-default.xml]
fs.trash.interval = 0 [final=false, core-default.xml]
dfs.datanode.max.locked.memory = 0 [final=false, hdfs-default.xml]
hadoop.http.filter.initializers = org.apache.hadoop.http.lib.StaticUserWebFilter [final=false, core-default.xml]
mapreduce.jobhistory.always-scan-user-dir = false [final=false, mapred-default.xml]
dfs.datanode.ipc.address = cvp328.sjc.aristanetworks.com:15020 [final=false, hdfs-site.xml]
dfs.namenode.delegation.token.renew-interval = 86400000 [final=false, hdfs-default.xml]
dfs.datanode.ec.reconstruction.stripedread.timeout.millis = 5000 [final=false, hdfs-default.xml]
yarn.resourcemanager.webapp.address = ${yarn.resourcemanager.hostname}:8088 [final=false, yarn-default.xml]
dfs.web.authentication.filter = org.apache.hadoop.hdfs.web.AuthFilter [final=false, hdfs-default.xml]
yarn.nodemanager.numa-awareness.numactl.cmd = /usr/bin/numactl [final=false, yarn-default.xml]
mapreduce.shuffle.max.connections = 0 [final=false, mapred-default.xml]
yarn.nodemanager.local-cache.max-files-per-directory = 8192 [final=false, yarn-default.xml]
dfs.ha.fencing.ssh.private-key-files = /home/cvp/.ssh/id_rsa [final=false, hdfs-site.xml]
yarn.nodemanager.log-aggregation.num-log-files-per-app = 30 [final=false, yarn-default.xml]
hbase.cluster.distributed = true [final=false, hbase-site.xml]
dfs.namenode.lease-hard-limit-sec = 1200 [final=false, hdfs-default.xml]
dfs.namenode.replication.work.multiplier.per.iteration = 2 [final=false, hdfs-default.xml]
dfs.client.test.drop.namenode.response.number = 0 [final=false, hdfs-default.xml]
fs.ftp.impl = org.apache.hadoop.fs.ftp.FTPFileSystem [final=false, core-default.xml]
hbase.client.scanner.caching = 2147483647 [final=false, hbase-default.xml]
dfs.namenode.ec.userdefined.policy.allowed = true [final=false, hdfs-default.xml]
dfs.datanode.slow.io.warning.threshold.ms = 300 [final=false, hdfs-default.xml]
yarn.client.max-cached-nodemanagers-proxies = 0 [final=false, yarn-default.xml]
hbase.hregion.percolumnfamilyflush.size.lower.bound.min = 16777216 [final=false, hbase-default.xml]
dfs.client.server-defaults.validity.period.ms = 3600000 [final=false, hdfs-default.xml]
hadoop.registry.zk.quorum = localhost:2181 [final=false, core-default.xml]
yarn.app.mapreduce.am.job.committer.commit-window = 10000 [final=false, mapred-default.xml]
hadoop.registry.zk.session.timeout.ms = 60000 [final=false, core-site.xml]
dfs.namenode.heartbeat.recheck-interval = 30000 [final=false, hdfs-site.xml]
mapreduce.job.encrypted-intermediate-data-key-size-bits = 128 [final=false, mapred-default.xml]
nfs.exports.allowed.hosts = * r [final=false, hdfs-site.xml]
dfs.client.write.byte-array-manager.enabled = false [final=false, hdfs-default.xml]
dfs.client.retry.policy.spec = 10000,6,60000,10 [final=false, hdfs-default.xml]
hbase.http.max.threads = 16 [final=false, hbase-default.xml]
hadoop.util.hash.type = murmur [final=false, core-default.xml]
fs.s3a.committer.name = file [final=false, core-default.xml]
dfs.namenode.replication.min = 2 [final=false, hdfs-site.xml]
hbase.regionserver.checksum.verify = true [final=false, hbase-default.xml]
yarn.timeline-service.app-collector.linger-period.ms = 60000 [final=false, yarn-default.xml]
hadoop.security.key.default.cipher = AES/CTR/NoPadding [final=false, core-default.xml]
dfs.net.topology.impl = org.apache.hadoop.hdfs.net.DFSNetworkTopology [final=false, hdfs-default.xml]
hbase.master.loadbalance.bytable = false [final=false, hbase-default.xml]
hbase.zookeeper.peerport = 2888 [final=false, hbase-default.xml]
yarn.resourcemanager.scheduler.address = ${yarn.resourcemanager.hostname}:8030 [final=false, yarn-default.xml]
hadoop.security.group.mapping.ldap.search.filter.user = (&(objectClass=user)(sAMAccountName={0})) [final=false, core-default.xml]
dfs.namenode.lock.detailed-metrics.enabled = false [final=false, hdfs-default.xml]
dfs.permissions.superusergroup = supergroup [final=false, hdfs-default.xml]
yarn.resourcemanager.resource-tracker.address = ${yarn.resourcemanager.hostname}:8031 [final=false, yarn-default.xml]
mapreduce.fileoutputcommitter.task.cleanup.enabled = false [final=false, mapred-default.xml]
dfs.http.client.retry.max.attempts = 10 [final=false, hdfs-default.xml]
hbase.hregion.memstore.flush.size = 134217728 [final=false, hbase-site.xml]
yarn.resourcemanager.node-ip-cache.expiry-interval-secs = -1 [final=false, yarn-default.xml]
hbase.hstore.time.to.purge.deletes = 0 [final=false, hbase-default.xml]
dfs.domain.socket.disable.interval.seconds = 600 [final=false, hdfs-default.xml]
yarn.resourcemanager.nm-container-queuing.load-comparator = QUEUE_LENGTH [final=false, yarn-default.xml]
hbase.offpeak.end.hour = -1 [final=false, hbase-default.xml]
yarn.sharedcache.root-dir = /sharedcache [final=false, yarn-default.xml]
dfs.xframe.enabled = true [final=false, hdfs-default.xml]
dfs.client.write.byte-array-manager.count-threshold = 128 [final=false, hdfs-default.xml]
hbase.regionserver.logroll.period = 600000 [final=false, hbase-site.xml]
dfs.namenode.http-address = 0.0.0.0:9870 [final=false, hdfs-default.xml]
yarn.nodemanager.resource.pcores-vcores-multiplier = 1.0 [final=false, yarn-default.xml]
hbase.auth.token.max.lifetime = 604800000 [final=false, hbase-default.xml]
yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds = -1 [final=false, yarn-default.xml]
hbase.zookeeper.dns.nameserver = default [final=false, hbase-default.xml]
dfs.namenode.service.handler.count = 10 [final=false, hdfs-default.xml]
hbase.regionserver.slowlog.buffer.enabled = false [final=false, hbase-default.xml]
yarn.timeline-service.webapp.rest-csrf.enabled = false [final=false, yarn-default.xml]
yarn.nodemanager.default-container-executor.log-dirs.permissions = 710 [final=false, yarn-default.xml]
dfs.use.dfs.network.topology = true [final=false, hdfs-default.xml]
mapreduce.client.output.filter = FAILED [final=false, mapred-default.xml]
mapreduce.reduce.shuffle.memory.limit.percent = 0.25 [final=false, mapred-default.xml]
hbase.server.compactchecker.interval.multiplier = 1000 [final=false, hbase-default.xml]
yarn.resourcemanager.nm-container-queuing.min-queue-length = 5 [final=false, yarn-default.xml]
hbase.replication.rpc.codec = org.apache.hadoop.hbase.codec.KeyValueCodecWithTags [final=false, hbase-default.xml]
nfs.rtmax = 1048576 [final=false, hdfs-default.xml]
hbase.data.umask.enable = false [final=false, hbase-default.xml]
dfs.journalnode.edit-cache-size.bytes = 1048576 [final=false, hdfs-default.xml]
hadoop.ssl.client.conf = ssl-client.xml [final=false, core-default.xml]
fs.ftp.host = 0.0.0.0 [final=false, core-default.xml]
hadoop.http.authentication.simple.anonymous.allowed = true [final=false, core-default.xml]
nfs.server.port = 2049 [final=false, hdfs-default.xml]
hbase.master.logcleaner.ttl = 600000 [final=false, hbase-default.xml]
dfs.namenode.write-lock-reporting-threshold-ms = 5000 [final=false, hdfs-default.xml]
dfs.ha.tail-edits.namenode-retries = 3 [final=false, hdfs-default.xml]
dfs.ha.log-roll.period = 120s [final=false, hdfs-default.xml]
hadoop.security.kms.client.authentication.retry-count = 1 [final=false, core-default.xml]
mapreduce.task.io.sort.factor = 10 [final=false, mapred-default.xml]
dfs.datanode.https.address = 0.0.0.0:9865 [final=false, hdfs-default.xml]
hbase.hstore.compaction.min.size = 134217728 [final=false, hbase-default.xml]
yarn.sharedcache.uploader.server.address = 0.0.0.0:8046 [final=false, yarn-default.xml]
dfs.namenode.shared.edits.dir = qjournal://cvp328.sjc.aristanetworks.com:8485;cvp365.sjc.aristanetworks.com:8485;cvp90.sjc.aristanetworks.com:8485/mycluster [final=false, hdfs-site.xml]
hbase.status.multicast.address.ip = 226.1.1.3 [final=false, hbase-default.xml]
mapreduce.job.reduces = 1 [final=false, mapred-default.xml]
yarn.nodemanager.recovery.compaction-interval-secs = 3600 [final=false, yarn-default.xml]
hbase.hregion.compacting.memstore.type = NONE [final=false, hbase-site.xml]
dfs.namenode.checkpoint.edits.dir = ${dfs.namenode.checkpoint.dir} [final=false, hdfs-default.xml]
mapreduce.jobhistory.cleaner.interval-ms = 86400000 [final=false, mapred-default.xml]
hbase.coprocessor.user.region.classes = org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint,org.apache.hadoop.hbase.coprocessor.AggregateImplementation [final=false, hbase-site.xml]
yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs = 3600 [final=false, yarn-default.xml]
hbase.hstore.compaction.max.size = 9223372036854775807 [final=false, hbase-default.xml]
hbase.master.logcleaner.pluginsorg.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleanerfalsehbase-default.xml dfs.content-summary.sleep-microsec500falsehdfs-default.xml yarn.nodemanager.vmem-check-enabledtruefalseyarn-default.xml hadoop.security.group.mapping.ldap.num.attempts3falsecore-default.xml hbase.hstore.blockingWaitTime90000falsehbase-default.xml yarn.nodemanager.linux-container-executor.nonsecure-mode.user-pattern^[_.A-Za-z0-9][-@_.A-Za-z0-9]{0,255}?[$]?$falseyarn-default.xml mapreduce.task.local-fs.write-limit.bytes-1falsemapred-default.xml ha.health-monitor.sleep-after-disconnect.ms1000falsecore-site.xml dfs.namenode.file.close.num-committed-allowed0falsehdfs-default.xml mapreduce.job.counters.max120falsemapred-default.xml mapreduce.job.running.reduce.limit0falsemapred-default.xml yarn.timeline-service.webapp.rest-csrf.methods-to-ignoreGET,OPTIONS,HEADfalseyarn-default.xml yarn.resourcemanager.placement-constraints.algorithm.classorg.apache.hadoop.yarn.server.resourcemanager.scheduler.constraint.algorithm.DefaultPlacementAlgorithmfalseyarn-default.xml dfs.datanode.du.reserved.pct0falsehdfs-default.xml mapreduce.job.classloaderfalsefalsemapred-default.xml yarn.resourcemanager.connect.max-wait.ms900000falseyarn-default.xml yarn.resourcemanager.ha.automatic-failover.embeddedtruefalseyarn-default.xml dfs.client.block.write.locateFollowingBlock.initial.delay.ms400falsehdfs-default.xml mapreduce.jobhistory.joblist.cache.size20000falsemapred-default.xml dfs.qjournal.write-txns.timeout.ms20000falsehdfs-default.xml dfs.client.failover.proxy.provider.myclusterorg.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProviderfalsehdfs-site.xml mapreduce.task.profile.maps0-2falsemapred-default.xml dfs.client.short.circuit.replica.stale.threshold.ms1800000falsehdfs-default.xml 
dfs.namenode.reencrypt.batch.size1000falsehdfs-default.xml hadoop.security.kms.client.encrypted.key.cache.num.refill.threads2falsecore-default.xml dfs.namenode.maintenance.replication.min1falsehdfs-default.xml hadoop.ssl.server.confssl-server.xmlfalsecore-default.xml io.erasurecode.codec.rs-legacy.rawcodersrs-legacy_javafalsecore-default.xml yarn.timeline-service.state-store-classorg.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStorefalseyarn-default.xml fs.s3a.s3guard.ddb.max.retries9falsecore-default.xml yarn.timeline-service.generic-application-history.max-applications10000falseyarn-default.xml dfs.namenode.full.block.report.lease.length.ms300000falsehdfs-default.xml mapreduce.input.fileinputformat.list-status.num-threads1falsemapred-default.xml dfs.namenode.edits.dir${dfs.namenode.name.dir}falsehdfs-default.xml dfs.qjournal.accept-recovery.timeout.ms120000falsehdfs-default.xml yarn.timeline-service.version1.0ffalseyarn-default.xml fs.permissions.umask-mode022falsecore-default.xml yarn.resourcemanager.proxy-user-privileges.enabledfalsefalseyarn-default.xml dfs.client.socketcache.expiryMsec3000falsehdfs-default.xml yarn.nodemanager.runtime.linux.docker.host-pid-namespace.allowedfalsefalseyarn-default.xml yarn.resourcemanager.zk-max-znode-size.bytes1048576falseyarn-default.xml hbase.mob.compactor.classorg.apache.hadoop.hbase.mob.compactions.PartitionedMobCompactorfalsehbase-default.xml yarn.app.mapreduce.am.containerlauncher.threadpool-initial-size10falsemapred-default.xml yarn.resourcemanager.webapp.rest-csrf.custom-headerX-XSRF-Headerfalseyarn-default.xml yarn.http.policyHTTP_ONLYfalseyarn-default.xml dfs.client.write.max-packets-in-flight80falsehdfs-default.xml dfs.namenode.path.based.cache.refresh.interval.ms30000falsehdfs-default.xml fs.s3a.committer.staging.conflict-modefailfalsecore-default.xml yarn.router.webapp.https.address0.0.0.0:8091falseyarn-default.xml 
yarn.nodemanager.linux-container-executor.cgroups.mountfalsefalseyarn-default.xml dfs.qjournal.get-journal-state.timeout.ms120000falsehdfs-default.xml ipc.client.connect.max.retries.on.timeouts30falsecore-site.xml hadoop.fuse.connection.timeout300falsehdfs-default.xml fs.AbstractFileSystem.adl.implorg.apache.hadoop.fs.adl.Adlfalsecore-default.xml fs.s3a.committer.staging.unique-filenamestruefalsecore-default.xml dfs.block.access.key.update.interval600falsehdfs-default.xml mapreduce.job.token.tracking.ids.enabledfalsefalsemapred-default.xml ha.health-monitor.rpc-timeout.ms1500falsecore-site.xml fs.azure.authorization.caching.enabletruefalsecore-default.xml yarn.resourcemanager.fail-fast${yarn.fail-fast}falseyarn-default.xml hadoop.http.cross-origin.allowed-origins*falsecore-default.xml yarn.timeline-service.entity-group-fs-store.summary-storeorg.apache.hadoop.yarn.server.timeline.LeveldbTimelineStorefalseyarn-default.xml yarn.resourcemanager.resource-profiles.source-fileresource-profiles.jsonfalseyarn-default.xml yarn.log-aggregation-status.time-out.ms600000falseyarn-default.xml mapreduce.client.submit.file.replication10falsemapred-default.xml hadoop.security.groups.shell.command.timeout0sfalsecore-default.xml hbase.hregion.majorcompaction.jitter0.50falsehbase-default.xml hadoop.ssl.require.client.certfalsefalsecore-default.xml dfs.datanode.cache.revocation.timeout.ms900000falsehdfs-default.xml dfs.ha.tail-edits.period60sfalsehdfs-default.xml yarn.client.nodemanager-connect.retry-interval-ms10000falseyarn-default.xml dfs.namenode.inotify.max.events.per.rpc1000falsehdfs-default.xml hadoop.rpc.protectionauthenticationfalsecore-default.xml yarn.resourcemanager.fs.state-store.uri${hadoop.tmp.dir}/yarn/system/rmstorefalseyarn-default.xml dfs.datanode.scan.period.hours504falsehdfs-default.xml dfs.datanode.block.id.layout.upgrade.threads12falsehdfs-default.xml hbase.security.exec.permission.checksfalsefalsehbase-default.xml 
dfs.client.read.shortcircuit.skip.checksumtruefalsehbase-site.xml dfs.namenode.rpc-address.mycluster.cvp365.sjc.aristanetworks.comcvp365.sjc.aristanetworks.com:9001falsehdfs-site.xml yarn.sharedcache.store.in-memory.initial-delay-mins10falseyarn-default.xml yarn.client.nodemanager-client-async.thread-pool-max-size500falseyarn-default.xml dfs.domain.socket.path/cvpi/hadoop/hadoop_dn_socketfalsehdfs-site.xml dfs.provided.storage.idDS-PROVIDEDfalsehdfs-default.xml mapreduce.map.skip.proc-count.auto-incrtruefalsemapred-default.xml dfs.namenode.stale.datanode.interval20000falsehdfs-site.xml dfs.block.invalidate.limit1000falsehdfs-default.xml yarn.timeline-service.address${yarn.timeline-service.hostname}:10200falseyarn-default.xml mapreduce.app-submission.cross-platformfalsefalsemapred-default.xml hbase.regionserver.region.split.policyorg.apache.hadoop.hbase.regionserver.SteppingSplitPolicyfalsehbase-default.xml dfs.http.policyHTTP_ONLYfalsehdfs-default.xml hbase.replication.source.maxthreads10falsehbase-default.xml mapreduce.map.output.compressfalsefalsemapred-default.xml mapreduce.shuffle.max.threads0falsemapred-default.xml zookeeper.session.timeout20000falsehbase-site.xml mapreduce.jobhistory.done-dir${yarn.app.mapreduce.am.staging-dir}/history/donefalsemapred-default.xml fs.azure.user.agent.prefixunknownfalsecore-default.xml dfs.qjournal.parallel-read.num-threads5falsehdfs-default.xml dfs.namenode.reject-unresolved-dn-topology-mappingfalsefalsehdfs-default.xml fs.AbstractFileSystem.swebhdfs.implorg.apache.hadoop.fs.SWebHdfsfalsecore-default.xml hadoop.ssl.enabledfalsefalsecore-default.xml fs.s3a.connection.establish.timeout5000falsecore-default.xml dfs.lock.suppress.warning.interval10sfalsehdfs-default.xml yarn.resourcemanager.reservation-system.planfollower.time-step1000falseyarn-default.xml mapreduce.jobhistory.recovery.enablefalsefalsemapred-default.xml hbase.tmp.dir/data/hbasefalsehbase-site.xml yarn.nodemanager.disk-validatorbasicfalseyarn-default.xml 
yarn.node-labels.configuration-typecentralizedfalseyarn-default.xml mapreduce.job.reduce.slowstart.completedmaps0.05falsemapred-default.xml mapreduce.output.fileoutputformat.compress.typeRECORDfalsemapred-default.xml hbase.client.registry.implorg.apache.hadoop.hbase.client.ZKConnectionRegistryfalseprogrammatically mapreduce.reduce.shuffle.parallelcopies5falsemapred-default.xml yarn.nodemanager.delete.thread-count4falseyarn-default.xml dfs.edit.log.transfer.bandwidthPerSec0falsehdfs-default.xml yarn.timeline-service.client.max-retries30falseyarn-default.xml yarn.resourcemanager.opportunistic-container-allocation.enabledfalsefalseyarn-default.xml hadoop.ssl.enabled.protocolsTLSv1,SSLv2Hello,TLSv1.1,TLSv1.2falsecore-default.xml dfs.datanode.pmem.cache.recoverytruefalsehdfs-default.xml yarn.nodemanager.resource.memory-mb-1falseyarn-default.xml hadoop.kerberos.kinit.commandkinitfalsecore-default.xml mapreduce.output.fileoutputformat.compress.codecorg.apache.hadoop.io.compress.DefaultCodecfalsemapred-default.xml dfs.namenode.kerberos.internal.spnego.principal${dfs.web.authentication.kerberos.principal}falsehdfs-default.xml yarn.nodemanager.container-localizer.log.levelINFOfalseyarn-default.xml dfs.block.access.token.protobuf.enablefalsefalsehdfs-default.xml yarn.app.mapreduce.am.container.log.backups0falsemapred-default.xml mapreduce.task.profilefalsefalsemapred-default.xml dfs.disk.balancer.max.disk.errors5falsehdfs-default.xml ipc.client.rpc-timeout.ms0falsecore-default.xml mapreduce.job.running.map.limit0falsemapred-default.xml hadoop.ssl.hostname.verifierDEFAULTfalsecore-default.xml yarn.resourcemanager.auto-update.containersfalsefalseyarn-default.xml dfs.webhdfs.socket.read-timeout60sfalsehdfs-default.xml fs.s3a.connection.ssl.enabledtruefalsecore-default.xml yarn.timeline-service.hbase.coprocessor.app-final-value-retention-milliseconds259200000falseyarn-default.xml yarn.resourcemanager.history-writer.multi-threaded-dispatcher.pool-size10falseyarn-default.xml 
yarn.resourcemanager.scheduler.client.thread-count50falseyarn-default.xml io.seqfile.local.dir${hadoop.tmp.dir}/io/localfalsecore-default.xml dfs.datanode.directoryscan.threads1falsehdfs-default.xml dfs.client.read.shortcircuit.buffer.size131072falseprogrammatically yarn.timeline-service.client.best-effortfalsefalseyarn-default.xml yarn.sharedcache.cleaner.resource-sleep-ms0falseyarn-default.xml yarn.client.failover-retries0falseyarn-default.xml mapreduce.input.lineinputformat.linespermap1falsemapred-default.xml hadoop.security.group.mapping.ldap.posix.attr.uid.nameuidNumberfalsecore-default.xml mapreduce.job.queuenamedefaultfalsemapred-default.xml yarn.nodemanager.container-monitor.enabledtruefalseyarn-default.xml dfs.namenode.replication.max-streams-hard-limit4falsehdfs-default.xml dfs.datanode.dns.nameserverdefaultfalsehdfs-default.xml dfs.balancer.address0.0.0.0:0falsehdfs-default.xml yarn.nodemanager.webapp.rest-csrf.custom-headerX-XSRF-Headerfalseyarn-default.xml yarn.resourcemanager.max-log-aggregation-diagnostics-in-memory10falseyarn-default.xml yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts3falsemapred-default.xml dfs.datanode.directoryscan.throttle.limit.ms.per.sec1000falsehdfs-default.xml yarn.resourcemanager.webapp.rest-csrf.methods-to-ignoreGET,OPTIONS,HEADfalseyarn-default.xml yarn.timeline-service.hbase-schema.prefixprod.falseyarn-default.xml hbase.normalizer.period300000falsehbase-default.xml dfs.namenode.reencrypt.sleep.interval1mfalsehdfs-default.xml yarn.resourcemanager.configuration.file-system-based-store/yarn/conffalseyarn-default.xml fs.automatic.closetruefalsecore-default.xml hbase.wal.async.event-loop.configglobal-event-loopfalseprogrammatically dfs.client.read.striped.threadpool.size18falsehdfs-default.xml mapreduce.framework.namelocalfalsemapred-default.xml hbase.master.info.bindAddress0.0.0.0falsehbase-default.xml dfs.qjournal.select-input-streams.timeout.ms20000falsehdfs-default.xml 
dfs.client.failover.sleep.max.millis5000falsehdfs-site.xml yarn.nodemanager.node-labels.provider.fetch-timeout-ms1200000falseyarn-default.xml yarn.resourcemanager.max-completed-applications1000falseyarn-default.xml fs.s3a.retry.throttle.limit${fs.s3a.attempts.maximum}falsecore-default.xml yarn.app.mapreduce.am.staging-dir/tmp/hadoop-yarn/stagingfalsemapred-default.xml hbase.client.write.buffer2097152falsehbase-default.xml yarn.nm.liveness-monitor.expiry-interval-ms600000falseyarn-default.xml mapreduce.reduce.shuffle.fetch.retry.enabled${yarn.nodemanager.recovery.enabled}falsemapred-default.xml dfs.datanode.disk.check.timeout10mfalsehdfs-default.xml fs.s3a.multiobjectdelete.enabletruefalsecore-default.xml dfs.datanode.peer.metrics.min.outlier.detection.samples1000falsehdfs-default.xml dfs.encrypt.data.overwrite.downstream.derived.qopfalsefalsehdfs-default.xml dfs.balancer.max-no-move-interval60000falsehdfs-default.xml hbase.hstore.compaction.max10falsehbase-default.xml ftp.client-write-packet-size65536falsecore-default.xml yarn.nodemanager.numa-awareness.enabledfalsefalseyarn-default.xml hbase.regionserver.dns.interfacedefaultfalsehbase-default.xml mapreduce.task.io.sort.mb100falsemapred-default.xml hbase.status.publishedfalsefalsehbase-default.xml hadoop.security.kms.client.encrypted.key.cache.expiry43200000falsecore-default.xml hbase.master.balancer.decision.buffer.enabledfalsefalsehbase-default.xml dfs.namenode.snapshot.skip.capture.accesstime-only-changefalsefalsehdfs-default.xml yarn.router.clientrm.interceptor-class.pipelineorg.apache.hadoop.yarn.server.router.clientrm.DefaultClientRequestInterceptorfalseyarn-default.xml yarn.nodemanager.amrmproxy.address0.0.0.0:8049falseyarn-default.xml ftp.blocksize67108864falsecore-default.xml hadoop.registry.jaas.contextClientfalsecore-default.xml yarn.nodemanager.container.stderr.pattern{*stderr*,*STDERR*}falseyarn-default.xml yarn.nodemanager.log-dirs${yarn.log.dir}/userlogsfalseyarn-default.xml 
hbase.defaults.for.version.skipfalsefalsehbase-default.xml dfs.datanode.disk.check.min.gap15mfalsehdfs-default.xml fs.df.interval60000falsecore-default.xml dfs.blockreport.incremental.intervalMsec0falsehdfs-default.xml io.skip.checksum.errorsfalsefalsecore-default.xml hadoop.jetty.logs.serve.aliasestruefalsecore-default.xml yarn.nodemanager.remote-app-log-dir-suffixlogsfalseyarn-default.xml hbase.hlog.split.skip.errorstruefalsehbase-site.xml dfs.namenode.max.op.size52428800falsehdfs-default.xml dfs.client.contextdefaultfalsehdfs-default.xml zookeeper.recovery.retry.maxsleeptime60000falsehbase-default.xml dfs.namenode.edits.noeditlogchannelflushfalsefalsehdfs-default.xml yarn.nodemanager.resource.memory.cgroups.swappiness0falseyarn-default.xml dfs.namenode.reencrypt.throttle.limit.handler.ratio1.0falsehdfs-default.xml hbase.hstore.compaction.kv.max10falsehbase-default.xml io.serializationsorg.apache.hadoop.io.serializer.WritableSerialization, org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization, org.apache.hadoop.io.serializer.avro.AvroReflectSerializationfalsecore-default.xml dfs.webhdfs.rest-csrf.custom-headerX-XSRF-HEADERfalsehdfs-default.xml mapreduce.reduce.skip.maxgroups0falsemapred-default.xml mapreduce.jobhistory.webapp.rest-csrf.custom-headerX-XSRF-Headerfalsemapred-default.xml dfs.ha.fencing.ssh.connect-timeout3000falsehdfs-site.xml mapreduce.jobhistory.client.thread-count10falsemapred-default.xml dfs.journalnode.sync.interval120000falsehdfs-default.xml dfs.namenode.missing.checkpoint.periods.before.shutdown3falsehdfs-default.xml hadoop.security.random.device.file.path/dev/urandomfalsecore-default.xml fs.s3a.max.total.tasks5falsecore-default.xml fs.s3a.fast.upload.bufferdiskfalsecore-default.xml hadoop.security.group.mapping.ldap.posix.attr.gid.namegidNumberfalsecore-default.xml yarn.nodemanager.windows-container.memory-limit.enabledfalsefalseyarn-default.xml yarn.nodemanager.node-labels.resync-interval-ms120000falseyarn-default.xml 
mapreduce.job.speculative.speculative-cap-total-tasks0.01falsemapred-default.xml dfs.client.retry.times.get-last-block-length3falsehdfs-default.xml yarn.federation.cache-ttl.secs300falseyarn-default.xml dfs.namenode.audit.log.token.tracking.idfalsefalsehdfs-default.xml dfs.datanode.bp-ready.timeout20sfalsehdfs-default.xml dfs.client-write-packet-size65536falsehdfs-default.xml dfs.journalnode.https-addresscvp328.sjc.aristanetworks.com:8481falsehdfs-site.xml dfs.namenode.enable.retrycachetruefalsehdfs-default.xml dfs.namenode.snapshot.max.limit65536falsehdfs-default.xml dfs.namenode.audit.log.asyncfalsefalsehdfs-default.xml hbase.regionserver.optionalcacheflushinterval3600000falsehbase-default.xml yarn.timeline-service.leveldb-state-store.path${hadoop.tmp.dir}/yarn/timelinefalseyarn-default.xml dfs.journalnode.rpc-addresscvp328.sjc.aristanetworks.com:8485falsehdfs-site.xml dfs.datanode.fileio.profiling.sampling.percentage0falsehdfs-default.xml dfs.http.client.failover.sleep.base.millis500falsehdfs-default.xml yarn.sharedcache.app-checker.classorg.apache.hadoop.yarn.server.sharedcachemanager.RemoteAppCheckerfalseyarn-default.xml dfs.datanode.socket.write.timeout10000falsehdfs-site.xml yarn.app.mapreduce.am.container.log.limit.kb0falsemapred-default.xml yarn.resourcemanager.placement-constraints.retry-attempts3falseyarn-default.xml hbase.ipc.server.callqueue.read.ratio0falsehbase-default.xml hbase.master.fileSplitTimeout600000falsehbase-default.xml hadoop.security.crypto.cipher.suiteAES/CTR/NoPaddingfalsecore-default.xml hadoop.security.kms.client.failover.sleep.base.millis100falsecore-default.xml yarn.resourcemanager.placement-constraints.algorithm.pool-size1falseyarn-default.xml dfs.disk.balancer.plan.valid.interval1dfalsehdfs-default.xml hbase.regions.recovery.store.file.ref.count-1falsehbase-default.xml hbase.systemtables.compacting.memstore.typeNONEfalsehbase-default.xml yarn.app.mapreduce.am.hard-kill-timeout-ms10000falsemapred-default.xml 
dfs.encrypt.data.transfer.cipher.key.bitlength128falsehdfs-default.xml yarn.ipc.rpc.classorg.apache.hadoop.yarn.ipc.HadoopYarnProtoRPCfalseyarn-default.xml hbase.dynamic.jars.dir${hbase.rootdir}/libfalsehbase-default.xml file.replication1falsecore-default.xml dfs.datanode.drop.cache.behind.writesfalsefalsehdfs-default.xml dfs.data.transfer.server.tcpnodelaytruefalsehdfs-default.xml hadoop.zk.timeout-ms10000falsecore-default.xml yarn.resourcemanager.decommissioning-nodes-watcher.poll-interval-secs20falseyarn-default.xml dfs.balancer.max-size-to-move10737418240falsehdfs-default.xml mapreduce.job.sharedcache.modedisabledfalsemapred-default.xml dfs.client.failover.max.attempts20falsehdfs-site.xml hbase.thrift.minWorkerThreads16falsehbase-default.xml rpc.metrics.quantile.enablefalsefalsecore-default.xml hbase.master.port16000falsehbase-site.xml hbase.regionserver.slowlog.systable.enabledfalsefalsehbase-default.xml yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb0falseyarn-default.xml dfs.namenode.snapshot.skiplist.interval10falsehdfs-default.xml dfs.namenode.support.allow.formattruefalsehdfs-default.xml fs.AbstractFileSystem.file.implorg.apache.hadoop.fs.local.LocalFsfalsecore-default.xml io.file.buffer.size131072falsecore-site.xml hbase.rest.filter.classesorg.apache.hadoop.hbase.rest.filter.GzipFilterfalsehbase-default.xml yarn.nodemanager.collector-service.address${yarn.nodemanager.hostname}:8048falseyarn-default.xml dfs.balancer.block-move.timeout0falsehdfs-default.xml dfs.namenode.lease-recheck-interval-ms2000falsehdfs-default.xml yarn.router.webapp.interceptor-class.pipelineorg.apache.hadoop.yarn.server.router.webapp.DefaultRequestInterceptorRESTfalseyarn-default.xml dfs.namenode.edits.asyncloggingtruefalsehdfs-default.xml dfs.client.mmap.cache.size256falsehdfs-default.xml dfs.namenode.snapshot.capture.openfilesfalsefalsehdfs-default.xml dfs.xframe.valueSAMEORIGINfalsehdfs-default.xml 
dfs.namenode.delegation.key.update-interval86400000falsehdfs-default.xml dfs.datanode.du.reserved.calculatororg.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReservedSpaceCalculator$ReservedSpaceCalculatorAbsolutefalsehdfs-default.xml hadoop.user.group.static.mapping.overridesdr.who=;falsecore-default.xml hbase.region.replica.replication.enabledfalsefalsehbase-default.xml yarn.resourcemanager.ha.automatic-failover.enabledtruefalseyarn-default.xml hbase.snapshot.master.timeout.millis300000falsehbase-default.xml yarn.sharedcache.client-server.thread-count50falseyarn-default.xml yarn.fail-fastfalsefalseyarn-default.xml dfs.namenode.read-lock-reporting-threshold-ms5000falsehdfs-default.xml mapreduce.job.end-notification.retry.attempts0falsemapred-default.xml hbase.ipc.server.callqueue.handler.factor0.1falsehbase-default.xml dfs.ha.zkfc.port8019falsehdfs-default.xml yarn.resourcemanager.application-timeouts.monitor.interval-ms3000falseyarn-default.xml dfs.namenode.redundancy.considerLoadtruefalsehdfs-default.xml yarn.client.load.resource-types.from-serverfalsefalseyarn-default.xml yarn.resourcemanager.webapp.rest-csrf.enabledfalsefalseyarn-default.xml yarn.nodemanager.distributed-scheduling.enabledfalsefalseyarn-default.xml yarn.resourcemanager.resource-profiles.enabledfalsefalseyarn-default.xml dfs.client.https.need-authfalsefalsehdfs-default.xml yarn.resourcemanager.system-metrics-publisher.enabledfalsefalseyarn-default.xml datanode.https.port50475falsehdfs-default.xml hbase.regionserver.slowlog.ringbuffer.size256falsehbase-default.xml yarn.timeline-service.entity-group-fs-store.leveldb-cache-read-cache-size10485760falseyarn-default.xml dfs.balancer.getBlocks.min-block-size10485760falsehdfs-default.xml hbase.client.localityCheck.threadPoolSize2falsehbase-default.xml dfs.client.https.keystore.resourcessl-client.xmlfalsehdfs-default.xml dfs.namenode.checkpoint.txns1000000falsehdfs-default.xml 
yarn.timeline-service.timeline-client.number-of-async-entities-to-merge10falseyarn-default.xml fs.AbstractFileSystem.webhdfs.implorg.apache.hadoop.fs.WebHdfsfalsecore-default.xml yarn.timeline-service.http-authentication.typesimplefalseyarn-default.xml mapreduce.jobhistory.loadedjobs.cache.size5falsemapred-default.xml yarn.nodemanager.resource.percentage-physical-cpu-limit100falseyarn-default.xml yarn.nodemanager.recovery.dir${hadoop.tmp.dir}/yarn-nm-recoveryfalseyarn-default.xml dfs.namenode.name.dirfile://${hadoop.tmp.dir}/dfs/namefalsehdfs-default.xml mapreduce.cluster.acls.enabledfalsefalsemapred-default.xml mapreduce.client.progressmonitor.pollinterval1000falsemapred-default.xml file.client-write-packet-size65536falsecore-default.xml hadoop.security.group.mapping.ldap.search.attr.group.namecnfalsecore-default.xml dfs.namenode.invalidate.work.pct.per.iteration0.32ffalsehdfs-default.xml dfs.namenode.name.cache.threshold10falsehdfs-default.xml dfs.namenode.redundancy.considerLoad.factor2.0falsehdfs-default.xml dfs.datanode.dns.interfacedefaultfalsehdfs-default.xml hbase.server.thread.wakefrequency10000falsehbase-default.xml yarn.nodemanager.resource.cpu-vcores-1falseyarn-default.xml dfs.http.client.failover.sleep.max.millis15000falsehdfs-default.xml yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size10falseyarn-default.xml mapreduce.outputcommitter.factory.scheme.s3aorg.apache.hadoop.fs.s3a.commit.S3ACommitterFactoryfalsemapred-default.xml dfs.namenode.secondary.https-address0.0.0.0:9869falsehdfs-default.xml yarn.timeline-service.client.retry-interval-ms1000falseyarn-default.xml dfs.namenode.blockreport.queue.size1024falsehdfs-default.xml ipc.client.bind.wildcard.addrfalsefalsecore-default.xml mapreduce.shuffle.port13562falsemapred-default.xml fs.s3a.path.style.accessfalsefalsecore-default.xml yarn.resourcemanager.container.liveness-monitor.interval-ms600000falseyarn-default.xml hbase.zookeeper.property.clientPort2181falsehbase-site.xml 
dfs.namenode.fslock.fairtruefalsehdfs-default.xml dfs.qjournal.start-segment.timeout.ms20000falsehdfs-default.xml hbase.hregion.memstore.mslab.enabledtruefalsehbase-site.xml hbase.offheapcache.percentage0.7falsehbase-site.xml yarn.scheduler.maximum-allocation-mb8192falseyarn-default.xml dfs.namenode.storageinfo.defragment.timeout.ms4falsehdfs-default.xml fs.s3a.s3guard.ddb.table.capacity.read500falsecore-default.xml mapreduce.job.speculative.minimum-allowed-tasks10falsemapred-default.xml hbase.regionserver.storefile.refresh.period0falsehbase-default.xml yarn.federation.enabledfalsefalseyarn-default.xml mapreduce.jobhistory.datestring.cache.size200000falsemapred-default.xml dfs.disk.balancer.enabledtruefalsehdfs-default.xml yarn.resourcemanager.admin.client.thread-count1falseyarn-default.xml dfs.datanode.network.counts.cache.max.size2147483647falsehdfs-default.xml mapreduce.shuffle.listen.queue.size128falsemapred-default.xml yarn.resourcemanager.nm-tokens.master-key-rolling-interval-secs86400falseyarn-default.xml yarn.timeline-service.leveldb-timeline-store.path${hadoop.tmp.dir}/yarn/timelinefalseyarn-default.xml hbase.rpc.client.event-loop.configglobal-event-loopfalseprogrammatically mapreduce.job.reducer.preempt.delay.sec0falsemapred-default.xml dfs.namenode.http-address.mycluster.cvp365.sjc.aristanetworks.comcvp365.sjc.aristanetworks.com:15070falsehdfs-site.xml hbase.http.staticuser.userdr.stackfalsehbase-default.xml hbase.regionserver.thrift.compactfalsefalsehbase-default.xml fs.wasb.implorg.apache.hadoop.fs.azure.NativeAzureFileSystemfalsecore-default.xml dfs.ha.standby.checkpointstruefalsehdfs-default.xml dfs.balancer.movedWinWidth5400000falsehdfs-default.xml yarn.nodemanager.localizer.client.thread-count5falseyarn-default.xml mapreduce.task.userlog.limit.kb0falsemapred-default.xml hbase.regionserver.hlog.writer.implorg.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriterfalsehbase-default.xml 
mapreduce.jobhistory.minicluster.fixed.portsfalsefalsemapred-default.xml yarn.resourcemanager.webapp.ui-actions.enabledtruefalseyarn-default.xml hadoop.security.group.mapping.providers.combinedtruefalsecore-default.xml hbase.client.keyvalue.maxsize104857600falsehbase-site.xml yarn.timeline-service.writer.flush-interval-seconds60falseyarn-default.xml dfs.namenode.block-placement-policy.default.prefer-local-nodetruefalsehdfs-default.xml fs.s3a.committer.staging.tmp.pathtmp/stagingfalsecore-default.xml fs.ftp.data.connection.modeACTIVE_LOCAL_DATA_CONNECTION_MODEfalsecore-default.xml dfs.namenode.edits.dir.minimum1falsehdfs-default.xml dfs.namenode.fs-limits.max-blocks-per-file10000falsehdfs-default.xml yarn.client.application-client-protocol.poll-interval-ms200falseyarn-default.xml yarn.nodemanager.runtime.linux.sandbox-mode.local-dirs.permissionsreadfalseyarn-default.xml hadoop.registry.securefalsefalsecore-default.xml dfs.provided.aliasmap.inmemory.batch-size500falsehdfs-default.xml hadoop.security.sensitive-config-keys secret$ password$ ssl.keystore.pass$ fs.s3a.server-side-encryption.key fs.s3a.*.server-side-encryption.key fs.s3a.secret.key fs.s3a.*.secret.key fs.s3a.session.key fs.s3a.*.session.key fs.s3a.session.token fs.s3a.*.session.token fs.azure.account.key.* fs.azure.oauth2.* fs.adl.oauth2.* credential$ oauth.*token$ hadoop.security.sensitive-config-keys falsecore-default.xml yarn.timeline-service.client.drain-entities.timeout.ms2000falseyarn-default.xml yarn.sharedcache.store.in-memory.staleness-period-mins10080falseyarn-default.xml fs.s3a.s3guard.ddb.background.sleep25falsecore-default.xml hbase.server.versionfile.writeattempts3falsehbase-default.xml yarn.resourcemanager.scheduler.classorg.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerfalseyarn-default.xml dfs.heartbeat.interval3sfalsehdfs-default.xml dfs.http.client.failover.max.attempts15falsehdfs-default.xml 
mapreduce.task.profile.params-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%sfalsemapred-default.xml yarn.nodemanager.address${yarn.nodemanager.hostname}:0falseyarn-default.xml dfs.datanode.synconclosetruefalsehdfs-site.xml dfs.namenode.fs-limits.max-xattr-size16384falsehdfs-default.xml yarn.nodemanager.resource-plugins.gpu.docker-pluginnvidia-docker-v1falseyarn-default.xml mapreduce.jobhistory.recovery.store.leveldb.path${hadoop.tmp.dir}/mapred/history/recoverystorefalsemapred-default.xml mapreduce.shuffle.connection-keep-alive.enablefalsefalsemapred-default.xml hbase.master.hfilecleaner.pluginsorg.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner,org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleanerfalsehbase-default.xml hbase.regionserver.info.port16301falseprogrammatically yarn.nodemanager.localizer.cache.cleanup.interval-ms600000falseyarn-default.xml hbase.offpeak.start.hour-1falsehbase-default.xml yarn.webapp.api-service.enablefalsefalseyarn-default.xml yarn.intermediate-data-encryption.enablefalsefalseyarn-default.xml yarn.resourcemanager.node-removal-untracked.timeout-ms60000falseyarn-default.xml dfs.namenode.write.stale.datanode.ratio0.5ffalsehdfs-default.xml yarn.resourcemanager.rm.container-allocation.expiry-interval-ms600000falseyarn-default.xml ha.failover-controller.cli-check.rpc-timeout.ms20000falsecore-site.xml yarn.sharedcache.cleaner.period-mins1440falseyarn-default.xml dfs.namenode.storage.dir.perm700falsehdfs-default.xml io.erasurecode.codec.rs.rawcodersrs_native,rs_javafalsecore-default.xml dfs.namenode.snapshot.skiplist.max.levels0falsehdfs-default.xml yarn.resourcemanager.placement-constraints.scheduler.pool-size1falseyarn-default.xml fs.s3a.threads.keepalivetime60falsecore-default.xml yarn.minicluster.use-rpcfalsefalseyarn-default.xml hbase.regionserver.handler.abort.on.error.percent0.5falsehbase-default.xml dfs.ha.tail-edits.in-progressfalsefalsehdfs-default.xml 
hbase.mob.compaction.threads.max1falsehbase-default.xml hfile.block.cache.policyLRUfalsehbase-default.xml yarn.federation.state-store.classorg.apache.hadoop.yarn.server.federation.store.impl.MemoryFederationStateStorefalseyarn-default.xml dfs.datanode.sync.behind.writes.in.backgroundfalsefalsehdfs-default.xml yarn.nodemanager.collector-service.thread-count5falseyarn-default.xml fs.s3a.committer.staging.abort.pending.uploadstruefalsecore-default.xml fs.s3a.committer.magic.enabledfalsefalsecore-default.xml yarn.nodemanager.node-labels.provider.fetch-interval-ms600000falseyarn-default.xml hbase.hstore.compaction.throughput.lower.bound52428800falsehbase-default.xml hbase.hstore.blockingStoreFiles16falsehbase-default.xml hbase.regionserver.metahandler.count10falsehbase-site.xml dfs.client.retry.policy.enabledfalsefalsehdfs-default.xml dfs.journalnode.edits.dir.perm700falsehdfs-default.xml yarn.resourcemanager.nodemanager-connect-retries10falseyarn-default.xml hbase.ipc.client.fallback-to-simple-auth-allowedfalsefalsehbase-default.xml hbase.regionserver.thread.compaction.throttle2684354560falsehbase-default.xml hbase.rpc.shortoperation.timeout10000falsehbase-default.xml mapreduce.jobhistory.jobname.limit50falsemapred-default.xml yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size10000falseyarn-default.xml dfs.webhdfs.ugi.expire.after.access600000falsehdfs-default.xml dfs.namenode.list.cache.directives.num.responses100falsehdfs-default.xml dfs.blockreport.initialDelay0sfalsehdfs-default.xml mapreduce.job.speculative.retry-after-no-speculate1000falsemapred-default.xml yarn.nodemanager.resource.memory.enabledfalsefalseyarn-default.xml hbase.rpc.rows.size.threshold.rejectfalsefalsehbase-default.xml mapreduce.jobhistory.intermediate-done-dir${yarn.app.mapreduce.am.staging-dir}/history/done_intermediatefalsemapred-default.xml mapreduce.map.cpu.vcores1falsemapred-default.xml fs.azure.sas.expiry.period90dfalsecore-default.xml 
yarn.timeline-service.leveldb-timeline-store.read-cache-size104857600falseyarn-default.xml dfs.blockreport.split.threshold1000000falsehdfs-default.xml fs.s3a.block.size32Mfalsecore-default.xml dfs.journalnode.edits.dir/data/journalnodefalsehdfs-site.xml hadoop.registry.zk.connection.timeout.ms15000falsecore-site.xml yarn.sharedcache.webapp.address0.0.0.0:8788falseyarn-default.xml dfs.namenode.edekcacheloader.interval.ms1000falsehdfs-default.xml fs.client.resolve.topology.enabledfalsefalsecore-default.xml dfs.mover.keytab.enabledfalsefalsehdfs-default.xml dfs.namenode.resource.checked.volumes.minimum1falsehdfs-default.xml hbase.wal.dir.perms700falsehbase-default.xml ftp.replication3falsecore-default.xml yarn.resourcemanager.delegation-token.max-conf-size-bytes12800falseyarn-default.xml io.compression.codec.bzip2.librarysystem-nativefalsecore-default.xml dfs.encrypt.data.transferfalsefalsehdfs-default.xml mapreduce.reduce.shuffle.retry-delay.max.ms60000falsemapred-default.xml yarn.nodemanager.disk-health-checker.min-free-space-per-disk-watermark-high-mb0falseyarn-default.xml yarn.app.mapreduce.shuffle.log.limit.kb0falsemapred-default.xml yarn.resourcemanager.metrics.runtime.buckets60,300,1440falseyarn-default.xml hbase.client.max.total.tasks100falsehbase-default.xml yarn.nodemanager.resource-plugins.gpu.docker-plugin.nvidia-docker-v1.endpointhttp://localhost:3476/v1.0/docker/clifalseyarn-default.xml yarn.timeline-service.entity-group-fs-store.cache-store-classorg.apache.hadoop.yarn.server.timeline.MemoryTimelineStorefalseyarn-default.xml hbase.http.filter.initializersorg.apache.hadoop.hbase.http.lib.StaticUserWebFilterfalsehbase-default.xml hadoop.security.key.default.bitlength128falsecore-default.xml mapreduce.task.timeout600000falsemapred-default.xml mapreduce.jobhistory.recovery.store.classorg.apache.hadoop.mapreduce.v2.hs.HistoryServerFileSystemStateStoreServicefalsemapred-default.xml hadoop.security.groups.cache.warn.after.ms5000falsecore-default.xml 
yarn.nodemanager.localizer.fetch.thread-count = 4  [yarn-default.xml]
mapreduce.reduce.shuffle.input.buffer.percent = 0.70  [mapred-default.xml]
dfs.datanode.data.dir = file://${hadoop.tmp.dir}/dfs/data  [hdfs-default.xml]
hfile.block.bloom.cacheonwrite = false  [hbase-default.xml]
dfs.namenode.accesstime.precision = 3600000  [hdfs-default.xml]
dfs.namenode.decommission.max.concurrent.tracked.nodes = 100  [hdfs-default.xml]
dfs.namenode.avoid.write.stale.datanode = true  [hdfs-site.xml]
yarn.minicluster.yarn.nodemanager.resource.memory-mb = 4096  [yarn-default.xml]
yarn.nodemanager.container-executor.class = org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor  [yarn-default.xml]
yarn.nodemanager.windows-container.cpu-limit.enabled = false  [yarn-default.xml]
dfs.client.write.exclude.nodes.cache.expiry.interval.millis = 600000  [hdfs-default.xml]
hbase.hstore.compaction.ratio.offpeak = 5.0F  [hbase-default.xml]
yarn.resourcemanager.fs.state-store.num-retries = 0  [yarn-default.xml]
hbase.security.visibility.mutations.checkauths = false  [hbase-default.xml]
fs.s3a.assumed.role.credentials.provider = org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider  [core-default.xml]
hbase.server.scanner.max.result.size = 104857600  [hbase-default.xml]
hbase.master.loadbalancer.class = org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer  [hbase-default.xml]
mapreduce.job.local-fs.single-disk-limit.bytes = -1  [mapred-default.xml]
dfs.datanode.use.datanode.hostname = false  [hdfs-default.xml]
file.blocksize = 67108864  [core-default.xml]
ipc.client.kill.max = 10  [core-default.xml]
yarn.resourcemanager.nodemanager.minimum.version = NONE  [yarn-default.xml]
dfs.namenode.list.cache.pools.num.responses = 100  [hdfs-default.xml]
dfs.datanode.cache.revocation.polling.ms = 500  [hdfs-default.xml]
hadoop.http.logs.enabled = true  [core-default.xml]
dfs.client.read.shortcircuit.streams.cache.size = 256  [hdfs-default.xml]
yarn.timeline-service.reader.webapp.address = ${yarn.timeline-service.webapp.address}  [yarn-default.xml]
yarn.router.pipeline.cache-max-size = 25  [yarn-default.xml]
dfs.ls.limit = 1000  [hdfs-default.xml]
io.mapfile.bloom.size = 1048576  [core-default.xml]
yarn.scheduler.queue-placement-rules = user-group  [yarn-default.xml]
seq.io.sort.mb = 100  [core-default.xml]
mapreduce.task.exit.timeout = 60000  [mapred-default.xml]
net.topology.impl = org.apache.hadoop.net.NetworkTopology  [core-default.xml]
dfs.namenode.kerberos.principal.pattern = *  [hdfs-default.xml]
yarn.node-labels.enabled = false  [yarn-default.xml]
fs.trash.checkpoint.interval = 0  [core-default.xml]
yarn.app.mapreduce.client.max-retries = 3  [mapred-default.xml]
mapreduce.job.maps = 2  [mapred-default.xml]
hbase.coordinated.state.manager.class = org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager  [hbase-default.xml]
yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size = 10000  [yarn-default.xml]
yarn.nodemanager.linux-container-executor.resources-handler.class = org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler  [yarn-default.xml]
dfs.client.failover.random.order = false  [hdfs-default.xml]
mapreduce.reduce.maxattempts = 4  [mapred-default.xml]
mapreduce.job.acl-view-job = " "  [mapred-default.xml]
dfs.namenode.checkpoint.dir = file://${hadoop.tmp.dir}/dfs/namesecondary  [hdfs-default.xml]
dfs.webhdfs.use.ipc.callq = true  [hdfs-default.xml]
hadoop.http.authentication.type = simple  [core-default.xml]
hadoop.security.java.secure.random.algorithm = SHA1PRNG  [core-default.xml]
dfs.namenode.resource.du.reserved = 104857600  [hdfs-default.xml]
hadoop.security.auth_to_local.mechanism = hadoop  [core-default.xml]
ipc.server.log.slow.rpc = false  [core-default.xml]
yarn.nodemanager.disk-health-checker.min-healthy-disks = 0.25  [yarn-default.xml]
mapreduce.job.max.map = -1  [mapred-default.xml]
dfs.client.retry.window.base = 3000  [hdfs-default.xml]
yarn.nodemanager.localizer.address = ${yarn.nodemanager.hostname}:8040  [yarn-default.xml]
fs.AbstractFileSystem.viewfs.impl = org.apache.hadoop.fs.viewfs.ViewFs  [core-default.xml]
mapreduce.job.map.output.collector.class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer  [mapred-default.xml]
dfs.namenode.state.context.enabled = false  [hdfs-default.xml]
yarn.timeline-service.ttl-ms = 604800000  [yarn-default.xml]
dfs.blocksize = 134217728  [hdfs-default.xml]
dfs.nameservices = mycluster  (final)  [hdfs-site.xml]
dfs.webhdfs.acl.provider.permission.pattern = ^(default:)?(user|group|mask|other):[[A-Za-z_][A-Za-z0-9._-]]*:([rwx-]{3})?(,(default:)?(user|group|mask|other):[[A-Za-z_][A-Za-z0-9._-]]*:([rwx-]{3})?)*$  [hdfs-default.xml]
yarn.resourcemanager.opportunistic-container-allocation.nodes-used = 10  [yarn-default.xml]
yarn.nodemanager.runtime.linux.docker.userremapping-uid-threshold = 1  [yarn-default.xml]
yarn.sharedcache.admin.thread-count = 1  [yarn-default.xml]
tfile.io.chunk.size = 1048576  [core-default.xml]
hfile.index.block.max.size = 131072  [hbase-default.xml]
mapreduce.task.combine.progress.records = 10000  [mapred-default.xml]
yarn.sharedcache.cleaner.initial-delay-mins = 10  [yarn-default.xml]
dfs.provided.aliasmap.text.delimiter = ,  [hdfs-default.xml]
fs.ftp.host.port = 21  [core-default.xml]
hbase.hstore.compaction.throughput.higher.bound = 104857600  [hbase-default.xml]
mapreduce.job.committer.setup.cleanup.needed = true  [mapred-default.xml]
yarn.timeline-service.hostname = 0.0.0.0  [yarn-default.xml]
hfile.format.version = 3  [hbase-default.xml]
dfs.client.write.byte-array-manager.count-limit = 2048  [hdfs-default.xml]
hbase.master.wait.on.service.seconds = 30  [hbase-default.xml]
hadoop.security.kms.client.timeout = 60  [core-default.xml]
yarn.resourcemanager.leveldb-state-store.path = ${hadoop.tmp.dir}/yarn/system/rmstore  [yarn-default.xml]
mapreduce.jobhistory.keytab = /etc/security/keytab/jhs.service.keytab  [mapred-default.xml]
yarn.resourcemanager.nm-container-queuing.sorting-nodes-interval-ms = 1000  [yarn-default.xml]
ha.health-monitor.connect-retry-interval.ms = 1000  [core-site.xml]
yarn.nodemanager.opportunistic-containers-use-pause-for-preemption = false  [yarn-default.xml]
dfs.client.key.provider.cache.expiry = 864000000  [hdfs-default.xml]
mapreduce.jobhistory.webapp.rest-csrf.enabled = false  [mapred-default.xml]
dfs.namenode.max.objects = 0  [hdfs-default.xml]
dfs.namenode.max.full.block.report.leases = 6  [hdfs-default.xml]
dfs.datanode.ec.reconstruction.threads = 8  [hdfs-default.xml]
yarn.resourcemanager.amlauncher.thread-count = 50  [yarn-default.xml]
hbase.client.scanner.timeout.period = 60000  [hbase-default.xml]
nfs.wtmax = 1048576  [hdfs-default.xml]
dfs.client.socketcache.capacity = 16  [hdfs-default.xml]
yarn.scheduler.configuration.store.class = file  [yarn-default.xml]
yarn.nodemanager.webapp.xfs-filter.xframe-options = SAMEORIGIN  [yarn-default.xml]
dfs.client.block.write.retries = 3  [hdfs-default.xml]
yarn.acl.enable = false  [yarn-default.xml]
yarn.nodemanager.resource-monitor.interval-ms = 3000  [yarn-default.xml]
fs.s3a.assumed.role.session.duration = 30m  [core-default.xml]
yarn.nodemanager.recovery.supervised = false  [yarn-default.xml]
hbase.namedqueue.provider.classes = org.apache.hadoop.hbase.namequeues.impl.SlowLogQueueService,org.apache.hadoop.hbase.namequeues.impl.BalancerDecisionQueueService,org.apache.hadoop.hbase.namequeues.impl.BalancerRejectionQueueService  [hbase-default.xml]
fs.adl.oauth2.access.token.provider.type = ClientCredential  [core-default.xml]
fs.AbstractFileSystem.hdfs.impl = org.apache.hadoop.fs.Hdfs  [core-default.xml]
dfs.quota.by.storage.type.enabled = true  [hdfs-default.xml]
dfs.block.scanner.volume.bytes.per.second = 1048576  [hdfs-default.xml]
yarn.resourcemanager.application.max-tag.length = 100  [yarn-default.xml]
dfs.namenode.fs-limits.max-component-length = 255  [hdfs-default.xml]
dfs.http.client.retry.policy.enabled = false  [hdfs-default.xml]
mapreduce.job.complete.cancel.delegation.tokens = true  [mapred-default.xml]
mapreduce.job.cache.limit.max-resources = 0  [mapred-default.xml]
hbase.regionserver.global.memstore.size = 0.4  [hbase-site.xml]
yarn.resourcemanager.recovery.enabled = false  [yarn-default.xml]
yarn.resourcemanager.nodemanagers.heartbeat-interval-ms = 1000  [yarn-default.xml]
mapreduce.map.output.compress.codec = org.apache.hadoop.io.compress.DefaultCodec  [mapred-default.xml]
dfs.ha.tail-edits.period.backoff-max = 0  [hdfs-default.xml]
fs.AbstractFileSystem.har.impl = org.apache.hadoop.fs.HarFs  [core-default.xml]
hbase.defaults.for.version = 2.4.8  [hbase-default.xml]
yarn.resourcemanager.placement-constraints.algorithm.iterator = SERIAL  [yarn-default.xml]
yarn.nodemanager.process-kill-wait.ms = 5000  [yarn-default.xml]
ha.zookeeper.parent-znode = /hadoop-ha  [core-default.xml]
hbase.rest.support.proxyuser = false  [hbase-default.xml]
yarn.admin.acl = *  [yarn-default.xml]
hbase.zookeeper.property.initLimit = 10  [hbase-default.xml]
hbase.auth.key.update.interval = 86400000  [hbase-default.xml]
dfs.namenode.available-space-block-placement-policy.balance-local-node = false  [hdfs-default.xml]
yarn.resourcemanager.zk-appid-node.split-index = 0  [yarn-default.xml]
fs.s3a.threads.max = 10  [core-default.xml]
yarn.resourcemanager.state-store.max-completed-applications = ${yarn.resourcemanager.max-completed-applications}  [yarn-default.xml]
fs.s3a.etag.checksum.enabled = false  [core-default.xml]
dfs.datanode.peer.stats.enabled = false  [hdfs-default.xml]
fs.AbstractFileSystem.ftp.impl = org.apache.hadoop.fs.ftp.FtpFs  [core-default.xml]
ipc.client.idlethreshold = 4000  [core-default.xml]
zookeeper.znode.acl.parent = acl  [hbase-default.xml]
ftp.bytes-per-checksum = 512  [core-default.xml]
dfs.client.socket.send.buffer.size = 0  [hdfs-default.xml]
hbase.normalizer.merge.min_region_age.days = 3  [hbase-default.xml]
dfs.provided.aliasmap.inmemory.leveldb.dir = /tmp  [hdfs-default.xml]
dfs.namenode.posix.acl.inheritance.enabled = true  [hdfs-default.xml]
hbase.rest.port = 8080  [hbase-default.xml]
fs.s3a.metadatastore.authoritative = false  [core-default.xml]
hadoop.proxyuser.cvp.hosts = *  [core-site.xml]
dfs.disk.balancer.block.tolerance.percent = 10  [hdfs-default.xml]
ipc.client.connection.maxidletime = 10000  [core-default.xml]
hadoop.common.configuration.version = 3.0.0  [core-default.xml]
dfs.mover.retry.max.attempts = 10  [hdfs-default.xml]
mapreduce.ifile.readahead.bytes = 4194304  [mapred-default.xml]
io.map.index.interval = 128  [core-default.xml]
dfs.disk.balancer.max.disk.throughputInMBperSec = 10  [hdfs-default.xml]
dfs.mover.moverThreads = 1000  [hdfs-default.xml]
yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms = 300000  [yarn-default.xml]
yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices = auto  [yarn-default.xml]
hbase.master.info.port = 16010  [programmatically]
tfile.fs.output.buffer.size = 262144  [core-default.xml]
hadoop.security.group.mapping.ldap.search.attr.member = member  [core-default.xml]
dfs.qjournal.queued-edits.limit.mb = 10  [hdfs-default.xml]
yarn.nodemanager.health-checker.script.timeout-ms = 1200000  [yarn-default.xml]
dfs.namenode.handler.count = 200  [hdfs-site.xml]
yarn.nodemanager.amrmproxy.ha.enable = false  [yarn-default.xml]
dfs.namenode.fs-limits.min-block-size = 1048576  [hdfs-default.xml]
yarn.timeline-service.entity-group-fs-store.with-user-dir = false  [yarn-default.xml]
dfs.datanode.handler.count = 10  [hdfs-site.xml]
dfs.datanode.http.internal-proxy.port = 0  [hdfs-default.xml]
yarn.app.attempt.diagnostics.limit.kc = 64  [yarn-default.xml]
yarn.resourcemanager.connect.retry-interval.ms = 30000  [yarn-default.xml]
hadoop.security.group.mapping.ldap.num.attempts.before.failover = 3  [core-default.xml]
hbase.bulkload.retries.number = 10  [hbase-default.xml]
yarn.resourcemanager.hostname = 0.0.0.0  [yarn-default.xml]
dfs.datanode.socket.reuse.keepalive = 4000  [hdfs-default.xml]
nfs.dump.dir = /tmp/.hdfs-nfs  [hdfs-default.xml]
dfs.namenode.retrycache.heap.percent = 0.03f  [hdfs-default.xml]
dfs.namenode.xattrs.enabled = true  [hdfs-default.xml]
yarn.nodemanager.container-localizer.java.opts = -Xmx256m  [yarn-default.xml]
dfs.namenode.quota.init-threads = 4  [hdfs-default.xml]
yarn.resourcemanager.nm-container-queuing.queue-limit-stdev = 1.0f  [yarn-default.xml]
mapreduce.jobhistory.webapp.address = 0.0.0.0:19888  [mapred-default.xml]
mapreduce.job.speculative.slowtaskthreshold = 1.0  [mapred-default.xml]
yarn.resourcemanager.address = ${yarn.resourcemanager.hostname}:8032  [yarn-default.xml]
fs.s3a.connection.timeout = 200000  [core-default.xml]
yarn.resourcemanager.submission-preprocessor.enabled = false  [yarn-default.xml]
dfs.block.access.token.enable = false  [hdfs-default.xml]
yarn.resourcemanager.am-rm-tokens.master-key-rolling-interval-secs = 86400  [yarn-default.xml]
mapreduce.map.sort.spill.percent = 0.80  [mapred-default.xml]
mapreduce.job.end-notification.max.attempts = 5  (final)  [mapred-default.xml]
dfs.namenode.safemode.extension = 0  [hdfs-site.xml]
yarn.log-aggregation.debug.filesize = 104857600  [yarn-default.xml]
dfs.ha.fencing.methods = shell(/bin/true)  [hdfs-site.xml]
dfs.namenode.redundancy.interval.seconds = 3s  [hdfs-default.xml]
dfs.namenode.rpc-address.mycluster.cvp328.sjc.aristanetworks.com = cvp328.sjc.aristanetworks.com:9001  [hdfs-site.xml]
dfs.content-summary.limit = 5000  [hdfs-default.xml]
yarn.timeline-service.store-class = org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore  [yarn-default.xml]
mapreduce.task.profile.reduces = 0-2  [mapred-default.xml]
dfs.namenode.edits.journal-plugin.qjournal = org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager  [hdfs-default.xml]
dfs.namenode.reencrypt.throttle.limit.updater.ratio = 1.0  [hdfs-default.xml]
yarn.timeline-service.client.fd-retain-secs = 300  [yarn-default.xml]
hadoop.security.group.mapping.ldap.search.group.hierarchy.levels = 0  [core-default.xml]
hbase.master.cleaner.snapshot.interval = 1800000  [hbase-default.xml]
hbase.zookeeper.leaderport = 3888  [hbase-default.xml]
ha.failover-controller.graceful-fence.connection.retries = 1  [core-site.xml]
hbase.zookeeper.dns.interface = default  [hbase-default.xml]
dfs.namenode.get-blocks.max-qps = 20  [hdfs-default.xml]
dfs.webhdfs.rest-csrf.enabled = false  [hdfs-default.xml]
yarn.nodemanager.runtime.linux.docker.userremapping-gid-threshold = 1  [yarn-default.xml]
dfs.namenode.decommission.blocks.per.interval = 500000  [hdfs-default.xml]
dfs.image.compress = false  [hdfs-default.xml]
hbase.hstore.compactionThreshold = 3  [hbase-default.xml]
hbase.hregion.memstore.mslab.chunksize = 2097152  [hbase-default.xml]
hbase.balancer.period = 150000  [hbase-site.xml]
yarn.am.liveness-monitor.expiry-interval-ms = 600000  [yarn-default.xml]
mapreduce.job.ubertask.maxreduces = 1  [mapred-default.xml]
dfs.namenode.safemode.min.datanodes = 2  [hdfs-site.xml]
fs.s3a.attempts.maximum = 20  [core-default.xml]
mapreduce.jobhistory.webapp.rest-csrf.methods-to-ignore = GET,OPTIONS,HEAD  [mapred-default.xml]
mapreduce.jobhistory.move.interval-ms = 180000  [mapred-default.xml]
hadoop.caller.context.max.size = 128  [core-default.xml]
fs.s3a.multipart.size = 100M  [core-default.xml]
yarn.registry.class = org.apache.hadoop.registry.client.impl.FSRegistryOperationsService  [yarn-default.xml]
dfs.edit.log.transfer.timeout = 30000  [hdfs-default.xml]
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage = 90.0  [yarn-default.xml]
hbase.rest.readonly = false  [hbase-default.xml]
mapreduce.shuffle.ssl.enabled = false  [mapred-default.xml]
dfs.namenode.backup.address = 0.0.0.0:50100  [hdfs-default.xml]
yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user = nobody  [yarn-default.xml]
dfs.namenode.checkpoint.check.period = 60s  [hdfs-default.xml]
nfs.allow.insecure.ports = true  [hdfs-default.xml]
hbase.regionserver.regionSplitLimit = 1000  [hbase-default.xml]
dfs.datanode.transfer.socket.send.buffer.size = 0  [hdfs-default.xml]
net.topology.node.switch.mapping.impl = org.apache.hadoop.net.ScriptBasedMapping  [core-default.xml]
dfs.namenode.delegation.token.always-use = false  [hdfs-default.xml]
yarn.nodemanager.resource.count-logical-processors-as-cores = false  [yarn-default.xml]
fs.swift.impl = org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem  [core-default.xml]
yarn.workflow-id.tag-prefix = workflowid:  [yarn-default.xml]
yarn.resourcemanager.delegation.token.max-lifetime = 604800000  [yarn-default.xml]
yarn.app.mapreduce.shuffle.log.backups = 0  [mapred-default.xml]
dfs.webhdfs.oauth2.enabled = false  [hdfs-default.xml]
dfs.namenode.edit.log.autoroll.multiplier.threshold = 0.5  [hdfs-default.xml]
hbase.master.procedurewalcleaner.ttl = 3600000  [hbase-site.xml]
yarn.nodemanager.container-metrics.unregister-delay-ms = 10000  [yarn-default.xml]
hadoop.http.sni.host.check.enabled = false  [core-default.xml]
dfs.webhdfs.user.provider.user.pattern = ^[A-Za-z_][A-Za-z0-9._-]*[$]?$  [hdfs-default.xml]
fs.defaultFS = hdfs://mycluster  [programmatically]
fs.s3a.socket.recv.buffer = 8192  [core-default.xml]
mapreduce.reduce.merge.inmem.threshold = 1000  [mapred-default.xml]
dfs.provided.aliasmap.class = org.apache.hadoop.hdfs.server.common.blockaliasmap.impl.TextFileRegionAliasMap  [hdfs-default.xml]
yarn.router.webapp.address = 0.0.0.0:8089  [yarn-default.xml]
yarn.timeline-service.entity-group-fs-store.cleaner-interval-seconds = 3600  [yarn-default.xml]
yarn.timeline-service.writer.class = org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineWriterImpl  [yarn-default.xml]
dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction = 0.75f  [hdfs-default.xml]
yarn.nodemanager.containers-launcher.class = org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher  [yarn-default.xml]
yarn.nodemanager.runtime.linux.allowed-runtimes = default  [yarn-default.xml]
io.map.index.skip = 0  [core-default.xml]
dfs.secondary.namenode.kerberos.internal.spnego.principal = ${dfs.web.authentication.kerberos.principal}  [hdfs-default.xml]
hbase.local.dir = ${hbase.tmp.dir}/local/  [hbase-default.xml]
yarn.nodemanager.linux-container-executor.cgroups.delete-delay-ms = 20  [yarn-default.xml]
yarn.app.mapreduce.am.resource.mb = 1536  [mapred-default.xml]
dfs.namenode.provided.enabled = false  [hdfs-default.xml]
yarn.nodemanager.disk-health-checker.interval-ms = 120000  [yarn-default.xml]
mapreduce.reduce.memory.mb = -1  [mapred-default.xml]
mapreduce.job.maxtaskfailures.per.tracker = 3  [mapred-default.xml]
dfs.block.placement.ec.classname = org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant  [hdfs-default.xml]
mapreduce.jvm.system-properties-to-log = os.name,os.version,java.home,java.runtime.version,java.vendor,java.version,java.vm.name,java.class.path,java.io.tmpdir,user.dir,user.name  [mapred-default.xml]
hadoop.security.crypto.buffer.size = 8192  [core-default.xml]
yarn.nodemanager.container-metrics.period-ms = -1  [yarn-default.xml]
yarn.scheduler.minimum-allocation-vcores = 1  [yarn-default.xml]
yarn.resourcemanager.keytab = /etc/krb5.keytab  [yarn-default.xml]
yarn.nodemanager.linux-container-executor.cgroups.hierarchy = /hadoop-yarn  [yarn-default.xml]
yarn.resourcemanager.fs.state-store.retry-interval-ms = 1000  [yarn-default.xml]
ipc.maximum.response.length = 134217728  [core-default.xml]
yarn.timeline-service.webapp.xfs-filter.xframe-options = SAMEORIGIN  [yarn-default.xml]
mapreduce.job.split.metainfo.maxsize = 10000000  [mapred-default.xml]
yarn.scheduler.configuration.zk-store.parent-path = /confstore  [yarn-default.xml]
yarn.log-aggregation.file-formats = TFile  [yarn-default.xml]
yarn.nodemanager.logaggregation.threadpool-size-max = 100  [yarn-default.xml]
yarn.resourcemanager.nm-container-queuing.max-queue-wait-time-ms = 100  [yarn-default.xml]
mapreduce.reduce.shuffle.read.timeout = 180000  [mapred-default.xml]
yarn.app.mapreduce.task.container.log.backups = 0  [mapred-default.xml]
fs.s3a.s3guard.ddb.table.capacity.write = 100  [core-default.xml]
ipc.client.connect.max.retries = 30  [core-site.xml]
hadoop.service.shutdown.timeout = 30s  [core-default.xml]
yarn.nodemanager.webapp.https.address = 0.0.0.0:8044  [yarn-default.xml]
yarn.resourcemanager.store.class = org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore  [yarn-default.xml]
hbase.hstore.flusher.count = 2  [hbase-default.xml]
hbase.master.info.port.orig = 16010  [programmatically]
yarn.rm.system-metrics-publisher.emit-container-events = false  [yarn-default.xml]
dfs.datanode.max.transfer.threads = 4096  [hdfs-site.xml]
mapreduce.job.local-fs.single-disk-limit.check.kill-limit-exceed = true  [mapred-default.xml]
dfs.datanode.metrics.logger.period.seconds = 600  [hdfs-default.xml]
hadoop.http.authentication.token.validity = 36000  [core-default.xml]
dfs.namenode.secondary.http-address = 0.0.0.0:9868  [hdfs-default.xml]
hadoop.fuse.timer.period = 5  [hdfs-default.xml]
yarn.client.failover-proxy-provider = org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider  [yarn-default.xml]
hbase.ipc.client.tcpnodelay = true  [hbase-default.xml]
yarn.nodemanager.container-diagnostics-maximum-size = 10000  [yarn-default.xml]
dfs.datanode.oob.timeout-ms = 1500,0,0,0  [hdfs-default.xml]
yarn.nodemanager.resourcemanager.minimum.version = NONE  [yarn-default.xml]
mapreduce.map.skip.maxrecords = 0  [mapred-default.xml]
yarn.scheduler.configuration.leveldb-store.path = ${hadoop.tmp.dir}/yarn/system/confstore  [yarn-default.xml]
dfs.namenode.top.num.users = 10  [hdfs-default.xml]
fs.s3a.metadatastore.impl = org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore  [core-default.xml]
hbase.hstore.bytes.per.checksum = 16384  [hbase-default.xml]
hbase.master.mob.ttl.cleaner.period = 86400  [hbase-default.xml]
yarn.resourcemanager.configuration.provider-class = org.apache.hadoop.yarn.LocalConfigurationProvider  [yarn-default.xml]
hbase.meta.replicas.use = false  [programmatically]
hbase.hregion.majorcompaction = 0  [hbase-site.xml]
hbase.master.balancer.rejection.buffer.enabled = false  [hbase-default.xml]
dfs.webhdfs.enabled = true  [hdfs-site.xml]
hadoop.tmp.dir = /data/hdfs  [core-site.xml]
yarn.resourcemanager.webapp.https.address = ${yarn.resourcemanager.hostname}:8090  [yarn-default.xml]
dfs.namenode.ec.policies.max.cellsize = 4194304  [hdfs-default.xml]
yarn.timeline-service.entity-group-fs-store.done-dir = /tmp/entity-file-history/done/  [yarn-default.xml]
yarn.nodemanager.linux-container-executor.cgroups.delete-timeout-ms = 1000  [yarn-default.xml]
dfs.namenode.avoid.read.stale.datanode = true  [hdfs-site.xml]
mapreduce.cluster.local.dir = ${hadoop.tmp.dir}/mapred/local  [mapred-default.xml]
mapreduce.jobhistory.move.thread-count = 3  [mapred-default.xml]
hbase.status.multicast.address.port = 16100  [hbase-default.xml]
yarn.sharedcache.store.in-memory.check-period-mins = 720  [yarn-default.xml]
fs.s3a.multipart.threshold = 2147483647  [core-default.xml]
hbase.zookeeper.property.dataDir = ${hbase.tmp.dir}/zookeeper  [hbase-default.xml]
hbase.snapshot.restore.failsafe.name = hbase-failsafe-{snapshot.name}-{restore.timestamp}  [hbase-default.xml]
dfs.namenode.checkpoint.period = 3600s  [hdfs-default.xml]
hadoop.security.kms.client.encrypted.key.cache.low-watermark = 0.3f  [core-default.xml]
dfs.replication = 3  [hdfs-site.xml]
dfs.namenode.datanode.registration.ip-hostname-check = true  [hdfs-default.xml]
mapreduce.job.reducer.unconditional-preempt.delay.sec = 300  [mapred-default.xml]
dfs.datanode.shared.file.descriptor.paths = /dev/shm,/tmp  [hdfs-default.xml]
dfs.namenode.checkpoint.check.quiet-multiplier = 1.5  [hdfs-default.xml]
yarn.timeline-service.recovery.enabled = false  [yarn-default.xml]
hadoop.security.instrumentation.requires.admin = false  [core-default.xml]
dfs.namenode.startup.delay.block.deletion.sec = 0  [hdfs-default.xml]
yarn.resourcemanager.resource-tracker.client.thread-count = 50  [yarn-default.xml]
hbase.ipc.server.callqueue.scan.ratio = 0  [hbase-default.xml]
ha.failover-controller.new-active.rpc-timeout.ms = 25000  [core-site.xml]
ipc.server.max.connections = 0  [core-default.xml]
yarn.app.mapreduce.am.job.task.listener.thread-count = 30  [mapred-default.xml]
yarn.nodemanager.resource.detect-hardware-capabilities = false  [yarn-default.xml]
yarn.scheduler.maximum-allocation-vcores = 4  [yarn-default.xml]
net.topology.script.number.args = 100  [core-default.xml]
yarn.nodemanager.resource.system-reserved-memory-mb = -1  [yarn-default.xml]
hbase.mob.compaction.batch.size = 100  [hbase-default.xml]
fs.s3a.socket.send.buffer = 8192  [core-default.xml]
yarn.nodemanager.runtime.linux.docker.privileged-containers.allowed = false  [yarn-default.xml]
yarn.nodemanager.runtime.linux.docker.allowed-container-networks = host,none,bridge  [yarn-default.xml]
mapreduce.reduce.cpu.vcores = 1  [mapred-default.xml]
ftp.stream-buffer-size = 4096  [core-default.xml]
yarn.client.nodemanager-connect.max-wait-ms = 180000  [yarn-default.xml]
hbase.regionserver.offheap.global.memstore.size = 0  [hbase-default.xml]
dfs.namenode.snapshotdiff.listing.limit = 1000  [hdfs-default.xml]
hadoop.rpc.socket.factory.class.default = org.apache.hadoop.net.StandardSocketFactory  [core-default.xml]
yarn.timeline-service.hbase.coprocessor.jar.hdfs.location = /hbase/coprocessor/hadoop-yarn-server-timelineservice.jar  [yarn-default.xml]
hbase.snapshot.region.timeout = 300000  [hbase-default.xml]
dfs.namenode.reconstruction.pending.timeout-sec = 300  [hdfs-default.xml]
hadoop.security.dns.log-slow-lookups.threshold.ms = 1000  [core-default.xml]
ha.zookeeper.acl = world:anyone:rwcda  [core-default.xml]
hbase.rs.cacheblocksonwrite = false  [hbase-default.xml]
hadoop.security.groups.cache.background.reload = false  [core-default.xml]
hbase.hstore.checksum.algorithm = CRC32C  [hbase-default.xml]
hadoop.caller.context.signature.max.size = 40  [core-default.xml]
ipc.client.connect.retry.interval = 1000  [core-default.xml]
mapreduce.reduce.speculative = true  [mapred-default.xml]
mapreduce.reduce.shuffle.merge.percent = 0.66  [mapred-default.xml]
mapreduce.job.finish-when-all-reducers-done = true  [mapred-default.xml]
dfs.namenode.name.dir.restore = false  [hdfs-default.xml]
yarn.sharedcache.nested-level = 3  [yarn-default.xml]
hbase.regionserver.info.port.auto = false  [hbase-default.xml]
yarn.webapp.filter-entity-list-by-user = false  [yarn-default.xml]
yarn.nodemanager.log-aggregation.policy.class = org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AllContainerLogAggregationPolicy  [yarn-default.xml]
hbase.coprocessor.abortonerror = true  [hbase-default.xml]
dfs.namenode.max-corrupt-file-blocks-returned = 100  [hdfs-default.xml]
dfs.client.retry.interval-ms.get-last-block-length = 4000  [hdfs-default.xml]
yarn.nodemanager.webapp.cross-origin.enabled = false  [yarn-default.xml]
hadoop.security.group.mapping.ldap.read.timeout.ms = 60000  [core-default.xml]
dfs.mover.movedWinWidth = 5400000  [hdfs-default.xml]
mapreduce.job.ubertask.enable = false  [mapred-default.xml]
hadoop.http.authentication.signature.secret.file = ${user.home}/hadoop-http-auth-signature-secret  [core-default.xml]
yarn.timeline-service.client.fd-clean-interval-secs = 60  [yarn-default.xml]
hbase.status.publisher.class = org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher  [hbase-default.xml]
dfs.namenode.top.enabled = true  [hdfs-default.xml]
hbase.hregion.preclose.flush.size = 5242880  [hbase-default.xml]
hbase.regionserver.minorcompaction.pagecache.drop = true  [hbase-default.xml]
dfs.client.read.short.circuit.replica.stale.threshold.ms = 1800000  [hdfs-default.xml]
dfs.bytes-per-checksum = 512  [hdfs-default.xml]
yarn.node-labels.fs-store.impl.class = org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore  [yarn-default.xml]
dfs.namenode.fs-limits.max-xattrs-per-inode = 32  [hdfs-default.xml]
ipc.client.tcpnodelay = true  [core-default.xml]
hadoop.system.tags = YARN,HDFS,NAMENODE,DATANODE,REQUIRED,SECURITY,KERBEROS,PERFORMANCE,CLIENT ,SERVER,DEBUG,DEPRICATED,COMMON,OPTIONAL  [core-default.xml]
dfs.image.compression.codec = org.apache.hadoop.io.compress.DefaultCodec  [hdfs-default.xml]
yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled = false  [yarn-default.xml]
yarn.acl.reservation-enable = false  [yarn-default.xml]
dfs.namenode.stale.datanode.minimum.interval = 3  [hdfs-default.xml]
mapreduce.jobhistory.principal = jhs/_HOST@REALM.TLD  [mapred-default.xml]
hbase.client.operation.timeout = 1200000  [hbase-default.xml]
hbase.status.listener.class = org.apache.hadoop.hbase.client.ClusterStatusListener$MulticastListener  [hbase-default.xml]
fs.adl.impl = org.apache.hadoop.fs.adl.AdlFileSystem  [core-default.xml]
hadoop.policy.file = hbase-policy.xml  [hbase-default.xml]
dfs.ha.zkfc.nn.http.timeout.ms = 20000  [hdfs-default.xml]
dfs.client.use.datanode.hostname = false  [hdfs-default.xml]
fs.s3a.fast.upload.active.blocks = 4  [core-default.xml]
yarn.timeline-service.webapp.https.address = ${yarn.timeline-service.hostname}:8190  [yarn-default.xml]
tfile.fs.input.buffer.size = 262144  [core-default.xml]
dfs.storage.policy.enabled = true  [hdfs-default.xml]
hbase.ipc.server.max.callqueue.length = 1024  [hbase-site.xml]
hadoop.ssl.keystores.factory.class = org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory  [core-default.xml]
yarn.nodemanager.admin-env = MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX  [yarn-default.xml]
yarn.resourcemanager.am.max-attempts = 2  [yarn-default.xml]
mapreduce.job.emit-timeline-data = false  [mapred-default.xml]
mapreduce.jobhistory.cleaner.enable = true  [mapred-default.xml]
dfs.balancer.keytab.enabled = false  [hdfs-default.xml]
dfs.namenode.edekcacheloader.initial.delay.ms = 3000  [hdfs-default.xml]
yarn.nodemanager.log.deletion-threads-count = 4  [yarn-default.xml]
ha.failover-controller.graceful-fence.rpc-timeout.ms = 5000  [core-site.xml]
dfs.datanode.balance.bandwidthPerSec = 10m  [hdfs-default.xml]
io.storefile.bloom.block.size = 131072  [hbase-default.xml]
yarn.timeline-service.http-cross-origin.enabled = false  [yarn-default.xml]
io.mapfile.bloom.error.rate = 0.005  [core-default.xml]
hfile.block.index.cacheonwrite = false  [hbase-default.xml]
mapreduce.client.libjars.wildcard = true  [mapred-default.xml]
dfs.client.block.write.replace-datanode-on-failure.enable = true  [hdfs-default.xml]
dfs.client.mmap.cache.timeout.ms = 3600000  [hdfs-default.xml]
dfs.namenode.enable.log.stale.datanode = false  [hdfs-default.xml]
fs.s3a.s3guard.ddb.table.create = false  [core-default.xml]
hbase.rpc.rows.warning.threshold = 5000  [hbase-default.xml]
dfs.namenode.blocks.per.postponedblocks.rescan = 10000  [hdfs-default.xml]
yarn.nodemanager.amrmproxy.client.thread-count = 25  [yarn-default.xml]
hadoop.http.authentication.kerberos.principal = HTTP/_HOST@LOCALHOST  [core-default.xml]
yarn.sharedcache.client-server.address = 0.0.0.0:8045  [yarn-default.xml]
yarn.app.mapreduce.am.command-opts = -Xmx1024m  [mapred-default.xml]
yarn.federation.registry.base-dir = yarnfederation/  [yarn-default.xml]
dfs.namenode.checkpoint.max-retries = 3  [hdfs-default.xml]
dfs.namenode.path.based.cache.retry.interval.ms = 30000  [hdfs-default.xml]
hbase.regions.slop = 0.001  [hbase-default.xml]
dfs.image.transfer-bootstrap-standby.bandwidthPerSec = 0  [hdfs-default.xml]
dfs.block.replicator.classname = org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault  [hdfs-default.xml]
hadoop.registry.system.acls = sasl:yarn@, sasl:mapred@, sasl:hdfs@  [core-default.xml]
yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices = auto  [yarn-default.xml]
mapreduce.task.profile.reduce.params = ${mapreduce.task.profile.params}  [mapred-default.xml]
dfs.permissions.enabled = true  [hdfs-default.xml]
hadoop.shell.safely.delete.limit.num.files = 100  [core-default.xml]
dfs.balancer.getBlocks.size = 2147483648  [hdfs-default.xml]
ha.health-monitor.check-interval.ms = 1000  [core-site.xml]
yarn.sharedcache.enabled = false  [yarn-default.xml]
dfs.namenode.http-address.mycluster.cvp328.sjc.aristanetworks.com = cvp328.sjc.aristanetworks.com:15070  [hdfs-site.xml]
yarn.resourcemanager.placement-constraints.handler = disabled  [yarn-default.xml]
yarn.is.minicluster = false  [yarn-default.xml]
yarn.nodemanager.recovery.enabled = false  [yarn-default.xml]
dfs.namenode.audit.loggers = default  [hdfs-default.xml]
yarn.nodemanager.health-checker.interval-ms = 600000  [yarn-default.xml]
dfs.namenode.acls.enabled = false  [hdfs-default.xml]
mapreduce.job.acl-modify-job = " "  [mapred-default.xml]
yarn.resourcemanager.work-preserving-recovery.enabled = true  [yarn-default.xml]
yarn.app.mapreduce.am.staging-dir.erasurecoding.enabled = false  [mapred-default.xml]
hbase.table.normalization.enabled = false  [hbase-default.xml]
hadoop.kerberos.min.seconds.before.relogin = 60  [core-default.xml]
yarn.timeline-service.handler-thread-count = 10  [yarn-default.xml]
hadoop.http.cross-origin.enabled = false  [core-default.xml]
yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms = 1000  [mapred-default.xml]
mapreduce.jobhistory.webapp.xfs-filter.xframe-options = SAMEORIGIN  [mapred-default.xml]
yarn.timeline-service.entity-group-fs-store.scan-interval-seconds = 60  [yarn-default.xml]
hbase.client.scanner.max.result.size = 2097152  [hbase-default.xml]
yarn.scheduler.configuration.leveldb-store.compaction-interval-secs = 86400  [yarn-default.xml]
mapreduce.shuffle.transfer.buffer.size = 131072  [mapred-default.xml]
yarn.nodemanager.log-container-debug-info.enabled = true  [yarn-default.xml]
hbase.hregion.max.filesize = 107374182400  [hbase-site.xml]
hadoop.zk.retry-interval-ms = 1000  [core-default.xml]
dfs.namenode.list.encryption.zones.num.responses = 100  [hdfs-default.xml]
dfs.namenode.hosts.provider.classname = org.apache.hadoop.hdfs.server.blockmanagement.HostFileManager  [hdfs-default.xml]
yarn.app.mapreduce.am.job.committer.cancel-timeout = 60000  [mapred-default.xml]
hbase.table.lock.enable = true  [hbase-default.xml]
hbase.regionserver.info.bindAddress = 0.0.0.0  (final=false, hbase-default.xml)
yarn.nodemanager.hostname = 0.0.0.0  (final=false, yarn-default.xml)
mapreduce.job.cache.limit.max-single-resource-mb = 0  (final=false, mapred-default.xml)
yarn.log-aggregation.retain-check-interval-seconds = -1  (final=false, yarn-default.xml)
hbase.rootdir = hdfs://mycluster/hbase  (final=false, programmatically)
hbase.regionserver.hlog.tolerable.lowreplication = 2  (final=false, hbase-site.xml)
fs.har.impl.disable.cache = true  (final=false, core-default.xml)
yarn.scheduler.include-port-in-node-name = false  (final=false, yarn-default.xml)
yarn.timeline-service.http-authentication.simple.anonymous.allowed = true  (final=false, yarn-default.xml)
dfs.pipeline.ecn = false  (final=false, hdfs-default.xml)
yarn.nodemanager.opportunistic-containers-max-queue-length = 0  (final=false, yarn-default.xml)
mapreduce.job.encrypted-intermediate-data = false  (final=false, mapred-default.xml)
hadoop.security.group.mapping.ldap.search.filter.group = (objectClass=group)  (final=false, core-default.xml)
hbase.regionserver.handler.count = 16  (final=false, hbase-site.xml)
dfs.checksum.type = CRC32C  (final=false, hdfs-default.xml)
yarn.log-aggregation.file-controller.TFile.class = org.apache.hadoop.yarn.logaggregation.filecontroller.tfile.LogAggregationTFileController  (final=false, yarn-default.xml)
dfs.client.max.block.acquire.failures = 3  (final=false, hdfs-default.xml)
yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users = true  (final=false, yarn-default.xml)
dfs.reformat.disabled = false  (final=false, hdfs-default.xml)
yarn.resourcemanager.leveldb-state-store.compaction-interval-secs = 3600  (final=false, yarn-default.xml)
yarn.nodemanager.runtime.linux.sandbox-mode = disabled  (final=false, yarn-default.xml)
dfs.http.client.retry.policy.spec = 10000,6,60000,10  (final=false, hdfs-default.xml)
dfs.balancer.dispatcherThreads = 200  (final=false, hdfs-default.xml)
yarn.app.mapreduce.am.resource.cpu-vcores = 1  (final=false, mapred-default.xml)
dfs.balancer.max-iteration-time = 1200000  (final=false, hdfs-default.xml)
yarn.resourcemanager.delayed.delegation-token.removal-interval-ms = 30000  (final=false, yarn-default.xml)
hbase.rest.threads.min = 2  (final=false, hbase-default.xml)
yarn.nodemanager.vmem-pmem-ratio = 2.1  (final=false, yarn-default.xml)
yarn.webapp.xfs-filter.enabled = true  (final=false, yarn-default.xml)
dfs.namenode.block.deletion.increment = 1000  (final=false, hdfs-default.xml)
yarn.nodemanager.localizer.cache.target-size-mb = 10240  (final=false, yarn-default.xml)
dfs.qjournal.http.read.timeout.ms = 60000  (final=false, hdfs-default.xml)
hadoop.zk.num-retries = 1000  (final=false, core-default.xml)
yarn.resourcemanager.scheduler.monitor.enable = false  (final=false, yarn-default.xml)
dfs.client.write.byte-array-manager.count-reset-time-period-ms = 10000  (final=false, hdfs-default.xml)
fs.azure.saskey.usecontainersaskeyforallaccess = true  (final=false, core-default.xml)
dfs.client.domain.socket.data.traffic = false  (final=false, hdfs-default.xml)
yarn.timeline-service.reader.webapp.https.address = ${yarn.timeline-service.webapp.https.address}  (final=false, yarn-default.xml)
hbase.client.retries.number = 45  (final=false, programmatically)
adl.feature.ownerandgroup.enableupn = false  (final=false, core-default.xml)
hbase.zookeeper.property.maxClientCnxns = 300  (final=false, hbase-default.xml)
httpfs.buffer.size = 4096  (final=false, hdfs-default.xml)
yarn.resourcemanager.zk-state-store.parent-path = /rmstore  (final=false, yarn-default.xml)
dfs.datanode.block-pinning.enabled = false  (final=false, hdfs-default.xml)
yarn.webapp.enable-rest-app-submissions = true  (final=false, yarn-default.xml)
yarn.nodemanager.runtime.linux.docker.enable-userremapping.allowed = true  (final=false, yarn-default.xml)
dfs.namenode.storageinfo.defragment.interval.ms = 600000  (final=false, hdfs-default.xml)
mapreduce.map.speculative = true  (final=false, mapred-default.xml)
dfs.mover.max-no-move-interval = 60000  (final=false, hdfs-default.xml)
yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = true  (final=false, yarn-default.xml)
mapreduce.reduce.shuffle.fetch.retry.interval-ms = 1000  (final=false, mapred-default.xml)
yarn.resourcemanager.delegation.key.update-interval = 86400000  (final=false, yarn-default.xml)
fs.azure.local.sas.key.mode = false  (final=false, core-default.xml)
fs.getspaceused.jitterMillis = 60000  (final=false, core-default.xml)
io.erasurecode.codec.xor.rawcoders = xor_native,xor_java  (final=false, core-default.xml)
dfs.cachereport.intervalMsec = 10000  (final=false, hdfs-default.xml)
ipc.maximum.data.length = 67108864  (final=false, core-default.xml)
ha.failover-controller.active-standby-elector.zk.op.retries = 3  (final=false, core-default.xml)
mapreduce.jobhistory.intermediate-user-done-dir.permissions = 770  (final=false, mapred-default.xml)
yarn.log-aggregation.retain-seconds = -1  (final=false, yarn-default.xml)
mapreduce.job.encrypted-intermediate-data.buffer.kb = 128  (final=false, mapred-default.xml)
yarn.nodemanager.webapp.address = ${yarn.nodemanager.hostname}:8042  (final=false, yarn-default.xml)
hbase.rest.csrf.enabled = false  (final=false, hbase-default.xml)
yarn.sharedcache.store.class = org.apache.hadoop.yarn.server.sharedcachemanager.store.InMemorySCMStore  (final=false, yarn-default.xml)
hbase.snapshot.restore.take.failsafe.snapshot = true  (final=false, hbase-default.xml)
dfs.block.misreplication.processing.limit = 10000  (final=false, hdfs-default.xml)
yarn.timeline-service.client.fd-flush-interval-secs = 10  (final=false, yarn-default.xml)
yarn.nodemanager.local-dirs = ${hadoop.tmp.dir}/nm-local-dir  (final=false, yarn-default.xml)
dfs.namenode.list.openfiles.num.responses = 1000  (final=false, hdfs-default.xml)
hbase.regionserver.port = 16201  (final=false, hbase-site.xml)
nfs.mountd.port = 4242  (final=false, hdfs-default.xml)
dfs.disk.balancer.plan.threshold.percent = 10  (final=false, hdfs-default.xml)
hadoop.security.credential.clear-text-fallback = true  (final=false, core-default.xml)
hadoop.registry.zk.retry.interval.ms = 1000  (final=false, core-default.xml)
hadoop.security.uid.cache.secs = 14400  (final=false, core-default.xml)
dfs.webhdfs.rest-csrf.browser-useragents-regex = ^Mozilla.*,^Opera.*  (final=false, hdfs-default.xml)
yarn.client.application-client-protocol.poll-timeout-ms = -1  (final=false, yarn-default.xml)
yarn.router.rmadmin.interceptor-class.pipeline = org.apache.hadoop.yarn.server.router.rmadmin.DefaultRMAdminRequestInterceptor  (final=false, yarn-default.xml)
yarn.nodemanager.amrmproxy.interceptor-class.pipeline = org.apache.hadoop.yarn.server.nodemanager.amrmproxy.DefaultRequestInterceptor  (final=false, yarn-default.xml)
dfs.webhdfs.netty.low.watermark = 32768  (final=false, hdfs-default.xml)
dfs.namenode.resource.check.interval = 5000  (final=false, hdfs-default.xml)
dfs.datanode.fsdatasetcache.max.threads.per.volume = 4  (final=false, hdfs-default.xml)
hbase.client.max.perregion.tasks = 1  (final=false, hbase-default.xml)
hbase.regionserver.hlog.reader.impl = org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader  (final=false, hbase-default.xml)
hadoop.security.group.mapping.ldap.conversion.rule = none  (final=false, core-default.xml)
yarn.resourcemanager.nm-container-queuing.max-queue-length = 15  (final=false, yarn-default.xml)
dfs.datanode.outliers.report.interval = 30m  (final=false, hdfs-default.xml)
yarn.resourcemanager.delegation-token-renewer.thread-count = 50  (final=false, yarn-default.xml)
fs.viewfs.rename.strategy = SAME_MOUNTPOINT  (final=false, core-default.xml)
dfs.qjournal.prepare-recovery.timeout.ms = 120000  (final=false, hdfs-default.xml)
hbase.regionserver.thrift.framed = false  (final=false, hbase-default.xml)
dfs.namenode.redundancy.queue.restart.iterations = 2400  (final=false, hdfs-default.xml)
yarn.nodemanager.resource-plugins.fpga.vendor-plugin.class = org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin  (final=false, yarn-default.xml)
hbase.master.infoserver.redirect = true  (final=false, hbase-default.xml)
mapreduce.task.exit.timeout.check-interval-ms = 20000  (final=false, mapred-default.xml)
dfs.qjournal.http.open.timeout.ms = 60000  (final=false, hdfs-default.xml)
hadoop.security.group.mapping.ldap.connection.timeout.ms = 60000  (final=false, core-default.xml)
mapreduce.reduce.shuffle.connect.timeout = 180000  (final=false, mapred-default.xml)
mapreduce.am.max-attempts = 2  (final=false, mapred-default.xml)
dfs.datanode.http.address = cvp328.sjc.aristanetworks.com:15075  (final=false, hdfs-site.xml)
hbase.server.keyvalue.maxsize = 10485760  (final=false, hbase-default.xml)
hadoop.security.authorization = false  (final=false, core-default.xml)
mapreduce.task.merge.progress.records = 10000  (final=false, mapred-default.xml)
dfs.qjournal.new-epoch.timeout.ms = 120000  (final=false, hdfs-default.xml)
dfs.image.transfer.timeout = 60000  (final=false, hdfs-default.xml)
fs.ftp.transfer.mode = BLOCK_TRANSFER_MODE  (final=false, core-default.xml)
mapreduce.ifile.readahead = true  (final=false, mapred-default.xml)
mapreduce.task.skip.start.attempts = 2  (final=false, mapred-default.xml)
fs.s3a.committer.threads = 8  (final=false, core-default.xml)
yarn.sharedcache.uploader.server.thread-count = 50  (final=false, yarn-default.xml)
yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms = 10000  (final=false, yarn-default.xml)
hbase.master.balancer.maxRitPercent = 1.0  (final=false, hbase-default.xml)
yarn.resourcemanager.ha.enabled = false  (final=false, yarn-default.xml)
mapreduce.job.cache.limit.max-resources-mb = 0  (final=false, mapred-default.xml)
hbase.mob.cache.evict.period = 3600  (final=false, hbase-default.xml)
dfs.namenode.available-space-block-placement-policy.balanced-space-preference-fraction = 0.6  (final=false, hdfs-default.xml)
mapreduce.input.fileinputformat.split.minsize = 0  (final=false, mapred-default.xml)
dfs.ha.tail-edits.rolledits.timeout = 60  (final=false, hdfs-default.xml)
hbase.rest.threads.max = 100  (final=false, hbase-default.xml)
yarn.timeline-service.ttl-enable = true  (final=false, yarn-default.xml)
yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds.min = 3600  (final=false, yarn-default.xml)
yarn.resourcemanager.node-labels.provider.fetch-interval-ms = 1800000  (final=false, yarn-default.xml)
hbase.display.keys = true  (final=false, hbase-default.xml)
hbase.coprocessor.enabled = true  (final=false, hbase-default.xml)
dfs.namenode.top.windows.minutes = 1,5,25  (final=false, hdfs-default.xml)
dfs.namenode.backup.http-address = 0.0.0.0:50105  (final=false, hdfs-default.xml)
yarn.nodemanager.container.stderr.tail.bytes = 4096  (final=false, yarn-default.xml)
dfs.namenode.delegation.token.max-lifetime = 604800000  (final=false, hdfs-default.xml)
hbase.regionserver.hlog.slowsync.ms = 500  (final=false, hbase-site.xml)
hfile.block.cache.size = 0.2  (final=false, hbase-site.xml)
mapreduce.map.maxattempts = 4  (final=false, mapred-default.xml)
dfs.datanode.lazywriter.interval.sec = 60  (final=false, hdfs-default.xml)
dfs.image.transfer.bandwidthPerSec = 0  (final=false, hdfs-default.xml)
hbase.unsafe.regionserver.hostname.disable.master.reversedns = false  (final=false, hbase-default.xml)
dfs.namenode.max.extra.edits.segments.retained = 10000  (final=false, hdfs-default.xml)
yarn.sharedcache.nm.uploader.replication.factor = 10  (final=false, yarn-default.xml)
yarn.federation.subcluster-resolver.class = org.apache.hadoop.yarn.server.federation.resolver.DefaultSubClusterResolverImpl  (final=false, yarn-default.xml)
dfs.client.mmap.enabled = true  (final=false, hdfs-default.xml)
hbase.hregion.memstore.block.multiplier = 4  (final=false, hbase-default.xml)
mapreduce.job.ubertask.maxmaps = 9  (final=false, mapred-default.xml)
fs.client.resolve.remote.symlinks = true  (final=false, core-default.xml)
dfs.stream-buffer-size = 4096  (final=false, hdfs-default.xml)
dfs.client.block.write.replace-datanode-on-failure.policy = NEVER  (final=false, hdfs-site.xml)
yarn.app.mapreduce.shuffle.log.separate = true  (final=false, mapred-default.xml)
hbase.normalizer.split.enabled = true  (final=false, hbase-default.xml)
hbase.rest-csrf.browser-useragents-regex = ^Mozilla.*,^Opera.*  (final=false, hbase-default.xml)
hbase.data.umask = 000  (final=false, hbase-default.xml)
hadoop.security.group.mapping.ldap.ssl = false  (final=false, core-default.xml)
yarn.resourcemanager.application.max-tags = 10  (final=false, yarn-default.xml)
dfs.journalnode.enable.sync = true  (final=false, hdfs-default.xml)
mapreduce.task.files.preserve.failedtasks = false  (final=false, mapred-default.xml)
fs.s3a.paging.maximum = 5000  (final=false, core-default.xml)
dfs.qjournal.finalize-segment.timeout.ms = 120000  (final=false, hdfs-default.xml)
dfs.namenode.max-num-blocks-to-log = 1000  (final=false, hdfs-default.xml)
mapreduce.job.reduce.shuffle.consumer.plugin.class = org.apache.hadoop.mapreduce.task.reduce.Shuffle  (final=false, mapred-default.xml)
yarn.cluster.max-application-priority = 0  (final=false, yarn-default.xml)
yarn.timeline-service.enabled = false  (final=false, yarn-default.xml)
dfs.journalnode.http-address = cvp328.sjc.aristanetworks.com:8480  (final=false, hdfs-site.xml)
yarn.nodemanager.resource.memory.cgroups.soft-limit-percentage = 90.0  (final=false, yarn-default.xml)
fs.s3a.retry.throttle.interval = 1000ms  (final=false, core-default.xml)
hbase.regionserver.dns.nameserver = default  (final=false, hbase-default.xml)
dfs.mover.address = 0.0.0.0:0  (final=false, hdfs-default.xml)
yarn.scheduler.configuration.store.max-logs = 1000  (final=false, yarn-default.xml)
yarn.nodemanager.keytab = /etc/krb5.keytab  (final=false, yarn-default.xml)
dfs.user.home.dir.prefix = /user  (final=false, hdfs-default.xml)
hadoop.http.staticuser.user = dr.who  (final=false, core-default.xml)
dfs.ha.automatic-failover.enabled = true  (final=false, hdfs-site.xml)
hbase.regionserver.logroll.errors.tolerated = 10  (final=false, hbase-site.xml)
dfs.datanode.cached-dfsused.check.interval.ms = 600000  (final=false, hdfs-default.xml)
hbase.client.perserver.requests.threshold = 2147483647  (final=false, hbase-default.xml)
mapreduce.jobhistory.http.policy = HTTP_ONLY  (final=false, mapred-default.xml)
dfs.blockreport.intervalMsec = 21600000  (final=false, hdfs-default.xml)
dfs.namenode.lifeline.handler.ratio = 0.10  (final=false, hdfs-default.xml)
io.seqfile.compress.blocksize = 1000000  (final=false, core-default.xml)
hbase.regionserver.thrift.framed.max_frame_size_in_mb = 2  (final=false, hbase-default.xml)
yarn.resourcemanager.admin.address = ${yarn.resourcemanager.hostname}:8033  (final=false, yarn-default.xml)
dfs.client.failover.connection.retries.on.timeouts = 0  (final=false, hdfs-default.xml)
dfs.namenode.list.reencryption.status.num.responses = 100  (final=false, hdfs-default.xml)
ha.zookeeper.session-timeout.ms = 10000  (final=false, core-default.xml)
yarn.sharedcache.checksum.algo.impl = org.apache.hadoop.yarn.sharedcache.ChecksumSHA256Impl  (final=false, yarn-default.xml)
dfs.replication.max = 512  (final=false, hdfs-default.xml)
yarn.nodemanager.container-manager.thread-count = 20  (final=false, yarn-default.xml)
hadoop.security.groups.negative-cache.secs = 30  (final=false, core-default.xml)
fs.s3a.impl = org.apache.hadoop.fs.s3a.S3AFileSystem  (final=false, core-default.xml)
hadoop.registry.zk.retry.times = 3  (final=false, core-site.xml)
file.stream-buffer-size = 4096  (final=false, core-default.xml)
hadoop.security.group.mapping = org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback  (final=false, core-default.xml)
mapreduce.client.genericoptionsparser.used = true  (final=false, programmatically)
mapreduce.jobhistory.recovery.store.fs.uri = ${hadoop.tmp.dir}/mapred/history/recoverystore  (final=false, mapred-default.xml)
dfs.default.chunk.view.size = 32768  (final=false, hdfs-default.xml)
yarn.resourcemanager.scheduler.monitor.policies = org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy  (final=false, yarn-default.xml)
yarn.timeline-service.keytab = /etc/krb5.keytab  (final=false, yarn-default.xml)
hbase.zookeeper.quorum = cvp328.sjc.aristanetworks.com,cvp365.sjc.aristanetworks.com,cvp90.sjc.aristanetworks.com  (final=false, hbase-site.xml)
zookeeper.znode.parent = /hbase  (final=false, hbase-default.xml)
mapreduce.reduce.input.buffer.percent = 0.0  (final=false, mapred-default.xml)
yarn.timeline-service.entity-group-fs-store.app-cache-size = 10  (final=false, yarn-default.xml)
dfs.datanode.ec.reconstruction.stripedread.buffer.size = 65536  (final=false, hdfs-default.xml)
mapreduce.jobhistory.address = 0.0.0.0:10020  (final=false, mapred-default.xml)
hadoop.proxyuser.cvp.groups = *  (final=false, core-site.xml)
dfs.namenode.num.checkpoints.retained = 2  (final=false, hdfs-default.xml)
mapreduce.job.max.split.locations = 15  (final=false, mapred-default.xml)
hbase.client.max.perserver.tasks = 2  (final=false, hbase-default.xml)
mapreduce.reduce.log.level = INFO  (final=false, mapred-default.xml)
yarn.timeline-service.webapp.address = ${yarn.timeline-service.hostname}:8188  (final=false, yarn-default.xml)
hbase.snapshot.enabled = true  (final=false, hbase-site.xml)
yarn.nodemanager.resource.memory.enforced = true  (final=false, yarn-default.xml)
hbase.lease.recovery.timeout = 23000  (final=false, hbase-site.xml)
hbase.column.max.version = 1  (final=false, hbase-default.xml)
hadoop.security.groups.cache.background.reload.threads = 3  (final=false, core-default.xml)
hadoop.workaround.non.threadsafe.getpwuid = true  (final=false, core-default.xml)
hbase.hstore.compaction.ratio = 1.2F  (final=false, hbase-default.xml)
dfs.client.read.shortcircuit.streams.cache.expiry.ms = 300000  (final=false, hdfs-default.xml)
hadoop.security.dns.log-slow-lookups.enabled = false  (final=false, core-default.xml)
hadoop.security.crypto.codec.classes.aes.ctr.nopadding = org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, org.apache.hadoop.crypto.JceAesCtrCryptoCodec  (final=false, core-default.xml)
hbase.regionserver.compaction.enabled = true  (final=false, hbase-default.xml)
yarn.timeline-service.reader.class = org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl  (final=false, yarn-default.xml)
hbase.security.authentication = simple  (final=false, hbase-default.xml)
hbase.zookeeper.property.tickTime = 6000  (final=false, hbase-site.xml)
mapreduce.job.end-notification.retry.interval = 1000  (final=false, mapred-default.xml)
dfs.webhdfs.socket.connect-timeout = 60s  (final=false, hdfs-default.xml)
yarn.timeline-service.entity-group-fs-store.retain-seconds = 604800  (final=false, yarn-default.xml)
yarn.nodemanager.remote-app-log-dir = /tmp/logs  (final=false, yarn-default.xml)
yarn.app.mapreduce.am.log.level = INFO  (final=false, mapred-default.xml)
hadoop.http.cross-origin.allowed-headers = X-Requested-With,Content-Type,Accept,Origin  (final=false, core-default.xml)
yarn.resourcemanager.nm-container-queuing.min-queue-wait-time-ms = 10  (final=false, yarn-default.xml)
hadoop.security.group.mapping.ldap.directory.search.timeout = 10000  (final=false, core-default.xml)
hadoop.http.cross-origin.allowed-methods = GET,POST,HEAD  (final=false, core-default.xml)
dfs.namenode.decommission.interval = 30s  (final=false, hdfs-default.xml)
hbase.master.regions.recovery.check.interval = 1200000  (final=false, hbase-default.xml)
yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs = 86400  (final=false, yarn-default.xml)
dfs.client.retry.max.attempts = 10  (final=false, hdfs-default.xml)
yarn.nodemanager.delete.debug-delay-sec = 0  (final=false, yarn-default.xml)
dfs.namenode.https-address = 0.0.0.0:9871  (final=false, hdfs-default.xml)
dfs.namenode.send.qop.enabled = false  (final=false, hdfs-default.xml)
fs.s3a.s3guard.cli.prune.age = 86400000  (final=false, core-default.xml)
dfs.provided.aliasmap.inmemory.dnrpc-address = 0.0.0.0:50200  (final=false, hdfs-default.xml)
mapreduce.reduce.skip.proc-count.auto-incr = true  (final=false, mapred-default.xml)
dfs.webhdfs.rest-csrf.methods-to-ignore = GET,OPTIONS,HEAD,TRACE  (final=false, hdfs-default.xml)
hbase.normalizer.merge.enabled = true  (final=false, hbase-default.xml)
fs.azure.authorization = false  (final=false, core-default.xml)
hbase.regionserver.msginterval = 5000  (final=false, hbase-site.xml)
hadoop.http.cross-origin.max-age = 1800  (final=false, core-default.xml)
dfs.https.server.keystore.resource = ssl-server.xml  (final=false, hdfs-default.xml)
yarn.nodemanager.log-aggregation.compression-type = none  (final=false, yarn-default.xml)
mapreduce.task.attempt.id = hb_rs_cvp328.sjc.aristanetworks.com,16201,1646451103340  (final=false, programmatically)
yarn.resourcemanager.submission-preprocessor.file-refresh-interval-ms = 60000  (final=false, yarn-default.xml)
yarn.scheduler.configuration.mutation.acl-policy.class = org.apache.hadoop.yarn.server.resourcemanager.scheduler.DefaultConfigurationMutationACLPolicy  (final=false, yarn-default.xml)
dfs.ha.namenodes.mycluster = cvp328.sjc.aristanetworks.com,cvp365.sjc.aristanetworks.com  (final=true, hdfs-site.xml)
ipc.client.connect.timeout = 1000  (final=false, core-site.xml)
dfs.namenode.path.based.cache.block.map.allocation.percent = 0.25  (final=false, hdfs-default.xml)
ha.zookeeper.quorum = cvp328.sjc.aristanetworks.com:2181,cvp365.sjc.aristanetworks.com:2181,cvp90.sjc.aristanetworks.com:2181  (final=false, core-site.xml)
yarn.resourcemanager.webapp.cross-origin.enabled = false  (final=false, yarn-default.xml)
fs.wasbs.impl = org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure  (final=false, core-default.xml)
mapreduce.output.fileoutputformat.compress = false  (final=false, mapred-default.xml)
yarn.nodemanager.amrmproxy.enabled = false  (final=false, yarn-default.xml)
dfs.namenode.max-lock-hold-to-release-lease-ms = 25  (final=false, hdfs-default.xml)
yarn.client.failover-retries-on-socket-timeouts = 0  (final=false, yarn-default.xml)
fs.s3a.buffer.dir = ${hadoop.tmp.dir}/s3a  (final=false, core-default.xml)
dfs.client.block.write.replace-datanode-on-failure.best-effort = false  (final=false, hdfs-default.xml)
yarn.resourcemanager.zk-delegation-token-node.split-index = 0  (final=false, yarn-default.xml)
hadoop.http.authentication.kerberos.keytab = ${user.home}/hadoop.keytab  (final=false, core-default.xml)
fs.s3a.retry.limit = ${fs.s3a.attempts.maximum}  (final=false, core-default.xml)
hbase.normalizer.min.region.count = 3  (final=false, hbase-default.xml)
ipc.ping.interval = 60000  (final=false, core-default.xml)
hbase.master.normalizer.class = org.apache.hadoop.hbase.master.normalizer.SimpleRegionNormalizer  (final=false, hbase-default.xml)
yarn.sharedcache.nm.uploader.thread-count = 20  (final=false, yarn-default.xml)
dfs.namenode.storageinfo.defragment.ratio = 0.75  (final=false, hdfs-default.xml)
mapreduce.jobhistory.admin.address = 0.0.0.0:10033  (final=false, mapred-default.xml)
yarn.nodemanager.pmem-check-enabled = true  (final=false, yarn-default.xml)
yarn.timeline-service.webapp.rest-csrf.custom-header = X-XSRF-Header  (final=false, yarn-default.xml)
dfs.namenode.upgrade.domain.factor = ${dfs.replication}  (final=false, hdfs-default.xml)
hadoop.security.kms.client.failover.sleep.max.millis = 2000  (final=false, core-default.xml)
yarn.timeline-service.entity-group-fs-store.active-dir = /tmp/entity-file-history/active  (final=false, yarn-default.xml)
hbase.storescanner.parallel.seek.enable = false  (final=false, hbase-default.xml)
dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold = 10737418240  (final=false, hdfs-default.xml)
yarn.sharedcache.admin.address = 0.0.0.0:8047  (final=false, yarn-default.xml)
mapreduce.jobhistory.loadedjob.tasks.max = -1  (final=false, mapred-default.xml)
dfs.client.cached.conn.retry = 3  (final=false, hdfs-default.xml)
fs.s3a.readahead.range = 64K  (final=false, core-default.xml)
yarn.nodemanager.runtime.linux.docker.delayed-removal.allowed = false  (final=false, yarn-default.xml)
ipc.client.low-latency = false  (final=false, core-default.xml)
yarn.nodemanager.container-metrics.enable = true  (final=false, yarn-default.xml)
dfs.client.block.write.locateFollowingBlock.retries = 5  (final=false, hdfs-default.xml)
hbase.mob.file.cache.size = 1000  (final=false, hbase-default.xml)
dfs.datanode.du.reserved = 54951591321  (final=false, hdfs-site.xml)
hadoop.registry.zk.retry.ceiling.ms = 60000  (final=false, core-default.xml)
dfs.datanode.address = cvp328.sjc.aristanetworks.com:15010  (final=false, hdfs-site.xml)
yarn.resourcemanager.delegation.token.renew-interval = 86400000  (final=false, yarn-default.xml)
dfs.namenode.num.extra.edits.retained = 1000000  (final=false, hdfs-default.xml)
mapreduce.jobhistory.admin.acl = *  (final=false, mapred-default.xml)
dfs.datanode.drop.cache.behind.reads = false  (final=false, hdfs-default.xml)
dfs.datanode.balance.max.concurrent.moves = 50  (final=false, hdfs-default.xml)
ipc.client.ping = true  (final=false, core-default.xml)
hbase.regionserver.majorcompaction.pagecache.drop = true  (final=false, hbase-default.xml)
hbase.master.snapshot.ttl = 0  (final=false, hbase-default.xml)
hbase.lease.recovery.dfs.timeout = 11000  (final=false, hbase-site.xml)
yarn.nodemanager.numa-awareness.read-topology = false  (final=false, yarn-default.xml)
dfs.datanode.directoryscan.interval = 21600s  (final=false, hdfs-default.xml)
dfs.client.socket-timeout = 10000  (final=false, hdfs-site.xml)
hbase.mob.cache.evict.remain.ratio = 0.5f  (final=false, hbase-default.xml)
dfs.namenode.snapshotdiff.allow.snap-root-descendant = true  (final=false, hdfs-default.xml)
mapreduce.job.local-fs.single-disk-limit.check.interval-ms = 5000  (final=false, mapred-default.xml)
hbase.storescanner.parallel.seek.threads = 10  (final=false, hbase-default.xml)
dfs.webhdfs.netty.high.watermark = 65535  (final=false, hdfs-default.xml)
yarn.nodemanager.disk-health-checker.enable = true  (final=false, yarn-default.xml)
yarn.resourcemanager.ha.automatic-failover.zk-base-path = /yarn-leader-election  (final=false, yarn-default.xml)
hbase.coprocessor.user.enabled = true  (final=false, hbase-default.xml)
dfs.namenode.retrycache.expirytime.millis = 600000  (final=false, hdfs-default.xml)
hbase.table.max.rowsize = 1073741824  (final=false, hbase-default.xml)
yarn.nodemanager.webapp.rest-csrf.methods-to-ignore = GET,OPTIONS,HEAD  (final=false, yarn-default.xml)
fs.s3a.connection.maximum = 15  (final=false, core-default.xml)
hbase.mob.delfile.max.count = 3  (final=false, hbase-default.xml)
mapreduce.jobhistory.webapp.https.address = 0.0.0.0:19890  (final=false, mapred-default.xml)
yarn.app.mapreduce.client.job.max-retries = 3  (final=false, mapred-default.xml)
seq.io.sort.factor = 100  (final=false, core-default.xml)
yarn.timeline-service.client.internal-timers-ttl-secs = 420  (final=false, yarn-default.xml)
hadoop.zk.acl = world:anyone:rwcda  (final=false, core-default.xml)
yarn.minicluster.control-resource-monitoring = false  (final=false, yarn-default.xml)
dfs.datanode.transferTo.allowed = true  (final=false, hdfs-default.xml)
hbase.hregion.memstore.mslab.max.allocation = 262144  (final=false, hbase-default.xml)
yarn.nodemanager.webapp.rest-csrf.enabled = false  (final=false, yarn-default.xml)
dfs.namenode.ec.system.default.policy = XOR-2-1-1024k  (final=false, hdfs-site.xml)
mapreduce.job.speculative.speculative-cap-running-tasks = 0.1  (final=false, mapred-default.xml)
mapreduce.job.hdfs-servers = ${fs.defaultFS}  (final=false, mapred-default.xml)
fs.s3a.multipart.purge.age = 86400  (final=false, core-default.xml)
hbase.ipc.server.fallback-to-simple-auth-allowed = false  (final=false, hbase-default.xml)
dfs.client.use.legacy.blockreader.local = false  (final=false, hdfs-default.xml)
dfs.client.hedged.read.threadpool.size = 0  (final=false, hdfs-default.xml)
dfs.datanode.sync.behind.writes = true  (final=false, hdfs-site.xml)
dfs.client.failover.sleep.base.millis = 300  (final=false, hdfs-site.xml)
mapreduce.reduce.shuffle.fetch.retry.timeout-ms = 30000  (final=false, mapred-default.xml)
dfs.datanode.data.dir.perm = 700  (final=false, hdfs-default.xml)
dfs.checksum.combine.mode = MD5MD5CRC  (final=false, hdfs-default.xml)
mapreduce.fileoutputcommitter.algorithm.version = 2  (final=false, mapred-default.xml)
dfs.client.datanode-restart.timeout = 30s  (final=false, hdfs-default.xml)
hadoop.hdfs.configuration.version = 1  (final=false, hdfs-default.xml)
hbase.dfs.client.read.shortcircuit.buffer.size = 131072  (final=false, hbase-default.xml)
yarn.minicluster.fixed.ports = false  (final=false, yarn-default.xml)
dfs.namenode.replication.max-streams = 2  (final=false, hdfs-default.xml)
yarn.nodemanager.container-retry-minimum-interval-ms = 1000  (final=false, yarn-default.xml)
dfs.namenode.reencrypt.edek.threads = 10  (final=false, hdfs-default.xml)
dfs.datanode.restart.replica.expiration = 50  (final=false, hdfs-default.xml)
ipc.server.listen.queue.size = 128  (final=false, core-default.xml)
mapreduce.shuffle.ssl.file.buffer.size = 65536  (final=false, mapred-default.xml)
fs.s3a.multipart.purge = false  (final=false, core-default.xml)
hbase.cells.scanned.per.heartbeat.check = 10000  (final=false, hbase-default.xml)
hbase.zookeeper.property.syncLimit = 5  (final=false, hbase-default.xml)
fs.s3a.list.version = 2  (final=false, core-default.xml)
yarn.nodemanager.runtime.linux.docker.default-container-network = host  (final=false, yarn-default.xml)
hbase.thrift.maxQueuedRequests = 1000  (final=false, hbase-default.xml)
dfs.provided.aliasmap.inmemory.enabled = false  (final=false, hdfs-default.xml)
yarn.dispatcher.drain-events.timeout = 300000  (final=false, yarn-default.xml)
dfs.datanode.ec.reconstruction.xmits.weight = 0.5  (final=false, hdfs-default.xml)
yarn.webapp.ui2.enable = false  (final=false, yarn-default.xml)
yarn.nodemanager.runtime.linux.docker.capabilities = CHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID,SETUID,SETFCAP,SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITE  (final=false, yarn-default.xml)
dfs.namenode.fs-limits.max-directory-items = 1048576  (final=false, hdfs-default.xml)
hbase.thrift.maxWorkerThreads = 1000  (final=false, hbase-default.xml)
hbase.master.hfilecleaner.ttl = 120000  (final=false, hbase-site.xml)
yarn.nodemanager.log.retain-seconds = 10800  (final=false, yarn-default.xml)
dfs.image.transfer.chunksize = 65536  (final=false, hdfs-default.xml)
dfs.client.block.write.replace-datanode-on-failure.min-replication = 0  (final=false, hdfs-default.xml)
dfs.data.transfer.client.tcpnodelay = true  (final=false, hdfs-default.xml)
fs.du.interval = 21600000  (final=false, core-site.xml)
mapreduce.reduce.markreset.buffer.percent = 0.0  (final=false, mapred-default.xml)
mapreduce.shuffle.connection-keep-alive.timeout = 5  (final=false, mapred-default.xml)
hbase.rpc.timeout = 60000  (final=false, hbase-default.xml)
hadoop.security.kms.client.encrypted.key.cache.size = 500  (final=false, core-default.xml)
dfs.client.refresh.read-block-locations.ms = 0  (final=false, hdfs-default.xml)
hadoop.registry.zk.root = /registry  (final=false, core-default.xml)
yarn.app.mapreduce.client.job.retry-interval = 2000  (final=false, mapred-default.xml)

Logs
===========================================================
+++++++++++++++++++++++++++++++ /cvpi/hbase/logs/hbase-cvp-regionserver-cvp328.sjc.aristanetworks.com.log +++++++++++++++++++++++++++++++

2022-03-08 04:37:31,103 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 2 failover attempts. Trying to failover after sleeping for 1286ms.
2022-03-08 04:41:08,358 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 3794ms No GCs detected 2022-03-08 04:41:12,810 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2885ms No GCs detected 2022-03-08 04:41:19,268 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1349ms No GCs detected 2022-03-08 04:41:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=431286, hits=423176, hitRatio=98.12%, , cachingAccesses=423306, cachingHits=422867, cachingHitsRatio=99.90%, evictions=26258, evicted=0, evictedPerRun=0.0 2022-03-08 04:41:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B 2022-03-08 04:44:17,230 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. 
Trying to failover immediately.
2022-03-08 04:44:56,500 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1197ms No GCs detected
2022-03-08 04:46:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=431413, hits=423303, hitRatio=98.12%, cachingAccesses=423433, cachingHits=422994, cachingHitsRatio=99.90%, evictions=26288, evicted=0, evictedPerRun=0.0
2022-03-08 04:46:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 04:47:25,682 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1263ms No GCs detected
2022-03-08 04:51:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=431544, hits=423434, hitRatio=98.12%, cachingAccesses=423564, cachingHits=423125, cachingHitsRatio=99.90%, evictions=26317, evicted=0, evictedPerRun=0.0
2022-03-08 04:51:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 04:54:17,277 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1059ms No GCs detected
2022-03-08 04:54:50,996 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 04:55:00,346 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2456ms No GCs detected
2022-03-08 04:55:21,461 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1602ms No GCs detected
2022-03-08 04:56:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=431678, hits=423568, hitRatio=98.12%, cachingAccesses=423698, cachingHits=423259, cachingHitsRatio=99.90%, evictions=26347, evicted=0, evictedPerRun=0.0
2022-03-08 04:56:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:00:02,962 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1874ms No GCs detected
2022-03-08 05:01:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=431816, hits=423706, hitRatio=98.12%, cachingAccesses=423836, cachingHits=423397, cachingHitsRatio=99.90%, evictions=26377, evicted=0, evictedPerRun=0.0
2022-03-08 05:01:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:01:55,850 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 05:01:55,852 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 1 failover attempts. Trying to failover after sleeping for 541ms.
2022-03-08 05:01:56,396 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001 after 2 failover attempts. Trying to failover after sleeping for 1113ms.
2022-03-08 05:03:49,486 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1093ms No GCs detected
2022-03-08 05:03:58,250 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 05:05:20,336 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1732ms No GCs detected
2022-03-08 05:06:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=431930, hits=423820, hitRatio=98.12%, cachingAccesses=423950, cachingHits=423511, cachingHitsRatio=99.90%, evictions=26407, evicted=0, evictedPerRun=0.0
2022-03-08 05:06:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:07:31,270 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 05:11:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=432063, hits=423953, hitRatio=98.12%, cachingAccesses=424083, cachingHits=423644, cachingHitsRatio=99.90%, evictions=26437, evicted=0, evictedPerRun=0.0
2022-03-08 05:11:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:15:35,722 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 05:16:26,010 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1084ms No GCs detected
2022-03-08 05:16:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=432201, hits=424091, hitRatio=98.12%, cachingAccesses=424221, cachingHits=423782, cachingHitsRatio=99.90%, evictions=26467, evicted=0, evictedPerRun=0.0
2022-03-08 05:16:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:21:25,655 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2692ms No GCs detected
2022-03-08 05:21:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=432314, hits=424204, hitRatio=98.12%, cachingAccesses=424334, cachingHits=423895, cachingHitsRatio=99.90%, evictions=26497, evicted=0, evictedPerRun=0.0
2022-03-08 05:21:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:26:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=432447, hits=424337, hitRatio=98.12%, cachingAccesses=424467, cachingHits=424028, cachingHitsRatio=99.90%, evictions=26527, evicted=0, evictedPerRun=0.0
2022-03-08 05:26:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:27:41,433 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.net.ConnectTimeoutException: Call From cvp328.sjc.aristanetworks.com/172.30.41.118 to cvp365.sjc.aristanetworks.com:9001 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 1000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=cvp365.sjc.aristanetworks.com/172.30.41.155:9001]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 05:27:41,465 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 1 failover attempts. Trying to failover after sleeping for 731ms.
2022-03-08 05:28:02,405 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 4210ms No GCs detected
2022-03-08 05:28:04,279 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1373ms No GCs detected
2022-03-08 05:28:13,240 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 05:28:13,243 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 1 failover attempts. Trying to failover after sleeping for 349ms.
2022-03-08 05:28:13,597 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001 after 2 failover attempts. Trying to failover after sleeping for 1781ms.
2022-03-08 05:31:19,617 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1065ms No GCs detected
2022-03-08 05:31:44,525 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2317ms No GCs detected
2022-03-08 05:31:46,628 INFO [MobFileCache #0] mob.MobFileCache: MobFileCache Statistics, access: 0, miss: 0, hit: 0, hit ratio: 0%, evicted files: 0
2022-03-08 05:31:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:31:57,001 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 9475ms No GCs detected
2022-03-08 05:31:57,029 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=432579, hits=424469, hitRatio=98.13%, cachingAccesses=424599, cachingHits=424160, cachingHitsRatio=99.90%, evictions=26557, evicted=0, evictedPerRun=0.0
2022-03-08 05:32:04,530 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 5257ms No GCs detected
2022-03-08 05:32:36,153 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: java.net.ConnectException: Call From cvp328.sjc.aristanetworks.com/172.30.41.118 to cvp328.sjc.aristanetworks.com:9001 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 05:36:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=432699, hits=424589, hitRatio=98.13%, cachingAccesses=424719, cachingHits=424280, cachingHitsRatio=99.90%, evictions=26586, evicted=0, evictedPerRun=0.0
2022-03-08 05:36:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:41:27,354 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2432ms No GCs detected
2022-03-08 05:41:31,784 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1429ms No GCs detected
2022-03-08 05:41:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=432831, hits=424721, hitRatio=98.13%, cachingAccesses=424851, cachingHits=424412, cachingHitsRatio=99.90%, evictions=26615, evicted=0, evictedPerRun=0.0
2022-03-08 05:41:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:46:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=432941, hits=424831, hitRatio=98.13%, cachingAccesses=424961, cachingHits=424522, cachingHitsRatio=99.90%, evictions=26645, evicted=0, evictedPerRun=0.0
2022-03-08 05:46:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:51:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=433078, hits=424968, hitRatio=98.13%, cachingAccesses=425098, cachingHits=424659, cachingHitsRatio=99.90%, evictions=26675, evicted=0, evictedPerRun=0.0
2022-03-08 05:51:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:52:02,828 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1581ms No GCs detected
2022-03-08 05:52:05,530 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2201ms No GCs detected
2022-03-08 05:56:16,164 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1413ms No GCs detected
2022-03-08 05:56:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=433199, hits=425089, hitRatio=98.13%, cachingAccesses=425219, cachingHits=424780, cachingHitsRatio=99.90%, evictions=26705, evicted=0, evictedPerRun=0.0
2022-03-08 05:56:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 05:57:26,229 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.net.ConnectTimeoutException: Call From cvp328.sjc.aristanetworks.com/172.30.41.118 to cvp365.sjc.aristanetworks.com:9001 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 1000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=cvp365.sjc.aristanetworks.com/172.30.41.155:9001]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 05:57:26,230 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: java.net.ConnectException: Call From cvp328.sjc.aristanetworks.com/172.30.41.118 to cvp328.sjc.aristanetworks.com:9001 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 1 failover attempts. Trying to failover after sleeping for 814ms.
2022-03-08 05:58:46,321 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1467ms No GCs detected
2022-03-08 06:01:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=433346, hits=425236, hitRatio=98.13%, cachingAccesses=425366, cachingHits=424927, cachingHitsRatio=99.90%, evictions=26735, evicted=0, evictedPerRun=0.0
2022-03-08 06:01:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 06:02:29,461 INFO [regionserver/cvp328:16201.Chore.3] hbase.ScheduledChore: Chore: MemstoreFlusherChore missed its start time
2022-03-08 06:02:29,463 WARN [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 15227ms No GCs detected
2022-03-08 06:02:29,461 INFO [regionserver/cvp328:16201.Chore.1] hbase.ScheduledChore: Chore: CompactionChecker missed its start time
2022-03-08 06:02:59,642 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1196ms No GCs detected
2022-03-08 06:03:30,804 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 06:04:34,264 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2600ms No GCs detected
2022-03-08 06:04:55,639 INFO [regionserver/cvp328:16201.Chore.3] hbase.ScheduledChore: Chore: MemstoreFlusherChore missed its start time
2022-03-08 06:04:55,640 INFO [regionserver/cvp328:16201.Chore.3] hbase.ScheduledChore: Chore: CompactionChecker missed its start time
2022-03-08 06:04:56,039 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 4685ms No GCs detected
2022-03-08 06:05:04,180 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1313ms No GCs detected
2022-03-08 06:05:06,348 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1667ms No GCs detected
2022-03-08 06:05:20,079 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 06:05:24,102 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1746ms No GCs detected
2022-03-08 06:05:54,283 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 06:06:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=433511, hits=425401, hitRatio=98.13%, cachingAccesses=425531, cachingHits=425092, cachingHitsRatio=99.90%, evictions=26765, evicted=0, evictedPerRun=0.0
2022-03-08 06:06:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 06:08:32,081 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1911ms No GCs detected
2022-03-08 06:08:54,772 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 06:11:26,791 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.net.ConnectTimeoutException: Call From cvp328.sjc.aristanetworks.com/172.30.41.118 to cvp365.sjc.aristanetworks.com:9001 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 1000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=cvp365.sjc.aristanetworks.com/172.30.41.155:9001]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 06:11:33,238 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 1 failover attempts. Trying to failover after sleeping for 390ms.
2022-03-08 06:11:33,965 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001 after 2 failover attempts. Trying to failover after sleeping for 1408ms.
2022-03-08 06:11:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=433667, hits=425557, hitRatio=98.13%, cachingAccesses=425687, cachingHits=425248, cachingHitsRatio=99.90%, evictions=26794, evicted=0, evictedPerRun=0.0
2022-03-08 06:11:50,735 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 06:11:53,009 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1777ms No GCs detected
2022-03-08 06:15:07,754 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943), while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 06:16:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=433786, hits=425676, hitRatio=98.13%, , cachingAccesses=425806, cachingHits=425367, cachingHitsRatio=99.90%, evictions=26824, evicted=0, evictedPerRun=0.0
2022-03-08 06:16:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 06:18:40,748 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.net.ConnectTimeoutException: Call From cvp328.sjc.aristanetworks.com/172.30.41.118 to cvp365.sjc.aristanetworks.com:9001 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 1000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=cvp365.sjc.aristanetworks.com/172.30.41.155:9001]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 06:18:42,211 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2025ms No GCs detected
2022-03-08 06:18:42,222 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 1 failover attempts. Trying to failover after sleeping for 380ms.
2022-03-08 06:18:42,616 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001 after 2 failover attempts. Trying to failover after sleeping for 1655ms.
2022-03-08 06:21:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=433908, hits=425798, hitRatio=98.13%, , cachingAccesses=425928, cachingHits=425489, cachingHitsRatio=99.90%, evictions=26854, evicted=0, evictedPerRun=0.0
2022-03-08 06:21:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 06:23:32,953 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1127ms No GCs detected
2022-03-08 06:23:46,207 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 06:25:27,815 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1722ms No GCs detected
2022-03-08 06:25:57,296 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1097ms No GCs detected
2022-03-08 06:26:00,926 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1397ms No GCs detected
2022-03-08 06:26:02,471 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1045ms No GCs detected
2022-03-08 06:26:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=434032, hits=425922, hitRatio=98.13%, , cachingAccesses=426052, cachingHits=425613, cachingHitsRatio=99.90%, evictions=26884, evicted=0, evictedPerRun=0.0
2022-03-08 06:26:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 06:27:20,176 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1337ms No GCs detected
2022-03-08 06:27:21,214 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.net.ConnectTimeoutException: Call From cvp328.sjc.aristanetworks.com/172.30.41.118 to cvp365.sjc.aristanetworks.com:9001 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 1000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=cvp365.sjc.aristanetworks.com/172.30.41.155:9001]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 06:27:23,109 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2432ms No GCs detected
2022-03-08 06:27:23,462 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 1 failover attempts. Trying to failover after sleeping for 426ms.
2022-03-08 06:27:24,946 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.net.ConnectTimeoutException: Call From cvp328.sjc.aristanetworks.com/172.30.41.118 to cvp365.sjc.aristanetworks.com:9001 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 1000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=cvp365.sjc.aristanetworks.com/172.30.41.155:9001]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001 after 2 failover attempts. Trying to failover after sleeping for 1488ms.
2022-03-08 06:27:26,440 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 3 failover attempts. Trying to failover after sleeping for 3379ms.
2022-03-08 06:27:29,822 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001 after 4 failover attempts. Trying to failover after sleeping for 5273ms.
2022-03-08 06:28:22,219 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 5319ms No GCs detected
2022-03-08 06:28:32,636 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1915ms No GCs detected
2022-03-08 06:28:35,298 INFO [regionserver/cvp328:16201.Chore.2] hbase.ScheduledChore: Chore: MemstoreFlusherChore missed its start time
2022-03-08 06:28:35,298 INFO [regionserver/cvp328:16201.Chore.2] hbase.ScheduledChore: Chore: CompactionChecker missed its start time
2022-03-08 06:28:47,308 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2578ms No GCs detected
2022-03-08 06:28:58,137 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 3979ms No GCs detected
2022-03-08 06:29:06,693 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 06:29:19,503 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2444ms No GCs detected
2022-03-08 06:29:38,888 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 06:29:43,782 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1784ms No GCs detected
2022-03-08 06:29:45,818 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1535ms No GCs detected
2022-03-08 06:30:13,951 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1499ms No GCs detected
2022-03-08 06:30:26,699 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 3106ms No GCs detected
2022-03-08 06:31:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=434157, hits=426047, hitRatio=98.13%, , cachingAccesses=426177, cachingHits=425738, cachingHitsRatio=99.90%, evictions=26913, evicted=0, evictedPerRun=0.0
2022-03-08 06:31:46,628 INFO [MobFileCache #0] mob.MobFileCache: MobFileCache Statistics, access: 0, miss: 0, hit: 0, hit ratio: 0%, evicted files: 0
2022-03-08 06:31:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 06:32:44,688 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2112ms No GCs detected
2022-03-08 06:32:47,965 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2777ms No GCs detected
2022-03-08 06:32:55,589 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1990ms No GCs detected
2022-03-08 06:34:09,754 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1376ms No GCs detected
2022-03-08 06:35:03,554 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2702ms No GCs detected
2022-03-08 06:35:26,471 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1040ms No GCs detected
2022-03-08 06:35:36,801 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 06:36:46,931 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=434271, hits=426161, hitRatio=98.13%, , cachingAccesses=426291, cachingHits=425852, cachingHitsRatio=99.90%, evictions=26943, evicted=0, evictedPerRun=0.0
2022-03-08 06:36:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 06:37:09,512 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.net.ConnectTimeoutException: Call From cvp328.sjc.aristanetworks.com/172.30.41.118 to cvp365.sjc.aristanetworks.com:9001 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 1000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=cvp365.sjc.aristanetworks.com/172.30.41.155:9001]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 06:37:12,265 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 1 failover attempts. Trying to failover after sleeping for 747ms.
2022-03-08 06:37:24,705 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1300ms No GCs detected
2022-03-08 06:37:45,653 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 06:38:45,790 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 3651ms No GCs detected
2022-03-08 06:38:49,110 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2820ms No GCs detected
2022-03-08 06:39:04,939 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1352ms No GCs detected
2022-03-08 06:39:15,640 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 4697ms No GCs detected
2022-03-08 06:39:20,709 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 06:39:20,723 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001 after 1 failover attempts. Trying to failover after sleeping for 669ms.
2022-03-08 06:39:56,307 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1590ms No GCs detected
2022-03-08 06:39:58,672 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1865ms No GCs detected
2022-03-08 06:40:06,245 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 06:41:07,874 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately. 
2022-03-08 06:41:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=434387, hits=426277, hitRatio=98.13%, , cachingAccesses=426407, cachingHits=425968, cachingHitsRatio=99.90%, evictions=26972, evicted=0, evictedPerRun=0.0
2022-03-08 06:41:50,529 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 06:44:32,268 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2557ms No GCs detected
2022-03-08 06:45:07,283 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 3054ms No GCs detected
2022-03-08 06:46:08,474 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1590ms No GCs detected
2022-03-08 06:46:14,192 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2217ms No GCs detected
2022-03-08 06:46:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=434505, hits=426395, hitRatio=98.13%, , cachingAccesses=426525, cachingHits=426086, cachingHitsRatio=99.90%, evictions=27002, evicted=0, evictedPerRun=0.0
2022-03-08 06:46:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 06:47:22,074 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) , while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 06:47:35,994 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1155ms No GCs detected
2022-03-08 06:51:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=434617, hits=426507, hitRatio=98.13%, , cachingAccesses=426637, cachingHits=426198, cachingHitsRatio=99.90%, evictions=27032, evicted=0, evictedPerRun=0.0
2022-03-08 06:51:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 06:53:17,859 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1699ms No GCs detected
2022-03-08 06:53:24,407 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1235ms No GCs detected
2022-03-08 06:53:53,322 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1034ms No GCs detected
2022-03-08 06:54:19,256 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1835ms No GCs detected
2022-03-08 06:56:46,562 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1286ms No GCs detected
2022-03-08 06:56:46,625 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=434736, hits=426626, hitRatio=98.13%, , cachingAccesses=426756, cachingHits=426317, cachingHitsRatio=99.90%, evictions=27062, evicted=0, evictedPerRun=0.0
2022-03-08 06:56:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:01:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=434853, hits=426743, hitRatio=98.14%, , cachingAccesses=426873, cachingHits=426434, cachingHitsRatio=99.90%, evictions=27092, evicted=0, evictedPerRun=0.0
2022-03-08 07:01:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:06:46,625 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=434975, hits=426865, hitRatio=98.14%, , cachingAccesses=426995, cachingHits=426556, cachingHitsRatio=99.90%, evictions=27122, evicted=0, evictedPerRun=0.0
2022-03-08 07:06:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:11:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=435080, hits=426970, hitRatio=98.14%, , cachingAccesses=427100, cachingHits=426661, cachingHitsRatio=99.90%, evictions=27152, evicted=0, evictedPerRun=0.0
2022-03-08 07:11:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:12:11,113 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1138ms No GCs detected
2022-03-08 07:16:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=435191, hits=427081, hitRatio=98.14%, , cachingAccesses=427211, cachingHits=426772, cachingHitsRatio=99.90%, evictions=27182, evicted=0, evictedPerRun=0.0
2022-03-08 07:16:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:21:46,625 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=435313, hits=427203, hitRatio=98.14%, , cachingAccesses=427333, cachingHits=426894, cachingHitsRatio=99.90%, evictions=27212, evicted=0, evictedPerRun=0.0
2022-03-08 07:21:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:26:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=435418, hits=427308, hitRatio=98.14%, , cachingAccesses=427438, cachingHits=426999, cachingHitsRatio=99.90%, evictions=27242, evicted=0, evictedPerRun=0.0
2022-03-08 07:26:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:29:31,575 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1044ms No GCs detected
2022-03-08 07:29:39,763 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7688ms No GCs detected
2022-03-08 07:31:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=435537, hits=427427, hitRatio=98.14%, , cachingAccesses=427557, cachingHits=427118, cachingHitsRatio=99.90%, evictions=27271, evicted=0, evictedPerRun=0.0
2022-03-08 07:31:46,628 INFO [MobFileCache #0] mob.MobFileCache: MobFileCache Statistics, access: 0, miss: 0, hit: 0, hit ratio: 0%, evicted files: 0
2022-03-08 07:31:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:36:53,293 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=435652, hits=427542, hitRatio=98.14%, , cachingAccesses=427672, cachingHits=427233, cachingHitsRatio=99.90%, evictions=27301, evicted=0, evictedPerRun=0.0
2022-03-08 07:36:53,293 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:36:53,311 INFO [regionserver/cvp328:16201.Chore.2] hbase.ScheduledChore: Chore: MemstoreFlusherChore missed its start time
2022-03-08 07:36:53,311 INFO [regionserver/cvp328:16201.Chore.2] hbase.ScheduledChore: Chore: CompactionChecker missed its start time
2022-03-08 07:36:53,485 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 8123ms No GCs detected
2022-03-08 07:41:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=435775, hits=427665, hitRatio=98.14%, , cachingAccesses=427795, cachingHits=427356, cachingHitsRatio=99.90%, evictions=27331, evicted=0, evictedPerRun=0.0
2022-03-08 07:41:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:42:30,754 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.net.ConnectTimeoutException: Call From cvp328.sjc.aristanetworks.com/172.30.41.118 to cvp365.sjc.aristanetworks.com:9001 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 1000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=cvp365.sjc.aristanetworks.com/172.30.41.155:9001]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 07:42:30,791 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: ntNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 1 failover attempts. Trying to failover after sleeping for 761ms.
2022-03-08 07:42:32,553 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.net.ConnectTimeoutException: Call From cvp328.sjc.aristanetworks.com/172.30.41.118 to cvp365.sjc.aristanetworks.com:9001 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 1000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=cvp365.sjc.aristanetworks.com/172.30.41.155:9001]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001 after 2 failover attempts. Trying to failover after sleeping for 1458ms.
2022-03-08 07:42:35,590 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1342ms No GCs detected
2022-03-08 07:42:35,764 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: tNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 3 failover attempts. Trying to failover after sleeping for 1760ms.
2022-03-08 07:42:57,829 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 6256ms No GCs detected
2022-03-08 07:43:08,891 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 07:43:39,306 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 07:46:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=435881, hits=427771, hitRatio=98.14%, , cachingAccesses=427901, cachingHits=427462, cachingHitsRatio=99.90%, evictions=27360, evicted=0, evictedPerRun=0.0
2022-03-08 07:46:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:51:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=435997, hits=427887, hitRatio=98.14%, , cachingAccesses=428017, cachingHits=427578, cachingHitsRatio=99.90%, evictions=27390, evicted=0, evictedPerRun=0.0
2022-03-08 07:51:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:56:46,625 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=436100, hits=427990, hitRatio=98.14%, , cachingAccesses=428120, cachingHits=427681, cachingHitsRatio=99.90%, evictions=27420, evicted=0, evictedPerRun=0.0
2022-03-08 07:56:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 07:59:24,867 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2180ms No GCs detected
2022-03-08 07:59:31,616 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1247ms No GCs detected
2022-03-08 07:59:47,432 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1546ms No GCs detected
2022-03-08 08:00:18,100 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 08:01:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=436210, hits=428100, hitRatio=98.14%, , cachingAccesses=428230, cachingHits=427791, cachingHitsRatio=99.90%, evictions=27450, evicted=0, evictedPerRun=0.0
2022-03-08 08:01:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 08:03:26,247 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 5566ms No GCs detected
2022-03-08 08:03:50,461 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 08:06:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=436324, hits=428214, hitRatio=98.14%, , cachingAccesses=428344, cachingHits=427905, cachingHitsRatio=99.90%, evictions=27480, evicted=0, evictedPerRun=0.0
2022-03-08 08:06:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 08:08:41,380 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 3269ms No GCs detected
2022-03-08 08:08:43,219 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1338ms No GCs detected
2022-03-08 08:08:53,092 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 08:11:30,662 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1454ms No GCs detected
2022-03-08 08:11:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=436436, hits=428326, hitRatio=98.14%, , cachingAccesses=428456, cachingHits=428017, cachingHitsRatio=99.90%, evictions=27510, evicted=0, evictedPerRun=0.0
2022-03-08 08:11:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 08:11:53,895 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001. Trying to failover immediately.
2022-03-08 08:12:49,221 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 6027ms No GCs detected
2022-03-08 08:16:46,625 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=436545, hits=428435, hitRatio=98.14%, , cachingAccesses=428565, cachingHits=428126, cachingHitsRatio=99.90%, evictions=27539, evicted=0, evictedPerRun=0.0
2022-03-08 08:16:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 08:20:31,498 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 3419ms No GCs detected
2022-03-08 08:20:31,523 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.net.ConnectTimeoutException: Call From cvp328.sjc.aristanetworks.com/172.30.41.118 to cvp365.sjc.aristanetworks.com:9001 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 1000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=cvp365.sjc.aristanetworks.com/172.30.41.155:9001]; For more details see: http://wiki.apache.org/hadoop/SocketTimeout, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 08:20:32,401 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: ntNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 1 failover attempts. Trying to failover after sleeping for 571ms.
2022-03-08 08:21:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=436650, hits=428540, hitRatio=98.14%, , cachingAccesses=428670, cachingHits=428231, cachingHitsRatio=99.90%, evictions=27569, evicted=0, evictedPerRun=0.0
2022-03-08 08:21:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 08:23:03,574 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1684ms No GCs detected
2022-03-08 08:23:05,623 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1548ms No GCs detected
2022-03-08 08:23:12,373 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 6249ms No GCs detected
2022-03-08 08:23:16,021 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.
2022-03-08 08:23:16,059 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: ntNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 1 failover attempts. Trying to failover after sleeping for 725ms.
2022-03-08 08:23:16,797 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: tNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001 after 2 failover attempts. Trying to failover after sleeping for 1667ms.
2022-03-08 08:23:18,466 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: tNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp328.sjc.aristanetworks.com/172.30.41.118:9001 after 3 failover attempts. Trying to failover after sleeping for 2677ms.
2022-03-08 08:26:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=436772, hits=428662, hitRatio=98.14%, , cachingAccesses=428792, cachingHits=428353, cachingHitsRatio=99.90%, evictions=27599, evicted=0, evictedPerRun=0.0
2022-03-08 08:26:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 08:31:46,626 INFO [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=874.05 KB, freeSize=306.35 MB, max=307.20 MB, blockCount=34, accesses=436880, hits=428770, hitRatio=98.14%, , cachingAccesses=428900, cachingHits=428461, cachingHitsRatio=99.90%, evictions=27629, evicted=0, evictedPerRun=0.0
2022-03-08 08:31:46,628 INFO [MobFileCache #0] mob.MobFileCache: MobFileCache Statistics, access: 0, miss: 0, hit: 0, hit ratio: 0%, evicted files: 0
2022-03-08 08:31:50,271 INFO [cvp328:16201Replication Statistics #0] regionserver.Replication: Global stats: WAL Edits Buffer Used=0B, Limit=268435456B
2022-03-08 08:33:03,291 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1413ms No GCs detected
2022-03-08 08:33:42,002 INFO [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 8104ms No GCs detected
2022-03-08 08:33:42,056 INFO [regionserver/cvp328:16201.Chore.2] hbase.ScheduledChore: Chore: MemstoreFlusherChore missed its start time
2022-03-08 08:33:42,056 INFO [regionserver/cvp328:16201.Chore.2] hbase.ScheduledChore: Chore: CompactionChecker missed its start time
2022-03-08 08:33:54,572 INFO [LeaseRenewer:cvp@mycluster] retry.RetryInvocationHandler: org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:744)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
, while invoking ClientNamenodeProtocolTranslatorPB.renewLease over cvp365.sjc.aristanetworks.com/172.30.41.155:9001. Trying to failover immediately.

RS Queue:
===========================================================
Compaction/Split Queue summary: compactionQueue=(longCompactions=0:shortCompactions=0), splitQueue=0
Compaction/Split Queue dump:
LargeCompation Queue:
SmallCompation Queue:
Split Queue:
Flush Queue summary: flush_queue=0
Flush Queue Queue dump:
Flush Queue:

Call Queue Summary:
===========================================================
Queue Name: Priority Queue
Total call count for queue: 0
Total call size for queue (bytes): 0
Queue Name: Replication Queue
Total call count for queue: 0
Total call size for queue (bytes): 0
Queue Name: Call Queue
Method in call: Get
Total call count for method: 2035
Total call size for method (bytes): 213491
Method in call: Multi
Total call count for method: 13
Total call size for method (bytes): 11396
Total call count for queue: 2048
Total call size for queue (bytes): 224887
Queue Name: Meta Transition Queue
Total call count for queue: 0
Total call size for queue (bytes): 0