  Apache Ozone / HDDS-9433

[snapshot] OM shuts down on RocksDB failure when performing distcp of snapshots


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Snapshot

    Description

      The OM shuts down on a RocksDB failure while a distcp of snapshots is in progress.

      OM log error snippet:

      2023-10-07 02:49:40,112 ERROR [OMDoubleBufferFlushThread]-org.apache.hadoop.hdds.utils.db.RDBCheckpointManager: Unable to create RocksDB Snapshot.
      java.io.IOException: RocksDatabase[/var/lib/hadoop-ozone/om/data638886/om.db]: Failed to flush; status : Corruption; message : block checksum mismatch: stored = 2324934590, computed = 3088149924, type = 1  in /var/lib/hadoop-ozone/om/data638886/om.db/000711.sst offset 0 size 179
      	at org.apache.hadoop.hdds.utils.HddsServerUtil.toIOException(HddsServerUtil.java:667)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.toIOException(RocksDatabase.java:90)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.flush(RocksDatabase.java:504)
      	at org.apache.hadoop.hdds.utils.db.RDBCheckpointManager.createCheckpoint(RDBCheckpointManager.java:81)
      	at org.apache.hadoop.hdds.utils.db.RDBStore.getSnapshot(RDBStore.java:329)
      	at org.apache.hadoop.ozone.om.OmSnapshotManager.createOmSnapshotCheckpoint(OmSnapshotManager.java:437)
      	at org.apache.hadoop.ozone.om.response.snapshot.OMSnapshotCreateResponse.addToDBBatch(OMSnapshotCreateResponse.java:81)
      	at org.apache.hadoop.ozone.om.response.OMClientResponse.checkAndUpdateDB(OMClientResponse.java:73)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.lambda$5(OzoneManagerDoubleBuffer.java:409)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.addToBatchWithTrace(OzoneManagerDoubleBuffer.java:237)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.addToBatch(OzoneManagerDoubleBuffer.java:408)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushBatch(OzoneManagerDoubleBuffer.java:335)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushCurrentBuffer(OzoneManagerDoubleBuffer.java:314)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:279)
      	at java.lang.Thread.run(Thread.java:748)
      Caused by: org.rocksdb.RocksDBException: block checksum mismatch: stored = 2324934590, computed = 3088149924, type = 1  in /var/lib/hadoop-ozone/om/data638886/om.db/000711.sst offset 0 size 179
      	at org.rocksdb.RocksDB.flush(Native Method)
      	at org.rocksdb.RocksDB.flush(RocksDB.java:3785)
      	at org.rocksdb.RocksDB.flush(RocksDB.java:3763)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.flush(RocksDatabase.java:500)
      	... 12 more
      2023-10-07 02:49:40,169 ERROR [OMDoubleBufferFlushThread]-org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer: Terminating with exit status 1: During flush to DB encountered error in OMDoubleBuffer flush thread OMDoubleBufferFlushThread when handling OMRequest: cmdType: CreateSnapshot
      traceID: ""
      success: true
      status: OK
      CreateSnapshotResponse {
        snapshotInfo {
          snapshotID {
            mostSigBits: -275942429120051567
            leastSigBits: -8920278495645081455
          }
          name: "snap-yx5ul"
          volumeName: "vol-vgakk"
          bucketName: "buck-803bw"
          snapshotStatus: SNAPSHOT_ACTIVE
          creationTime: 1696646979088
          deletionTime: 18446744073709551615
          pathPreviousSnapshotID {
            mostSigBits: -5307092392312093775
            leastSigBits: -4873568154769949494
          }
          globalPreviousSnapshotID {
            mostSigBits: -8119536950719263848
            leastSigBits: -9193110665196680360
          }
          snapshotPath: "vol-vgakk/buck-803bw"
          checkpointDir: "-fc2ba7d2-9dd7-4691-8434-cdf84984e091"
          dbTxSequenceNumber: 4156
          deepClean: true
          sstFiltered: false
        }
      }
      
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.newIterator(RocksDatabase.java:856)
      	at org.apache.hadoop.hdds.utils.db.RDBTable.iterator(RDBTable.java:232)
      	at org.apache.hadoop.hdds.utils.db.TypedTable.iterator(TypedTable.java:417)
      	at org.apache.hadoop.hdds.utils.db.TypedTable.iterator(TypedTable.java:409)
      	at org.apache.hadoop.hdds.utils.db.TypedTable.iterator(TypedTable.java:55)
      	at org.apache.hadoop.ozone.om.OmSnapshotManager.deleteKeysFromDelKeyTableInSnapshotScope(OmSnapshotManager.java:637)
      	at org.apache.hadoop.ozone.om.OmSnapshotManager.createOmSnapshotCheckpoint(OmSnapshotManager.java:442)
      	at org.apache.hadoop.ozone.om.response.snapshot.OMSnapshotCreateResponse.addToDBBatch(OMSnapshotCreateResponse.java:81)
      	at org.apache.hadoop.ozone.om.response.OMClientResponse.checkAndUpdateDB(OMClientResponse.java:73)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.lambda$5(OzoneManagerDoubleBuffer.java:409)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.addToBatchWithTrace(OzoneManagerDoubleBuffer.java:237)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.addToBatch(OzoneManagerDoubleBuffer.java:408)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushBatch(OzoneManagerDoubleBuffer.java:335)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushCurrentBuffer(OzoneManagerDoubleBuffer.java:314)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:279)
      	at java.lang.Thread.run(Thread.java:748)
      2023-10-07 02:49:40,173 INFO [shutdown-hook-0]-org.apache.ranger.audit.provider.AuditProviderFactory: ==> JVMShutdownHook.run()
      2023-10-07 02:49:40,173 INFO [shutdown-hook-0]-org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: Signalling async audit cleanup to start.
      2023-10-07 02:49:40,174 INFO [shutdown-hook-0]-org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: Waiting up to 30 seconds for audit cleanup to finish.
      2023-10-07 02:49:40,174 INFO [shutdown-hook-0]-org.apache.hadoop.ozone.om.OzoneManager: om125[quasar-qemowl-3.quasar-qemowl.root.hwx.site:9862]: Stopping Ozone Manager
      2023-10-07 02:49:40,175 INFO [shutdown-hook-0]-org.apache.hadoop.ipc.Server: Stopping server on 9862
      2023-10-07 02:49:40,174 INFO [Ranger async Audit cleanup]-org.apache.ranger.audit.provider.AuditProviderFactory: RangerAsyncAuditCleanup: Starting cleanup
      2023-10-07 02:49:40,175 INFO [Ranger async Audit cleanup]-org.apache.ranger.audit.queue.AuditAsyncQueue: Stop called. name=ozone.async
      2023-10-07 02:49:40,175 INFO [Ranger async Audit cleanup]-org.apache.ranger.audit.queue.AuditAsyncQueue: Interrupting consumerThread. name=ozone.async, consumer=ozone.async.summary
      2023-10-07 02:49:40,177 INFO [Ranger async Audit cleanup]-org.apache.ranger.audit.provider.AuditProviderFactory: RangerAsyncAuditCleanup: Done cleanup
      2023-10-07 02:49:40,177 INFO [shutdown-hook-0]-org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: Audit cleanup finished after 3 milli seconds
      2023-10-07 02:49:40,178 INFO [org.apache.ranger.audit.queue.AuditAsyncQueue0]-org.apache.ranger.audit.queue.AuditAsyncQueue: Caught exception in consumer thread. Shutdown might be in progress
      2023-10-07 02:49:40,178 INFO [Ranger async Audit cleanup]-org.apache.ranger.audit.provider.AuditProviderFactory: RangerAsyncAuditCleanup: Waiting to audit cleanup start signal
      2023-10-07 02:49:40,179 INFO [org.apache.ranger.audit.queue.AuditAsyncQueue0]-org.apache.ranger.audit.queue.AuditAsyncQueue: Exiting polling loop. name=ozone.async
      2023-10-07 02:49:40,178 INFO [shutdown-hook-0]-org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: Interrupting ranger async audit cleanup thread
      2023-10-07 02:49:40,179 INFO [org.apache.ranger.audit.queue.AuditAsyncQueue0]-org.apache.ranger.audit.queue.AuditAsyncQueue: Calling to stop consumer. name=ozone.async, consumer.name=ozone.async.summary
      2023-10-07 02:49:40,180 INFO [org.apache.ranger.audit.queue.AuditAsyncQueue0]-org.apache.ranger.audit.queue.AuditSummaryQueue: Stop called. name=ozone.async.summary
      2023-10-07 02:49:40,179 INFO [shutdown-hook-0]-org.apache.ranger.audit.provider.AuditProviderFactory: <== JVMShutdownHook.run()
      2023-10-07 02:49:40,182 INFO [org.apache.ranger.audit.queue.AuditAsyncQueue0]-org.apache.ranger.audit.queue.AuditSummaryQueue: Interrupting consumerThread. name=ozone.async.summary, consumer=ozone.async.summary.batch
      2023-10-07 02:49:40,183 INFO [IPC Server listener on 9862]-org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 9862
      2023-10-07 02:49:40,180 INFO [Ranger async Audit cleanup]-org.apache.ranger.audit.provider.AuditProviderFactory: RangerAsyncAuditCleanup: Interrupted while waiting for audit startCleanup signal!  Exiting the thread...
      java.lang.InterruptedException
      	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
      	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
      	at java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
      	at org.apache.ranger.audit.provider.AuditProviderFactory$RangerAsyncAuditCleanup.run(AuditProviderFactory.java:531)
      	at java.lang.Thread.run(Thread.java:748)
      2023-10-07 02:49:40,183 INFO [org.apache.ranger.audit.queue.AuditAsyncQueue0]-org.apache.ranger.audit.queue.AuditAsyncQueue: Exiting consumerThread.run() method. name=ozone.async
      2023-10-07 02:49:40,183 INFO [IPC Server Responder]-org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
      2023-10-07 02:49:40,183 INFO [org.apache.ranger.audit.queue.AuditSummaryQueue0]-org.apache.ranger.audit.queue.AuditSummaryQueue: Caught exception in consumer thread. Shutdown might be in progress
      2023-10-07 02:49:40,184 INFO [org.apache.ranger.audit.queue.AuditSummaryQueue0]-org.apache.ranger.audit.queue.AuditSummaryQueue: Exiting polling loop. name=ozone.async.summary
      2023-10-07 02:49:40,185 INFO [shutdown-hook-0]-org.apache.hadoop.ozone.om.OzoneManagerStarter: SHUTDOWN_MSG: 
      /************************************************************
      SHUTDOWN_MSG: Shutting down OzoneManager at quasar-qemowl-3.quasar-qemowl.root.hwx.site/172.27.188.78
      ************************************************************/
      2023-10-07 02:49:40,185 INFO [org.apache.ranger.audit.queue.AuditSummaryQueue0]-org.apache.ranger.audit.queue.AuditSummaryQueue: Calling to stop consumer. name=ozone.async.summary, consumer.name=ozone.async.summary.batch
      2023-10-07 02:49:40,185 INFO [org.apache.ranger.audit.queue.AuditSummaryQueue0]-org.apache.ranger.audit.queue.AuditBatchQueue: Stop called. name=ozone.async.summary.batch
      2023-10-07 02:49:40,185 INFO [org.apache.ranger.audit.queue.AuditSummaryQueue0]-org.apache.ranger.audit.queue.AuditBatchQueue: Interrupting consumerThread. name=ozone.async.summary.batch, consumer=ozone.async.summary.batch.solr
      2023-10-07 02:49:40,185 INFO [org.apache.ranger.audit.queue.AuditSummaryQueue0]-org.apache.ranger.audit.queue.AuditSummaryQueue: Exiting consumerThread.run() method. name=ozone.async.summary
      2023-10-07 02:49:40,186 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0]-org.apache.ranger.audit.queue.AuditBatchQueue: Caught exception in consumer thread. Shutdown might be in progress
      2023-10-07 02:49:40,188 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0]-org.apache.ranger.audit.queue.AuditBatchQueue: Exiting consumerThread. Queue = ozone.async.summary.batch, dest = ozone.async.summary.batch.solr
      2023-10-07 02:49:40,188 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0]-org.apache.ranger.audit.queue.AuditBatchQueue: Calling to stop consumer. name = ozone.async.summary.batch, consumer.name = ozone.async.summary.batch.solr
      2023-10-07 02:49:40,188 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0]-org.apache.ranger.audit.destination.SolrAuditDestination: SolrAuditDestination.stop() called..
      2023-10-07 02:49:40,199 WARN [grpc-default-executor-9]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om123-AppendLogResponseHandler: Failed appendEntries: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
      2023-10-07 02:49:40,199 WARN [grpc-default-executor-13]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om123-AppendLogResponseHandler: Failed appendEntries: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
      2023-10-07 02:49:40,299 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0]-org.apache.zookeeper.ZooKeeper: Session: 0x311f49b9fc60240 closed
      2023-10-07 02:49:40,299 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0-EventThread]-org.apache.zookeeper.ClientCnxn: EventThread shut down for session: 0x311f49b9fc60240
      2023-10-07 02:49:40,301 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0]-org.apache.ranger.audit.queue.AuditFileSpool: Stop called, queueName=ozone.async.summary.batch, consumer=ozone.async.summary.batch.solr
      2023-10-07 02:49:40,302 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0]-org.apache.ranger.audit.queue.AuditBatchQueue: Exiting consumerThread.run() method. name=ozone.async.summary.batch
      2023-10-07 02:49:40,302 INFO [ozone.async.summary.batch_ozone.async.summary.batch.solr_destWriter]-org.apache.ranger.audit.queue.AuditFileSpool: Caught exception in consumer thread. Shutdown might be in progress
      2023-10-07 02:49:40,302 INFO [ozone.async.summary.batch_ozone.async.summary.batch.solr_destWriter]-org.apache.ranger.audit.queue.AuditFileSpool: Exiting file spooler. provider=ozone.async.summary.batch, consumer=ozone.async.summary.batch.solr
      2023-10-07 02:49:41,460 WARN [grpc-default-executor-13]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om123-AppendLogResponseHandler: Failed appendEntries: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
      2023-10-07 02:49:41,460 WARN [grpc-default-executor-9]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om123-AppendLogResponseHandler: Failed appendEntries: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
      2023-10-07 02:49:41,606 WARN [grpc-default-executor-9]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om123-AppendLogResponseHandler: Failed appendEntries: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
      2023-10-07 02:49:41,606 WARN [grpc-default-executor-13]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om123-AppendLogResponseHandler: Failed appendEntries: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
      2023-10-07 02:49:41,606 INFO [grpc-default-executor-9]-org.apache.ratis.server.leader.FollowerInfo: om125@group-9F198C4C3682->om123: decreaseNextIndex nextIndex: updateUnconditionally 1024 -> 0
      2023-10-07 02:49:42,720 WARN [grpc-default-executor-9]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om123-AppendLogResponseHandler: Failed appendEntries: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
      2023-10-07 02:49:42,720 WARN [grpc-default-executor-13]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om123-AppendLogResponseHandler: Failed appendEntries: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
      2023-10-07 02:49:43,981 WARN [grpc-default-executor-13]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om123-AppendLogResponseHandler: Failed appendEntries: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
      2023-10-07 02:49:43,981 WARN [grpc-default-executor-9]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om123-AppendLogResponseHandler: Failed appendEntries: org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
      2023-10-07 02:49:44,234 INFO [shutdown-hook-0]-org.apache.hadoop.ozone.om.GrpcOzoneManagerServer: Server GrpcOzoneManagerServer is shutdown
      2023-10-07 02:49:44,241 INFO [shutdown-hook-0]-org.apache.ratis.server.RaftServer: om125: close
      2023-10-07 02:49:44,243 INFO [shutdown-hook-0]-org.apache.ratis.grpc.server.GrpcService: om125: shutdown server GrpcServerProtocolService now
      2023-10-07 02:49:44,243 INFO [om125-impl-thread2]-org.apache.ratis.server.RaftServer$Division: om125@group-9F198C4C3682: shutdown
      2023-10-07 02:49:44,243 INFO [om125-impl-thread2]-org.apache.ratis.util.JmxRegister: Successfully un-registered JMX Bean with object name Ratis:service=RaftServer,group=group-9F198C4C3682,id=om125
      2023-10-07 02:49:44,244 INFO [om125-impl-thread2]-org.apache.ratis.server.impl.RoleInfo: om125: shutdown om125@group-9F198C4C3682-LeaderStateImpl
      2023-10-07 02:49:44,245 WARN [om125@group-9F198C4C3682->om123-GrpcLogAppender-LogAppenderDaemon]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om123-GrpcLogAppender: Wait interrupted by java.lang.InterruptedException
      2023-10-07 02:49:44,246 WARN [om125@group-9F198C4C3682->om124-GrpcLogAppender-LogAppenderDaemon]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om124-GrpcLogAppender: Wait interrupted by java.lang.InterruptedException
      2023-10-07 02:49:44,246 INFO [om125-impl-thread2]-org.apache.ratis.server.impl.PendingRequests: om125@group-9F198C4C3682-PendingRequests: sendNotLeaderResponses
      2023-10-07 02:49:44,250 INFO [om125-impl-thread2]-org.apache.ratis.server.impl.StateMachineUpdater: om125@group-9F198C4C3682-StateMachineUpdater: set stopIndex = 2628
      2023-10-07 02:49:44,251 INFO [grpc-default-executor-9]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om124-AppendLogResponseHandler: follower responses appendEntries COMPLETED
      2023-10-07 02:49:44,251 INFO [grpc-default-executor-9]-org.apache.ratis.server.leader.FollowerInfo: om125@group-9F198C4C3682->om124: decreaseNextIndex nextIndex: updateUnconditionally 2629 -> 2628
      2023-10-07 02:49:44,251 INFO [om125@group-9F198C4C3682-StateMachineUpdater]-org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine: Current Snapshot Index (t:5, i:2626)
      2023-10-07 02:49:44,252 ERROR [om125@group-9F198C4C3682-StateMachineUpdater]-org.apache.ratis.server.impl.StateMachineUpdater: om125@group-9F198C4C3682-StateMachineUpdater: Failed to take snapshot
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.put(RocksDatabase.java:481)
      	at org.apache.hadoop.hdds.utils.db.RDBTable.put(RDBTable.java:70)
      	at org.apache.hadoop.hdds.utils.db.TypedTable.put(TypedTable.java:156)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.takeSnapshot(OzoneManagerStateMachine.java:490)
      	at org.apache.ratis.server.impl.StateMachineUpdater.takeSnapshot(StateMachineUpdater.java:274)
      	at org.apache.ratis.server.impl.StateMachineUpdater.checkAndTakeSnapshot(StateMachineUpdater.java:266)
      	at org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:185)
      	at java.lang.Thread.run(Thread.java:748)
      2023-10-07 02:49:44,252 INFO [om125@group-9F198C4C3682-StateMachineUpdater]-org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine: Current Snapshot Index (t:5, i:2626)
      2023-10-07 02:49:44,252 ERROR [om125@group-9F198C4C3682-StateMachineUpdater]-org.apache.ratis.server.impl.StateMachineUpdater: om125@group-9F198C4C3682-StateMachineUpdater: Failed to take snapshot
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.put(RocksDatabase.java:481)
      	at org.apache.hadoop.hdds.utils.db.RDBTable.put(RDBTable.java:70)
      	at org.apache.hadoop.hdds.utils.db.TypedTable.put(TypedTable.java:156)
      	at org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.takeSnapshot(OzoneManagerStateMachine.java:490)
      	at org.apache.ratis.server.impl.StateMachineUpdater.takeSnapshot(StateMachineUpdater.java:274)
      	at org.apache.ratis.server.impl.StateMachineUpdater.checkAndTakeSnapshot(StateMachineUpdater.java:266)
      	at org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:188)
      	at java.lang.Thread.run(Thread.java:748)
      2023-10-07 02:49:44,253 INFO [om125@group-9F198C4C3682-StateMachineUpdater]-org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine: StateMachine has shutdown. Shutdown OzoneManager if not already shutdown.
      2023-10-07 02:49:44,253 INFO [om125@group-9F198C4C3682-StateMachineUpdater]-org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer: Stopping OMDoubleBuffer flush thread
      2023-10-07 02:49:44,255 INFO [grpc-default-executor-9]-org.apache.ratis.grpc.server.GrpcLogAppender: om125@group-9F198C4C3682->om124-AppendLogResponseHandler: follower responses appendEntries COMPLETED
      2023-10-07 02:49:44,258 INFO [Thread-8867]-org.apache.ratis.grpc.server.GrpcServerProtocolClient: om124 Close channels
      2023-10-07 02:49:44,258 INFO [Thread-8866]-org.apache.ratis.grpc.server.GrpcServerProtocolClient: om123 Close channels
      2023-10-07 02:49:44,267 INFO [shutdown-hook-0]-org.apache.ratis.grpc.server.GrpcService: om125: shutdown server GrpcServerProtocolService successfully
      2023-10-07 02:49:46,481 ERROR [Timer for 'OzoneManager' metrics system]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to get property mem-table-flush-pending from rocksdb
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getProperty(RocksDatabase.java:807)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getDBPropertyData(RocksDBStoreMetrics.java:214)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:151)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
      	at java.util.TimerThread.mainLoop(Timer.java:555)
      	at java.util.TimerThread.run(Timer.java:505)
      2023-10-07 02:49:46,482 ERROR [Timer for 'OzoneManager' metrics system]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to compute sst file stat
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getLiveFilesMetaData(RocksDatabase.java:642)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.computeSstFileStat(RocksDBStoreMetrics.java:251)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getDBPropertyData(RocksDBStoreMetrics.java:235)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:151)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
      	at java.util.TimerThread.mainLoop(Timer.java:555)
      	at java.util.TimerThread.run(Timer.java:505)
      2023-10-07 02:49:46,482 ERROR [Timer for 'OzoneManager' metrics system]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to get latest sequence number
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getLatestSequenceNumber(RocksDatabase.java:834)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getLatestSequenceNumber(RocksDBStoreMetrics.java:302)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:152)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
      	at java.util.TimerThread.mainLoop(Timer.java:555)
      	at java.util.TimerThread.run(Timer.java:505)
      2023-10-07 02:49:56,482 ERROR [Timer for 'OzoneManager' metrics system]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to get property mem-table-flush-pending from rocksdb
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getProperty(RocksDatabase.java:807)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getDBPropertyData(RocksDBStoreMetrics.java:214)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:151)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
      	at java.util.TimerThread.mainLoop(Timer.java:555)
      	at java.util.TimerThread.run(Timer.java:505)
      2023-10-07 02:49:56,482 ERROR [Timer for 'OzoneManager' metrics system]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to compute sst file stat
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getLiveFilesMetaData(RocksDatabase.java:642)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.computeSstFileStat(RocksDBStoreMetrics.java:251)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getDBPropertyData(RocksDBStoreMetrics.java:235)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:151)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
      	at java.util.TimerThread.mainLoop(Timer.java:555)
      	at java.util.TimerThread.run(Timer.java:505)
      2023-10-07 02:49:56,482 ERROR [Timer for 'OzoneManager' metrics system]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to get latest sequence number
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getLatestSequenceNumber(RocksDatabase.java:834)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getLatestSequenceNumber(RocksDBStoreMetrics.java:302)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:152)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
      	at java.util.TimerThread.mainLoop(Timer.java:555)
      	at java.util.TimerThread.run(Timer.java:505)
      2023-10-07 02:50:05,453 ERROR [qtp1209033601-141]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to get property mem-table-flush-pending from rocksdb
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getProperty(RocksDatabase.java:807)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getDBPropertyData(RocksDBStoreMetrics.java:214)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:151)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:183)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:156)
      	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1378)
      	at com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:920)
      	at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:244)
      	at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
      	at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
      	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
      	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
      	at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1656)
      	at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
      	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
      	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
      	at org.apache.hadoop.hdds.server.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1681)
      	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
      	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
      	at org.apache.hadoop.hdds.server.http.NoCacheFilter.doFilter(NoCacheFilter.java:48)
      	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
      	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
      	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:552)
      	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
      	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:600)
      	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
      	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
      	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
      	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
      	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440)
      	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
      	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:505)
      	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
      	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
      	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355)
      	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
      	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
      	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
      	at org.eclipse.jetty.server.Server.handle(Server.java:516)
      	at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)
      	at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)
      	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)
      	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
      	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
      	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
      	at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:555)
      	at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:410)
      	at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:164)
      	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
      	at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
      	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
      	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
      	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
      	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
      	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
      	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
      	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
      	at java.lang.Thread.run(Thread.java:748)
      2023-10-07 02:50:05,454 ERROR [qtp1209033601-141]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to compute sst file stat
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getLiveFilesMetaData(RocksDatabase.java:642)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.computeSstFileStat(RocksDBStoreMetrics.java:251)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getDBPropertyData(RocksDBStoreMetrics.java:235)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:151)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:183)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:156)
      	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1378)
      	at com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:920)
      	at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:244)
      	at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
      	at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
      	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
      	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
      	at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1656)
      	at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
      	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
      	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
      	at org.apache.hadoop.hdds.server.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1681)
      	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
      	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
      	at org.apache.hadoop.hdds.server.http.NoCacheFilter.doFilter(NoCacheFilter.java:48)
      	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
      	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
      	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:552)
      	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
      	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:600)
      	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
      	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
      	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
      	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
      	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440)
      	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
      	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:505)
      	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
      	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
      	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355)
      	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
      	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
      	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
      	at org.eclipse.jetty.server.Server.handle(Server.java:516)
      	at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)
      	at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)
      	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)
      	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
      	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
      	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
      	at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:555)
      	at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:410)
      	at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:164)
      	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
      	at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
      	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
      	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
      	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
      	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
      	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
      	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
      	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
      	at java.lang.Thread.run(Thread.java:748)
      2023-10-07 02:50:05,455 ERROR [qtp1209033601-141]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to get latest sequence number
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getLatestSequenceNumber(RocksDatabase.java:834)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getLatestSequenceNumber(RocksDBStoreMetrics.java:302)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:152)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:183)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:156)
      	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1378)
      	at com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:920)
      	at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:244)
      	at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
      	at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
      	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
      	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
      	at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1656)
      	at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
      	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
      	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
      	at org.apache.hadoop.hdds.server.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1681)
      	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
      	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
      	at org.apache.hadoop.hdds.server.http.NoCacheFilter.doFilter(NoCacheFilter.java:48)
      	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
      	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
      	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:552)
      	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
      	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:600)
      	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
      	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
      	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
      	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
      	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1440)
      	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
      	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:505)
      	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
      	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
      	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1355)
      	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
      	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
      	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
      	at org.eclipse.jetty.server.Server.handle(Server.java:516)
      	at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)
      	at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)
      	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)
      	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
      	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
      	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
      	at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:555)
      	at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:410)
      	at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:164)
      	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
      	at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
      	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
      	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
      	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
      	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
      	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
      	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
      	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
      	at java.lang.Thread.run(Thread.java:748)
      2023-10-07 02:50:06,481 ERROR [Timer for 'OzoneManager' metrics system]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to get property mem-table-flush-pending from rocksdb
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getProperty(RocksDatabase.java:807)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getDBPropertyData(RocksDBStoreMetrics.java:214)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:151)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
      	at java.util.TimerThread.mainLoop(Timer.java:555)
      	at java.util.TimerThread.run(Timer.java:505)
      2023-10-07 02:50:06,481 ERROR [Timer for 'OzoneManager' metrics system]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to compute sst file stat
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getLiveFilesMetaData(RocksDatabase.java:642)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.computeSstFileStat(RocksDBStoreMetrics.java:251)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getDBPropertyData(RocksDBStoreMetrics.java:235)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:151)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
      	at java.util.TimerThread.mainLoop(Timer.java:555)
      	at java.util.TimerThread.run(Timer.java:505)
      2023-10-07 02:50:06,482 ERROR [Timer for 'OzoneManager' metrics system]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to get latest sequence number
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getLatestSequenceNumber(RocksDatabase.java:834)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getLatestSequenceNumber(RocksDBStoreMetrics.java:302)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:152)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
      	at java.util.TimerThread.mainLoop(Timer.java:555)
      	at java.util.TimerThread.run(Timer.java:505)
      2023-10-07 02:50:08,939 ERROR [SstFilteringService#0]-org.apache.hadoop.ozone.om.SstFilteringService: Error during Snapshot sst filtering 
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.newIterator(RocksDatabase.java:856)
      	at org.apache.hadoop.hdds.utils.db.RDBTable.iterator(RDBTable.java:232)
      	at org.apache.hadoop.hdds.utils.db.TypedTable.iterator(TypedTable.java:417)
      	at org.apache.hadoop.hdds.utils.db.TypedTable.iterator(TypedTable.java:409)
      	at org.apache.hadoop.hdds.utils.db.TypedTable.iterator(TypedTable.java:55)
      	at org.apache.hadoop.ozone.om.SstFilteringService$SstFilteringTask.call(SstFilteringService.java:177)
      	at org.apache.hadoop.hdds.utils.BackgroundService$PeriodicalTask.lambda$run$0(BackgroundService.java:121)
      	at java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1640)
      	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
      	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      	at java.lang.Thread.run(Thread.java:748)
      2023-10-07 02:50:16,481 ERROR [Timer for 'OzoneManager' metrics system]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to get property mem-table-flush-pending from rocksdb
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getProperty(RocksDatabase.java:807)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getDBPropertyData(RocksDBStoreMetrics.java:214)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:151)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
      	at java.util.TimerThread.mainLoop(Timer.java:555)
      	at java.util.TimerThread.run(Timer.java:505)
      2023-10-07 02:50:16,481 ERROR [Timer for 'OzoneManager' metrics system]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to compute sst file stat
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getLiveFilesMetaData(RocksDatabase.java:642)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.computeSstFileStat(RocksDBStoreMetrics.java:251)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getDBPropertyData(RocksDBStoreMetrics.java:235)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:151)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
      	at java.util.TimerThread.mainLoop(Timer.java:555)
      	at java.util.TimerThread.run(Timer.java:505)
      2023-10-07 02:50:16,482 ERROR [Timer for 'OzoneManager' metrics system]-org.apache.hadoop.hdds.utils.RocksDBStoreMetrics: Failed to get latest sequence number
      java.io.IOException: Rocks Database is closed
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.assertClose(RocksDatabase.java:444)
      	at org.apache.hadoop.hdds.utils.db.RocksDatabase.getLatestSequenceNumber(RocksDatabase.java:834)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getLatestSequenceNumber(RocksDBStoreMetrics.java:302)
      	at org.apache.hadoop.hdds.utils.RocksDBStoreMetrics.getMetrics(RocksDBStoreMetrics.java:152)
      	at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:200)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.snapshotMetrics(MetricsSystemImpl.java:419)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.sampleMetrics(MetricsSystemImpl.java:406)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.onTimerEvent(MetricsSystemImpl.java:381)
      	at org.apache.hadoop.metrics2.impl.MetricsSystemImpl$4.run(MetricsSystemImpl.java:368)
      	at java.util.TimerThread.mainLoop(Timer.java:555)
      	at java.util.TimerThread.run(Timer.java:505)
      [The same three "Rocks Database is closed" errors ("Failed to get property mem-table-flush-pending from rocksdb", "Failed to compute sst file stat", "Failed to get latest sequence number") recur with identical stack traces roughly every 10 seconds thereafter (02:50:26, 02:50:36, ...), as the 'OzoneManager' metrics timer keeps polling the already-closed RocksDB instance.]
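The repeated errors above come from the metrics timer polling RocksDB after the OM has already closed the handle during shutdown. Below is a minimal, self-contained sketch of that interaction; it is not the Ozone source, and the class and method names (`FakeRocksDatabase`, `ClosedDbMetricsSketch`) are hypothetical stand-ins that only mirror the `assertClose()` / timer pattern visible in the trace.

      import java.io.IOException;
      import java.util.Timer;
      import java.util.TimerTask;
      import java.util.concurrent.atomic.AtomicBoolean;

      public class ClosedDbMetricsSketch {
        // Hypothetical stand-in for RocksDatabase: any access after close() fails fast.
        static class FakeRocksDatabase {
          private final AtomicBoolean closed = new AtomicBoolean(false);

          void close() {
            closed.set(true);
          }

          private void assertClose() throws IOException {
            if (closed.get()) {
              throw new IOException("Rocks Database is closed");
            }
          }

          long getLatestSequenceNumber() throws IOException {
            assertClose();
            return 0L; // a real implementation would delegate to RocksDB
          }
        }

        public static void main(String[] args) {
          FakeRocksDatabase db = new FakeRocksDatabase();
          db.close(); // e.g. the OM tearing down its DB after the flush failure

          // The metrics timer keeps firing on its fixed period regardless of DB state,
          // producing one ERROR line per poll, as in the log above.
          Timer timer = new Timer("Timer for 'OzoneManager' metrics system");
          timer.schedule(new TimerTask() {
            @Override
            public void run() {
              try {
                db.getLatestSequenceNumber();
              } catch (IOException e) {
                System.err.println("Failed to get latest sequence number: " + e.getMessage());
              }
            }
          }, 0, 10_000); // runs until the process is interrupted
        }
      }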

      People

        Assignee: swamirishi (Swaminathan Balachandran)
        Reporter: jyosin (Jyotirmoy Sinha)
        Votes: 0
        Watchers: 3
