
RATIS-840: Memory leak of LogAppender


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.0.0
    • Component/s: server
    • Labels: None

    Description

      What's the problem?

      After running hadoop-ozone for 4 days, the datanode leaked memory. A heap dump showed 460710 instances of GrpcLogAppender, but only 6 instances of SenderList, and each SenderList contained only 1-2 GrpcLogAppender instances. There were also a lot of logs related to LeaderState::restartSender.

      INFO impl.RaftServerImpl: 1665f5ea-ab17-4a0e-af6d-6958efd322fa@group-F64B465F37B5-LeaderState: Restarting GrpcLogAppender for 1665f5ea-ab17-4a0e-af6d-6958efd322fa@group-F64B465F37B5->229cbcc1-a3b2-4383-9c0d-c0f4c28c3d4a

      So there are a lot of GrpcLogAppender instances that did not stop their daemon thread when they were removed from senders.
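      A minimal sketch of the leak mechanism (illustrative only, not the actual Ratis code; the class and field names below are made up): a LogAppender-style object starts a daemon thread that holds a strong reference to the object, so an appender that is removed from senders without its thread being stopped can never be garbage collected, and such instances accumulate on the heap.

{code:java}
// Sketch: the running daemon thread is a GC root that keeps 'this' reachable.
public class AppenderLeakSketch {
  static class FakeLogAppender {
    private volatile boolean running = true;
    private final Thread daemon = new Thread(this::run, "fake-appender");

    FakeLogAppender() {
      daemon.setDaemon(true);
      daemon.start();
    }

    private void run() {
      while (running) {            // 'this' stays reachable from the live thread
        try {
          Thread.sleep(1000);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
          return;
        }
      }
    }

    void stop() {                  // must be called before discarding the appender
      running = false;
      daemon.interrupt();
    }
  }

  public static void main(String[] args) {
    // Simulate repeated "restarts": each replaced-but-never-stopped instance
    // stays on the heap, like the 460710 GrpcLogAppender instances in the dump.
    FakeLogAppender current = new FakeLogAppender();
    for (int i = 0; i < 10; i++) {
      current = new FakeLogAppender();   // old instance never stopped -> leaked
    }
    current.stop();                      // only the last one is ever stopped
  }
}
{code}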


       
      Why is LeaderState::restartSender called so many times?
      1. As the image shows, when a group is removed, the SegmentedRaftLog is closed, so GrpcLogAppender throws an exception when it finds that the SegmentedRaftLog was closed. GrpcLogAppender is then restarted, the new GrpcLogAppender throws the same exception because the SegmentedRaftLog is still closed, it is restarted again, and so on. The result is an infinite restart of GrpcLogAppender.
      2. Actually, when a group is removed, GrpcLogAppender is stopped: RaftServerImpl::shutdown -> RoleInfo::shutdownLeaderState -> LeaderState::stop -> LogAppender::stopAppender, and then SegmentedRaftLog is closed: RaftServerImpl::shutdown -> ServerState::close ... . Although RoleInfo::shutdownLeaderState is called before ServerState::close, GrpcLogAppender is stopped asynchronously. So the infinite restart of GrpcLogAppender happens whenever GrpcLogAppender actually stops after SegmentedRaftLog has been closed (see the sketch after this list).
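      A simplified sketch of the restart loop described in items 1-2 (illustrative only, not the Ratis implementation): once the log is closed, every restarted appender fails immediately and asks to be restarted again.

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

public class RestartLoopSketch {
  // Stands in for SegmentedRaftLog being closed by ServerState::close.
  static final AtomicBoolean logClosed = new AtomicBoolean(false);

  // Stands in for the appender's append path: it fails if the log is closed.
  static void tryAppend(Runnable onFailure) {
    try {
      if (logClosed.get()) {
        throw new IllegalStateException("SegmentedRaftLog is closed");
      }
      // ... would append entries to the follower here ...
    } catch (IllegalStateException e) {
      onFailure.run();     // the leader reacts to the failure by restarting
    }
  }

  // Stands in for LeaderState restarting the sender after a failure.
  static void restartAppender(int attempt) {
    if (attempt > 5) {     // bound added only for the demo; the real loop has none
      System.out.println("... and so on, forever");
      return;
    }
    System.out.println("Restarting GrpcLogAppender, attempt " + attempt);
    tryAppend(() -> restartAppender(attempt + 1));
  }

  public static void main(String[] args) {
    logClosed.set(true);   // group removal closes the log first ...
    restartAppender(1);    // ... while the old appender is stopped asynchronously
  }
}
{code}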

      Why did GrpcLogAppender not stop its daemon thread when removed from senders?
      I found a lot of GrpcLogAppender threads blocked inside log4j. I think GrpcLogAppender restarts too fast, so the threads end up blocked in log4j.

      Can the new GrpcLogAppender work normally?
      1. Even without the above problem, the newly created GrpcLogAppender still cannot work normally.
      2. When a new GrpcLogAppender is created, a new FollowerInfo is also created: LeaderState::addAndStartSenders -> LeaderState::addSenders -> RaftServerImpl::newLogAppender -> new FollowerInfo
      3. When the newly created GrpcLogAppender appends an entry to the follower, the follower responds with SUCCESS.
      4. Then LeaderState::updateCommit -> LeaderState::getMajorityMin -> voterLists.get(0). The error happens because voterLists.get(0) returns the FollowerInfo of the old GrpcLogAppender, not the FollowerInfo of the new GrpcLogAppender.
      5. Because the majority commit is computed from the FollowerInfo of the old GrpcLogAppender, it never changes. So even though the follower has appended the entry successfully, the leader cannot update its commit index, and the newly created GrpcLogAppender can never work normally (see the sketch after this list).
      6. The reason the runTestRestartLogAppender unit test passes is that it does not stop the old GrpcLogAppender, and it is the old GrpcLogAppender, not the new one, that appends entries to the follower. If the old GrpcLogAppender were stopped, runTestRestartLogAppender would fail.
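      The commit problem in items 4-5 can be sketched as follows (hedged illustration; FollowerInfo here is a stand-in, not the real Ratis class): the majority computation still reads the stale FollowerInfo referenced by voterLists, while the new appender updates a different FollowerInfo object, so the computed majority index never advances.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class StaleFollowerInfoSketch {
  static class FollowerInfo {
    volatile long matchIndex = 0;   // highest log index known to be replicated
  }

  public static void main(String[] args) {
    // voterLists still holds the FollowerInfo created for the old appender.
    FollowerInfo oldInfo = new FollowerInfo();
    List<FollowerInfo> voterList = new ArrayList<>();
    voterList.add(oldInfo);

    // Restarting the appender creates a new FollowerInfo, but the voter list
    // is never updated to reference it.
    FollowerInfo newInfo = new FollowerInfo();

    // The new appender replicates successfully and advances its own info ...
    newInfo.matchIndex = 100;

    // ... but the majority computation reads the stale entry, so the leader's
    // commit index can never move past 0.
    long majorityMin = voterList.stream()
        .mapToLong(f -> f.matchIndex)
        .min()
        .orElse(0);
    System.out.println("majority min = " + majorityMin);   // prints 0, not 100
  }
}
{code}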

      Attachments

        1. RATIS-840.003.patch
          8 kB
          runzhiwang
        2. RATIS-840.002.patch
          9 kB
          runzhiwang
        3. RATIS-840.001.patch
          8 kB
          runzhiwang
        4. screenshot-2.png
          146 kB
          runzhiwang
        5. screenshot-1.png
          429 kB
          runzhiwang
        6. image-2020-04-06-14-27-39-582.png
          24 kB
          runzhiwang
        7. image-2020-04-06-14-27-28-485.png
          90 kB
          runzhiwang


            People

              Assignee: runzhiwang
              Reporter: runzhiwang
              Votes: 0
              Watchers: 8


                Time Tracking

                  Original Estimate: Not Specified
                  Remaining Estimate: 0h
                  Time Spent: 0.5h