Accumulo / ACCUMULO-4157

WAL can be prematurely deleted



    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 1.6.5, 1.7.1
    • Fix Version/s: 1.6.6, 1.7.2
    • Component/s: gc


      Ran into a situation where the master started logging Unable to initiate log sort because the WAL could not be found right after a tserver died. The WAL was for a tablet in the metadata table, with a key extent like !0;t;endrow;prevrow hosted by a tabletserver like tserver1. The master must sort a WAL before the tablet can be reassigned and brought back online, so Accumulo is in a really bad state when this happens: that tablet stays unhosted until manual intervention. Luckily, the WAL was in the HDFS trash and could be copied back to the expected location.

      Piecing together the logs showed something like the following

      This suggests that GarbageCollectWriteAheadLogs is too aggressively removing WALs for a server it thinks is dead but may actually still be doing work. The tserver was under heavy load before it went down.

      After studying the logs with Keith Turner and brainstorming, here are some things that could be fixed or checked:

      1. When the GC doesn't see a reference to a WAL in the metadata table, it asks the tablet server to delete the log. The GC process then logs at DEBUG that the WAL was deleted, regardless of whether it actually was. Maybe change the message to "asking tserver to delete WAL" or something similar. We found these messages in the GC log 45 minutes before this event, and they were misleading: further investigation shows the tserver logs Deleting wal when a WAL is truly deleted, and there were no such messages in the tserver log 45 minutes earlier, indicating the WAL was not actually deleted.
      2. GC logs "Removing WAL for offline" at DEBUG. These can roll off pretty quickly, so change that to INFO. This will help keep history around longer to aid troubleshooting.
      3. Verify that the "adding 1 logs for extent" update is using the srv:lock column to enforce the constraint. It looks like it is, but if zooLock is null in the MetadataTableUtil update, badness may be happening.
      4. In the GC, maybe keep a map of the first time we see that a tablet server is down, and don't actually remove WALs for offline tablet servers until they have been down an hour or so. Would need to make sure that map entry is cleared when a tserver comes back online.
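      Option 4 could be sketched roughly as follows. This is a minimal illustration only, not code from the Accumulo codebase; the class and method names (DeadServerWalGrace, okToRemoveWals, serverCameOnline) and the idea of passing in the clock are assumptions for the sketch.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: delay WAL removal for offline tservers until they
// have been down longer than a configurable grace period.
public class DeadServerWalGrace {
  private final long gracePeriodMillis;
  // first time each tserver was observed down
  private final Map<String, Long> firstSeenDown = new ConcurrentHashMap<>();

  public DeadServerWalGrace(long gracePeriodMillis) {
    this.gracePeriodMillis = gracePeriodMillis;
  }

  /** True only once the server has been down for at least the grace period. */
  public boolean okToRemoveWals(String tserver, long nowMillis) {
    Long first = firstSeenDown.putIfAbsent(tserver, nowMillis);
    if (first == null) {
      return false; // first sighting; start the clock, don't delete yet
    }
    return nowMillis - first >= gracePeriodMillis;
  }

  /** Must be called when a tserver re-registers, so a flapping server resets the clock. */
  public void serverCameOnline(String tserver) {
    firstSeenDown.remove(tserver);
  }
}
```

      The GC collection cycle would consult okToRemoveWals before removing a WAL for an offline server, and the liveness-monitoring path would call serverCameOnline whenever a tserver's lock reappears, clearing the entry as noted above.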





              • Assignee: Michael Wall (mjwall)
              • Votes: 0
              • Watchers: 6



                  Time Tracking

                  • Original Estimate: Not Specified
                  • Remaining Estimate: 0h
                  • Time Spent: 7h