Hadoop Common / HADOOP-1003

Proposal to batch commits to edits log.



    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.14.0
    • Component/s: None
    • Labels: None


      Right now, the most expensive namenode operations are those that require commits to the edits log, e.g. creating, deleting, or renaming a file. Most of the time is spent in fsync() of the edits file (multiple fsync() calls in the case of multiple image directories). During this time the whole namesystem is under lock, and even non-mutating operations like open() are blocked.
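The per-operation cost described above can be sketched roughly as follows. This is a hypothetical illustration, not code from the namenode; the class and method names are made up:

```java
// Hypothetical sketch of the current per-operation path: every
// mutation writes and fsync()s each edits file (one per image
// directory) before the RPC handler can reply to the client.
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.List;

class PerCallEditLog {
    private final List<FileOutputStream> editStreams;  // one stream per image directory

    PerCallEditLog(List<FileOutputStream> streams) {
        this.editStreams = streams;
    }

    /** Called with the namesystem lock held: write + fsync, then reply. */
    synchronized void logEdit(byte[] record) throws IOException {
        for (FileOutputStream out : editStreams) {
            out.write(record);
            out.getFD().sync();  // one fsync per image directory, per operation
        }
        // Only after every sync completes does the RPC handler reply,
        // and the namesystem lock is held throughout.
    }
}
```

With two image directories, a single create pays two fsync() calls, each on the order of milliseconds, while all other namesystem operations wait.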

      On a local filesystem, each fsync() can take on the order of milliseconds. My understanding is that the guarantee the namenode provides is that the edits log is synced before replying to the client. Without any changes to the current locking structure, I was thinking of the following for batching multiple edits:

      a) A facility in the RPC Server to postpone responding to a particular call (communicated via ThreadLocals, maybe). This is not strictly required, but without it the number of operations batched would be limited to the number of IPC threads.
      b) Another Server thread that waits for pending commits to be synced and replies back to clients.
      c) An fsync manager that periodically syncs the edit log and informs the waiting RPCs. The sync thread can dynamically decide to wait longer or shorter based on the load, so that we don't increase latency when the namenode is lightly loaded. Even a simple policy of 'sync if there are any mutations' would work, but that might reduce hard disk life.
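The interaction between the waiting RPCs (a, b) and the sync manager (c) could be sketched as below. Again, this is an illustrative sketch with hypothetical names, not the actual patch: edits are appended under a short lock, each caller waits until a sync covers its transaction id, and one background fsync() services the whole batch.

```java
// Hypothetical sketch of batched edit-log syncing: writers append
// under a short lock, then wait for a background sync to cover
// their transaction id before the reply is sent to the client.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class BatchedEditLog {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition synced = lock.newCondition();
    private long lastWrittenTxId = 0;   // highest txid appended to the log
    private long lastSyncedTxId = 0;    // highest txid covered by an fsync

    /** Append an edit and return its transaction id (fast, in-memory). */
    long logEdit(String op) {
        lock.lock();
        try {
            // A real implementation would write the op to a buffer here.
            return ++lastWrittenTxId;
        } finally {
            lock.unlock();
        }
    }

    /** Block until a sync covers txid; many callers share one fsync(). */
    void waitSynced(long txid) throws InterruptedException {
        lock.lock();
        try {
            while (lastSyncedTxId < txid) {
                synced.await();
            }
        } finally {
            lock.unlock();
        }
    }

    /** Called periodically by the sync-manager thread. */
    void syncBatch() {
        long target;
        lock.lock();
        try {
            target = lastWrittenTxId;
        } finally {
            lock.unlock();
        }
        // fsync() of the edits file(s) would happen here, outside the lock,
        // so new edits can still be appended while the disk write runs.
        lock.lock();
        try {
            if (target > lastSyncedTxId) {
                lastSyncedTxId = target;
                synced.signalAll();  // wake every RPC waiting on this batch
            }
        } finally {
            lock.unlock();
        }
    }
}
```

The key property is that n operations arriving between two runs of syncBatch() are covered by a single fsync() rather than n of them, without weakening the sync-before-reply guarantee.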

      All the synchronization between these threads is a bit complicated, but it can be kept stable. My main concern is whether the guarantee we are providing is enough for namenode operation. I think it is.

      In terms of throughput, the number of creates a namenode can do should then be in the same range as the number of opens it can do.


      Attachments:
        1. editLogSync3.patch (7 kB, Dhruba Borthakur)



            Assignee: dhruba Dhruba Borthakur
            Reporter: rangadi Raghu Angadi