• Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.5.0
    • Fix Version/s: 0.5.0
    • Component/s: None
    • Labels:


      Suraj and I had a bit of discussion about incoming and outgoing message buffering and scalability.

      Currently every message lives on the heap, causing large amounts of GC pressure and wasted memory. We can do better.
      Therefore we should extract an abstract Messenger class that sits directly beneath the interface but above the compressor class.
      It should abstract the use of the backing queues (currently a lot of duplicated code) and be backed by a SequenceFile on local disk.
      Once sync() starts, it should return a message iterator for combining; the result then gets put into a message bundle which is sent over RPC.
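A minimal sketch of the sender side described above, assuming a disk-backed buffer (all names are illustrative, not Hama's actual API; real code would use Hadoop's SequenceFile and Writable messages rather than plain text lines):

```java
import java.io.*;
import java.util.Iterator;

// Hypothetical sketch: an outgoing-message buffer that writes each
// message straight to a local spill file instead of queueing it on the
// heap. sync() hands back a streaming iterator so the caller can
// combine messages and pack them into an RPC bundle without
// materializing them all in memory.
class DiskBackedMessenger implements Closeable {
  private final File spill;
  private final BufferedWriter out;

  DiskBackedMessenger() {
    try {
      spill = File.createTempFile("messenger", ".spill");
      spill.deleteOnExit();
      out = new BufferedWriter(new FileWriter(spill));
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  // Queue an outgoing message; it goes to disk, not the heap.
  public void send(String msg) {
    try {
      out.write(msg);
      out.newLine();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  // At sync() time, flush and return a lazy iterator over the spilled
  // messages for combining and bundling.
  public Iterator<String> sync() {
    try {
      out.flush();
      return new BufferedReader(new FileReader(spill)).lines().iterator();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  @Override
  public void close() throws IOException {
    out.close();
  }
}
```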

      On the receiving side we get a bundle, loop over it, and put everything onto the heap, making the heap much larger than it needs to be. Here we can also flush to disk, because we expose only a queue-like interface to the user side anyway.
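The receiver side could look like this sketch, assuming the same spill-file approach (illustrative names, not Hama's actual API): incoming bundles are appended to a local file and served back through a queue-like method by reading sequentially.

```java
import java.io.*;

// Hypothetical receiver-side sketch: instead of copying every message
// of an incoming bundle onto the heap, append the bundle to a local
// spill file and serve the messages back through the usual queue-like
// interface by reading the file sequentially.
class SpillingReceiveQueue implements Closeable {
  private final File spill;
  private final BufferedWriter writer;
  private BufferedReader reader; // opened lazily on first poll()

  SpillingReceiveQueue() {
    try {
      spill = File.createTempFile("receive", ".spill");
      spill.deleteOnExit();
      writer = new BufferedWriter(new FileWriter(spill));
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  // Called once per incoming RPC bundle; nothing stays on the heap.
  public void addBundle(Iterable<String> bundle) {
    try {
      for (String msg : bundle) {
        writer.write(msg);
        writer.newLine();
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  // Queue-like access for user code; returns null once drained.
  public String poll() {
    try {
      if (reader == null) {
        writer.flush();
        reader = new BufferedReader(new FileReader(spill));
      }
      return reader.readLine();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  @Override
  public void close() throws IOException {
    writer.close();
    if (reader != null) reader.close();
  }
}
```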

      Plus points:
      In case we have enough heap (see our new metrics system), we can also implement a buffering strategy that does not flush everything to disk.
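The adaptive idea above could be sketched like this (all names illustrative; the fixed threshold stands in for a decision driven by the metrics system mentioned above):

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: keep messages on the heap while there is room
// and only start spilling to disk once a threshold is crossed. A real
// implementation would derive the threshold from heap metrics rather
// than a fixed message count.
class AdaptiveBuffer implements Closeable {
  private final int maxInMemory;
  private final List<String> memory = new ArrayList<>();
  private BufferedWriter writer; // created on first spill

  AdaptiveBuffer(int maxInMemory) {
    this.maxInMemory = maxInMemory;
  }

  public void add(String msg) {
    if (memory.size() < maxInMemory) {
      memory.add(msg); // enough heap: no disk I/O at all
      return;
    }
    try {
      if (writer == null) {
        File spill = File.createTempFile("adaptive", ".spill");
        spill.deleteOnExit();
        writer = new BufferedWriter(new FileWriter(spill));
      }
      writer.write(msg);
      writer.newLine();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  // True once at least one message had to go to disk.
  public boolean spilled() {
    return writer != null;
  }

  public int inMemoryCount() {
    return memory.size();
  }

  @Override
  public void close() throws IOException {
    if (writer != null) writer.close();
  }
}
```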

      Open questions:
      I don't know how much slower the whole system gets, but it would save a lot of memory. Maybe we should first evaluate whether it is really needed.
      In any case, the refactoring of the duplicated code in the messengers is needed.


        1. HAMA-521_final_2.patch
          52 kB
          Thomas Jungblut
        2. HAMA-521_final.patch
          27 kB
          Thomas Jungblut
        3. mytest.patch
          55 kB
          Edward J. Yoon
        4. HAMA-521_3.patch
          53 kB
          Thomas Jungblut
        5. HAMA-521_2.patch
          49 kB
          Thomas Jungblut
        6. HAMA-521_1.patch
          30 kB
          Thomas Jungblut
        7. HAMA-521.patch
          14 kB
          Thomas Jungblut

              • Assignee:
                thomas.jungblut Thomas Jungblut
              • Votes:
                0
              • Watchers:
                0

