GEODE-6636: Buffers.acquireBuffer is not optimal


Details

    Description

      org.apache.geode.internal.net.Buffers.acquireBuffer takes buffers out of a ConcurrentLinkedQueue. If a buffer it polls is too small, it adds it back to the queue and also records it in an IdentityHashMap. The map exists only to detect that the scan has looped all the way around and reached a buffer that was already added to the map in a previous iteration.
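
      For reference, here is a minimal sketch of the pattern described above (names simplified, stats and wrapper classes omitted; this shows the shape of the current logic, not the actual Geode code):

{code:java}
import java.lang.ref.SoftReference;
import java.nio.ByteBuffer;
import java.util.IdentityHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

class CurrentAcquireSketch {
  // Poll buffers, re-queue the ones that are too small, and use an IdentityHashMap
  // purely to notice when the scan has cycled back to an entry it already re-queued.
  static ByteBuffer acquireBuffer(ConcurrentLinkedQueue<SoftReference<ByteBuffer>> queue,
      int size) {
    IdentityHashMap<SoftReference<ByteBuffer>, SoftReference<ByteBuffer>> alreadySeen = null;
    SoftReference<ByteBuffer> ref;
    while ((ref = queue.poll()) != null) {
      ByteBuffer buffer = ref.get();
      if (buffer != null && buffer.capacity() >= size) {
        buffer.clear();
        buffer.limit(size);
        return buffer; // big enough: hand it out
      }
      if (buffer != null) {
        queue.offer(ref); // too small: put it back and remember it
        if (alreadySeen == null) {
          alreadySeen = new IdentityHashMap<>();
        }
        if (alreadySeen.put(ref, ref) != null) {
          break; // saw this entry before, so we have looped all the way around
        }
      }
    }
    return ByteBuffer.allocateDirect(size); // nothing suitable was pooled
  }
}
{code}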

      A more efficient approach, which removes from the queue only buffers that we will either throw away or use and return later, and which gets rid of the map entirely, is to use ConcurrentLinkedQueue.remove(Object). You can see an example of this pattern in AvailableConnectionManager.EqualsWithPredicate. The predicate to use for acquireBuffer is: the soft reference is null, or the capacity of the referenced buffer is large enough. If you remove an entry because its reference is null, you need to call remove(Object) again (after decrementing the appropriate stat), since all you found was a buffer that had already been garbage collected. Keep the predicate as cheap as possible, because it is called inside the "compare-and-set" spin loop; the more work the predicate does, the more likely a concurrent thread will take that buffer first and force you to spin around and try again.
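
      A rough sketch of the proposed remove(Object) approach, assuming an EqualsWithPredicate-style wrapper along the lines of AvailableConnectionManager (class and field names here are illustrative, not Geode's actual API):

{code:java}
import java.lang.ref.SoftReference;
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Predicate;

class PredicateRemoveSketch {
  private final ConcurrentLinkedQueue<SoftReference<ByteBuffer>> bufferQueue =
      new ConcurrentLinkedQueue<>();

  // Wrapper whose equals() evaluates a predicate against each queue element, so that
  // ConcurrentLinkedQueue.remove(Object) behaves like "remove the first matching element".
  private static class EqualsWithPredicate {
    private final Predicate<SoftReference<ByteBuffer>> predicate;
    private SoftReference<ByteBuffer> removed;

    EqualsWithPredicate(Predicate<SoftReference<ByteBuffer>> predicate) {
      this.predicate = predicate;
    }

    @Override
    @SuppressWarnings("unchecked")
    public boolean equals(Object element) {
      SoftReference<ByteBuffer> ref = (SoftReference<ByteBuffer>) element;
      if (predicate.test(ref)) {
        removed = ref; // remove(Object) unlinks the element this equals() matched
        return true;
      }
      return false;
    }

    @Override
    public int hashCode() {
      return 0; // never consulted by ConcurrentLinkedQueue.remove
    }
  }

  ByteBuffer acquireBuffer(int size) {
    while (true) {
      // Keep this predicate cheap: it runs inside remove()'s traversal and CAS loop.
      EqualsWithPredicate finder = new EqualsWithPredicate(ref -> {
        ByteBuffer b = ref.get();
        return b == null || b.capacity() >= size;
      });
      if (!bufferQueue.remove(finder)) {
        return ByteBuffer.allocateDirect(size); // nothing suitable pooled
      }
      ByteBuffer buffer = finder.removed.get();
      if (buffer == null) {
        // We matched an entry that had already been garbage collected:
        // decrement the corresponding stat here, then call remove(Object) again.
        continue;
      }
      buffer.clear();
      buffer.limit(size);
      return buffer;
    }
  }

  void releaseBuffer(ByteBuffer buffer) {
    bufferQueue.add(new SoftReference<>(buffer));
  }
}
{code}

      Note that remove(Object) only unlinks the single element the equals() call matched, so nothing leaves the queue unless it is about to be discarded or handed out, which is the property the description above is after.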

      I was surprised, when running a benchmark under the profiler, to see operations on this IdentityHashMap show up. The benchmark was doing PR puts that were all the same size, so I would have expected all of these direct buffers to be the same size as well. It would be worth understanding what buffer sizes acquireBuffer will be asked for, and how often buffers are acquired and returned. If differing sizes are a normal use case, it probably makes sense to have more than one queue, so that we can go to a queue knowing that any buffer it contains meets the desired size. The Geode off-heap free list implementation does this: only the largest allocations have to search a free list whose items exceed a maximum size, while every other free list is found quickly by using the requested size to index an array of free lists, each of which contains only items of that size.
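
      If per-size queues turn out to be worthwhile, one possible shape is sketched below. It is purely illustrative: bucketing by rounded-up powers of two is an assumption for the sketch, not how Buffers works today, and real code would also want soft references and stats.

{code:java}
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

class SizeBucketedBufferPool {
  // Bucket i holds buffers whose capacity is exactly 1 << i, so a non-empty bucket
  // is guaranteed to satisfy any request that maps to it -- no searching required.
  private static final int BUCKET_COUNT = 31; // up to 1 GiB; larger requests bypass the pool
  @SuppressWarnings("unchecked")
  private final ConcurrentLinkedQueue<ByteBuffer>[] buckets =
      new ConcurrentLinkedQueue[BUCKET_COUNT];

  SizeBucketedBufferPool() {
    for (int i = 0; i < BUCKET_COUNT; i++) {
      buckets[i] = new ConcurrentLinkedQueue<>();
    }
  }

  // Round the requested size up to a power of two and use the exponent as the index.
  private static int bucketIndex(int size) {
    return 32 - Integer.numberOfLeadingZeros(Math.max(size, 1) - 1);
  }

  ByteBuffer acquire(int size) {
    int index = bucketIndex(size);
    if (index >= BUCKET_COUNT) {
      return ByteBuffer.allocateDirect(size); // oversized request: do not pool it
    }
    ByteBuffer buffer = buckets[index].poll();
    if (buffer == null) {
      buffer = ByteBuffer.allocateDirect(1 << index);
    }
    buffer.clear();
    buffer.limit(size);
    return buffer;
  }

  void release(ByteBuffer buffer) {
    int index = bucketIndex(buffer.capacity());
    if (index < BUCKET_COUNT && buffer.capacity() == (1 << index)) {
      buckets[index].add(buffer); // only pool buffers this class allocated (exact powers of two)
    }
  }
}
{code}

      With exact-size buckets, acquire never has to scan past entries it cannot use; the trade-off is that capacities are rounded up to the bucket size.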


      Attachments

        Activity

          People

            Assignee: Mario Ivanac (mivanac)
            Reporter: Darrel Schneider (dschneider)
            Votes: 0
            Watchers: 5

            Dates

              Created:
              Updated:
              Resolved:

              Time Tracking

                Estimated: Not Specified
                Remaining: 0h
                Logged: 5h 20m