Ranger / RANGER-3442

Ranger KMS DAO memory issues when many new keys are created



    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.0
    • Fix Version/s: 3.0.0, 2.3.0
    • Component/s: kms
    • Labels: None


      We have many keys in our KMS keystore, and we recently found that creating new keys makes the KMS instances quickly hit their memory limit.

      We can reproduce this with a script that calls KMS createKey and then getMetadata for new keys in a loop. After a restart, our instances use roughly 1.5GB out of 8GB, but after running this script for 1-5 minutes we get close to the 8GB limit, and memory usage never drops back down.
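A repro loop along these lines can be sketched as follows. This is an illustrative sketch, not the attached kms-key.py; the endpoint paths follow the Hadoop KMS REST API, and the base URL, key names, and cipher parameters are placeholders.

```python
import json
from urllib import request

KMS_BASE = "http://localhost:9292/kms/v1"  # placeholder KMS endpoint


def create_key_request(name):
    """Build the POST request that creates a key named `name`."""
    body = json.dumps({"name": name, "cipher": "AES/CTR/NoPadding",
                       "length": 128}).encode()
    return request.Request(f"{KMS_BASE}/keys", data=body,
                           headers={"Content-Type": "application/json"},
                           method="POST")


def get_metadata_request(name):
    """Build the GET request that fetches metadata for key `name`."""
    return request.Request(f"{KMS_BASE}/key/{name}/_metadata", method="GET")


def repro_loop(n):
    """Create n keys and read each one's metadata, as in the report above."""
    for i in range(n):
        name = f"memtest-key-{i}"
        for req in (create_key_request(name), get_metadata_request(name)):
            request.urlopen(req)  # each getMetadata hits the DAO key lookup


if __name__ == "__main__":
    # Show the URLs the loop would hit; running repro_loop(n) against a
    # live KMS is what reproduces the memory growth.
    print(create_key_request("memtest-key-0").full_url)
    print(get_metadata_request("memtest-key-0").full_url)
```

Running this for a few minutes against a test KMS instance should show the same monotonic heap growth described above.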

      I took a heap dump and saw that most of the memory was being retained by XXRangerKeystore entities and EclipseLink's EntityManagerImpl:

      • org.eclipse.persistence.internal.jpa.EntityManagerImpl
      • org.eclipse.persistence.internal.sessions.RepeatableWriteUnitOfWork 

      The largest object by shallow size was a char[] of over 4GB.


      My fix

      I was ultimately able to solve this issue by adding a getEntityManager().clear() call in BaseDao.java's getAllKeys().

      After adding this fix, we can run as many KMS createKey / getMetadata calls as we want without any increase in memory usage; it now stays constant below 1.7GB.

      My understanding is that Ranger KMS keeps many ThreadLocal EntityManager instances (160+ according to my heap dump), each of which held a cache of the getAllKeys results. Since we have so many keys in our KMS, this quickly put us at the memory limit.
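The retention pattern described above can be modeled in miniature: if each per-thread persistence context keeps a reference to every entity returned by getAllKeys, retained heap scales with threads × keys, and clearing the context after the query releases it. This is an illustrative model, not Ranger code; FakeEntityManager and all names here are made up.

```python
import threading

KEY_COUNT = 10_000  # pretend the keystore holds this many keys


class FakeEntityManager:
    """Stand-in for a JPA EntityManager's first-level cache."""

    def __init__(self):
        self.cache = []  # managed entities, retained until clear()

    def query_all_keys(self):
        rows = [f"key-{i}-metadata" for i in range(KEY_COUNT)]
        self.cache.extend(rows)  # the persistence context keeps a reference
        return rows

    def clear(self):
        self.cache = []  # detach everything, as EntityManager.clear() does


thread_local = threading.local()


def get_entity_manager():
    """One EntityManager per thread, mirroring the ThreadLocal pattern."""
    if not hasattr(thread_local, "em"):
        thread_local.em = FakeEntityManager()
    return thread_local.em


def get_all_keys(clear_after=False):
    em = get_entity_manager()
    rows = em.query_all_keys()
    if clear_after:
        em.clear()  # the one-line fix described above
    return rows


if __name__ == "__main__":
    for _ in range(5):
        get_all_keys(clear_after=False)
    print("retained without clear:", len(get_entity_manager().cache))
    get_entity_manager().clear()
    for _ in range(5):
        get_all_keys(clear_after=True)
    print("retained with clear:", len(get_entity_manager().cache))
```

Without the clear, the cache grows by KEY_COUNT entries on every call and never shrinks; with it, retention stays at zero regardless of how many calls are made, matching the constant memory usage observed after the fix.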

      I'm not sure whether there are any drawbacks to clearing the EntityManager in BaseDao.getAllKeys(), but we are seeing greatly improved performance since we no longer keep hitting the memory limit.


        Attachments:
        1. kms-key.py (2 kB, Pavi Subenderan)
        2. RANGER-3442-entity-manager-clear.patch (0.7 kB, Pavi Subenderan)



            Assignee: Pavi Subenderan (pavitheran)
            Reporter: Pavi Subenderan (pavitheran)