OFBiz / OFBIZ-10592

OutOfMemory and stuck JobPoller issue


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Critical
    • Resolution: Done
    • Affects Version/s: Release Branch 13.07
    • Fix Version/s: None
    • Component/s: ALL APPLICATIONS
    • Labels: None

    Description

       
      This installation is composed of two instances of OFBiz (v13.07.03), served through an Apache Tomcat web server behind a load balancer.
      The database server is MariaDB.
       
      The first problems appeared about three weeks ago, when front1 (OFBiz instance 1) suddenly stopped serving web requests, while front2 kept working correctly.
       
      We checked the log files and saw that async services were failing; each failure was accompanied by this error line, which indicates the JVM was spending nearly all of its time in garbage collection while recovering almost no memory:
       
      Thread "AsyncAppender-async" java.lang.OutOfMemoryError: GC overhead limit exceeded
       
      We analyzed the situation with our system specialists, who confirmed that the application was heavily stressing machine resources (CPU always at or near 100%, RAM usage rapidly increasing) until the JVM ran out of memory.
      This high resource consumption occurred only when the ofbiz1 instance was started with the JobPoller enabled; with the JobPoller disabled, OFBiz ran with low resource usage.
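       
      For reference, the JobPoller is configured through the thread-pool element in framework/service/config/serviceengine.xml; below is a sketch of that element (the attribute values shown are illustrative defaults, not our production settings):
       
        <!-- poll-enabled="false" turns the JobPoller off; the jobs and
             max-threads attributes bound how much work a single poll
             cycle can pull into memory -->
        <thread-pool send-to-pool="pool" purge-job-days="4" failed-retry-min="3"
                     ttl="120000" jobs="100" min-threads="2" max-threads="5"
                     poll-enabled="true" poll-db-millis="30000">
            <run-from-pool name="pool"/>
        </thread-pool>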
       
      We then focused on the database, checking its size first of all; the result was disconcerting: 45 GB, mostly concentrated in four tables: SERVER_HIT (about 18 GB), VISIT (about 15 GB), ENTITY_SYNC_REMOVE (about 8 GB), and VISITOR (about 2 GB).
      All the other tables were in the order of a few MB each.
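       
      For anyone hitting the same kind of growth, the purge boils down to plain SQL along these lines (a sketch: the cut-off date is hypothetical, and the column names are those of the standard OFBiz data model):
       
        -- Purge SERVER_HIT first, since its rows reference VISIT via VISIT_ID;
        -- the cut-off date below is only an example retention limit.
        DELETE FROM SERVER_HIT WHERE HIT_START_DATE_TIME < '2018-01-01 00:00:00';
        DELETE FROM VISIT WHERE FROM_DATE < '2018-01-01 00:00:00';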
       
      The first thing we did was to clear all those tables, reducing the database size considerably.
      After the cleaning we started ofbiz1 again with the JobPoller component enabled; this caused a lot of old scheduled/queued jobs to execute.
      Apart from the start-up phase, the machine's resource usage stabilized at normal-to-low values (CPU 1-10%).
      OFBiz seemed to work (web requests were served), but we noticed that the JobPoller no longer scheduled or ran any jobs.
      The number of jobs in the "Pending" state in the JobSandbox entity was small (about 20); none were Queued, Failed, or in any other state.
      In addition, unfortunately, after a few hours the JVM ran out of memory again.
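       
      A quick way to take this kind of snapshot is a grouped count over the JOB_SANDBOX table, sketched below; SERVICE_PENDING, SERVICE_QUEUED, SERVICE_FAILED, etc. are the standard OFBiz status ids:
       
        -- Count jobs per state in the JobSandbox entity (table JOB_SANDBOX).
        SELECT STATUS_ID, COUNT(*) AS JOB_COUNT
        FROM JOB_SANDBOX
        GROUP BY STATUS_ID;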
       
      Our JVM has a maximum heap size of 20 GB (the machine has 32 GB of RAM), so I don't think it is undersized.
      Our next step is to set up the application locally on top of the same production database and see what happens.
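       
      To make the next out-of-memory event easier to diagnose, something like the following HotSpot options would capture a heap dump and a GC log automatically (a sketch: paths are placeholders, and the flags are the standard ones for the Java 7/8 era that 13.07 runs on):
       
        # Dump the heap automatically on OOM and keep a GC log for later analysis.
        JAVA_OPTS="-Xmx20g \
          -XX:+HeapDumpOnOutOfMemoryError \
          -XX:HeapDumpPath=/var/tmp/ofbiz-heapdumps \
          -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
          -Xloggc:/var/tmp/ofbiz-gc.log"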
       
      Now that I have explained the situation, I would like to ask, based on your opinion and experience:
       
      Could the JobPoller component be the root (and only) cause of the JVM's OutOfMemory?
       
      Could this issue be related to OFBIZ-5710?
       
      Could dumping and analyzing the JVM heap help us understand what is filling the memory, or would that be a waste of time? (See the sketch after these questions.)
       
      Is there something we did not consider or missed during our analysis?
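       
      Regarding the heap-dump question above, the sketch I have in mind is simply the JDK's jmap tool, with the resulting .hprof file opened in Eclipse MAT or VisualVM (<pid> stands for the OFBiz process id):
       
        # 'live' triggers a full GC first so only reachable objects are dumped.
        jmap -dump:live,format=b,file=/tmp/ofbiz-heap.hprof <pid>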
       
       
      Thank you all for your attention and your help; any suggestion or advice would be greatly appreciated.
       
      Kind regards,
      Giulio

      Attachments

        1. alloc_tree_600k_12102018.png
          281 kB
          Giulio Speri
        2. jvm_ofbiz1_profi_telem.png
          33 kB
          Giulio Speri
        3. jvm_prof_ofbiz1_telem2.png
          41 kB
          Giulio Speri
        4. ofbiz1_jvm_profil_nojobpoller.png
          64 kB
          Giulio Speri
        5. OFBIZ-10592_OutOfMemory_order_properties.patch
          0.5 kB
          Giulio Speri
        6. OFBIZ-10592-nmalin.patch
          13 kB
          Nicolas Malin
        7. OFBIZ-10592-trunkv18-OutOfMemory_order_properties.patch
          0.7 kB
          Giulio Speri
        8. OFBIZ-10592-trunkv18-OutOfMemory_order_properties.patch
          0.7 kB
          Giulio Speri
        9. OFBIZ-10592-trunkv18-OutOfMemory_ShoppingListServices.patch
          13 kB
          Giulio Speri
        10. OFBIZ-10592-trunkv18-OutOfMemory_ShoppingListServices.patch
          13 kB
          Giulio Speri
        11. OFBIZ-10592-trunkv18-OutOfMemory_ShoppingListServices.patch
          13 kB
          Giulio Speri
        12. order_properties_patchv2.patch
          0.5 kB
          Giulio Speri
        13. order_properties.patch
          0.4 kB
          Giulio Speri
        14. recorder_object_600k_12102018.png
          206 kB
          Giulio Speri
        15. Screenshot from 2019-04-20 02-32-37.png
          42 kB
          Giulio Speri
        16. ShoppingListServices_patchv2.patch
          20 kB
          Giulio Speri
        17. ShoppingListServices.patch
          15 kB
          Giulio Speri
        18. ShoppingListServices.patch
          15 kB
          Giulio Speri
        19. telemetry_ovrl_600k_12102018.png
          142 kB
          Giulio Speri

        Issue Links

        Activity


          People

            Assignee:
            Giulio_MpStyle Giulio Speri
            Reporter:
            Giulio_MpStyle Giulio Speri
            Votes:
            0
            Watchers:
            8

            Dates

              Created:
              Updated:
              Resolved:
