Jetspeed 2 / JS2-852

Release content buffers after rendering

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.2.0
    • Fix Version/s: 2.2.0
    • Component/s: Aggregation
    • Labels:
      None

      Description

      With a heap analysis tool, we discovered that thread locals are held onto for long periods when a thread pool is used, such as on WAS 6.1.
      In PortletEntityImpl, we use a ThreadLocal to associate per-user content fragments with the current thread, because the entity object is not user specific and thus not content specific.
      Instead of putting the FragmentPortletDefinition directly on a thread local, put it on the RequestContext request attribute map (which is really the servlet container's request attribute map).
      The FragmentPortletDefinition, and every object in its subtree (most notably ContentFragmentImpl, PortletContentImpl, and PortletContentImpl's content stream), will then live in true per-request storage and no longer cling to the pooled threads' thread locals.
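      The difference can be sketched as follows. This is a minimal illustration, not the actual Jetspeed code: the class, method, and attribute names are hypothetical, and a plain Map stands in for the servlet request attribute map.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch contrasting thread-local storage (which leaks on pooled threads)
// with per-request attribute storage (which the container discards when
// the request completes). Names here are illustrative, not Jetspeed's API.
public class RequestScopedFragments {

    // Before: a ThreadLocal keeps the fragment reachable for the lifetime
    // of the pooled thread, long after the request that set it has ended.
    private static final ThreadLocal<Object> fragmentPerThread = new ThreadLocal<>();

    // After: the fragment lives in the request attribute map instead.
    public static void storeInRequest(Map<String, Object> requestAttributes,
                                      Object fragmentDefinition) {
        requestAttributes.put("fragment.portlet.definition", fragmentDefinition);
    }

    public static Object loadFromRequest(Map<String, Object> requestAttributes) {
        return requestAttributes.get("fragment.portlet.definition");
    }

    public static void main(String[] args) {
        Map<String, Object> requestAttributes = new HashMap<>();
        storeInRequest(requestAttributes, "FragmentPortletDefinition");
        System.out.println(loadFromRequest(requestAttributes));

        // When the request ends, the container drops its attribute map, so
        // the fragment and its subtree of content buffers become collectable.
        requestAttributes.clear();
        System.out.println(requestAttributes.isEmpty());
    }
}
```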

      As an additional improvement, we will add logic to release the PortletContent streams once the buffered streams have been fully drained to the servlet response stream.
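      A minimal sketch of that release step, assuming a hypothetical buffer class (not the actual PortletContentImpl): drain the buffered content to the response, then null out the buffer reference so the bytes cannot linger on a pooled thread.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative buffer that releases its backing storage immediately after
// its content has been drained to the servlet response stream.
public class PortletContentBuffer {

    private ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    public OutputStream getBuffer() {
        return buffer;
    }

    // Drain the buffered content to the response, then release the buffer
    // even if writing fails, so nothing sticks to a long-lived thread.
    public void writeTo(OutputStream response) throws IOException {
        try {
            buffer.writeTo(response);
            response.flush();
        } finally {
            release();
        }
    }

    public void release() {
        buffer = null; // drop the reference; the byte[] becomes collectable
    }

    public boolean isReleased() {
        return buffer == null;
    }

    public static void main(String[] args) throws IOException {
        PortletContentBuffer content = new PortletContentBuffer();
        content.getBuffer().write("<p>rendered portlet markup</p>".getBytes());

        ByteArrayOutputStream response = new ByteArrayOutputStream();
        content.writeTo(response);
        System.out.println(response.toString());
        System.out.println(content.isReleased());
    }
}
```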

      I also want to investigate stream pooling properly at this time.

        Activity

        David Sean Taylor created issue -
        David Sean Taylor made changes -
        Field Original Value New Value
        Status Open [ 1 ] In Progress [ 3 ]
        Vitaly Baranovsky added a comment -

        The same problem occurs when the keep-alive feature of the application server's servlet container is enabled.
        For example, I run a Tomcat server with keep-alive turned on. Under production load, my keep-alive threads at times hold up to 900 MB of Jetspeed data structures.
        So anyone affected should turn off the application server's keep-alive feature until this issue is fixed.

        Vitaly Baranovsky added a comment -

        I really don't understand why Jetspeed stores PortletContentImpl on threads (or requests) when I have turned portlet caching off.
        It could simply write the portlet content to the response stream while rendering the page, without storing it in any intermediate buffer. Why does Jetspeed store portlet content at all?

        Vitaly Baranovsky added a comment -

        The same problem occurs with Tomcat. By default, Tomcat uses minSpareThreads=25 and maxSpareThreads=75, so the thread pool holds between 25 and 75 threads, and memory usage grows very large after the portal has been running for a while. For example, our site hangs for 5-10 minutes roughly every 40 minutes; during that time the garbage collector tries, without success, to free the tenured generation.
        I took a memory snapshot at the moment of such a hang and saw many ContentFragmentImpl instances inside ThreadWithAttributes, consuming a great deal of memory.
        I think the workaround is to set minSpareThreads=0 and maxSpareThreads=0 in Tomcat, and people using Jetspeed will have to wait for the next release. I've asked about this on the Tomcat mailing list: http://www.nabble.com/Will-be-any-problem-if-I-set-minSpareThreads%3D0-maxSpareThreads%3D0-to15443784.html

        Is there a simple way to fix this issue in the Jetspeed 2.1.3 source code?
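        For reference, the workaround described above would be applied on the HTTP Connector in Tomcat's conf/server.xml; this is a sketch for the connector-level thread settings of Tomcat 5.x/6.x era configurations (port and maxThreads values here are only examples):

```xml
<!-- conf/server.xml: keep no spare threads alive between requests, so
     pooled threads (and any thread locals stuck to them) are not retained.
     Trade-off: each burst of traffic pays the cost of creating threads. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="150"
           minSpareThreads="0"
           maxSpareThreads="0" />
```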

        David Sean Taylor added a comment -

        I applied a patch to both the 2.1.2 and 2.1.3 POSTRELEASE branches, as well as to the 2.2 branch.
        I suggest taking the patch from there for the source, or you can take pre-built jars from here:
        http://www.bluesunrise.com/jetspeed-2/2.1.3-POST/

        David Sean Taylor added a comment -

        We have done extensive testing on this over the last month. Using the IBM Heap Analyzer, I am no longer seeing this leak.

        David Sean Taylor made changes -
        Status In Progress [ 3 ] Resolved [ 5 ]
        Resolution Fixed [ 1 ]
        Ate Douma made changes -
        Status Resolved [ 5 ] Closed [ 6 ]
        Transition                Time In Source Status  Execution Times  Last Executer      Last Execution Date
        Open -> In Progress       2m 29s                 1                David Sean Taylor  30/Jan/08 01:20
        In Progress -> Resolved   126d 15h 49m           1                David Sean Taylor  04/Jun/08 17:10
        Resolved -> Closed        1217d 3h 57m           1                Ate Douma          04/Oct/11 21:07

          People

          • Assignee:
            David Sean Taylor
          • Reporter:
            David Sean Taylor
          • Votes:
            1
          • Watchers:
            2

            Dates

            • Created:
              Updated:
              Resolved: