OPENJPA-1648: Slice thread pool breaks down under high concurrency

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.1.0
    • Component/s: slice
    • Labels: None

      Description

Slice's thread pool breaks down under heavy usage [1]. This is due to a poor choice of thread pool.

Also, creating a new thread pool for every flush() is inefficient.

A simple solution would be to use a cached thread pool, which is bounded only by the system's capacity for concurrent native threads.
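
For illustration, a minimal sketch of that idea (the class and method names here are hypothetical, standing in for the actual Slice internals):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Hypothetical sketch, not the actual Slice code: one shared cached pool
    // reused across flush() calls instead of a new pool per flush.
    public class ParallelFlusher {
        // A cached pool spawns threads on demand and reclaims threads idle
        // for 60 seconds, so it is bounded only by the system's capacity
        // for native threads.
        private static final ExecutorService POOL = Executors.newCachedThreadPool();

        void flush(List<Runnable> sliceFlushTasks) throws Exception {
            List<Future<?>> results = new ArrayList<Future<?>>();
            for (Runnable task : sliceFlushTasks)
                results.add(POOL.submit(task));
            for (Future<?> f : results)
                f.get(); // wait for every slice; rethrows a failed slice's error
        }
    }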

        Activity

        Michael Dick added a comment -

        If there's more work to be done for this issue, please re-open it or open a sub-task for the remaining work.

        Simon So added a comment -

        Hi Pinaki,

        I tried wrapping the block in a try/finally, with threadPool.shutdown() in the finally block.

        That seems to work: 100k transactions, no problem now.
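
        Roughly, the pattern I used is this (an illustrative sketch; the names are not the actual Slice code):

            import java.util.concurrent.ExecutorService;
            import java.util.concurrent.Executors;

            // Illustrative sketch: when a pool is created per flush(), shutting
            // it down in finally releases its threads even if a slice flush fails.
            class PerFlushPool {
                void flush(Runnable... sliceTasks) {
                    ExecutorService threadPool = Executors.newFixedThreadPool(sliceTasks.length);
                    try {
                        for (Runnable task : sliceTasks)
                            threadPool.submit(task);
                    } finally {
                        threadPool.shutdown(); // finishes submitted tasks, then frees the threads
                    }
                }
            }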

        You nailed it right on with the excessive pool creation in every flush().

        I am not sure a CachedThreadPool would have solved the problem. You would still have more flush() calls coming along the way, and then we would need to define a RejectedExecutionHandler; we can't abort and we can't discard.
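
        (For what it's worth, one handler that neither aborts nor discards is CallerRunsPolicy, which makes the submitting thread run the task itself. A sketch, with assumed pool sizes:)

            import java.util.concurrent.LinkedBlockingQueue;
            import java.util.concurrent.ThreadPoolExecutor;
            import java.util.concurrent.TimeUnit;

            // Hypothetical configuration: a bounded pool whose saturation policy
            // neither aborts nor discards; CallerRunsPolicy runs the rejected
            // task on the submitting thread, throttling callers instead of failing.
            class BoundedPool {
                static final ThreadPoolExecutor POOL = new ThreadPoolExecutor(
                    4,                                      // core threads (assumed size)
                    16,                                     // max threads (assumed size)
                    60L, TimeUnit.SECONDS,                  // reclaim idle threads after 60s
                    new LinkedBlockingQueue<Runnable>(100), // bounded work queue (assumed)
                    new ThreadPoolExecutor.CallerRunsPolicy());
            }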

        Since the pool is going to go out of scope by the time flush() is done, we probably need to shut it down before it goes out of scope (so that expired threads no longer hang around).

        I will keep on stressing the stack and see if there are more problems.

        Cheers,
        Simon

        http://openjpa.208410.n2.nabble.com/Spring-3-0-2-OpenJPA-2-0-Slice-OutOfMemoryError-shortly-after-pounding-1000-threads-to-the-system-td5000822.html#a5000822


          People

          • Assignee: Pinaki Poddar
          • Reporter: Pinaki Poddar
          • Votes: 0
          • Watchers: 0
