Geronimo / GERONIMO-90

Connection Management preview


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: core
    • Labels: None

    Description

      Proposal for an implementation of the Connection Management section of the JCA specification.

      GeronimoConnectionManager:

      The ConnectionManager SPI interface has been implemented and delegates the allocation of connection handles to a pool of ManagedConnections. At present, the ConnectionManager is really simple: it delegates directly to the pool. However, one still needs to hook the Transaction and Security services into the allocateConnection method. AFAIK, this should be a "simple" task: a ConnectionFactory MUST (as required by the specification) call allocateConnection in the same thread as the application component requesting the connection. In other words, two ThreadLocals (one tied to our TM and one tied to our Security Manager) should do the trick, as sketched below.
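      A minimal sketch of that idea, assuming a hypothetical Pool contract and opaque context objects (the actual Geronimo types differ):

      import javax.resource.ResourceException;
      import javax.resource.spi.ConnectionManager;
      import javax.resource.spi.ConnectionRequestInfo;
      import javax.resource.spi.ManagedConnectionFactory;

      // Sketch only: Pool and the two context objects are illustrative stand-ins.
      public class GeronimoConnectionManager implements ConnectionManager {

          /** Hypothetical pool contract; the real partitioned pool is richer. */
          public interface Pool {
              Object allocateConnection(ManagedConnectionFactory mcf,
                                        ConnectionRequestInfo info,
                                        Object txContext, Object securityContext)
                      throws ResourceException;
          }

          // Populated by the TM and the Security Manager on the component's thread.
          private static final ThreadLocal<Object> TX = new ThreadLocal<Object>();
          private static final ThreadLocal<Object> SECURITY = new ThreadLocal<Object>();

          private transient Pool pool; // transient: see the Serializable open issue below

          public GeronimoConnectionManager(Pool pool) {
              this.pool = pool;
          }

          public Object allocateConnection(ManagedConnectionFactory mcf,
                                           ConnectionRequestInfo info)
                  throws ResourceException {
              // Because the spec guarantees this runs on the requesting component's
              // thread, the ThreadLocals expose the right contexts here.
              return pool.allocateConnection(mcf, info, TX.get(), SECURITY.get());
          }
      }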

      Partition:

      The specification does not define how connection pooling should be implemented; however, it provides some non-prescriptive guidelines. One of them is to partition the pool, and this is basically what I have decided to implement: the pool is partitioned on a per-ManagedConnectionFactory basis. At present, it is further partitioned into idle, active, factory, and destroy partitions. The general idea of this design is to define distinct sets of behavior depending on the kind of partition.

      Examples:
      The factory partition is in charge of creating/allocating new connection handles. When its allocateConnection method is called, it decides whether a new ManagedConnection should be created or an existing one can be re-used (see the sketch below).
      The XA partition (to be implemented) is in charge of creating/allocating new transacted connection handles. When its allocateConnection is called, it enlists the ManagedConnection with our TM and then gets a connection handle from this enlisted ManagedConnection.
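      A sketch of the factory partition's re-use-or-create decision, using the standard JCA SPI calls (the FactoryPartition shape itself is a hypothetical simplification):

      import java.util.Set;

      import javax.resource.ResourceException;
      import javax.resource.spi.ConnectionRequestInfo;
      import javax.resource.spi.ManagedConnection;
      import javax.resource.spi.ManagedConnectionFactory;
      import javax.security.auth.Subject;

      // Sketch of the factory partition's re-use-or-create decision.
      class FactoryPartition {
          private final ManagedConnectionFactory mcf;
          private final Set<ManagedConnection> idle; // view of the idle partition

          FactoryPartition(ManagedConnectionFactory mcf, Set<ManagedConnection> idle) {
              this.mcf = mcf;
              this.idle = idle;
          }

          Object allocateConnection(Subject subject, ConnectionRequestInfo info)
                  throws ResourceException {
              // Ask the resource adapter whether an idle ManagedConnection matches...
              ManagedConnection mc = mcf.matchManagedConnections(idle, subject, info);
              if (mc == null) {
                  // ...and create a new one only if nothing can be re-used.
                  mc = mcf.createManagedConnection(subject, info);
              }
              return mc.getConnection(subject, info);
          }
      }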

      PartitionEventSupport, PartitionEvent and PartitionListener:

      Inter-partition events can be propagated via an AWT-like event model. This mechanism is used, for example, by the factory partition: it monitors the idle and destroy partitions in order to decide how to serve a new allocation request. More precisely, if a ManagedConnection is added to the idle partition, then a permit to try matchManagedConnections is added; if a ManagedConnection is added to the destroy partition, then a permit to create a new ManagedConnection is added.
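      A sketch of such an AWT-like event model (the names follow the classes above, but the bodies are illustrative):

      import java.util.EventListener;
      import java.util.EventObject;
      import java.util.List;
      import java.util.concurrent.CopyOnWriteArrayList;

      // Illustrative event model: a partition fires an event when a
      // ManagedConnection enters or leaves it.
      class PartitionEvent extends EventObject {
          public static final int ADDED = 0;
          public static final int REMOVED = 1;
          private final int type;

          PartitionEvent(Object sourcePartition, int type) {
              super(sourcePartition);
              this.type = type;
          }

          int getType() { return type; }
      }

      interface PartitionListener extends EventListener {
          void partitionChanged(PartitionEvent event);
      }

      class PartitionEventSupport {
          private final List<PartitionListener> listeners =
                  new CopyOnWriteArrayList<PartitionListener>();

          void addPartitionListener(PartitionListener listener) {
              listeners.add(listener);
          }

          void firePartitionEvent(PartitionEvent event) {
              for (PartitionListener listener : listeners) {
                  listener.partitionChanged(event);
              }
          }
      }

      In this picture, the factory partition registers itself as a PartitionListener on the idle and destroy partitions and converts ADDED events into match or create permits.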

      PartitionRecycler and PartitionRecycling:

      Partitions may be recycled. For instance, if a ManagedConnection sits idle for too long, it may become eligible for recycling (destruction, in the case of an idle ManagedConnection).
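      For example, an idle-timeout recycler might look like this (a sketch; the actual PartitionRecycler/PartitionRecycling contracts may differ):

      import java.util.Iterator;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      import javax.resource.ResourceException;
      import javax.resource.spi.ManagedConnection;

      // Sketch: destroys ManagedConnections that sit idle beyond a timeout.
      class IdlePartitionRecycler implements Runnable {
          private final Map<ManagedConnection, Long> idleSince =
                  new ConcurrentHashMap<ManagedConnection, Long>();
          private final long maxIdleMillis;

          IdlePartitionRecycler(long maxIdleMillis) {
              this.maxIdleMillis = maxIdleMillis;
          }

          void enteredIdle(ManagedConnection mc) {
              idleSince.put(mc, Long.valueOf(System.currentTimeMillis()));
          }

          public void run() { // scheduled periodically, e.g. by a Timer
              long now = System.currentTimeMillis();
              for (Iterator<Map.Entry<ManagedConnection, Long>> it =
                      idleSince.entrySet().iterator(); it.hasNext();) {
                  Map.Entry<ManagedConnection, Long> entry = it.next();
                  if (now - entry.getValue().longValue() > maxIdleMillis) {
                      it.remove();
                      try {
                          entry.getKey().destroy(); // recycle = destroy when idle
                      } catch (ResourceException ignored) {
                          // a failed destroy still drops the entry
                      }
                  }
              }
          }
      }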

      LoggerFactory:

      The inner workings of a ManagedConnectionFactory and its ManagedConnections can be traced via a PrintWriter. LoggerFactory defines the contract of a PrintWriter factory that can be backed by various output streams.
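      A sketch of such a contract (hypothetical shape, shown against the standard setLogWriter hook):

      import java.io.OutputStream;
      import java.io.PrintWriter;

      // Sketch of the LoggerFactory contract: produce PrintWriters backed by an
      // arbitrary output stream, suitable for ManagedConnectionFactory.setLogWriter.
      interface LoggerFactory {
          PrintWriter getLogWriter();
      }

      class StreamLoggerFactory implements LoggerFactory {
          private final OutputStream out;

          StreamLoggerFactory(OutputStream out) {
              this.out = out;
          }

          public PrintWriter getLogWriter() {
              return new PrintWriter(out, true); // autoflush on each println
          }
      }

      Usage would be along the lines of mcf.setLogWriter(new StreamLoggerFactory(System.err).getLogWriter()).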

      Open issues:
      GeronimoConnectionManager MUST be Serializable. I believe this requirement exists to support ConnectionFactories that are Serializable but not Referenceable. The current implementation is a rather big instance (it extends AbstractContainer) and should not be. Moreover, the connection pool used by the implementation is declared transient, and it should not be: one needs to define a mechanism to get a handle on the pool without having to reference it (I do not want a JMX lookup, because that is definitely not the right bus to push allocation requests through).
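      One possible (purely hypothetical) mechanism: keep the pool transient and re-attach it after deserialization through a process-local registry, avoiding both pool serialization and a JMX lookup:

      import java.io.ObjectStreamException;
      import java.io.Serializable;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Sketch: the pool stays transient; readResolve re-acquires it by name
      // from an in-process registry instead of serializing it or using JMX.
      class PoolRegistry {
          static final Map<String, Object> POOLS =
                  new ConcurrentHashMap<String, Object>();
      }

      class SerializableConnectionManager implements Serializable {
          private final String poolName;
          private transient Object pool;

          SerializableConnectionManager(String poolName, Object pool) {
              this.poolName = poolName;
              this.pool = pool;
              PoolRegistry.POOLS.put(poolName, pool);
          }

          private Object readResolve() throws ObjectStreamException {
              pool = PoolRegistry.POOLS.get(poolName); // re-attach after deserialization
              return this;
          }
      }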

      A thorough code coverage/review MUST be done. The goal is to make sure that the implementation is thread-safe. The implementation has been stressed with 10 concurrent clients, each opening and closing a connection 100 times. During this stress test, no concurrent modification exceptions were raised (things always break when you do not want them to).

      The current implementation uses dumb synchronization; one should consider the concurrent API developed by Doug Lea. The stress test (20 concurrent clients, 100 requests each) executed in ~7500 ms on my box (P4 2 GHz). However, the implementation does not scale well with the maximum number of ManagedConnections, which is a pity for a pool. I have identified the issue: when idle connections are available, matchManagedConnections is invoked under synchronization in order to reserve all the ManagedConnections passed to this method.
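      A permit-based alternative, sketched here with java.util.concurrent.Semaphore standing in for the equivalent class in Doug Lea's library, would confine blocking to pool exhaustion instead of serializing every match (BoundedPool is a hypothetical simplification):

      import java.util.Set;
      import java.util.concurrent.CopyOnWriteArraySet;
      import java.util.concurrent.Semaphore;

      import javax.resource.ResourceException;
      import javax.resource.spi.ConnectionRequestInfo;
      import javax.resource.spi.ManagedConnection;
      import javax.resource.spi.ManagedConnectionFactory;
      import javax.security.auth.Subject;

      // Sketch: permits bound the number of connections in use; matching is
      // not performed under a pool-wide lock.
      class BoundedPool {
          private final ManagedConnectionFactory mcf;
          private final Semaphore permits;
          private final Set<ManagedConnection> idle =
                  new CopyOnWriteArraySet<ManagedConnection>();

          BoundedPool(ManagedConnectionFactory mcf, int maxInUse) {
              this.mcf = mcf;
              this.permits = new Semaphore(maxInUse);
          }

          ManagedConnection acquire(Subject subject, ConnectionRequestInfo info)
                  throws ResourceException, InterruptedException {
              permits.acquire(); // block only when the pool is exhausted
              ManagedConnection mc = mcf.matchManagedConnections(idle, subject, info);
              if (mc != null && idle.remove(mc)) {
                  return mc; // atomically reserved an idle connection
              }
              return mcf.createManagedConnection(subject, info);
          }

          void release(ManagedConnection mc) {
              idle.add(mc);
              permits.release();
          }
      }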


          People

            Assignee: Unassigned
            Reporter: Gianny Damour
