ManifoldCF / CONNECTORS-1112

Heavy contention over repository connections



    Description

      In a recent test using a (slightly customised) version of the JCIFS repository connector, I noticed the Stuffer thread spends more than 50% of its time in IRepositoryConnectorPool#grab (split between wait() and talking to ZK).
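
      For illustration, here is a rough sketch of the caching idea (not the attached patch itself): hold on to a connection grabbed earlier in the same stuffer pass and only go back to the pool on a cache miss. The Pool interface and the grab/release names below are simplified stand-ins, not the actual IRepositoryConnectorPool signatures.

      {code:java}
      import java.util.HashMap;
      import java.util.Map;

      class StufferConnectionCache {
        // Simplified stand-in for IRepositoryConnectorPool.
        interface Pool {
          Object grab(String connectionName) throws Exception;
          void release(String connectionName, Object connector) throws Exception;
        }

        // Per-thread cache, so a plain HashMap is fine.
        private final Map<String,Object> cache = new HashMap<>();

        // Reuse a connector grabbed earlier in the same pass; only a cache
        // miss touches the pool (and its lock).
        Object grabCached(Pool pool, String connectionName) throws Exception {
          Object connector = cache.get(connectionName);
          if (connector == null) {
            connector = pool.grab(connectionName);
            cache.put(connectionName, connector);
          }
          return connector;
        }

        // Return everything to the pool at the end of the pass.
        void releaseAll(Pool pool) throws Exception {
          for (Map.Entry<String,Object> e : cache.entrySet())
            pool.release(e.getKey(), e.getValue());
          cache.clear();
        }
      }
      {code}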

      I wondered whether the StufferThread#run method could benefit from caching repository connections along the lines of CONNECTORS-1094, so I patched it up (see attachment), which brought the time down somewhat. Upon closer inspection I saw that the calls to RepositoryConnectorPool#grab and RepositoryConnectorPool#releaseMultiple were constantly blocked waiting on a lock owned by the Idle cleanup thread. It turns out the Idle cleanup thread does some relatively expensive work (at least for the JCIFS connector) once it has acquired the lock on ConnectorPool#poolHash: it reads a config property in SharedDriveConnector#setThreadContext, which involves a trip to ZK and some XML parsing, and that in turn results in loading and processing JAR files. I subsequently monitored other threads and found that many can be blocked for prolonged periods as they try to acquire and release repository connections.
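
      To make the blocking pattern concrete, here is a rough sketch (illustrative names only, not the real ConnectorPool code) of why expensive work inside the pool lock stalls every grab/release:

      {code:java}
      class PoolLockSketch {
        private final Object poolLock = new Object(); // stands in for the ConnectorPool#poolHash lock

        void idleCleanup() {
          synchronized (poolLock) {
            // The config read triggered via setThreadContext happens here:
            // a ZK round trip, XML parsing and JAR loading, all while every
            // other thread queues up on poolLock.
            expensiveSetThreadContext();
          }
        }

        void grab() {
          synchronized (poolLock) {
            // Cheap bookkeeping, but it cannot start until idleCleanup()
            // releases the lock.
          }
        }

        private void expensiveSetThreadContext() { /* ZK + XML + JARs */ }
      }
      {code}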

      Caching the values of config properties almost eliminates the time the Idle cleanup thread spends under lock, and it is easy enough to implement for the JCIFS connector. It would be great if this could be done in a more generic way, to prevent a slight inefficiency in one connector's code from slowing down the entire system.
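
      A minimal sketch of the generic fix, assuming a memoizing wrapper around the expensive property read; ConfigPropertyCache and its loader parameter are hypothetical names, not existing ManifoldCF classes:

      {code:java}
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;
      import java.util.function.Function;

      class ConfigPropertyCache {
        private final Map<String,String> values = new ConcurrentHashMap<>();

        // Return the cached value, invoking the expensive loader
        // (ZK round trip + XML parse) at most once per property.
        String get(String propertyName, Function<String,String> loader) {
          return values.computeIfAbsent(propertyName, loader);
        }

        // Drop a cached value, e.g. when the configuration changes.
        void invalidate(String propertyName) {
          values.remove(propertyName);
        }
      }
      {code}

      With something along these lines, SharedDriveConnector#setThreadContext would pay the ZK/XML cost on the first call only, keeping the Idle cleanup thread's time under lock negligible.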

      Attachments

        1. CONNECTORS-1112.patch (14 kB, Aeham Abushwashi)


          People

            Assignee: Karl Wright (kwright@metacarta.com)
            Reporter: Aeham Abushwashi (aeham.abushwashi)
            Votes: 0
            Watchers: 2

            Dates

              Created:
              Updated:
              Resolved: