Details
Type: Improvement
Status: Open
Priority: Major
Resolution: Unresolved
Description
Consider the following scenario:

1. Two separate threads are launched at the same time with identical HTTP requests through CachingHttpClient.
2. Both threads look up the same URI in the cache at [almost] the same time and find no cached response for that URI.
3. Both threads fall back to the backend HttpClient and issue identical requests to the origin server.
4. Both threads retrieve the resource and attempt to store it in the cache.

As a result, the same resource is retrieved from the server twice and stored in the cache twice. The described behavior is clearly inefficient.
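The duplicate-fetch race can be made concrete with a small self-contained simulation (all names here — DuplicateFetchDemo, execute, backendHits — are hypothetical illustrations, not HttpClient API; a CountDownLatch is used only to make the interleaving deterministic for the demo):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Two threads miss the cache at the same time, so both hit the backend
// and both store the response: the race described in steps 2-4 above.
public class DuplicateFetchDemo {
    static final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger backendHits = new AtomicInteger();
    // Forces both threads past the cache lookup before either one stores,
    // making the duplicate fetch deterministic for this demonstration.
    static final CountDownLatch bothMissed = new CountDownLatch(2);

    static String execute(String uri) throws InterruptedException {
        String cached = cache.get(uri);        // cache lookup
        if (cached != null) return cached;
        bothMissed.countDown();
        bothMissed.await();                    // both threads have now missed
        backendHits.incrementAndGet();         // backend request
        String body = "response-for-" + uri;
        cache.put(uri, body);                  // store in cache
        return body;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            try { execute("http://example.com/resource"); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("backend hits: " + backendHits.get()); // prints 2, not 1
    }
}
```

With a shared per-URI lock, the second thread would instead wait and then be satisfied from the cache, leaving a single backend hit.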
Suggested fix: introduce a read-write locking mechanism that blocks concurrent requests for the same URI until one of them has either received a response header indicating that the response is not cacheable, or a cacheable response has been fully retrieved and stored in the cache. Proposed pseudocode follows:
cachingClient.execute(url) {
    if (lock_count(url) > 0)
        // another request is already fetching this URI; wait for it to finish
        lock = lockingFactory.acquireReadLock(url);
    else
        // first request for this URI; this thread will populate the cache
        lock = lockingFactory.acquireWriteLock(url);
    response = satisfyFromCache(url);
    if (response == null) {
        if (lock.isWriteLock())
            // cache miss while holding the write lock: fetch and store
            response = satisfyFromServerAndStoreInCache(url);
        else
            // cache still empty after the writer finished: the response
            // proved non-cacheable, so fetch it directly from the server
            response = satisfyFromServer(url);
    }
    lock.release();
    return response;
}
where the lockingFactory instance is shared by multiple instances of CachingHttpClient.
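One possible shape for the shared lockingFactory is a per-URI lock map built on java.util.concurrent — a minimal sketch only; the class and method names (UriLockingFactory, lockFor) are illustrative, not existing HttpClient API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical lockingFactory: hands out one ReentrantReadWriteLock per URI,
// shared by every CachingHttpClient instance that holds this factory.
public class UriLockingFactory {
    private final ConcurrentMap<String, ReentrantReadWriteLock> locks =
            new ConcurrentHashMap<>();

    // Returns the shared lock for the given URI, creating it on first use.
    // computeIfAbsent guarantees all callers see the same lock instance.
    public ReentrantReadWriteLock lockFor(String uri) {
        return locks.computeIfAbsent(uri, u -> new ReentrantReadWriteLock());
    }
}
```

A caller could also avoid the racy lock_count(url) check entirely by calling writeLock().tryLock() first: the thread that wins becomes the writer and fetches from the origin, while every other thread falls back to readLock().lock() and blocks until the writer releases. Note that this simple map never evicts entries, so a production version would need to discard locks for URIs that are no longer in flight.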