No, after I applied the patch I have never seen a lockup.
(The oldest Solr collections have been running in CNET for 2 years now,
and I've never seen this happen.) What I have seen is that exact
exception when the server died, restarted, and then couldn't grab the
write lock... normally due to a heap that wasn't big enough, causing
excessive GC and leading Resin's wrapper to restart the container.
Another reason to use native locking. From the Lucene
NativeFSLockFactory javadocs: "Furthermore, if the JVM crashes, the OS
will free any held locks, whereas SimpleFSLockFactory will keep the
locks held, requiring manual removal before re-running Lucene."
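To make the javadoc's point concrete, here is a minimal sketch of the mechanism NativeFSLockFactory builds on: an OS-level advisory lock taken via java.nio. This is not Lucene's actual code; the class and method names are hypothetical. The key property is that if the JVM holding the lock dies, the OS releases it automatically, whereas a marker-file lock (the SimpleFSLockFactory approach) would be left on disk.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class NativeLockDemo {
    // Try to take an OS-level advisory lock on lockFile.
    // Returns true if the lock was acquired (and then released),
    // false if another process currently holds it.
    public static boolean acquireAndRelease(Path lockFile) throws IOException {
        try (FileChannel ch = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            FileLock lock = ch.tryLock(); // null if held by another process
            if (lock == null) {
                return false;
            }
            lock.release();
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(acquireAndRelease(Path.of("write.lock"))
                ? "lock acquired" : "lock held by another process");
    }
}
```

Because the lock lives in the kernel rather than in the filesystem's contents, a crashed writer never needs manual cleanup before the index can be reopened.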
My hunch (and that's all it is) is that people seeing or not seeing the
issue may come down to usage patterns. My project is heavily focused on
low indexing latency, so we're doing huge numbers of
adds/deletes/commits/searches in very fast succession and from multiple
clients. A more batch-oriented update usage pattern may not see the issue.
The patch, as is, doesn't change any API or cause any change
to existing functionality whatsoever unless you use the new option in
solrconfig. I would argue that using native locking should be the default.
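For reference, enabling the option would look something like the following in solrconfig.xml; the exact element placement here is a sketch and may differ from what the patch actually adds:

```xml
<!-- hypothetical solrconfig.xml fragment: select the native FS lock -->
<mainIndex>
  <!-- "native" uses NativeFSLockFactory (OS releases the lock on JVM
       crash); "simple" uses SimpleFSLockFactory (lock file must be
       removed by hand after a crash) -->
  <lockType>native</lockType>
</mainIndex>
```

Leaving the element out entirely would keep the pre-patch behavior, which is why the change is backward compatible.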