> That being said: your interpretation makes this feature less useful to clients (IMHO).
> From the client's point of view, it should be irrelevant how an overlapping update occurred.
> When the client gets a property value, modifies it, and can write it back although the underlying
> property has changed, then that is an overlapping update that wasn't caught.
That's again assuming that the "get property" operation is included in the control flow. We basically have two separate issues here:
1) The getProperty(), setProperty(), save() case. This is equivalent to a database client doing a SELECT followed by an UPDATE on the same row. A database that supports the isolation levels REPEATABLE READ or SERIALIZABLE will guarantee that if the transaction succeeds, no other transaction can have updated the row between the SELECT and UPDATE statements. Jackrabbit has never supported such isolation levels, so their absence isn't a regression. We can discuss implementing higher isolation levels as a new feature request, but note that the feature a) has a high design and runtime cost, b) is not needed by many (most?) clients, and c) there's already a standard solution (JCR locks) for clients that do need the functionality. In any case this is IMHO outside the scope of this issue.
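To make the lost-update pattern in case 1) concrete, here is a minimal, self-contained Java sketch. It is not the real JCR API: a plain map stands in for the repository, and the method names merely echo getProperty()/setProperty()/save(). It shows that without a REPEATABLE READ style guarantee, a write based on a stale read silently overwrites a concurrent update.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a repository: a shared property store
// with no read isolation whatsoever.
public class LostUpdateSketch {
    static final Map<String, String> store = new HashMap<>();

    // Simulates getProperty(): a plain read, no read lock taken.
    static String getProperty(String name) {
        return store.get(name);
    }

    // Simulates setProperty() followed by save(): a plain write.
    static void saveProperty(String name, String value) {
        store.put(name, value);
    }

    public static void main(String[] args) {
        store.put("counter", "1");

        // Session A reads the value...
        String seenByA = getProperty("counter");

        // ...meanwhile session B updates the same property and saves.
        saveProperty("counter", "100");

        // Session A writes back a value derived from its stale read.
        // Under REPEATABLE READ / SERIALIZABLE this save would fail;
        // without such isolation it succeeds and B's update is lost.
        saveProperty("counter", String.valueOf(Integer.parseInt(seenByA) + 1));

        System.out.println(store.get("counter")); // prints "2": B's 100 was lost
    }
}
```

Detecting this overlap is exactly what the higher isolation levels (or an explicit JCR lock around the read-modify-write) would buy, and what plain get/set/save does not.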
2) The setProperty(), save() case. This is equivalent to a database client doing a prepareStatement followed by executeUpdate on an UPDATE statement. I still don't see how or why such a client could care about concurrent updates (except when the parent node gets removed), so the fact that we no longer throw exceptions for some such cases is IMHO an improvement rather than a regression. Based on this reasoning I propose that we resolve this issue as Won't Fix and perhaps create a new improvement issue to get rid of the remaining InvalidItemStateExceptions from concurrent property updates.
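For contrast, here is the same kind of sketch for case 2), a "blind write" that never reads the property first (again plain Java with a map as a hypothetical stand-in, not the real JCR API). Since neither writer's value depends on anything it read, there is no stale state to protect, which is why throwing an exception for the concurrent update would not help the client.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in: two sessions each do setProperty() + save()
// without reading the current value first (a "blind write").
public class BlindWriteSketch {
    static final Map<String, String> store = new HashMap<>();

    // Simulates setProperty() followed by save().
    static void saveProperty(String name, String value) {
        store.put(name, value);
    }

    public static void main(String[] args) {
        // Sessions A and B both write the same property; whichever save
        // lands last wins. Neither result depends on a previously read
        // value, so an InvalidItemStateException here would protect no
        // invariant the client actually relies on.
        saveProperty("title", "from-session-A");
        saveProperty("title", "from-session-B");

        System.out.println(store.get("title")); // prints "from-session-B"
    }
}
```

Either ordering of the two saves yields one of the two intended values, which is all a blind writer can expect.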