Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.0-M2
    • Fix Version/s: 2.0.0-M2
    • Component/s: jpa
    • Labels:
      None

      Activity

      Albert Lee added a comment -

      This issue is functionally complete. Outstanding issues raised by Pinaki will be addressed in a future JIRA.

      Albert Lee.

      Albert Lee added a comment -

      Iteration summary:

      Support the new JPA LockModeType values in the find, lock and refresh methods of the EntityManager interface. A new "mixed" lock manager is introduced, implementing the new mixed optimistic and pessimistic entity locking semantics.
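
      As an illustration (the entity class, id value and persistence-unit name are made up, not taken from the patch), the new lock modes are requested through the standard JPA 2 EntityManager methods:

      // imports: javax.persistence.EntityManager, javax.persistence.Persistence, javax.persistence.LockModeType
      EntityManager em = Persistence.createEntityManagerFactory("demo-pu").createEntityManager();
      em.getTransaction().begin();

      // find() with a pessimistic lock mode
      MyEntity e = em.find(MyEntity.class, 1, LockModeType.PESSIMISTIC_WRITE);

      // lock() an already-managed instance
      em.lock(e, LockModeType.PESSIMISTIC_READ);

      // refresh() with a lock mode
      em.refresh(e, LockModeType.OPTIMISTIC_FORCE_INCREMENT);

      em.getTransaction().commit();
      em.close();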

      Albert Lee.

      Albert Lee added a comment -

      Pinaki,

      Thanks for your time reviewing the code and your architectural insight into the code base.

      There are 2 functions in the implementation:

      • setting the correct (JPA 2) lock level in the fetch configuration so that the datastore can issue the correct SQL
      • processing the related input properties and setting up the appropriate fetch configuration values

      1) The original thought was that the new pessimistic lock mode types are only related to the JPA spec, so their processing is not pushed down to, say, the fetch configuration in the kernel layer, which knows nothing about the new lock levels; I am trying to encapsulate this in the persistence layer. The attribute used to implement and differentiate PESSIMISTIC_READ/WRITE is the isolation level set in the fetch configuration, hence the code gets the dictionary to see whether the database supports it and then sets the isolation in the JDBCFetchConfiguration. Both objects are in the openjpa-jdbc layer. We can skip the database check (and avoid touching the JDBCConfiguration object) but still have to find a way to set the JDBCFetchConfiguration isolation level somehow. If we can find a solution to this problem, then we can avoid the dependency of openjpa-persistence on openjpa-jdbc. One thought is to defer the hint/property processing further down the path; by setting the isolation as a property, we can decouple the dependency (see the sketch after this list).

      2) If we process the JPA behavior in the FetchConfiguration, that also means the kernel is aware of the JPA personality's behavior. Is that what we want?

      3) Point taken. I will move the (expensive) getPropertyKeys() call out of the loop and request it once.

      4) The intent of using IntValue to process the property value is to keep exactly the same behavior as if it were being processed by the configuration plugin framework. Since the property map takes Object as its value type, it can accept many valid types.

      5) The current implementation will do at most 1 push, and 0 if no fetch configuration value is changed. That is what fcPushed is for. I was debating between a) always doing 1 push and setting the values in the fetch configuration (cleaner code path, no if(!fcPushed) conditional processing) and b) pushing only if needed (the current implementation).
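
      To make the decoupling idea in (1) concrete, here is a rough sketch, for illustration only (the class name and the exact lock-mode-to-isolation mapping are made up, not the committed code): the persistence layer only computes a generic isolation value, which could then be recorded as an ordinary property/hint on the kernel-level fetch configuration, and only the JDBC layer interprets it against the dictionary.

      // hypothetical persistence-layer helper: no openjpa-jdbc types referenced
      // imports: javax.persistence.LockModeType, java.sql.Connection
      final class LockIsolationHint {
          static Integer toIsolation(LockModeType mode) {
              switch (mode) {
                  case PESSIMISTIC_READ:
                      return Connection.TRANSACTION_REPEATABLE_READ;   // illustrative mapping only
                  case PESSIMISTIC_WRITE:
                  case PESSIMISTIC_FORCE_INCREMENT:
                      return Connection.TRANSACTION_SERIALIZABLE;      // illustrative mapping only
                  default:
                      return null;  // optimistic modes: leave the isolation level unchanged
              }
          }
      }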

      Comments/suggestions are welcome.
      Albert Lee.

      Albert Lee added a comment -

      Donald,

      Per our separate conversation, the current OpenJPA implementation uses the statement-level setQueryTimeout for both lock timeout and query timeout, which is stored in the fetch configuration. "OPENJPA-958,959: timeout on query" is addressing the separation of the two property/hint definitions. A new JIRA can be used to document your suggested functions.

      Albert Lee.

      Pinaki Poddar added a comment -

      I have started looking into the changes in EntityManagerImpl.java related to this issue. Here are a few comments (the first one is critical because it is architectural):

      1. I see JDBCConfiguration appearing in EntityManagerImpl. This is not a welcome sign. Considerable effort has gone into this codebase to maintain architectural layering so that the EntityManager/facade does not know the nature of the store. In fact, I thought the Maven build prohibits such package imports via its dependencies to enforce the layering restriction.
      Anyway, the short point is: if EntityManagerImpl has to refer to JDBCConfiguration then something else is amiss. It also violates one of the basic architectural principles.

      2. Going through the code further, I think much of the new processing added to EntityManagerImpl actually belongs somewhere else, most probably in the appropriate FetchConfiguration implementation.

      3. setFetchConfigProperty() calls configuration.getPropertyKeys() in a loop. Please note that since this is a costly operation and the returned value is not going to change across the loop, it makes sense to compute it once before entering the loop (see the sketch after item 5).

      4. I also completely missed why one would need to instantiate IntValue in such a place. The purpose there appears to be to populate an instance of FetchConfiguration from the user-supplied map and then push that onto the stack. Why one would need conf.IntValue to do that is not clear to me at all.

      5. FetchConfiguration seems to be pushed in a loop. That will amount to multiple clones on the stack, which is not what is wanted: the user properties should populate one single instance of FetchConfiguration, and that single instance should be pushed onto the stack. What this code appears to do (though perhaps my reading is faulty) is leave the user properties scattered across different fetch config instances.
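
      A rough sketch of what (3) and (5) suggest, for illustration only (pushLockProperties and setFetchConfigProperty are names from the patch under discussion, with the lock-mode argument omitted for brevity; the _conf/_broker fields and a broker call that clones the current fetch configuration, pushes it and returns the clone are assumed): compute the key set once, populate a single fetch-configuration clone, and push it at most once.

      // illustrative only, not the committed implementation
      // imports: java.util.Map, java.util.Set, org.apache.openjpa.kernel.FetchConfiguration
      private boolean pushLockProperties(Map<String, Object> properties) {
          if (properties == null || properties.isEmpty())
              return false;

          // (3) costly call hoisted out of the loop; its result does not change per iteration
          Set<String> supportedKeys = _conf.getPropertyKeys();

          FetchConfiguration pushed = null;  // (5) at most one clone ever goes onto the stack
          for (Map.Entry<String, Object> entry : properties.entrySet()) {
              if (!supportedKeys.contains(entry.getKey()))
                  continue;
              if (pushed == null)
                  pushed = _broker.pushFetchConfiguration();  // clone + push, exactly once
              setFetchConfigProperty(pushed, entry.getKey(), entry.getValue());
          }
          return pushed != null;
      }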

      Pinaki Poddar added a comment -

      > 3. SQLExceptions
      >>Not sure what this is referring to.

      Just a minor comment. In the following code, the lock manager string lm is never used.

      public static OpenJPAException getStoreSQLException(
          OpenJPAConfiguration config, SQLException se, DBDictionary dict,
          int level) {
          OpenJPAException storeEx = SQLExceptions.getStore(se, dict);
          String lm = config.getLockManager();
          if (storeEx.getSubtype() == StoreException.LOCK) {
              LockException lockEx = (LockException) storeEx;
              lockEx.setLockLevel(level);
          }
          return storeEx;
      }

      Donald Woods added a comment -

      Albert, it looks like you have implemented this using setQueryTimeout(), which is a client side JDBC timeout function, while lock timeouts are implemented in the DB server. See -
      http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/admin/r0011874.htm
      http://msdn.microsoft.com/en-us/library/aa213032(SQL.80).aspx
      Also, the following discussion gives a good overview of the two and why apps should use both to handle unreliable network conditions -
      http://social.msdn.microsoft.com/Forums/en-US/sqldataaccess/thread/95755534-bbef-4c2c-afa4-b80ca2a2c333/
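
      To illustrate the distinction with a minimal plain-JDBC sketch (the table, values and Derby database name are made up; derby.locks.waitTimeout and SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY come from Derby's documentation, and DB2 has an analogous LOCKTIMEOUT database configuration parameter): the lock wait timeout is enforced by the database server, while setQueryTimeout() makes the JDBC driver give up waiting on the client side, which is what helps when the network is unreliable.

      // imports: java.sql.Connection, java.sql.DriverManager, java.sql.CallableStatement, java.sql.PreparedStatement
      Connection conn = DriverManager.getConnection("jdbc:derby:demoDB");

      // server-side lock wait timeout (seconds), set in the database itself
      CallableStatement cs = conn.prepareCall(
          "CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('derby.locks.waitTimeout', '5')");
      cs.execute();
      cs.close();

      // client-side JDBC timeout (seconds): the driver stops waiting for a response after 5 seconds,
      // regardless of what the server is doing
      PreparedStatement ps = conn.prepareStatement(
          "SELECT ID FROM DEMO_ENTITY WHERE ID = ? FOR UPDATE");
      ps.setQueryTimeout(5);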

      Albert Lee added a comment -

      Pinaki,

      Thanks for the comments.

      >> 1 & 4
      Fixed.

      >>2. Naming (in classes, in configuration) that uses 'jpa2' does somehow look awkward. It will perhaps make less sense in 2012 when a newer JPA version is available and no significant change has happened in the area to warrant a JPA6.java. As a rule of thumb, perhaps a naming scheme that is explanatory of its function rather than of its compliance with a spec version is a more long-lasting approach.

      Agree. What would be a good choice? I am thinking the following:

      • mixed
      • miximistic (mix of both optimistic and pessimistic)
      • combomistic (combination of both)
      • jpa

      >> 3. SQLExceptions
      Newly added method used OpenJPAConfiguration but appears in method signature

      Not sure what this is referring to. There is one method that needs the OpenJPAConfiguration to access the lock manager. The other is a convenience method with a StateManager argument that calls the first method to do the same thing.

      Albert Lee.

      Albert Lee added a comment -

      More testing reveals two problems in the current implementation:

      1) em.lock() does not honor the following contract:

      • @throws IllegalArgumentException if the instance is not an entity or is a detached entity. E.g.

      em.lock(null, LockModeType.XXX);
      em.lock("xxxx", LockModeType.XXX);

      2) em.refresh() does not honor the WRITE lock contract.

      3.4.3 Lock Modes
      "If transaction T1 calls lock(entity, LockModeType.WRITE) on a versioned object, the entity manager must avoid the phenomena P1 and P2 (as with LockModeType.READ) and must also force an update (increment) to the entity's version column. A forced version update may be performed immediately, or may be deferred until a flush or commit." E.g.

      e = em.find(Entity.class, 1);
      em.refresh(e, LockModeType.WRITE);
      em.getTransaction().commit();

      does not update the version field on commit, but the equivalent lock call does force the version to be updated:

      e = em.find(Entity.class, 1);
      em.lock(e, LockModeType.WRITE);
      em.getTransaction().commit();

      A corrective patch and test cases will be posted later.
      Albert Lee.

      Pinaki Poddar added a comment -

      Few minor suggestions/comments:

      1. A new isRecoverable() method is added in one of the OpenJPA exception classes. There is an existing isFatal() method which has similar semantics. Could that have sufficed instead of the new method? If not, what is the semantic difference? It should be documented on the newly added method.

      2. Naming (in classes, in configuration) that uses 'jpa2' does somehow look awkward. It will perhaps make less sense in 2012 when a newer JPA version is available and no significant change has happened in the area to warrant a JPA6.java. As a rule of thumb, perhaps a naming scheme that is explanatory of its function rather than of its compliance with a spec version is a more long-lasting approach.

      3. SQLExceptions
      Newly added method used OpenJPAConfiguration but appears in method signature

      4. LockException
      Should have a constructor with the new lockLevel argument, as is the case for the existing state variables.

      Albert Lee added a comment -

      Milosz, thanks for the comment. The method name has been changed.

      Albert Lee.

      Milosz Tylenda added a comment -

      Albert, maybe the newly added DBDictionary.supportIsolationForUpdate() could be renamed supportsIsolationForUpdate() (with an "s") to be more aligned with the other supports* methods.

      Albert Lee added a comment -

      This JIRA implements the new JPA 2 LockModeType features.

      The following is a summary of the design points and considerations in the OpenJPA implementation:

      Support the following new LockModeType and property map combinations for the find, lock and refresh methods in the EntityManager interface:

      • Optimistic (same as Read)
      • Optimistic_Force_Increment (same as Write)
      • Pessimistic_Read
      • Pessimistic_Write
      • Pessimistic_Force_Increment

      Since OpenJPA already supports both optimistic and pessimistic lock managers, the basic design goal is to "reuse" as much as possible to ensure the stability, compatibility and semantics of the existing lock manager behaviors as well as of the new JPA 2 support.

      A new JPA2LockManager is introduced to support the new lock mode semantics. "jpa2" is the alias name for the openjpa.LockManager property. This will be the default for OpenJPA 2.0.0.

      There are 3 aspects in supporting the new features:
      The front end - EntityManagerImpl:
      The EntityManagerImpl needs to implement the new interface methods.
      Since the em method's LockMode and properties Map arguments are transient and only apply during the method call, the front end code needs to save the previous settings, apply the new ones for the current method call, and restore the previous values when the method exits.
      Translate the relevant property values and apply them to the fetch configuration for use downstream. Only the following properties will be processed:
      • javax.persistence.lock.timeout
      • openjpa.LockTimeout
      • openjpa.ReadLockMode
      • openjpa.WriteLockMode
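
      For example, an application might pass these in the per-invocation property map (the entity class and values are made up for illustration); the find() implementation below then translates them into the fetch configuration for the duration of the call:

      // imports: java.util.HashMap, java.util.Map, javax.persistence.LockModeType
      Map<String, Object> props = new HashMap<String, Object>();
      props.put("javax.persistence.lock.timeout", 5000);      // milliseconds, per the JPA 2 spec
      props.put("openjpa.ReadLockMode", "pessimistic-read");  // value taken from the lock level list below

      MyEntity e = em.find(MyEntity.class, 1, LockModeType.PESSIMISTIC_READ, props);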

      public <T> T find(Class<T> cls, Object oid, LockModeType mode,
          Map<String, Object> properties) {
          assertNotCloseInvoked();
          if (mode != LockModeType.NONE)
              _broker.assertActiveTransaction();

          boolean fcPushed = pushLockProperties(mode, properties);
          try {
              oid = _broker.newObjectId(cls, oid);
              return (T) _broker.find(oid, true, this);
          } finally {
              popLockProperties(fcPushed);
          }
      }

      The back end - Lock Manager
      The role of the JPA2LockManager is to route the requested LockMode to the lock manager and set the version check options to be processed by the delegated lock manager:

      protected void lockInternal(OpenJPAStateManager sm, int level, int timeout, Object sdata) {
          if (level >= LOCK_PESSIMISTIC_FORCE_INCREMENT) {
              setVersionCheckOnReadLock(true);
              setVersionUpdateOnWriteLock(true);
              super.lockInternal(sm, level, timeout, sdata);
          } else if (level >= LOCK_PESSIMISTIC_READ) {
              setVersionCheckOnReadLock(true);
              setVersionUpdateOnWriteLock(false);
              super.lockInternal(sm, level, timeout, sdata);
          } else if (level >= LOCK_READ) {
              setVersionCheckOnReadLock(true);
              setVersionUpdateOnWriteLock(true);
              optimisticLockInternal(sm, level, timeout, sdata);
          }
      }

      Exception handling
      The new LockTimeoutException and PessimisticLockException require differentiating their statement-level and transaction-level rollback semantics, respectively. An application may recover and retry when the former exception is received.
      Since detecting these conditions is very database specific, the DBDictionary.narrow() method will delegate to the associated dictionary subclass to examine the SQLException (SQLState, error code, message text, etc.) thrown by the database and determine whether the exception is recoverable. The existing SQLState mapping to StoreException subtypes remains unchanged (except for a few corrections). The StoreException is used to encapsulate the recoverable attribute of the SQLException and will be processed during exception translation. The only subtype affected is StoreException.LOCK.

      OpenJPAException narrow(String msg, SQLException ex) {
          Boolean recoverable = null;
          int errorType = StoreException.GENERAL;
          for (Integer type : sqlStateCodes.keySet()) {
              Set<String> errorStates = sqlStateCodes.get(type);
              if (errorStates != null) {
                  recoverable = matchErrorState(type, errorStates, ex);
                  if (recoverable != null) {
                      errorType = type;
                      break;
                  }
              }
          }
          StoreException storeEx;
          switch (errorType) {
          case StoreException.LOCK:
              storeEx = new LockException(msg);
              break;
          ...
      E.g.
      DB2:
      @Override
      protected Boolean matchErrorState(int subtype, Set<String> errorStates,
          SQLException ex) {
          Boolean recoverable = null;
          String errorState = ex.getSQLState();
          if (errorStates.contains(errorState)) {
              recoverable = Boolean.FALSE;
              if (subtype == StoreException.LOCK && errorState.equals("57033")
                  && ex.getMessage().indexOf("80") != -1) {
                  recoverable = Boolean.TRUE;
              }
          }
          return recoverable;
      }
      Derby:
      @Override
      protected Boolean matchErrorState(int subtype, Set<String> errorStates,
          SQLException ex) {
          Boolean recoverable = null;
          String errorState = ex.getSQLState();
          int errorCode = ex.getErrorCode();
          if (errorStates.contains(errorState)) {
              recoverable = Boolean.FALSE;
              if (subtype == StoreException.LOCK && errorCode < 30000) {
                  recoverable = Boolean.TRUE;
              }
          }
          return recoverable;
      }

      The LockException will be translated to the proper em method exception as in:

      private static Throwable translateStoreException(OpenJPAException ke) {
          .......
          else if (ke.getSubtype() == StoreException.LOCK || cause instanceof LockException) {
              LockException lockEx = (LockException) (ke instanceof LockException ? ke : cause);
              if (lockEx != null && lockEx.isPessimistic()) {
                  if (lockEx.isRecoverable()) {
                      e = new org.apache.openjpa.persistence.LockTimeoutException(
                          ke.getMessage(), getNestedThrowables(ke), getFailedObject(ke), ke.isFatal());
                  } else {
                      e = new org.apache.openjpa.persistence.PessimisticLockException(
                          ke.getMessage(), getNestedThrowables(ke), getFailedObject(ke), ke.isFatal());
                  }
              } else {
                  e = new org.apache.openjpa.persistence.OptimisticLockException(
                      ke.getMessage(), getNestedThrowables(ke), getFailedObject(ke), ke.isFatal());
              }
          } else if (ke.getSubtype() == StoreException.OBJECT_EXISTS
          ...

      LockTimeoutException and QueryTimeoutException will be exempted from marking the transaction rollback-only in PersistenceExceptions:

      public static RuntimeExceptionTranslator getRollbackTranslator(
          final OpenJPAEntityManager em) {
          return new RuntimeExceptionTranslator() {
              private boolean throwing = false;

              public RuntimeException translate(RuntimeException re) {
                  RuntimeException ex = toPersistenceException(re);
                  if (!(ex instanceof NonUniqueResultException)
                      && !(ex instanceof NoResultException)
                      && !(ex instanceof LockTimeoutException)
                      && !(ex instanceof QueryTimeoutException)
                      && !throwing) {
                      try {
                          throwing = true;
                          ...
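
      Because LockTimeoutException does not mark the transaction rollback-only, an application can catch it and retry within the same transaction. A minimal sketch (the entity class, id and retry policy are made up):

      // imports: javax.persistence.LockModeType, javax.persistence.LockTimeoutException
      for (int attempt = 0; attempt < 3; attempt++) {
          try {
              MyEntity e = em.find(MyEntity.class, 1, LockModeType.PESSIMISTIC_WRITE);
              // lock obtained; continue working in the same, still-active transaction
              break;
          } catch (LockTimeoutException lte) {
              // statement-level failure only; the transaction has not been marked rollback-only, so try again
          }
      }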

      openjpa.ReadLockLevel and openjpa.WriteLockLevel are enhanced in parallel with the new lock modes, e.g.:
      5.53. openjpa.ReadLockLevel
      Property name: openjpa.ReadLockLevel
      Resource adaptor config-property: ReadLockLevel
      Default: read
      Possible values: none, read, write, optimistic, optimistic-force-increment, pessimistic-read, pessimistic-write, pessimistic-force-increment, numeric values for lock-manager specific lock levels
      Description: The default level at which to lock objects retrieved during a non-optimistic transaction. Note that for the default JDBC lock manager, read and write lock levels are equivalent.

      For the em methods that take both a LockModeType and a property Map, we may run into a situation where "openjpa.ReadLockMode"/"openjpa.WriteLockMode" are specified and conflict with the LockModeType argument. Resolution: the LockModeType argument takes higher precedence than the lock level values specified in the Map argument.

      If the application specifies "version" or "pessimistic" as the openjpa.LockManager, the old behavior is honored.
      However, if a new lock mode is requested using the new EntityManager interface, the following behavior will be implemented:
      "version" lock manager:
      All Pessimistic_* modes will be downgraded to Optimistic_Force_Increment (Write) and a warning message is logged.
      "pessimistic" lock manager:
      All lock types use the same existing pessimistic semantics.
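
      For reference, a minimal sketch of selecting the lock manager programmatically (the persistence-unit name is made up; "jpa2" is the alias introduced above, "version" and "pessimistic" are the existing ones):

      // imports: java.util.HashMap, java.util.Map, javax.persistence.EntityManagerFactory, javax.persistence.Persistence
      Map<String, Object> conf = new HashMap<String, Object>();
      conf.put("openjpa.LockManager", "jpa2");  // or "version" / "pessimistic" to keep the old behavior
      EntityManagerFactory emf = Persistence.createEntityManagerFactory("demo-pu", conf);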

      Albert Lee.

      Albert Lee added a comment -

      Still investigating and coding the desired functions. No code will be committed under OPENJPA-808; this sub-task is moved to OPENJPA-875.

      Albert Lee added a comment -

      This sub-task is used to implement the initial support of the new optimistic lock types, Optimistic and Optimistic_Force_Increment, per the JPA 2 spec.

      The new pessimistic lock types will be addressed in a future iteration (part 2). This requires that the spec settle on the names being used and the final semantics of these lock types, and that the javax.persistence.* API be available from the Geronimo project.

      Albert Lee.


        People

        • Assignee:
          Albert Lee
          Reporter:
          Albert Lee
        • Votes:
          0
          Watchers:
          1
