OPENJPA-2611

Bulk operations behave inconsistently with the JPA spec and the OpenJPA documentation


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.1.1
    • Fix Version/s: None
    • Component/s: jdbc
    • Labels: None
    • Environment: OpenJPA 2.1.1 in WebSphere 8.0.0.10 on IBM J9 VM 2.6

    Description

      During mass data tests we discovered an undocumented behaviour in OpenJPA when executing bulk operations via JPQL. OpenJPA decides to execute the bulk operation entirely in memory if it detects either a CascadeType.REMOVE annotation or a mapping to a different table on a field (1). The in-memory delete first loads all entities to be deleted into memory and then removes them one by one to ensure that no constraints are violated. Comments in the OpenJPA source code state clearly that bulk deletes will not be performed as server-side operations if a mapping spans multiple tables (2).
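      For illustration, here is a minimal sketch of a mapping that triggers the in-memory path. The entity names Invoice and InvoiceItem, the processed flag and the query are hypothetical and not taken from our application; in a real project each entity would of course be a public top-level class in its own file:

      {code:java}
      import java.util.List;
      import javax.persistence.CascadeType;
      import javax.persistence.Entity;
      import javax.persistence.EntityManager;
      import javax.persistence.Id;
      import javax.persistence.ManyToOne;
      import javax.persistence.OneToMany;

      // Hypothetical parent entity. The CascadeType.REMOVE on "items" is the kind
      // of mapping that makes OpenJPA choose the in-memory bulk delete.
      @Entity
      class Invoice {
          @Id
          long id;

          boolean processed;

          @OneToMany(mappedBy = "invoice", cascade = CascadeType.REMOVE)
          List<InvoiceItem> items;
      }

      @Entity
      class InvoiceItem {
          @Id
          long id;

          @ManyToOne
          Invoice invoice;
      }

      class BulkDeleteExample {
          // One would expect a single server-side DELETE statement here. With the
          // mapping above, OpenJPA instead loads every matching Invoice (and the
          // cascaded InvoiceItems) into memory and deletes them one by one.
          static int deleteProcessedInvoices(EntityManager em) {
              return em.createQuery("DELETE FROM Invoice i WHERE i.processed = true")
                       .executeUpdate();
          }
      }
      {code}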
      Although this behaviour might be seen as a feature of OpenJPA, it is actually a bug in either your documentation or your implementation with respect to the JPA specification. The official OpenJPA documentation says that "a (bulk) delete operation only applies to entities of the specified class and its subclasses. It does not cascade to related entities." (3) However, the actual implementation does cascade on a CascadeType.REMOVE annotation or an orphanRemoval=true property (which is why OpenJPA performs the in-memory delete). The JPA 2.0 specification contains exactly the same definition as your documentation.
      We stumbled upon this behaviour while investigating an OutOfMemoryError that occurred during a load test in which OpenJPA was supposed to delete 1.4 million records from our database. Although we discovered this bug in version 2.1.1 of OpenJPA, a quick look at GrepCode shows that the relevant source has been present at least since 1.x and is still present in 2.4.0. It would be desirable to at least document this behaviour outside of the source code, since users coming from Hibernate or EclipseLink would not expect in-memory operations and might run into OutOfMemoryErrors like we did. Even better, OpenJPA could always run a server-side bulk operation to conform with the JPA specification, or at least provide a documented configuration option for disabling the in-memory operations. However, this would also mean that existing users who rely on the current implementation might run into problems with server-side constraints in the future.
      If you know of any way to disable this behaviour without updating the OpenJPA library, please let me know. The only workaround we currently see is to use native SQL queries for bulk-deleting records, as sketched below.
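      A minimal sketch of that workaround, assuming the same hypothetical table and column names (INVOICE, INVOICE_ITEM, PROCESSED) as above; since nothing cascades here, child rows have to be deleted explicitly or handled by ON DELETE CASCADE constraints in the database:

      {code:java}
      import javax.persistence.EntityManager;

      class NativeBulkDelete {
          // Bypasses OpenJPA's JPQL bulk-delete handling entirely by issuing native
          // SQL, so no entities are loaded into memory. Table and column names are
          // placeholders for illustration only.
          static void deleteProcessedInvoices(EntityManager em) {
              // Delete child rows first because no cascading happens on the JPA side.
              em.createNativeQuery(
                      "DELETE FROM INVOICE_ITEM WHERE INVOICE_ID IN "
                    + "(SELECT ID FROM INVOICE WHERE PROCESSED = 1)")
                .executeUpdate();
              em.createNativeQuery("DELETE FROM INVOICE WHERE PROCESSED = 1")
                .executeUpdate();
          }
      }
      {code}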

      Sources:
      (1): http://grepcode.com/file/repo1.maven.org/maven2/org.apache.openjpa/openjpa-jdbc/2.4.0/org/apache/openjpa/jdbc/kernel/JDBCStoreQuery.java#JDBCStoreQuery.getTable%28org.apache.openjpa.jdbc.meta.FieldMapping%2Corg.apache.openjpa.jdbc.schema.Table%29
      (2): http://grepcode.com/file/repo1.maven.org/maven2/org.apache.openjpa/openjpa-jdbc/2.1.1/org/apache/openjpa/jdbc/kernel/JDBCStoreQuery.java#JDBCStoreQuery.isSingleTableMapping%28org.apache.openjpa.jdbc.meta.ClassMapping%2Cboolean%29
      (3): http://openjpa.apache.org/builds/2.4.0/apache-openjpa/docs/jpa_langref.html#jpa_langref_bulk_ops


          People

            Assignee: Unassigned
            Reporter: Jakob He (jakobh)
            Votes: 1
            Watchers: 2
