Jackrabbit Content Repository / JCR-2483

Out of memory error while adding a new host due to large number of revisions


    Details

    • Type: Improvement
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 1.6
    • Fix Version/s: None
    • Component/s: clustering
    • Labels:
      None
    • Environment:
      MySQL DB. 512 MB memory allocated to java app.

      Description

      In a cluster deployment, revisions are saved in the Journal table in the DB. Over time a huge number of revisions can accumulate (around 70k in our test). When a new host is added to the cluster, it tries to read all the revisions at once, which leads to the following error:

      Caused by: java.lang.OutOfMemoryError: Java heap space
      at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2931)
      at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2871)
      at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3414)
      at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:910)
      at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1405)
      at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2816)
      at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:467)
      at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2510)
      at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1746)
      at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2135)
      at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2542)
      at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1734)
      at com.mysql.jdbc.PreparedStatement.execute(PreparedStatement.java:995)
      at org.apache.jackrabbit.core.journal.DatabaseJournal.getRecords(DatabaseJournal.java:460)
      at org.apache.jackrabbit.core.journal.AbstractJournal.doSync(AbstractJournal.java:201)
      at org.apache.jackrabbit.core.journal.AbstractJournal.sync(AbstractJournal.java:188)
      at org.apache.jackrabbit.core.cluster.ClusterNode.sync(ClusterNode.java:329)
      at org.apache.jackrabbit.core.cluster.ClusterNode.start(ClusterNode.java:270)

      This can also happen to an existing host in the cluster when the number of revisions returned is very high.
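
      For context, MySQL Connector/J buffers the entire result set in the Java heap by default, which is why DatabaseJournal.getRecords runs out of memory once the journal holds tens of thousands of revisions. Below is a minimal sketch of a client-side workaround; it is not the attached patch, and the connection URL, credentials, and table/column names are illustrative only. Connector/J switches to streaming rows one at a time when the statement is forward-only, read-only and the fetch size is set to Integer.MIN_VALUE:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;

      public class StreamingJournalRead {
          public static void main(String[] args) throws Exception {
              // Illustrative connection settings; replace with the cluster's journal DB.
              try (Connection con = DriverManager.getConnection(
                      "jdbc:mysql://localhost:3306/jackrabbit", "jcr", "secret")) {
                  // Connector/J only streams rows (instead of buffering the whole
                  // result set on the heap) when the statement is TYPE_FORWARD_ONLY,
                  // CONCUR_READ_ONLY and the fetch size is Integer.MIN_VALUE.
                  try (PreparedStatement stmt = con.prepareStatement(
                          "SELECT REVISION_ID, REVISION_DATA FROM JOURNAL"
                                  + " WHERE REVISION_ID > ? ORDER BY REVISION_ID",
                          ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
                      stmt.setFetchSize(Integer.MIN_VALUE);
                      stmt.setLong(1, 0L); // last revision already seen by this node
                      try (ResultSet rs = stmt.executeQuery()) {
                          while (rs.next()) {
                              long revision = rs.getLong(1);
                              // consume one revision at a time instead of ~70k rows at once
                          }
                      }
                  }
              }
          }
      }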

      Possible solutions:
      1. Clean up old revisions using the janitor thread: this may work for new hosts, but it fails when the sync delay is high (a few hours) and the number of updates on existing hosts in the cluster is large.
      2. Increase the memory allocated to the Java process: this is not always a feasible option.
      3. Limit the number of revisions read from the DB in any sync cycle (a sketch follows after this list).
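
      A rough sketch of option 3 follows, assuming the cap is applied to the query issued by DatabaseJournal.getRecords; the JOURNAL table and column names, the batch size, and the method name are illustrative and may not match the attached patch:

      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;

      public class BatchedJournalSync {

          private static final int BATCH_SIZE = 1000; // illustrative cap per round trip

          // Reads journal records in bounded batches so a single sync cycle never
          // materializes tens of thousands of revisions in memory at once.
          public static long syncFrom(Connection con, long lastRevision) throws Exception {
              long current = lastRevision;
              while (true) {
                  int rows = 0;
                  try (PreparedStatement stmt = con.prepareStatement(
                          "SELECT REVISION_ID, REVISION_DATA FROM JOURNAL"
                                  + " WHERE REVISION_ID > ? ORDER BY REVISION_ID LIMIT ?")) {
                      stmt.setLong(1, current);
                      stmt.setInt(2, BATCH_SIZE);
                      try (ResultSet rs = stmt.executeQuery()) {
                          while (rs.next()) {
                              current = rs.getLong(1);
                              rows++;
                              // apply this revision to the local cluster node here
                          }
                      }
                  }
                  if (rows < BATCH_SIZE) {
                      return current; // caught up; persist this as the new local revision
                  }
              }
          }
      }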

        Attachments

        1. patch (5 kB, uploaded by aasoj)

              People

              • Assignee: Unassigned
              • Reporter: aasoj
              • Votes: 1
              • Watchers: 4
