Jackrabbit Oak / OAK-8912

Version garbage collector does not work if documents exceed 100000


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Cannot Reproduce
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: documentmk
    • Labels: None

    Description

      Environment: Oak 1.10.2, PostgreSQL 10.7, PostgreSQL JDBC Driver 42.2.2.

      Actual:

      After the code below runs, if the number of documents exceeds the default collectLimit of 100000, an exception is thrown (attached as exception.txt).

        public static void runVersionGC() {
            log.info("Running garbage collection for DocumentNodeStore");
            try {
                final VersionGCOptions versionGCOptions = new VersionGCOptions();
                versionGCOptions.withCollectLimit(1000000);
                documentNodeStore.getVersionGarbageCollector().setOptions(versionGCOptions);
                log.info("versionGCOptions.collectLimit : " + versionGCOptions.collectLimit);
                documentNodeStore.getVersionGarbageCollector().gc(0, TimeUnit.DAYS);
            } catch (final DocumentStoreException e) {
                // exception handling omitted
            }
        }

      Below is the code that creates the repository and obtains the documentNodeStore object used for version garbage collection.

        private static Repository createRepo(final Map<String, String> dbDetails)
                throws DataStoreException {
            try {
                final RDBOptions options =
                    new RDBOptions()
                        .tablePrefix(dbDetails.get(DB_TABLE_PREFIX))
                        .dropTablesOnClose(false);
                final DataSource ds =
                    RDBDataSourceFactory.forJdbcUrl(
                        dbDetails.get("dbURL"),
                        dbDetails.get("dbUser"),
                        dbDetails.get("dbPassword"));
                final Properties properties = buildS3Properties(dbDetails);
                final S3DataStore s3DataStore = buildS3DataStore(properties);
                final DataStoreBlobStore dataStoreBlobStore = new DataStoreBlobStore(s3DataStore);
                final Whiteboard wb = new DefaultWhiteboard();

                bapRegistration =
                    wb.register(BlobAccessProvider.class, (BlobAccessProvider) dataStoreBlobStore, properties);

                documentNodeStore =
                    new RDBDocumentNodeStoreBuilder()
                        .setBlobStore(dataStoreBlobStore)
                        .setBundlingDisabled(true)
                        .setRDBConnection(ds, options)
                        .build();

                repository = new Jcr(documentNodeStore).with(wb).createRepository();
                return repository;
            } catch (final DataStoreException e) {
                log.error("S3 Connection could not be created.", e);
                throw new DataStoreException("S3 Connection could not be created");
            }
        }

      Even after setting collectLimit in the code, the garbage collector still uses 100000 as the limit.
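      A likely explanation (an assumption on my part, not confirmed in this report): VersionGCOptions follows an immutable "with-er" pattern, so withCollectLimit returns a new instance rather than mutating the receiver, and discarding that return value (as the snippet above does) leaves the default limit of 100000 in place. A minimal self-contained sketch of that pattern, using a hypothetical Options class in place of Oak's VersionGCOptions:

```java
// Hypothetical stand-in for Oak's VersionGCOptions, illustrating the
// immutable "with-er" pattern; this is NOT Oak's actual implementation.
public class Options {
    public final int collectLimit;

    public Options(int collectLimit) {
        this.collectLimit = collectLimit;
    }

    // Returns a NEW instance; the receiver is left unchanged.
    public Options withCollectLimit(int limit) {
        return new Options(limit);
    }

    public static void main(String[] args) {
        Options defaults = new Options(100_000);

        // Discarding the return value, as in the snippet above,
        // leaves the original limit in place:
        defaults.withCollectLimit(1_000_000);
        System.out.println(defaults.collectLimit);   // prints 100000

        // Capturing the return value applies the new limit:
        Options tuned = defaults.withCollectLimit(1_000_000);
        System.out.println(tuned.collectLimit);      // prints 1000000
    }
}
```

      If that assumption holds, passing the returned instance to setOptions(...) — for example setOptions(new VersionGCOptions().withCollectLimit(1000000)) — would pick up the custom limit.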

      Expected:

      versionGCOptions.collectLimit should be settable to a custom value so that the DocumentStoreException is avoided, or there should be another way to avoid the DocumentStoreException when the number of documents exceeds 100000.

      Attachments

        1. exception.txt
          13 kB
          Ankush Nagapure


            People

              Assignee: Unassigned
              Reporter: Ankush Nagapure
              Votes: 0
              Watchers: 2
