Stanbol / STANBOL-1424

The commons.opennlp module can load the same model twice in parallel


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 0.12.0
    • Fix Version/s: 1.0.0, 0.12.1
    • Component/s: None
    • Labels:
      None

      Description

      The commons.opennlp module allows loading models by name via the DataFileProvider infrastructure. Loaded models are cached in memory.

      If two components request the same model within a short time, in particular when the 2nd request arrives before the first has completed, the same model is loaded twice in parallel. This results in two instances of the model being loaded.

      While the 2nd request will override the cached model of the first, the component that issued the first request may still hold a reference to its instance. In that case two instances of the model are held in memory.

      To avoid this, the OpenNLP service needs to use a lock while loading models.
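      A minimal sketch of one way to fix this (the actual patch may differ): keep the cache in a ConcurrentHashMap and load through computeIfAbsent, which runs the loading function at most once per key and blocks a concurrent second caller until the first load completes. The class and field names below are hypothetical, and the expensive OpenNLP/DataFileProvider load is stood in for by a counter.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a per-model-name loading lock.
// ConcurrentHashMap.computeIfAbsent guarantees the mapping function is
// invoked at most once per key, so two parallel requests for the same
// model name share a single load and a single cached instance.
public class ModelCache {
    private final Map<String, Object> models = new ConcurrentHashMap<>();
    final AtomicInteger loadCount = new AtomicInteger(); // demo only

    public Object getModel(String name) {
        // A second caller for the same key blocks here until the first
        // load finishes, then receives the already-cached instance.
        return models.computeIfAbsent(name, this::loadModel);
    }

    private Object loadModel(String name) {
        loadCount.incrementAndGet();
        // Stand-in for reading the model stream via DataFileProvider.
        return new Object();
    }

    public static void main(String[] args) throws InterruptedException {
        ModelCache cache = new ModelCache();
        Runnable r = () -> cache.getModel("en-token.bin");
        Thread t1 = new Thread(r), t2 = new Thread(r);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("loads=" + cache.loadCount.get());
    }
}
```

      An alternative is an explicit per-name lock object (e.g. guarding a "currently loading" map), which is closer to the issue's wording; computeIfAbsent simply packages that per-key locking behavior.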

        Attachments

          Activity

            People

            • Assignee:
              rwesten Rupert Westenthaler
              Reporter:
              rwesten Rupert Westenthaler

              Dates

              • Created:
                Updated:
                Resolved:
