Velocity / VELOCITY-776

"velocimacro.permissions.allow.inline.local.scope" makes VelocityEngine not threadsafe

    Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 1.6.4
    • Fix Version/s: None
    • Component/s: Engine
    • Labels: None
    • Environment:
      Sun java jdk1.6.0_21, Ubuntu 10.04

      Description

      The attached unit test shows that when "velocimacro.permissions.allow.inline.local.scope" is set to true and multiple threads use the same VelocityEngine instance, macros sometimes don't get expanded and the #macroname call remains in the output text.

      Notes:

      • running test method "testMultipleEvals" (single threaded case) always succeeds
      • running test method "testMultiThreadMultipleEvals" always fails
      • commenting out the allow.inline.local.scope line makes the multithread test pass (but of course has other side-effects)

      Interestingly, for the multithread case it seems that 1 thread always succeeds and N-1 threads fail.
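
For context, the property named in the summary can be enabled in velocity.properties like this (a sketch; the programmatic equivalent is calling setProperty on the engine before init()):

```properties
# Give inline #macro definitions a namespace local to the defining
# template, instead of registering them globally. This is the setting
# the report identifies as the trigger for the race.
velocimacro.permissions.allow.inline.local.scope = true
```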

      1. RenderVelocityTemplateTest.java (3 kB, Simon Kitching)
      2. RenderVelocityTemplate.java (0.9 kB, Simon Kitching)

          Activity

          Simon Kitching added a comment -

          Example code and associated JUnit test that demonstrate this issue.

          Nathan Bubna added a comment -

          Simon, does this only occur via Velocity.evaluate(...)? Or was this happening with standard template retrieval/rendering too?

          Nathan Bubna added a comment -

          I ask because the VelocimacroManager uses the "log tag" parameter given to evaluate(...) as the local namespace key. If i alter your test to iterate the "log tag", then the namespace collisions disappear (of course), and your test passes.

          Granted, i don't yet understand exactly why the VelocimacroManager fails to properly handle multi-threaded addition of macros (the same macro, even) to the same namespace (i haven't had my nose deep in this code in a while). Still, that appears to be the catching point at the moment.
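
Nathan's observation suggests a stopgap on the caller's side: since the log tag doubles as the macro namespace key, giving every evaluate(...) call a unique tag avoids the collision. A minimal sketch, assuming the standard VelocityEngine.evaluate(context, writer, logTag, template) signature; the nextTag helper and class name are made up for illustration, and the actual engine wiring is shown only in a comment:

```java
import java.util.concurrent.atomic.AtomicLong;

public class UniqueLogTags {
    private static final AtomicLong SEQ = new AtomicLong();

    // Each call returns a tag no previous call has used, so each
    // evaluate(...) ends up with a private macro namespace.
    static String nextTag(String base) {
        return base + "-" + SEQ.incrementAndGet();
    }

    public static void main(String[] args) {
        // With Velocity on the classpath, the call site would look like
        // (hypothetical wiring; engine/context setup omitted):
        //   engine.evaluate(context, writer, nextTag("render"), templateText);
        System.out.println(nextTag("render"));
        System.out.println(nextTag("render"));
    }
}
```

The trade-off is that namespaces are never reused, so the manager may accumulate one per call; this only fits short-lived or low-volume engines.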

          Nathan Bubna added a comment -

          No solution yet, but i can replicate it via Velocity.mergeTemplate, by turning caching off on the resource loader (StringResourceLoader in this case). Of course, that's effectively the same as using evaluate(...). Using a resource loader and turning caching on fixes the problem.
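
The caching workaround Nathan describes corresponds to resource-loader settings along these lines (a velocity.properties sketch using the 1.x property naming pattern; adjust the loader name and class to your setup):

```properties
resource.loader = string
string.resource.loader.class = org.apache.velocity.runtime.resource.loader.StringResourceLoader
# Cache parsed templates so repeated merges reuse one parse (and one
# macro-namespace setup) instead of re-parsing per request.
string.resource.loader.cache = true
# 0 = never check for modifications, i.e. cache permanently.
string.resource.loader.modificationCheckInterval = 0
```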

          Nathan Bubna added a comment -

          Ok, RuntimeInstance dumps the vm namespace for a template before parsing it. It does this under the assumption that if you are reparsing the template, it must have changed. If it changed, we shouldn't keep local macros around, as they may no longer exist in the new version. This still seems to me to be correct behavior. When rapidly (re)parsing the same template in multiple threads, this means that thread B may dump the namespace newly created, and not yet used in rendering, by thread A.

          This unfortunate event can be avoided by permanently caching templates, or greatly reduced in likelihood by caching them with an infrequent modificationCheckInterval. Though, even there it is possible, i think.
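
The dump-then-parse sequence described above can be modeled in a few lines of plain Java (class and method names here are simplified stand-ins, not the real VelocimacroManager API). Interleaving the steps the way two racing threads would shows why one thread's render finds no macro:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of per-template macro namespaces. The real manager keeps
// more state, but the dump/add/lookup shape is what matters here.
class Namespaces {
    private final Map<String, Map<String, String>> byTemplate = new HashMap<>();

    // parse() first dumps the old namespace (assuming the template
    // changed), then re-registers the inline macros it finds.
    void parse(String template) {
        dump(template);
        byTemplate.computeIfAbsent(template, k -> new HashMap<>())
                  .put("myMacro", "macro body");
    }

    void dump(String template) { byTemplate.remove(template); }

    String lookup(String template, String macro) {
        Map<String, String> ns = byTemplate.get(template);
        return ns == null ? null : ns.get(macro);
    }
}

public class DumpRace {
    public static void main(String[] args) {
        Namespaces mgr = new Namespaces();

        mgr.parse("t.vtl");   // thread A: dump + add
        mgr.dump("t.vtl");    // thread B: the first half of its own parse

        // Thread A now renders: the macro B just dumped is gone, so
        // "#myMacro" would be emitted literally instead of expanded.
        System.out.println(mgr.lookup("t.vtl", "myMacro")); // null

        // B then finishes its own parse and renders fine, which matches
        // the report's "1 thread succeeds, N-1 fail" pattern.
    }
}
```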

          So, here's the choices i've thought of thus far:

          1) leave it and add disclaimers about inline macros when templates aren't permanently cached
          2) we can flip the bit and risk memory leakage (and potential interference) from stale macros
          3) add another config switch to let users choose between #1 and #2
          4) synchronize creatively to prevent simultaneous parsing and/or rendering in the same namespace
          5) try to find a way to refactor so inline macros are kept with the template, not managed centrally

          #5 is more than i can take on right now, and probably won't work for the 1.x branch. maybe in 2.x
          #4 would probably both wreck performance and never quite work right
          #3 is tiresome, but probably the best current option
          #2 would probably only be terrible for people with users creating templates, but could be quite the memory leak for them
          #1 is arguably acceptable for 1.7, but is not a long-term solution

          thoughts?

          Alex added a comment - edited

          My bug 811 was marked as a duplicate of this. For me this happens in the context of including macro modules to override some default macros. See the example below. When the top-level template.vtl is merged concurrently using the same engine instance, the effects described above do happen.

          ----- template.vtl ----
          #parse("macros.vtl")
          #myMacro("param")
          ----------------------------

          ------ macros.vtl ------
          #macro(myMacro $param)
          #end
          -----------------------------

          The system I am working on uses macro overloading a lot (as a poor man's subclassing, to keep things sane), so it would really be nice to have the capability to include and exclude VM libraries dynamically for overloading purposes.
          I have modified the code in RuntimeInstance.parse(reader, templateName) to pass false as the "dump" parameter, and it seems to fix the issue for now. In my system the VTL files are not changed all that often, and when they do change, macros are not typically removed, so if the parse retains some unused macros in the cache until the next JVM restart, it's not a big deal. It would be a big deal if the parse did not refresh the existing macros on parse, though.

          I still think that the synchronization in the VelocimacroManager between getNamespace, addNamespace and dumpNamespace is broken and just presents a smaller window for the race condition than the one between the dump and parse.

          Sergey Zolotaryov added a comment - edited

          We are experiencing the same issue with 1.7.0 on Linux. This, however, has never happened on Windows.


            People

            • Assignee: Unassigned
            • Reporter: Simon Kitching
            • Votes: 3
            • Watchers: 2