In performance testing I found that setting the resource.manager.defaultcache.size property to 0 increases performance by about 30%. The reason is that when the cache size is set to 0, Velocity uses a java.util.concurrent.ConcurrentHashMap, which provides good thread concurrency; when the cache size is set to a non-zero value, Velocity uses an org.apache.commons.collections.map.LRUMap, which provides poor concurrency because access to it must be synchronized. The problem is that it's unclear whether there is a container that provides both good concurrency and a maximum-size setting.
The default cache size is 89, which uses the slower LRUMap, and the end user is none the wiser. The best solution would be to find a container that performs as well as ConcurrentHashMap and can be used in both cases; otherwise a discussion in the documentation is probably warranted.
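To illustrate the trade-off, here is a minimal sketch of one possible compromise: a ConcurrentHashMap wrapped with a crude size bound that evicts arbitrary entries once capacity is exceeded. The class name and API below are my own invention for illustration, not part of Velocity or Commons Collections, and the eviction policy is deliberately not LRU; it trades eviction accuracy for the lock-free reads that make ConcurrentHashMap fast.

```java
import java.util.Iterator;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical example class -- not a Velocity API.
// Reads are as cheap as ConcurrentHashMap.get(); the bound is
// enforced on writes by evicting an arbitrary (not LRU) entry.
public class BoundedConcurrentCache<K, V> {
    private final int maxSize;
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();

    public BoundedConcurrentCache(int maxSize) {
        this.maxSize = maxSize;
    }

    public V get(K key) {
        return map.get(key);
    }

    public void put(K key, V value) {
        map.put(key, value);
        // Crude bound: drop arbitrary entries while over capacity.
        // Note this may occasionally evict the entry just inserted.
        while (map.size() > maxSize) {
            Iterator<K> it = map.keySet().iterator();
            if (it.hasNext()) {
                it.next();
                it.remove();
            }
        }
    }

    public int size() {
        return map.size();
    }

    public static void main(String[] args) {
        BoundedConcurrentCache<String, Integer> cache =
                new BoundedConcurrentCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        // The bound holds, though which entry was evicted is arbitrary.
        System.out.println(cache.size());
    }
}
```

A production-grade answer would more likely be a dedicated caching library such as Guava's CacheBuilder or Caffeine, both of which combine a maximum-size setting with good concurrent performance; the sketch above only shows why the two properties are in tension when limited to the plain JDK and Commons Collections containers.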