Copying a large and deep directory tree via a URL-to-URL copy over DAV uses too
much memory on the server. In the wild, I've seen servers abort or crash while
servicing copy operations on top-level directories of repositories totalling
about 40GB in size.
To reproduce, create a directory tree with the attached script (it writes about
400MB of data to /var/tmp/gentreetest/) and import this tree into a fresh
repository under /trunk. I tested with Subversion trunk and an FSFS format 7
repository.
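For readers without the attachment, a minimal sketch of such a generator is
below. This is a hypothetical stand-in, not the attached script itself; the
FANOUT/FILES/FILESIZE parameters are illustrative and merely chosen so the
defaults produce a tree of roughly the size described above.

```shell
#!/bin/sh
# Hypothetical tree generator (the actual attached script is not reproduced
# here): builds a deep, wide tree of many small files. With the defaults
# below, a depth-4 run creates 1+8+64+512+4096 = 4681 directories, each
# holding 10 files of 10KB, i.e. roughly 450MB in total.
FANOUT=8        # subdirectories per directory
FILES=10        # files per directory
FILESIZE=10240  # bytes per file

# gen DIR DEPTH -- the body runs in a subshell (note the parentheses) so
# that recursion does not clobber the caller's loop counters.
gen() (
    mkdir -p "$1"
    i=1
    while [ "$i" -le "$FILES" ]; do
        head -c "$FILESIZE" /dev/urandom > "$1/file$i"
        i=$((i + 1))
    done
    [ "$2" -le 0 ] && return 0
    j=1
    while [ "$j" -le "$FANOUT" ]; do
        gen "$1/dir$j" "$(($2 - 1))"
        j=$((j + 1))
    done
)

# Example invocation (writes roughly 450MB):
#   gen /var/tmp/gentreetest 4
```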
Then try to copy the trunk directory over HTTP. In my observations, httpd's
memory usage quickly climbs to 1 or 2 GB. Even if the copy succeeds, it takes a
noticeable amount of time to complete.
This problem seems to be specific to mod_dav or mod_dav_svn.
A workaround is to use file:// URLs. The copy operation completes instantly over
file:// with no apparent memory usage problem.
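For illustration, assuming a repository served at http://localhost/repos/test
and stored on disk at /var/svn/test (both paths are hypothetical and depend on
your httpd and repository setup), the failing copy and the file:// workaround
look like this:

```shell
# Server-side copy over DAV -- this is the case where httpd's memory
# usage balloons:
svn copy -m "copy trunk" \
    http://localhost/repos/test/trunk \
    http://localhost/repos/test/trunk-copy

# Workaround: the same copy over file:// completes almost instantly
# with no apparent memory problem (requires local access to the
# repository on the server):
svn copy -m "copy trunk" \
    file:///var/svn/test/trunk \
    file:///var/svn/test/trunk-copy
```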