When a commit over ra_dav adds a file to the repository, the httpd process allocates as much memory as the size of the file being transferred. For large files this isn't possible, and httpd runs out of memory midway through the commit.

How to replicate:

1. Create a huge file containing random data. I used dd and /dev/urandom. The file only has to be slightly larger than the maximum amount of memory a process on the server can allocate. You can reduce the hard limit on the data segment size to reproduce the problem with a smaller file if you wish.
2. Add the file to a repository.
3. Commit the change.
4. Watch httpd run out of memory.
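The steps above can be sketched as a shell session. The paths, file size, and commit message here are illustrative assumptions, not from the original report; the file size should exceed whatever memory the server's httpd process can actually allocate.

```shell
# 1. Create a file larger than the memory the httpd process can
#    allocate (100 MB here purely for illustration; scale up as needed).
dd if=/dev/urandom of=bigfile bs=1M count=100

# Optionally, lower the data-segment hard limit in the shell that
# starts httpd, so a smaller file triggers the failure:
#   ulimit -d 65536

# 2-3. Add and commit the file over ra_dav (an http:// working copy).
svn add bigfile
svn commit -m "add large random file"

# 4. Observe the httpd process's memory use grow with the transfer
#    until the commit fails.
```

Lowering the data-segment limit with ulimit just makes the allocation failure reproducible without needing multi-gigabyte test files.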
Original issue reported by mprice