There are a few things to clarify here.
Firstly, I don't think the logic for the proxy needs to change at all, and there are quite a few rules in there about how it does things. There are plenty of tests too, though you might want to shuffle them around to get a better separation between the unit tests and the "integration tests" that go via webdav. What we're looking to achieve is to improve the way it is structured in the code, to the point where it could be turned on or off as a module rather than being intertwined with the webdav code.
The key to the new architecture is that you obtain the metadata first, and only fetch the artifact itself when it is requested. This can get a bit confusing in Maven, since it has its repository metadata, POM metadata, and then other metadata for plain artifact files. The proxy should become a little "dumber" - it shouldn't know anything about repository storage or remote repository formats, but should basically just sit in between, trying to get metadata / artifacts from storage and going remote if necessary. It will still need to do a number of filtering operations: convert paths for the remote repo, apply whitelist/blacklist rules, search multiple remotes, handle errors, and determine whether it needs to update something already in storage.
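To make that a bit more concrete, here's a rough Java sketch of what the "dumb" proxy flow could look like. All the names here (LocalStorage, RemoteRepo, resolve, and so on) are made up for illustration - nothing in the current codebase - and the point is just the shape: try storage first, walk the remotes with per-remote filtering, and cache what comes back.

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch: the proxy knows nothing about storage layout or
// remote formats; those live behind these two illustrative interfaces.
public class ProxySketch {

    interface LocalStorage {
        Optional<byte[]> get(String path);
        void put(String path, byte[] content);
        boolean isStale(String path); // e.g. metadata older than the update policy allows
    }

    interface RemoteRepo {
        String convertPath(String localPath); // translate to the remote's layout
        boolean allows(String path);          // whitelist/blacklist check
        Optional<byte[]> fetch(String remotePath);
    }

    static Optional<byte[]> resolve(String path, LocalStorage storage,
                                    List<RemoteRepo> remotes) {
        Optional<byte[]> cached = storage.get(path);
        if (cached.isPresent() && !storage.isStale(path)) {
            return cached; // fresh copy already in storage, no remote trip needed
        }
        for (RemoteRepo remote : remotes) {
            if (!remote.allows(path)) {
                continue; // filtered out for this remote
            }
            try {
                Optional<byte[]> fetched = remote.fetch(remote.convertPath(path));
                if (fetched.isPresent()) {
                    storage.put(path, fetched.get()); // cache for next time
                    return fetched;
                }
            } catch (RuntimeException e) {
                // a failing remote shouldn't kill the request;
                // fall through and try the next one
            }
        }
        return cached; // possibly stale, or empty if nothing was found anywhere
    }
}
```

The same loop serves both metadata and artifact requests - the caller asks for the metadata path first, and only comes back for the artifact path if it actually needs the file.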
Not sure if that's making sense - we should sketch this out on a wiki page some more.