I agree with you 100% that we shouldn't be doing any copying or filtering manually – it has to be the same fileset.
Right ... that's the crux of my concern.
Note that at the moment this isn't consistent either – if there is a jar dependency upgrade, people are left with unused jars under lib/* and this can (and does) cause headaches until you realize you need to clean up that folder (which is a reason for this issue, actually).
but our checksum validation warns you of that ... the reason for this issue was not that people didn't know they had bad jars (the build already told them that); the point was to try to make dealing with it automatic.
One thing I didn't know how to work around with direct ivy caches was jar checksums – I think it is a good idea to keep these but then I don't know how they could be verified if we use ivy caches directly.
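to make the checksum concern concrete, here's roughly what the verification amounts to today – a sketch, not the actual build code: each jar has a `.sha1` sidecar file, and validation recomputes the digest and compares. (the class and method names here are made up for illustration.)

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ChecksumCheck {

  /** Computes the hex-encoded SHA-1 digest of a file. */
  static String sha1Hex(Path file) throws IOException, NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("SHA-1");
    byte[] digest = md.digest(Files.readAllBytes(file));
    StringBuilder sb = new StringBuilder();
    for (byte b : digest) {
      sb.append(String.format("%02x", b));
    }
    return sb.toString();
  }

  /** Returns true if the jar's digest matches its .sha1 sidecar file. */
  static boolean verify(Path jar) throws IOException, NoSuchAlgorithmException {
    Path checksumFile = jar.resolveSibling(jar.getFileName() + ".sha1");
    if (!Files.exists(checksumFile)) {
      return false; // no checksum recorded for this jar
    }
    // some checksum files contain "<hex>  <filename>"; keep only the hex part
    String expected = Files.readString(checksumFile).trim().split("\\s+")[0];
    return expected.equalsIgnoreCase(sha1Hex(jar));
  }
}
```

the problem with using the ivy cache directly is that there's no lib/* sidecar layout like this to check against.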
yeah ... the checksum validation is really huge ... we definitely shouldn't sacrifice that.
Which leads me to a strawman suggestion: what if instead of making all these ant targets depend on "ant clean-jars" we add an optional build property that tells the checksum validation code to try to remove any jar that doesn't have a checksum file? values for the property could indicate:
- don't try to delete but warn of existence
- don't try to delete and fail because of existence (current behavior)
- try to delete, fail if delete fails (new default)
- try to delete, warn & don't fail if delete fails (new default if windows)
...in the cases where deletion failure is non-fatal, the code could still register a deleteOnExit() for the files as a fallback (which should work on windows, right? by that point windows will have closed the file handle for the jar?)
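the four property values above could be sketched like this – purely a strawman, with made-up mode names and a hypothetical helper, to show the delete/fail/warn decisions and the deleteOnExit() fallback:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class UnusedJarCleanup {

  /** Possible values of the (hypothetical) build property. */
  enum Mode { WARN, FAIL, DELETE_OR_FAIL, DELETE_OR_WARN }

  /**
   * Handles one jar that has no matching checksum file.
   * Returns true if the build should fail.
   */
  static boolean handleUnusedJar(Path jar, Mode mode) {
    switch (mode) {
      case WARN:
        System.err.println("WARNING: unexpected jar " + jar);
        return false;
      case FAIL: // current behavior
        System.err.println("ERROR: unexpected jar " + jar);
        return true;
      case DELETE_OR_FAIL: // proposed new default
      case DELETE_OR_WARN: // proposed default on windows
        try {
          Files.delete(jar);
          return false;
        } catch (IOException e) {
          // file may be locked (e.g. in use on windows);
          // fall back to deleting it when this JVM exits
          jar.toFile().deleteOnExit();
          System.err.println("WARNING: could not delete " + jar + ": " + e);
          return mode == Mode.DELETE_OR_FAIL;
        }
    }
    return false;
  }

  /** Scans a lib dir for jars lacking a .sha1 sidecar and applies the mode. */
  static boolean scan(Path libDir, Mode mode) throws IOException {
    boolean shouldFail = false;
    try (Stream<Path> files = Files.list(libDir)) {
      for (Path p : (Iterable<Path>) files::iterator) {
        if (p.toString().endsWith(".jar")
            && !Files.exists(p.resolveSibling(p.getFileName() + ".sha1"))) {
          shouldFail |= handleUnusedJar(p, mode);
        }
      }
    }
    return shouldFail;
  }
}
```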
if we did that, then (i think) the worst case scenario for windows dev/jenkins users after ivy config changes would be that the first build attempt might fail because of a jar that couldn't be deleted (because it was in use), but that file should be deleted when the JVM exits, and after that the build should start working.