The other problem I see is that HadoopArchives uses HarFileSystem.VERSION when it creates files. So, if we commit HarFileSystem.java first, HadoopArchives will generate old-style files with VERSION = 2. And if we commit HadoopArchives.java first, new-style files will be generated with VERSION = 1 (although the old HadoopArchives won't be able to read them properly). It's a chicken-and-egg problem created because two Java files that actually depend on each other ended up in two different Hadoop projects (common and mapreduce).
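A minimal sketch of the coupling, with class and method names simplified and hypothetical (not the actual Hadoop sources): the writer stamps whatever VERSION constant it was compiled against into the archives it creates, so the two classes only behave consistently when committed together.

```java
// Hypothetical, simplified illustration of the VERSION coupling between
// HarFileSystem (lives in common) and HadoopArchives (lives in mapreduce).
class HarFileSystem {
    // The reader's notion of the archive format version.
    static final int VERSION = 1;

    // The reader only understands versions up to its own.
    static boolean canRead(int fileVersion) {
        return fileVersion <= VERSION;
    }
}

class HadoopArchives {
    // The writer stamps the reader's constant into every archive it
    // creates; committing only one of the two files changes what gets
    // written without changing what can be read, or vice versa.
    static int stampVersion() {
        return HarFileSystem.VERSION;
    }
}

public class Main {
    public static void main(String[] args) {
        int written = HadoopArchives.stampVersion();
        System.out.println("archive written with VERSION = " + written);
        System.out.println("readable by this HarFileSystem: "
                + HarFileSystem.canRead(written));
    }
}
```

Because the stamped value is resolved from HarFileSystem at compile time, whichever project is committed first determines the mismatch described above.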
Honestly, I don't understand why HarFileSystem is part of common.
For instance, in Raid, we made DistributedRaidFileSystem part of MAPREDUCE as well, partly to avoid this type of problem.