- test-wise, it shows that the JAR update process needs more rigorous testing: the patch should go in for all the subprojects so Yetus tests them all. I'll keep an eye on future changes there
- circular dependencies are generally considered "bad form" and shouldn't have happened. More specifically, the "A" in "DAG" declares that the Maven dependency graph is meant to be acyclic. Pulling in HBase as a dependency of Hadoop achieves what Spark/Hive have: it creates a cycle. Is there any way to unwind the cycle so that there is an ATSv2 server module, independent of the others, which pulls in both? That way it can take in whatever shaded Guava libs Hadoop offers plus whatever HBase needs, while also letting Hadoop make progress on the plan to drop Jersey 1 and so avoid version issues there (HADOOP-13332)
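A rough sketch of what such a cycle-breaking module could look like in its POM. This is illustrative only: the module name, artifact IDs, and layout are assumptions, not an agreed structure. The point is that this module depends on both Hadoop and HBase, so neither of those projects has to depend on the other:

```xml
<!-- Hypothetical standalone ATSv2 server module (names are illustrative).
     It sits outside both dependency trees and pulls in each side,
     keeping the Hadoop <-> HBase graph acyclic. -->
<project>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-yarn-timelineservice-hbase-server</artifactId>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
    </dependency>
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-server</artifactId>
    </dependency>
  </dependencies>
</project>
```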
BTW, Hadoop may export Guava 11, but it's coded to not use classes cut out of later versions (e.g. Guava dropping Stopwatch; HADOOP-11032). Even so, the need to move up the base Guava version is a serious one, not just for downstream projects, but because things like Curator, which we pull in, depend on later versions. Every upgrade of Curator has problems related to Guava (HADOOP-11102, HADOOP-11612)
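One partial mitigation, sketched here with an illustrative version number (not a tested recommendation), is to force a single Guava version via `dependencyManagement` so that the versions pulled in transitively by Hadoop and Curator at least converge on one jar on the classpath:

```xml
<!-- Illustrative only: pin one Guava version so Hadoop's and Curator's
     transitive guava dependencies resolve to the same artifact.
     The version shown is an assumption, not a recommendation. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
      <version>21.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

This doesn't fix source-level incompatibilities (a removed class is still removed), but it does make the classpath deterministic, which is half the battle in these upgrades.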
Side issue: in an ideal world, Guava would be backwards compatible. It isn't, and we get to deal with the pain. Whatever we do, something breaks. Oh, and then there's protobuf.