We're looking into leveraging this new feature to ease the installation of Phoenix (https://github.com/forcedotcom/phoenix). Currently we require that the phoenix jar be copied into the HBase lib dir of every region server, followed by a restart. For some background, Phoenix uses both coprocessors and custom filters. These are just the tip of the iceberg, so to speak. There's a ton of shared/foundational phoenix code being used by these coprocessors and filters - our type system, expression evaluation, schema interpretation, throttling code, memory management, etc. So when we say we'd like to upgrade our coprocessors and custom filters to a new version, that means all the foundational classes under them have changed as well.
If we use this new feature, we're not sure we're easing the burden on our users, since users will still need to:
1) update hbase-site.xml on each region server to set hbase.dynamic.jar.dir to the directory containing the jar
2) copy the phoenix jar to HDFS
3) create a symlink to the new phoenix jar
4) perform a rolling restart of the cluster
My fear is that (1) would be error prone, and that for (2) and (3) the user may not have the necessary permissions. As for (4), we'll probably just have to live with it, but in a utopia, new coprocessor/filter invocations would simply pick up the new jar.
My question: how close can we come to automating all of this to the point where we could have a phoenix install script that looks like this:
hbase install phoenix-1.2.jar
Is HBASE-8400 a prerequisite? Any other missing pieces? We'd be happy to be a guinea pig/test case for how to solve this problem from a real application/platform standpoint.
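To make the idea concrete, here's a minimal sketch of what such an install script might plan for steps (2)-(4) above. Everything in it is an assumption for illustration: the HDFS directory, the stable jar name, and the restart script path are placeholders, and since HDFS has no shell-level symlink command, step (3) is approximated by copying the jar under a stable, version-free name. The function only builds the shell commands; actually running them (and step (1), editing hbase-site.xml) is out of scope.

```python
def plan_install(jar, dynamic_jar_dir="hdfs:///hbase/dynamic-jars"):
    """Return the shell commands a hypothetical `hbase install <jar>` would run."""
    # Step 2: copy the jar into the directory hbase.dynamic.jar.dir points at.
    copy = f"hdfs dfs -put -f {jar} {dynamic_jar_dir}/"
    # Step 3: stand-in for a symlink -- re-copy the jar under a stable name
    # so coprocessor/filter specs can reference it without a version number.
    stable = f"hdfs dfs -cp -f {dynamic_jar_dir}/{jar} {dynamic_jar_dir}/phoenix-current.jar"
    # Step 4: rolling restart so every region server reloads the classes.
    restart = "bin/rolling-restart.sh"
    return [copy, stable, restart]

for cmd in plan_install("phoenix-1.2.jar"):
    print(cmd)
```

The point is that all of this is mechanical; the open questions are who has the HDFS permissions to run it, and whether the restart could be avoided entirely.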