We should start thinking about Hadoop 2 support now that it is Cloudera's recommended distribution and many new Hadoop users will probably be adopting it.
When I first investigated this a few months ago, the biggest barrier appeared to be that all of the Map/Reduce related tests are written against pseudo-private constructors from Hadoop 1.0 that are no longer present in Hadoop 2.0.
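For illustration, and assuming the breakage is the usual JobContext/TaskAttemptContext change (I have not re-checked exactly which classes our tests construct), code along these lines compiles against Hadoop 1 but not Hadoop 2, where those types became interfaces:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.TaskAttemptID;

public class DirectInstrumentation {
  // Hypothetical example of how the current tests drive an input format directly.
  public static TaskAttemptContext context() {
    // Compiles against Hadoop 1.x, where TaskAttemptContext is a concrete class
    // with a (pseudo-private) constructor; fails against Hadoop 2.x, where
    // TaskAttemptContext is an interface and the concrete class moved to
    // org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.
    return new TaskAttemptContext(new Configuration(), new TaskAttemptID());
  }
}
```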
The main fix should probably be to adopt the Map/Reduce cluster test object for testing the various Accumulo input formats, rather than instrumenting them directly. I have used this convenience object successfully in tests that use MockInstance, so I expect it to work fine here.
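As a rough sketch of what that could look like: seed a MockInstance table, then drive the input format through the normal Job submission path (local runner) instead of constructing record readers and task contexts by hand. Method names below follow the newer Accumulo mapreduce API and the Hadoop 2 Job API, so treat this as an outline rather than a drop-in test; the class name, table name, and counter group are made up for the example.

```java
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
import org.apache.accumulo.core.client.mock.MockInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class InputFormatJobIT {

  // Counts the entries the input format hands to the mapper.
  public static class CountingMapper extends Mapper<Key, Value, Text, Text> {
    @Override
    protected void map(Key k, Value v, Context context) {
      context.getCounter("test", "entriesSeen").increment(1);
    }
  }

  public static void main(String[] args) throws Exception {
    // Seed a MockInstance table with a few entries.
    MockInstance instance = new MockInstance("testInstance");
    Connector conn = instance.getConnector("root", new PasswordToken(""));
    conn.tableOperations().create("testTable");
    BatchWriter bw = conn.createBatchWriter("testTable", new BatchWriterConfig());
    for (int i = 0; i < 100; i++) {
      Mutation m = new Mutation(new Text("row" + i));
      m.put(new Text("cf"), new Text("cq"), new Value(("v" + i).getBytes()));
      bw.addMutation(m);
    }
    bw.close();

    // Drive AccumuloInputFormat through the public Job API (local runner),
    // rather than building RecordReaders/TaskAttemptContexts directly.
    Job job = Job.getInstance(new Configuration());
    job.setJarByClass(InputFormatJobIT.class);
    job.setInputFormatClass(AccumuloInputFormat.class);
    AccumuloInputFormat.setConnectorInfo(job, "root", new PasswordToken(""));
    AccumuloInputFormat.setMockInstance(job, "testInstance");
    AccumuloInputFormat.setInputTableName(job, "testTable");
    AccumuloInputFormat.setScanAuthorizations(job, new Authorizations());
    job.setMapperClass(CountingMapper.class);
    job.setNumReduceTasks(0);
    job.setOutputFormatClass(NullOutputFormat.class);

    boolean ok = job.waitForCompletion(false);
    long seen = job.getCounters().findCounter("test", "entriesSeen").getValue();
    System.out.println("job succeeded=" + ok + ", entries seen=" + seen);
  }
}
```

The nice property of this approach is that the test only goes through public, supported APIs, so the same test code should compile and run against either Hadoop version.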
There may also be some filesystem API issues, but I don't think they will be too severe.
The other main issue is that once we support both, we will need to actually deploy on Hadoop 1 and Hadoop 2 and run the integration tests against each, which will complicate release testing and is something we should think through.