> You are subject to reduced availability only for the portions of your application that use the higher consistency APIs.
(a) that's not the case in your patch, but let's accept that it's possible
(b) you can already get per-key high consistency (with the same availability price) by specifying block_for=N
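To make the availability price concrete, here is a toy model (illustrative only, not Cassandra's actual API or replication code) of what a per-key block_for=N write buys and costs: the write is applied to whatever replicas are reachable, but only reports success if at least N of them acknowledge it.

```python
# Toy model of a per-key block_for=N write. The names (ToyCluster,
# insert) are hypothetical; this only illustrates the semantics:
# strong per-key consistency is paid for with availability, because
# the write fails outright when fewer than N replicas are up.

class ToyCluster:
    def __init__(self, replica_count):
        self.replicas = [{} for _ in range(replica_count)]
        self.up = [True] * replica_count

    def insert(self, key, value, block_for):
        acks = 0
        for i, replica in enumerate(self.replicas):
            if self.up[i]:
                replica[key] = value
                acks += 1
        if acks < block_for:
            # analogous to an UnavailableException: not enough live replicas
            raise RuntimeError(
                "only %d acks, needed %d" % (acks, block_for))
        return acks

cluster = ToyCluster(replica_count=3)
cluster.insert("k", "v", block_for=2)    # succeeds: all 3 replicas up

cluster.up[1] = cluster.up[2] = False    # two replicas go down
try:
    cluster.insert("k", "v2", block_for=2)
except RuntimeError as e:
    print("write rejected:", e)          # consistency kept, availability lost
```

The point being: this knob already exists per key, so a separate high-consistency API only changes where you pay, not whether you pay.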
> So, we can decide we never want atomicity guarantees for anything other than a single insert or get.
Perhaps not, but if there is an approach worth trading clean design away for, this isn't it. The approach here only buys you better atomicity across keys that happen to live on the same master node, which is dubiously useful. Definitely not worth the price.
> BTW using ZK based locks will lead to the very same availability loss
Yes, you always lose availability when you choose strong consistency instead. I'm aware of the CAP theorem.
> except at much worse performance.
http://hadoop.apache.org/zookeeper/docs/current/zookeeperOver.html#Performance leads me to believe that ZK-based locking will be fine for a lot of people, but my position does not change if I am wrong on that point.
Trading performance for clean design is often acceptable. This is what Megastore does, layering transactions on top of Bigtable. They get terrible write performance (if AppEngine is indeed built on Megastore, which seems highly probable), but that's okay, at least for some apps.
In the spirit of Megastore (http://perspectives.mvdirona.com/2008/07/10/GoogleMegastore.aspx), doing block_for=N writes onto a CommitLog column family might be an interesting approach. It's not clear to me, though, that reader transactions could avoid partial reads. (Checking the xlog version before and after the read might do the trick... just throwing out ideas.)
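The version-check idea above can be sketched in a few lines. Everything here is hypothetical (ToyStore, commit, read_consistent are not a real Cassandra interface); it just shows the optimistic-read shape: snapshot the commit-log version, read, re-check the version, and retry if a commit raced the read.

```python
# Sketch of "check the xlog version before and after the read":
# a reader only accepts a multi-key snapshot if no commit landed
# while it was being assembled, avoiding partial reads without locks.

class ToyStore:
    def __init__(self):
        self.data = {}
        self.xlog_version = 0

    def commit(self, updates):
        # a multi-key write: apply, then stamp the "xlog"
        self.data.update(updates)
        self.xlog_version += 1

    def read_consistent(self, keys, max_retries=10):
        for _ in range(max_retries):
            before = self.xlog_version
            snapshot = {k: self.data.get(k) for k in keys}
            if self.xlog_version == before:
                return snapshot  # no commit raced us: snapshot is whole
            # a writer got in between; try again
        raise RuntimeError("too much write contention; giving up")

store = ToyStore()
store.commit({"a": 1, "b": 1})
print(store.read_consistent(["a", "b"]))  # {'a': 1, 'b': 1}
```

In a real system the version checks would be reads against the CommitLog column family, and the interesting (unsolved, per the above) part is making the "did anything commit in between" check cheap and race-free across nodes.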
If neither of those turns out to be acceptable, I'm okay with that. It's far better to have a clear design vision than to try to be all things to all people.