For throughput, BB is consistently about 10% faster on inserts and about equal on reads, across all row types
Since there isn't anything here wildly inconsistent with my results, I'd summarize it as ~10% faster on inserts, and about equal on reads, counter increments, and index inserts.
BB has substantially lower latency for large values on reads
I don't see how this test can be correct since the cost of parsing the query is identical no matter how wide the rows are, or how large the values.
Something is fishy with the BB stdev that might be worth investigating (generating extra garbage somehow?)
This was consistent with what I saw as well, though for the life of me I can't imagine what's causing it.
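One cheap way to test the "extra garbage" theory would be to count GC cycles around the benchmark run in each version. This is only a sketch using the standard management beans; the workload loop here is a hypothetical stand-in, not the actual insert benchmark:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcProbe {
    // Sum of collection counts across all registered collectors.
    static long gcCount() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long c = gc.getCollectionCount();
            if (c > 0) total += c;   // -1 means the count is undefined for this collector
        }
        return total;
    }

    public static void main(String[] args) {
        long before = gcCount();

        // Placeholder allocation-heavy loop; substitute the real workload here.
        StringBuilder sink = new StringBuilder();
        for (int i = 0; i < 100_000; i++) {
            sink.append(Integer.toString(i));
            if (sink.length() > (1 << 16)) sink.setLength(0);
        }

        long after = gcCount();
        System.out.println("GC cycles during workload: " + (after - before));
    }
}
```

If the BB build shows noticeably more cycles for the same workload, that would point at allocation churn as the source of the stdev noise; if not, the variance is coming from somewhere else.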
10% faster writes is a big enough deal that I'm in favor of committing the BB version for 1.1.
It's not nearly so compelling to me. 10% is definitely on the high side of making me stand up and take notice, but it's not enormous.
It's also limited to inserts, and requires that you completely saturate the processors to make it evident at all, which is not a typical workload. That doesn't make it irrelevant, just more relevant to those conducting benchmarks than to real users.
On the other side, what's at stake is increased complexity for an arbitrary number of clients, and a proven vector for bugs. And, to make this class of bug even more interesting, it has the potential to make otherwise identical queries return different results depending on whether they use the prepared statement API or the conventional one.
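To make the bug class concrete: assuming BB means a ByteBuffer-based value representation (an assumption; the thread doesn't spell it out), the classic trap is that relative reads mutate the buffer's position, so whichever code path touches a shared buffer first changes what a later path sees. A minimal, hypothetical sketch:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BufferAliasing {
    // A consumer that uses relative reads, silently advancing position().
    static String readRelative(ByteBuffer value) {
        byte[] out = new byte[value.remaining()];
        value.get(out);   // moves value.position() to value.limit()
        return new String(out, StandardCharsets.UTF_8);
    }

    // A defensive consumer reads through duplicate(), leaving the original untouched.
    static String readDuplicate(ByteBuffer value) {
        ByteBuffer dup = value.duplicate();   // shares bytes, independent position/limit
        byte[] out = new byte[dup.remaining()];
        dup.get(out);
        return new String(out, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer value = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));

        // If one code path (say, prepared-statement execution) consumes the buffer...
        String first = readRelative(value);
        // ...a second path (say, the conventional API) finds it already drained.
        String second = readRelative(value);

        System.out.println(first + "|" + second);   // prints "hello|"
    }
}
```

The method names and the prepared-vs-conventional framing here are illustrative, not taken from the patch; the point is only that one forgotten `duplicate()` anywhere in either path is enough to make the two APIs disagree.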
THAT BEING SAID: I've heard from enough people who were following the results as they came in to know that most people (engineers?) have a hard time looking past a simple faster/slower distinction, even when the difference in question was much less than 10%. If others likewise feel that we should give up this abstraction for 10% faster standard writes, I won't belabor the point further.