For the Kudu 1.4 release, I'll be working to improve node density.
Here's a brief primer on Kudu's scalability targets today:
- We recommend no more than 4 TB of total data per node. This is specific to Kudu data blocks, so this data is post-encoding and post-compression.
- We recommend no more than 1000 partitions (post-replication) per node.
- We recommend no more than 100 nodes per cluster.
- We recommend no more than 60 partitions per table per tserver.
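Taken together, these recommendations bound what a single cluster can hold today. A quick back-of-the-envelope calculation, using only the numbers listed above:

```python
# Cluster-wide capacity implied by the current recommendations.
# All inputs come straight from the limits listed above.
MAX_NODES = 100
MAX_DATA_TB_PER_NODE = 4        # post-encoding, post-compression
MAX_PARTITIONS_PER_NODE = 1000  # post-replication

max_cluster_data_tb = MAX_NODES * MAX_DATA_TB_PER_NODE
max_cluster_partitions = MAX_NODES * MAX_PARTITIONS_PER_NODE

print(max_cluster_data_tb)    # 400 -> 400 TB of on-disk data per cluster
print(max_cluster_partitions) # 100000 -> 100k partition replicas per cluster
```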
For 1.4, here's what we'd like to achieve:
- Up to 16 TB of total data per node, and ideally as much as 48 TB.
- Up to 100 "hot" partitions per node. In this context, "hot" means partitions that are actively servicing writes.
- Thousands of "cold" partitions per node. Put another way, it should be drastically cheaper to serve "cold" partitions than it is today.
- Maintain the "100 nodes per cluster" limit.
- Remove the "no more than 60 partitions per table per tserver" limit.
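For a sense of the jump these targets represent, here is the same back-of-the-envelope math comparing the current 4 TB/node recommendation against the 1.4 targets:

```python
# Scale-up factor from the current limits to the 1.4 targets.
# All inputs are taken from the two lists above.
CURRENT_TB_PER_NODE = 4
TARGET_TB_PER_NODE = 16
STRETCH_TB_PER_NODE = 48
MAX_NODES = 100  # unchanged: the 100-nodes-per-cluster limit stays

print(TARGET_TB_PER_NODE // CURRENT_TB_PER_NODE)   # 4  -> 4x density
print(STRETCH_TB_PER_NODE // CURRENT_TB_PER_NODE)  # 12 -> 12x density
print(MAX_NODES * TARGET_TB_PER_NODE)              # 1600 TB per cluster
```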
I'll link relevant JIRAs to this one and document, for each, which aspect of data scalability it affects.
||Summary||Status||Assignee||
|bootstrap should not replay logs that are known to be fully flushed|Open| |
|Explore reducing number of data blocks by tuning existing parameters|Open|Unassigned|
|Explore ways to reduce maintenance manager CPU load|Open|Unassigned|
|Coalesce RPCs destined for the same server|Open|Unassigned|
|Skip entire WAL files or sections of WAL files that do not need to be replayed|Open|Unassigned|