A particular HBase query with highly selective key filters runs into a code bug that produces a bogus, huge cardinality estimate.
HbaseScanNode.computeStats() attempts to compute table cardinality by calling HBaseTable.getEstimatedRowStats(), which in turn calls (in the latest versions) FeHBaseTable.getEstimatedRowStats().
This code tries to estimate cardinality by:
- Scanning a sample of regions.
- Getting each region's size.
- Sampling a set of rows to estimate the average row width.
Once we know the size of the regions we need to scan, and the average row width, we can compute the scan cardinality.
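The steps above can be sketched roughly as follows. This is an illustrative reconstruction, not Impala's actual implementation; the method and parameter names are hypothetical:

```java
import java.util.List;

public class RowStatsSketch {
    // Hypothetical sketch of the estimation logic described above.
    // sampledRegionSizes: byte sizes of the regions that will be scanned.
    // sampledRowWidths: widths (in bytes) of rows sampled from those regions.
    static long estimateRows(List<Long> sampledRegionSizes,
                             List<Integer> sampledRowWidths) {
        long totalBytes = 0;
        for (long size : sampledRegionSizes) totalBytes += size;

        double widthSum = 0;
        for (int w : sampledRowWidths) widthSum += w;
        // If no rows were sampled, avgWidth is 0 -- this is the bug trigger.
        double avgWidth = sampledRowWidths.isEmpty()
                ? 0.0 : widthSum / sampledRowWidths.size();

        // Cardinality = bytes to scan / average row width.
        // When avgWidth == 0 this divides by zero (see below).
        return (long) (totalBytes / avgWidth);
    }
}
```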
The problem in this particular query is that the predicates are so selective that no regions match, so the sampled average row width is zero. Dividing the region size (as a double) by 0 yields infinity; casting that to a long yields Long.MAX_VALUE. We then use that as our (highly bogus) cardinality estimate.
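The failure mode follows directly from Java's floating-point and narrowing-conversion semantics, which a couple of lines demonstrate:

```java
public class InfCastDemo {
    public static void main(String[] args) {
        // IEEE-754: dividing a positive double by 0.0 yields positive
        // infinity rather than throwing an exception.
        double bogus = 4096.0 / 0.0;

        // JLS narrowing conversion: casting infinity to long saturates
        // at Long.MAX_VALUE (9223372036854775807).
        long cardinality = (long) bogus;

        System.out.println(cardinality == Long.MAX_VALUE);  // prints true
    }
}
```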
The code must:
- Detect the division-by-zero (no sampled rows) case.
- Use an alternative estimate (such as multiplying the total table row count from HMS by the filter selectivity).
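A minimal sketch of the proposed guard, assuming the HMS row count and the planner's filter selectivity are available at this point; the method and parameter names are hypothetical, not Impala's actual API:

```java
public class CardinalityFallback {
    // Hypothetical helper: estimate scan cardinality, falling back to
    // HMS stats when no rows were sampled (avgRowWidth == 0).
    static long estimateCardinality(double regionSizeBytes, double avgRowWidth,
                                    long hmsRowCount, double selectivity) {
        if (avgRowWidth > 0) {
            // Normal path: bytes to scan divided by average row width.
            return (long) (regionSizeBytes / avgRowWidth);
        }
        // Division-by-zero case: scale the table-level row count from HMS
        // by the estimated filter selectivity instead.
        return Math.round(hmsRowCount * selectivity);
    }
}
```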