Details
Type: Improvement
Status: Open
Priority: Major
Resolution: Unresolved
Description
I'm doing some testing where I deleted all the rows in a very large table and then reinserted a bunch of new data. My scans are now relatively slow because they are skipping over all of those deleted rows, but at first glance it wasn't clear what was going on. If the /scans dashboard surfaced info indicating that a scan skipped over 1M deleted rows to return 1,000 live ones, it would make the performance much easier to understand.
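A minimal sketch of the kind of per-scan counters that could back such a display, assuming hypothetical names (ScanSkipStats, recordDeleted, recordLive are illustrative only and do not reflect any existing class or metrics API in the project):

```java
/**
 * Hypothetical per-scan counters for live vs. deleted entries.
 * Names are illustrative only; this is not the project's actual API.
 */
public class ScanSkipStats {
    private long deletedEntriesSkipped;
    private long liveEntriesReturned;

    public void recordDeleted() {
        deletedEntriesSkipped++;
    }

    public void recordLive() {
        liveEntriesReturned++;
    }

    /** Summary string a dashboard row could show next to the scan. */
    public String summary() {
        long total = deletedEntriesSkipped + liveEntriesReturned;
        double skippedPct = total == 0 ? 0.0 : 100.0 * deletedEntriesSkipped / total;
        return String.format("scanned %d entries: %d live, %d deleted (%.1f%% skipped)",
                total, liveEntriesReturned, deletedEntriesSkipped, skippedPct);
    }

    public static void main(String[] args) {
        // The scenario from the description: ~1M deleted entries skipped
        // in order to return about 1,000 live rows.
        ScanSkipStats stats = new ScanSkipStats();
        for (int i = 0; i < 1_000_000; i++) stats.recordDeleted();
        for (int i = 0; i < 1_000; i++) stats.recordLive();
        System.out.println(stats.summary());
    }
}
```

Surfacing even a simple skipped/returned ratio like this next to each scan would make it obvious when a scan is spending most of its time stepping over deleted data.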