Affects Version/s: 1.4.1
Fix Version/s: None
With large data sets (> 10M rows), setting start=<large number> and rows=<large number> is slow, and it gets slower the farther you get from start=0 with a complex query. Random sorting makes this slower still.
It would be nice to make looping through large data sets faster, for example by passing a pointer/cursor to the result set and iterating from it, or by supporting very large rows=<number> values. Then, within some interval (say 5 minutes), I could reference that cursor to continue the loop.
What do you think? The data set is too large for the cache to help.
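For reference, a minimal sketch of the deep-paging pattern described above, paging through results with growing start offsets. The query string and page size are hypothetical; only the parameter construction for Solr's select handler is shown.

```python
from urllib.parse import urlencode

def page_params(query, page_size, num_pages):
    """Yield select-handler query strings for successive pages.

    Each page re-runs the query with a larger start offset, which is
    why deep pages get progressively slower: the server must collect
    and skip start + rows documents on every request.
    """
    for page in range(num_pages):
        yield urlencode({
            "q": query,
            "start": page * page_size,  # grows without bound on deep pages
            "rows": page_size,
            "wt": "json",
        })

# Example: the third 1000-row page starts at offset 2000.
params = list(page_params("*:*", 1000, 3))
```

A cursor-style API would avoid the ever-growing start offset entirely.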
||Transition||Time In Source Status||Execution Times||Last Executer||Last Execution Date||
|Open → Resolved|337d 11h 29m|1|Grant Ingersoll|06/Oct/11 14:54|
||Field||Original Value||New Value||
|Status|Open [ 1 ]|Resolved [ 5 ]|
|Resolution| |Duplicate [ 3 ]|