Details
Type: Sub-task
Status: Resolved
Priority: Major
Resolution: Incomplete
Affects Version/s: 2.1.1
Fix Version/s: None
Description
In the logistic regression gradient update, we currently compute the gradient one row at a time. If we blocked rows together, we could do a blocked gradient update that leverages the BLAS GEMM operation.
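As a rough illustration of the idea (not the actual MLlib code), the sketch below contrasts the current row-at-a-time accumulation with a blocked update; it uses Breeze for brevity, and the names rowAtATimeGradient / blockedGradient are hypothetical. For binary logistic regression the blocked product is a GEMV; with a coefficient matrix (e.g. the multinomial case) the same pattern becomes a GEMM.

{code:scala}
import breeze.linalg.{DenseMatrix, DenseVector}
import breeze.numerics.sigmoid

// Current style: one level-1 BLAS-style update (dot + axpy) per row.
def rowAtATimeGradient(x: DenseMatrix[Double], y: DenseVector[Double],
                       w: DenseVector[Double]): DenseVector[Double] = {
  val grad = DenseVector.zeros[Double](w.length)
  for (i <- 0 until x.rows) {
    val margin = x(i, ::) * w                  // dot product for one row
    val multiplier = sigmoid(margin) - y(i)    // dL/dmargin for logistic loss
    grad += x(i, ::).t * multiplier            // axpy-style update
  }
  grad
}

// Blocked style: one matrix product over the whole block of rows.
def blockedGradient(x: DenseMatrix[Double], y: DenseVector[Double],
                    w: DenseVector[Double]): DenseVector[Double] = {
  val margins = x * w                          // all margins in one BLAS call
  val multipliers = sigmoid(margins) - y
  x.t * multipliers                            // single matrix-vector product
}
{code}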
On high-dimensional dense datasets, I've observed ~10x speedups. The problem here, though, is that it likely won't improve the sparse case, so we need to keep both implementations around, and this blocked algorithm will require caching a new dataset of type:
BlockInstance(label: Vector, weight: Vector, features: Matrix)
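For concreteness, here is a hedged sketch of what such a cached type might look like, using the spark.ml linalg types; the InstanceRow stand-in, the fromRows helper, and the one-block-per-group layout are illustrative assumptions, not a committed design.

{code:scala}
import org.apache.spark.ml.linalg.{DenseMatrix, Matrix, Vector, Vectors}

// Minimal stand-in for one training row (label, weight, features).
case class InstanceRow(label: Double, weight: Double, features: Vector)

case class BlockInstance(label: Vector, weight: Vector, features: Matrix)

object BlockInstance {
  // Stack a group of rows into one dense block so the gradient can be
  // computed with a single BLAS call instead of one call per row.
  def fromRows(rows: Seq[InstanceRow]): BlockInstance = {
    val numFeatures = rows.head.features.size
    val values = rows.flatMap(_.features.toArray).toArray  // row-major data
    BlockInstance(
      label = Vectors.dense(rows.map(_.label).toArray),
      weight = Vectors.dense(rows.map(_.weight).toArray),
      // isTransposed = true: interpret values as a rows x numFeatures
      // row-major matrix
      features = new DenseMatrix(rows.length, numFeatures, values, true))
  }
}
{code}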
In the past, we have avoided caching anything besides the original dataset passed to train, because doing so adds memory overhead when the user has already cached that dataset for other reasons. Here, I'd like to discuss whether we think this patch would be worth the investment, given that it only improves a subset of the use cases.