1. If `memoryUsage` is set improperly, for example too small to store an instance;
2. The blockify+GMM implementation reuses two matrices whose shapes depend on the current blockSize:
When implementing blockify+GMM, I found that not pre-allocating those matrices causes a serious performance regression (maybe 3~4x slower; I forgot the exact numbers);
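A minimal sketch of the pre-allocation pattern described above, assuming the simplest case of one scratch buffer sized by the block size. The class and method names here are hypothetical illustrations, not Spark's actual GMM code; the point is only that the buffer is allocated once outside the per-block loop instead of once per block.

```java
public class PreallocDemo {
    // Sums a list of equally sized blocks, reusing a single pre-allocated
    // scratch buffer whose size depends on blockSize.
    public static double[] sumBlocks(double[][] blocks, int blockSize) {
        // Allocated once, outside the loop; avoiding per-block allocation
        // is the pattern that prevented the regression described above.
        double[] scratch = new double[blockSize];
        double[] total = new double[blockSize];
        for (double[] block : blocks) {
            // Reuse scratch for each block instead of allocating a new array.
            System.arraycopy(block, 0, scratch, 0, blockSize);
            for (int i = 0; i < blockSize; i++) {
                total[i] += scratch[i];
            }
        }
        return total;
    }

    public static void main(String[] args) {
        double[][] blocks = {{1, 2}, {3, 4}};
        double[] total = sumBlocks(blocks, 2);
        System.out.println(total[0] + "," + total[1]); // prints "4.0,6.0"
    }
}
```

If the scratch array were instead created inside the loop, every block would trigger a fresh allocation and extra GC pressure, which is the kind of overhead the reported 3~4x slowdown suggests.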
3. In MLP, three pre-allocated objects also depend on numRows:
I am not very familiar with the MLP implementation and could not find any documentation about this pre-allocation, but I suspect disabling it would also cause a regression, since those objects look relatively large.