NOTE: These benchmarks contained an error that made hnswlib perform too well. See this comment for correct results.
I hooked up our HNSW implementation to ann-benchmarks, a widely used repo for benchmarking nearest-neighbor search libraries against large datasets. I found the results interesting and opened this issue to share and discuss. My benchmarking code can be found here. It's hacky and not in a committable state, but I'm happy to help anyone get set up with it.
Implementations:
- LuceneVectorsOnly: a baseline that only indexes vectors and performs a brute-force scan to determine the nearest neighbors
- LuceneHnsw: our HNSW implementation
- hnswlib: a C++ HNSW implementation from the author of the paper
Datasets:
- sift-128-euclidean: 1 million SIFT feature vectors, dimension 128, compared using Euclidean distance
- glove-100-angular: ~1.2 million GloVe word vectors, dimension 100, compared using cosine similarity
Results on sift-128-euclidean
Parameters: M=16, efConstruction=500
Results on glove-100-angular
Parameters: M=32, efConstruction=500
Notes on the benchmark:
- By default, ann-benchmarks retrieves the 10 nearest neighbors and computes recall against the true neighbors. The recall calculation has a small 'fudge factor': a returned neighbor counts as correct if its distance is within a small epsilon of the best true distances. Queries are executed serially to compute the QPS. (The first sketch after this list shows roughly how this works.)
- I chose parameters where hnswlib performed well, then passed the same parameters to Lucene HNSW. For index-time parameters, I set maxConn to M and beamWidth to efConstruction. For search parameters, I passed k through unchanged and set fanout to (num_cands - k), so that the beam search considers num_cands candidates in total. (The second sketch after this list shows this mapping.) Note that our default value for beamWidth is 16, which is really low; I wasn't able to obtain acceptable recall until I bumped it to roughly 500 to match the hnswlib default.
- I force-merged to one segment before running searches, since this gives the best recall and QPS, and also matches hnswlib, which searches a single structure.
- It'd be really nice to extend luceneutil to measure vector search recall in addition to latency. That would help ensure we're benchmarking a realistic scenario instead of accidentally indexing or searching at very low recall. Tracking recall would also guard against subtle, unintentional changes to the algorithm: it's easy to introduce one while refactoring, and with approximate algorithms, unit tests don't guard well against this.
- Lucene HNSW gives a great speed-up over the baseline without sacrificing too much recall. But it doesn't perform as well as hnswlib in terms of either recall or QPS. We wouldn't expect the results to line up perfectly, since Lucene doesn't actually implement HNSW: the current algorithm isn't hierarchical and uses only a single graph layer. Does this difference mean we're leaving performance 'on the table' by not using layers, which I don't think would add much index time or space? Or are there other algorithmic improvements that would help close the gap?
- Setting beamWidth to 500 really slowed down indexing. I'll open a separate issue with indexing speed results to keep this one focused on search.
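To make the recall note concrete, here's a minimal Python sketch of the measurement as I understand it: recall@k with an epsilon 'fudge factor', and QPS from serially executed queries. This is a paraphrase of ann-benchmarks' behavior rather than its actual code; the threshold choice is my assumption, and search_fn is a placeholder for whichever library is under test.

```python
import time

def recall_with_epsilon(true_distances, result_distances, k, epsilon=1e-3):
    # A returned neighbor counts as a hit if its distance is within a small
    # epsilon of the true k-th nearest neighbor's distance. This mirrors the
    # 'fudge factor' described above (the exact threshold is my assumption).
    threshold = true_distances[k - 1] + epsilon
    return sum(d <= threshold for d in result_distances[:k]) / k

def measure_qps(queries, search_fn, k=10):
    # Queries run serially, so QPS is simply queries over wall-clock time.
    start = time.time()
    results = [search_fn(q, k) for q in queries]
    return results, len(queries) / (time.time() - start)
```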
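And here's a sketch of the parameter mapping from the notes above. The names are hypothetical stand-ins for my hacky wrapper code, not a real API; the arithmetic is the part that matters.

```python
# hnswlib-style parameters, as used for the sift-128-euclidean run.
M = 16
EF_CONSTRUCTION = 500

# Index time: Lucene's maxConn plays the role of M, and beamWidth plays
# the role of efConstruction.
max_conn = M
beam_width = EF_CONSTRUCTION

def lucene_search_params(k, num_cands):
    # Search time: k passes through unchanged, and fanout is chosen so the
    # beam search considers num_cands candidates in total (hnswlib's ef).
    return {"k": k, "fanout": num_cands - k}
```

With this mapping, a run at num_cands=100 and k=10 translates to fanout=90 on the Lucene side.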