Description
Now that we can query on complex polygons without going OOM (LUCENE-7153), we should address the current 🐢 performance.
Currently, we use a basic crossings test (O(n) in the number of vertices) for boundary cases. We defer these expensive per-document checks on boundary cases to a two-phase iterator (LUCENE-7019, LUCENE-7109), so that they can be avoided when a document is e.g. excluded by filters, conjunctions, deletes, and so on. This is currently important for performance, but it's basically shoving the problem under the rug and hoping it goes away. At least for point-in-polygon, there are a number of faster techniques described here: http://erich.realtimerendering.com/ptinpoly/
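For reference, a minimal sketch of the standard O(n) even-odd (ray casting) crossings test, which is the kind of per-document check described above. The class and method names here are illustrative only, not Lucene's actual API:

```java
// Sketch of the classic even-odd crossings test: cast a horizontal ray west
// from the query point and count polygon edges it crosses; an odd count means
// the point is inside. Names are hypothetical, not Lucene's implementation.
public class CrossingsTest {

  /** Returns true if (lat, lon) is inside the polygon given as parallel vertex arrays. */
  static boolean contains(double[] polyLats, double[] polyLons, double lat, double lon) {
    boolean inside = false;
    int n = polyLats.length;
    for (int i = 0, j = n - 1; i < n; j = i++) {
      // consider only edges (j -> i) that straddle the point's latitude
      if ((polyLats[i] > lat) != (polyLats[j] > lat)) {
        // longitude where the edge crosses the point's latitude
        double crossLon = polyLons[j]
            + (lat - polyLats[j]) * (polyLons[i] - polyLons[j]) / (polyLats[i] - polyLats[j]);
        if (lon < crossLon) {
          inside = !inside; // ray crosses this edge
        }
      }
    }
    return inside;
  }

  public static void main(String[] args) {
    // unit square: (0,0) (0,1) (1,1) (1,0)
    double[] lats = {0, 0, 1, 1};
    double[] lons = {0, 1, 1, 0};
    System.out.println(contains(lats, lons, 0.5, 0.5)); // interior point
    System.out.println(contains(lats, lons, 1.5, 0.5)); // point above the square
  }
}
```

Every query point touches every edge, which is exactly why this gets slow for polygons with many vertices.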
Additionally, I am not sure how costly our "tree traversal" (rectangle intersection algorithms) is. Maybe it's nothing to be worried about, but it too likely gets bad if the polygon gets complex enough. These don't need to be exact, but they must behave like Java's Shape#contains (may conservatively return false) and Shape#intersects (may conservatively return true). Of course, if they are too inaccurate, then things can get slower.
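To illustrate that contract, here is a hypothetical sketch (not Lucene's API) of conservative rectangle relations built from only a precomputed polygon bounding box. A false positive from intersects() or a false negative from contains() just falls through to the exact per-document check, so either direction of error costs time but never correctness:

```java
// Conservative rectangle/polygon relations backed by a bounding box only.
// intersects() may return true for rectangles the polygon does not really
// touch; contains() may return false for rectangles it really does contain.
// All names here are illustrative assumptions.
public class ConservativeRelate {
  // precomputed bounding box of the polygon (an assumption of this sketch)
  final double minLat, maxLat, minLon, maxLon;

  ConservativeRelate(double minLat, double maxLat, double minLon, double maxLon) {
    this.minLat = minLat;
    this.maxLat = maxLat;
    this.minLon = minLon;
    this.maxLon = maxLon;
  }

  /** May conservatively return true: only rules out cells disjoint from the bounding box. */
  boolean intersects(double cellMinLat, double cellMaxLat, double cellMinLon, double cellMaxLon) {
    return cellMaxLat >= minLat && cellMinLat <= maxLat
        && cellMaxLon >= minLon && cellMinLon <= maxLon;
  }

  /** May conservatively return false: never claims containment it cannot cheaply prove. */
  boolean contains(double cellMinLat, double cellMaxLat, double cellMinLon, double cellMaxLon) {
    // a bounding box alone can never prove the polygon contains the cell,
    // so this placeholder always defers to the exact check
    return false;
  }
}
```

The point of the sketch is the asymmetry: the cheap structure is only allowed to err in the direction that the exact check can later correct.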
For any precomputed structures we should also consider memory usage: we shouldn't make a horrible space-for-speed tradeoff there.
Attachments
Issue Links
- contains: LUCENE-7222 Improve Polygon.contains() (Resolved)
- incorporates:
  - LUCENE-7242 LatLonTree should build a balanced tree (Closed)
  - LUCENE-7249 LatLonPoint polygon should use tree relate() (Closed)
  - LUCENE-7251 remove LatLonGrid (Closed)
  - LUCENE-7229 Improve Polygon.relate (Closed)
  - LUCENE-7239 Speed up LatLonPoint's polygon queries when there are many vertices (Closed)