Looks like one of my patches stepped on the evil test (blame -> git apply). I'll start with the failure first:
When I apply both, I still see test failures.
A quick investigation reveals that the bounding box for the point-distance query is slightly off (as in upwards of a degree in both directions). At present there are two approaches to computing that bbox:
1. Use Vincenty's formula to compute a destination location along a range (distance) and bearing (azimuth). Since Vincenty can fail to converge on nearly antipodal points (hence the need for iteration thresholds and the fudge-factor whack-a-mole game seen before), this can obviously be problematic for large distance queries.
2. Inverse haversine. USGS claims an average error of 22 km over large distances, and this error certainly falls within that threshold (it's 15 km, if you're interested).
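To make the convergence issue in option 1 concrete, here is a sketch of Vincenty's inverse formula with an explicit iteration cap; the constants are WGS-84, but the class and method names are illustrative, not the actual utility code discussed here. When the lambda iteration hits the cap (which happens for nearly antipodal inputs), it bails out with `Double.NaN` rather than looping forever:

```java
class VincentySketch {
  static final double A = 6378137.0;            // WGS-84 semi-major axis (m)
  static final double F = 1.0 / 298.257223563;  // WGS-84 flattening
  static final double B = A * (1.0 - F);        // semi-minor axis (m)
  static final int MAX_ITERS = 200;             // the "iteration threshold"

  /** Geodesic distance in meters, or Double.NaN when the lambda iteration
   *  fails to converge (nearly antipodal points). */
  static double distance(double lat1, double lon1, double lat2, double lon2) {
    double L = Math.toRadians(lon2 - lon1);
    double u1 = Math.atan((1 - F) * Math.tan(Math.toRadians(lat1)));
    double u2 = Math.atan((1 - F) * Math.tan(Math.toRadians(lat2)));
    double sinU1 = Math.sin(u1), cosU1 = Math.cos(u1);
    double sinU2 = Math.sin(u2), cosU2 = Math.cos(u2);

    double lambda = L, sinSigma = 0, cosSigma = 0, sigma = 0;
    double cosSqAlpha = 0, cos2SigmaM = 0;
    for (int i = 0; ; i++) {
      if (i == MAX_ITERS) return Double.NaN;    // gave up: no convergence
      double sinLambda = Math.sin(lambda), cosLambda = Math.cos(lambda);
      sinSigma = Math.sqrt(Math.pow(cosU2 * sinLambda, 2)
          + Math.pow(cosU1 * sinU2 - sinU1 * cosU2 * cosLambda, 2));
      if (sinSigma == 0) return 0;              // coincident points
      cosSigma = sinU1 * sinU2 + cosU1 * cosU2 * cosLambda;
      sigma = Math.atan2(sinSigma, cosSigma);
      double sinAlpha = cosU1 * cosU2 * sinLambda / sinSigma;
      cosSqAlpha = 1 - sinAlpha * sinAlpha;
      cos2SigmaM = cosSqAlpha == 0 ? 0          // equatorial geodesic
          : cosSigma - 2 * sinU1 * sinU2 / cosSqAlpha;
      double c = F / 16 * cosSqAlpha * (4 + F * (4 - 3 * cosSqAlpha));
      double prev = lambda;
      lambda = L + (1 - c) * F * sinAlpha * (sigma + c * sinSigma
          * (cos2SigmaM + c * cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM)));
      if (Math.abs(lambda - prev) < 1e-12) break;
    }
    double uSq = cosSqAlpha * (A * A - B * B) / (B * B);
    double bigA = 1 + uSq / 16384 * (4096 + uSq * (-768 + uSq * (320 - 175 * uSq)));
    double bigB = uSq / 1024 * (256 + uSq * (-128 + uSq * (74 - 47 * uSq)));
    double deltaSigma = bigB * sinSigma * (cos2SigmaM + bigB / 4
        * (cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM)
           - bigB / 6 * cos2SigmaM * (-3 + 4 * sinSigma * sinSigma)
             * (-3 + 4 * cos2SigmaM * cos2SigmaM)));
    return B * bigA * (sigma - deltaSigma);
  }
}
```

The iteration cap plus the tolerance constant are exactly the knobs the "fudge factor whack-a-mole" refers to: tighten one and near-antipodal inputs stop converging, loosen it and accuracy suffers.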
The fix could come in a few ways:
1. Dynamically expand the computed bbox based on the computed error (using distance as the independent variable). Maybe overkill?
2. Add a static "fudge factor" of 1 degree to the min/max lon/lat. This would probably need to be verified through some beasting.
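Option 2 is a one-liner per coordinate; the only subtlety is clamping at the poles (dateline wrapping is left to whatever crossing logic the query already has). A sketch, with an illustrative name:

```java
class BBoxFudge {
  /** Pad a {minLon, minLat, maxLon, maxLat} bbox by a static fudge factor in
   *  degrees, clamping latitude at the poles and longitude at +/-180. */
  static double[] expand(double minLon, double minLat,
                         double maxLon, double maxLat, double fudgeDegrees) {
    return new double[] {
        Math.max(minLon - fudgeDegrees, -180.0),
        Math.max(minLat - fudgeDegrees, -90.0),
        Math.min(maxLon + fudgeDegrees, 180.0),
        Math.min(maxLat + fudgeDegrees, 90.0)
    };
  }
}
```

Since the bbox is only a coarse filter ahead of the exact distance check, over-expanding costs some extra candidate hits but never correctness, which is what makes the static fudge factor defensible.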
In closestPointOnBBox, should you maybe use Double.NaN as the marker value instead of 0.0, since 0.0 can legitimately occur?
The logic handles it: 0, 0 means the closest point is the same as centerLon, centerLat, which is what it gets set to in the method. Thanks for getting me to look at that more closely, though; there's a superfluous logic block.
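To illustrate why the 0, 0 case is benign, here is a minimal clamp-to-box sketch (not the actual method under review): the result starts at the query center, so "no offset applied" and "closest point is the center itself" coincide by construction, and no sentinel check is needed at all.

```java
class ClosestPoint {
  /** Closest {lon, lat} on the bbox to the query center. If the center lies
   *  inside the box, the center itself is returned unchanged. */
  static double[] closestPointOnBBox(double minLon, double minLat,
                                     double maxLon, double maxLat,
                                     double centerLon, double centerLat) {
    double lon = centerLon, lat = centerLat; // default: center is inside the box
    if (centerLon < minLon) lon = minLon;
    else if (centerLon > maxLon) lon = maxLon;
    if (centerLat < minLat) lat = minLat;
    else if (centerLat > maxLat) lat = maxLat;
    return new double[] { lon, lat };
  }
}
```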
A general geo API question: why do we sometimes use x/y (rMinX, rMinY) and other times use lon/lat (centerLon, centerLat)?
Short answer: lazy inconsistency. Longer answer: I like to use x/y when I'm either going to swap in cartesian logic or I'm lazy typing. Since neither is a good answer, I agree it would be a good idea to refactor: lon/lat for geodesic, x/y for cartesian.
Good comments on the code cleanup.