The first patch looks exactly like what I described in our first exchange of ideas!
There are smaller (but solvable) problems and one tricky optimization: FieldCacheDocIdSet has some special cases that happen to work with the implementation here, but they are unclean, should violate some assertions, and should be fixed:
- FieldCacheDocIdSet expects that the match() method throws ArrayIndexOutOfBoundsException when the FieldCache array index is out of bounds. With the FixedBitSet behind this implementation of the FieldCache this basically works, but it should violate some code assertions added by Mike McCandless (I am not sure why the test case does not hit this; I assume it does not because in trunk, bits() on DocIdSet intercepts this, as our filter is not sparse, so it switches to random access).
- FieldCacheDocIdSet should maybe be made non-private and refactored out of FieldCacheRangeFilter.
- The positive case could be optimized: an instanceof check in the getDocIdSet() method could detect the case that the FieldCacheImpl itself already returns a FixedBitSet/DocIdSet, and return it directly:
final Bits docsWithField = FieldCache.DEFAULT.getDocsWithField(context.reader, field);
if (negate && docsWithField instanceof DocIdSet) return (DocIdSet) docsWithField;
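To make the AIOOBE expectation from the first point concrete, here is a minimal self-contained sketch of the pattern (plain Java, not the actual Lucene classes; the array, method names, and sentinel are all illustrative): matchDoc() indexes into the cache array without a bounds check, and the ArrayIndexOutOfBoundsException past the last doc is caught and used to terminate iteration:

```java
public class MatchAioobeSketch {
  // stand-in for a FieldCache value array, one slot per doc id
  static final int[] values = {3, 7, 3, 9, 3};

  static boolean matchDoc(int doc) {
    // intentionally no bounds check: doc >= values.length throws AIOOBE
    return values[doc] == 3;
  }

  /** Advances to the next matching doc after {@code doc}, relying on AIOOBE as the end marker. */
  static int nextMatch(int doc) {
    try {
      while (true) {
        doc++;
        if (matchDoc(doc)) return doc;
      }
    } catch (ArrayIndexOutOfBoundsException e) {
      return Integer.MAX_VALUE; // NO_MORE_DOCS-style sentinel
    }
  }

  public static void main(String[] args) {
    StringBuilder sb = new StringBuilder();
    int doc = -1;
    while ((doc = nextMatch(doc)) != Integer.MAX_VALUE) {
      sb.append(doc).append(' ');
    }
    System.out.println(sb.toString().trim()); // prints "0 2 4"
  }
}
```

This is exactly the kind of exception-as-control-flow that bounds assertions would (rightly) complain about.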
In general, the other cases can easily be handled by the default "stupid" implementation like you did (stupid in the sense that it slowly iterates by doc++ and in trunk directly uses the Bits), but once FieldCacheRangeFilter.FieldCacheDocIdSet is factored out, we could optimize this and maybe have a better negation.
In all cases I don't like the double negation in this Filter.
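For reference, the brute-force doc++ fallback, including the negation, can be sketched as follows; Bits here is a minimal stand-in for org.apache.lucene.util.Bits, and collect() is a hypothetical helper, not actual Lucene code:

```java
import java.util.ArrayList;
import java.util.List;

public class NegatingIteratorSketch {
  /** Minimal stand-in for org.apache.lucene.util.Bits. */
  interface Bits {
    boolean get(int index);
    int length();
  }

  /** Brute-force doc++ iteration over a random-access Bits view, with optional negation. */
  static List<Integer> collect(Bits docsWithField, boolean negate) {
    List<Integer> out = new ArrayList<>();
    for (int doc = 0; doc < docsWithField.length(); doc++) {
      // negate flips the per-doc test: docs *without* the field match instead
      if (docsWithField.get(doc) != negate) out.add(doc);
    }
    return out;
  }

  public static void main(String[] args) {
    boolean[] raw = {true, false, true, true, false};
    Bits bits = new Bits() {
      public boolean get(int i) { return raw[i]; }
      public int length() { return raw.length; }
    };
    System.out.println(collect(bits, false)); // prints "[0, 2, 3]"
    System.out.println(collect(bits, true));  // prints "[1, 4]"
  }
}
```

Avoiding the per-doc boolean flip (e.g. by returning a pre-negated bit set directly) is where the factored-out FieldCacheDocIdSet could do better.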
I'll work on these problems and make this filter work better. Should I take this issue and solve the problems first? I also want to backport the FieldCacheTermsFilter code-duplication removal from trunk to 3.x, so some cleanup is really needed!
I will come up with a patch addressing these problems later today or tomorrow.