I think there are two rather separate ideas here?
First, IW should not have to "know" how to get a TokenStream from an
IndexableField; it should simply ask the field for its token stream and
iterate the tokens.
Under the hood (in the IndexableField impl) is where the logic for
tokenized or not, Reader vs String vs pre-created token stream,
etc. should live, instead of being hardwired inside the indexer. Maybe
an app has a fully custom way to make a token stream for the field...
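To make the first idea concrete, here's a minimal sketch of a field that owns the "how do I become a token stream" decision. All names (SketchField, tokenStream, the String standing in for a real TokenStream) are hypothetical placeholders, not Lucene's actual API; the point is only that IW would call field.tokenStream() and never branch on Reader/String/pre-built itself:

```java
import java.io.Reader;

// Sketch only: the field, not IndexWriter, decides how to produce
// its token stream. A String stands in for a real TokenStream here
// so the dispatch logic is visible and testable.
final class SketchField {
    private final String value;          // string value, may be null
    private final Reader reader;         // reader value, may be null
    private final String preBuiltStream; // stands in for an app-created TokenStream
    private final boolean tokenized;

    SketchField(String value, Reader reader, String preBuiltStream, boolean tokenized) {
        this.value = value;
        this.reader = reader;
        this.preBuiltStream = preBuiltStream;
        this.tokenized = tokenized;
    }

    // IndexWriter would only ever call this; the per-field logic
    // (pre-built vs Reader vs String vs untokenized) lives here.
    String tokenStream() {
        if (preBuiltStream != null) return preBuiltStream;  // app-supplied stream wins
        if (!tokenized) return "single-token:" + value;     // NOT_ANALYZED-style field
        if (reader != null) return "analyzed-reader";       // would analyze the Reader
        return "analyzed:" + value;                         // would analyze the String
    }
}
```

A fully custom field impl could then override tokenStream() however it likes, and the indexer never knows the difference.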
Likewise, for multi-valued fields, IW shouldn't "see" the separate
values; it should just receive a single token stream, and under the
hood (in Document/Field impl) it's concatenating separate token
streams, adding posIncr/offset gaps, etc. This too is currently
hardwired in the indexer but shouldn't be. Maybe an app wants to insert custom
"separator" tokens between the values...
(And I agree: as a pre-requisite we need to fix Analyzer to disallow
non-reused token streams; otherwise we can't concatenate without
copying attributes across streams.)
If IW still receives the analyzer and simply passes it through when asking
for the token stream, I think that's fine for now. In the future, I
think IW should not receive analyzer (ie, it should be agnostic to how
the app creates token streams); rather, each FieldType would hold the
analyzer for that field. However, that sounds contentious, so let's
leave it for another day.
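The "each FieldType holds the analyzer" idea could look something like the sketch below. These names (TypedField, the Function standing in for an Analyzer) are hypothetical, not a proposal for the real FieldType API; the point is that IW stays agnostic and never receives an Analyzer at all:

```java
import java.util.function.Function;

// Sketch: the field (via its type) carries its own analysis, so
// IndexWriter just asks for tokens and never sees an Analyzer.
final class TypedField {
    final String name;
    final String value;
    final Function<String, String[]> analyzer; // stands in for an Analyzer on the FieldType

    TypedField(String name, String value, Function<String, String[]> analyzer) {
        this.name = name;
        this.value = value;
        this.analyzer = analyzer;
    }

    // The indexer-facing call: analysis is resolved per field type.
    String[] tokens() {
        return analyzer.apply(value);
    }
}
```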
Second, there's this new idea to "invert" TokenStream into an AttrConsumer,
which I think is a separate concern? I'm actually not sure I like such an
approach... it seems more confusing for simple usage? Ie, if I want
to analyze some text and iterate over the tokens... suddenly, instead
of a few lines of local code, I have to make a class instance with a
method that receives each token? It seems more convoluted? I
mean, for Lucene's limited internal usage of token streams, this is
fine, but for others who consume token streams... it seems like more work.
Anyway, I think we should open a separate issue for "invert
TokenStream into AttrConsumer"?
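To illustrate the ergonomic point above, here's the pull style vs the "inverted" push style side by side, with plain Strings standing in for tokens/attributes (names are hypothetical, not Lucene API). Pull is a few local lines; push forces the caller to hand over a consumer:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

final class Streams {
    // Pull style: the caller drives the loop, local and simple.
    static List<String> pull(Iterator<String> tokens) {
        List<String> out = new ArrayList<>();
        while (tokens.hasNext()) {
            out.add(tokens.next());
        }
        return out;
    }

    // Push style ("inverted" AttrConsumer idea): the stream drives,
    // and the caller must supply an object/lambda to receive tokens.
    static void push(Iterator<String> tokens, Consumer<String> consumer) {
        while (tokens.hasNext()) {
            consumer.accept(tokens.next());
        }
    }
}
```

Both produce the same tokens; the difference is purely who owns the loop, which is exactly what makes push feel heavier for simple "analyze this text and look at the tokens" usage.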