Further investigations showed that there is a difference between using this filter/analyzer and the current MaxFieldLength setting in IndexWriter. IndexWriter uses the given MaxFieldLength as the maximum across all instances of the same field name. So if you add 100 fields "foo" (each with 1,000 terms) and keep the default limit of 10,000 tokens, DocInverter will index only 10 of these field instances (10,000 terms in total) and the rest will be suppressed.
If you use the filter, the limit applies per TokenStream, so in the example above all field instances are indexed, producing 100,000 terms.
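To make the difference concrete, here is a plain-Java sketch (not actual Lucene code; names are illustrative only) of the two counting strategies: one shared counter across all instances of a field versus a fresh counter per TokenStream.

```java
public class LimitComparison {
    // IndexWriter-style: one shared counter across all instances of the field.
    static int indexWithGlobalLimit(int instances, int termsPerInstance, int limit) {
        int indexed = 0;
        for (int i = 0; i < instances; i++) {
            for (int t = 0; t < termsPerInstance && indexed < limit; t++) {
                indexed++;
            }
        }
        return indexed;
    }

    // Filter-style: the limit applies to each TokenStream separately.
    static int indexWithPerStreamLimit(int instances, int termsPerInstance, int limit) {
        int indexed = 0;
        for (int i = 0; i < instances; i++) {
            indexed += Math.min(termsPerInstance, limit);
        }
        return indexed;
    }

    public static void main(String[] args) {
        // 100 "foo" instances with 1,000 terms each, limit 10,000:
        System.out.println(indexWithGlobalLimit(100, 1000, 10000));    // 10000
        System.out.println(indexWithPerStreamLimit(100, 1000, 10000)); // 100000
    }
}
```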
But the current IndexWriter code has a bug, too: the check for too many terms is done after the first token of each input stream has been indexed, so in the above example IW will index 10,090 terms, because once the limit is reached, each remaining stream still indexes one term (10,000 plus one extra term from each of the 90 leftover instances). This could be fixed (if really needed, as MaxFieldLength in IW should be deprecated anyway) by moving the check up, so that the field is not indexed at all and no TokenStream is created.
I just wanted to add this difference here for further discussion.