Index: src/java/org/apache/lucene/analysis/TokenStream.java
===================================================================
--- src/java/org/apache/lucene/analysis/TokenStream.java (revision 809286)
+++ src/java/org/apache/lucene/analysis/TokenStream.java (working copy)
@@ -34,30 +34,30 @@
import org.apache.lucene.util.AttributeSource;
/**
- * A {@link TokenStream} enumerates the sequence of tokens, either from
+ * A <code>TokenStream</code> enumerates the sequence of tokens, either from
* {@link Field}s of a {@link Document} or from query text.
*
 * This is an abstract class. Concrete subclasses are:
 * <ul>
 * <li>{@link Tokenizer}, a <code>TokenStream</code> whose input is a Reader; and
+ * <li>{@link TokenFilter}, a <code>TokenStream</code> whose input is another
+ * <code>TokenStream</code>.
 * </ul>
 * The <code>TokenStream</code> API was introduced in Lucene 2.9. This API
* has moved from being {@link Token} based to {@link Attribute} based. While
* {@link Token} still exists in 2.9 as a convenience class, the preferred way
* to store the information of a {@link Token} is to use {@link AttributeImpl}s.
*
- * {@link TokenStream} now extends {@link AttributeSource}, which provides
- * access to all of the token {@link Attribute}s for the {@link TokenStream}.
+ * <code>TokenStream</code> now extends {@link AttributeSource}, which provides
+ * access to all of the token {@link Attribute}s for the <code>TokenStream</code>.
* Note that only one instance per {@link AttributeImpl} is created and reused
* for every token. This approach reduces object creation and allows local
* caching of references to the {@link AttributeImpl}s. See
* {@link #incrementToken()} for further details.
*
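+ * A minimal consumer loop with the attribute-based API might look like this
+ * (a sketch: the field name "content" and the use of {@link TermAttribute}
+ * are illustrative only):
+ * <pre>
+ *   TokenStream stream = analyzer.tokenStream("content", reader);
+ *   TermAttribute termAtt = (TermAttribute) stream.addAttribute(TermAttribute.class);
+ *   while (stream.incrementToken()) {
+ *     // the same TermAttribute instance is reused for every token
+ *     System.out.println(termAtt.term());
+ *   }
+ *   stream.end();
+ *   stream.close();
+ * </pre>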
- * The workflow of the new {@link TokenStream} API is as follows:
+ * The workflow of the new <code>TokenStream</code> API is as follows:
*
 * <ol>
 * <li>Instantiation of <code>TokenStream</code>/{@link TokenFilter}s which add/get
 * attributes to/from the {@link AttributeSource}.
 * <li>The consumer calls {@link TokenStream#reset()}.
 * <li>The consumer retrieves attributes from the stream and stores local
 * references to all attributes it wants to access.
 * <li>The consumer calls {@link #incrementToken()} until it returns false,
 * consuming the attributes after each call.
 * <li>The consumer calls {@link #end()} so that any end-of-stream operations
 * can be performed.
 * <li>The consumer calls {@link #close()} to release any resource when finished
 * using the <code>TokenStream</code>.
 * </ol>
*
- * Sometimes it is desirable to capture a current state of a {@link TokenStream}
+ * Sometimes it is desirable to capture the current state of a <code>TokenStream</code>,
 * e.g. for buffering purposes (see {@link CachingTokenFilter},
 * {@link TeeSinkTokenFilter}). For this use case
* {@link AttributeSource#captureState} and {@link AttributeSource#restoreState}
@@ -245,20 +245,20 @@
* For extra performance you can globally enable the new
* {@link #incrementToken} API using {@link Attribute}s. There will be a
 * small (though in most cases negligible) performance increase by enabling this,
- * but it only works if all {@link TokenStream}s use the new API and
+ * but it only works if all <code>TokenStream</code>s use the new API and
* implement {@link #incrementToken}. This setting can only be enabled
* globally.
*
- * This setting only affects {@link TokenStream}s instantiated after this
- * call. All {@link TokenStream}s already created use the other setting.
+ * This setting only affects <code>TokenStream</code>s instantiated after this
+ * call. All <code>TokenStream</code>s already created use the other setting.
*
 * All core {@link Analyzer}s are compatible with this setting; if you have
- * your own {@link TokenStream}s that are also compatible, you should enable
+ * your own <code>TokenStream</code>s that are also compatible, you should enable
* this.
*
 * When enabled, tokenization may throw {@link UnsupportedOperationException}s
 * if the whole tokenizer chain is not compatible, e.g. if one of the
- * {@link TokenStream}s does not implement the new {@link TokenStream} API.
+ * <code>TokenStream</code>s does not implement the new <code>TokenStream</code> API.
*
 * The default is false, so the fallback to the old API remains available.
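+ * For example (a sketch; {@link #setOnlyUseNewAPI} must be called before the
+ * affected <code>TokenStream</code>s are instantiated):
+ * <pre>
+ *   // enable the new API globally, then create the streams
+ *   TokenStream.setOnlyUseNewAPI(true);
+ *   TokenStream stream = analyzer.tokenStream("content", reader);
+ * </pre>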
@@ -321,9 +321,9 @@
/**
* This method is called by the consumer after the last token has been
- * consumed, eg after {@link #incrementToken()} returned false
- * (using the new {@link TokenStream} API) or after {@link #next(Token)} or
- * {@link #next()} returned null (old {@link TokenStream} API).
+ * consumed, i.e. after {@link #incrementToken()} returned false
+ * (using the new <code>TokenStream</code> API). Streams implementing the old API
+ * should upgrade to use this feature.
*
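+ * A stream that needs end-of-stream work can override {@link #end()} and call
+ * <code>super.end()</code> first, e.g. (a sketch; the offset attribute
+ * handling shown is illustrative):
+ * <pre>
+ *   public void end() throws IOException {
+ *     super.end();
+ *     // e.g. set the offset attribute to the final offset of the input
+ *     offsetAtt.setOffset(finalOffset, finalOffset);
+ *   }
+ * </pre>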
 * If the tokens of a <code>TokenStream</code> are intended to be consumed more than once, it is
* necessary to implement {@link #reset()}. Note that if your TokenStream
* caches tokens and feeds them back again after a reset, it is imperative
* that you clone the tokens when you store them away (on the first pass) as