From my perspective the most important reason is to avoid a huge performance trap: previously, if you subclassed one of these analyzers, overrode tokenStream(), and added a SpecialFilter for example, you would most of the time actually slow down indexing, because reusableTokenStream() could no longer be used by the indexer.
Additionally, exactly this special case (overriding only one of the two methods) was the biggest problem, leading to ugly reflection-based checks in Lucene 3.0: in 3.0, StandardAnalyzer correctly implemented both tokenStream() and reusableTokenStream(). As soon as a subclass overrode only tokenStream(), while the indexer still called reusableTokenStream(), the subclass's changes were not even used, leading to lots of bug reports. Because of this, a reflection-based backwards-compatibility hack was added in 3.0 (see the o.a.l.util.VirtualMethod class, created to make this easier) that prevented the indexer from calling reusableTokenStream() if a subclass overrode only one of the methods. Moving forward to 3.1, these backwards hacks got even heavier (e.g. changes in TokenStreams, the new base class ReusableAnalyzerBase, ...), so the only solution was to enforce the decorator pattern.
The above example by Robert is the correct way to implement your "factory" of TokenStreams. Everything else, like subclassing StandardAnalyzer, is ugly, as it hides what you are really doing. The above pattern does exactly what Solr's schema does: you have to explicitly list all your components, making it clear what your TokenStreams are doing.
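For readers who don't have Robert's example at hand, a minimal sketch of this factory pattern looks like the following. It assumes the Lucene 3.1 API (ReusableAnalyzerBase, Version.LUCENE_31) and uses the stock StandardTokenizer filter chain as an illustration; substitute your own SpecialFilter where needed:

```java
import java.io.Reader;
import org.apache.lucene.analysis.ReusableAnalyzerBase;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.util.Version;

// Instead of subclassing StandardAnalyzer, decorate the components yourself.
// ReusableAnalyzerBase handles reuse for you; you only list the chain once.
public final class MyAnalyzer extends ReusableAnalyzerBase {
  @Override
  protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    final Tokenizer source = new StandardTokenizer(Version.LUCENE_31, reader);
    TokenStream result = new StandardFilter(Version.LUCENE_31, source);
    result = new LowerCaseFilter(Version.LUCENE_31, result);
    result = new StopFilter(Version.LUCENE_31, result, StandardAnalyzer.STOP_WORDS_SET);
    // Insert your own filters here, e.g.: result = new SpecialFilter(result);
    return new TokenStreamComponents(source, result);
  }
}
```

Because the base class owns the reuse logic, the indexer always gets the reusable path, and the full filter chain is visible in one place.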
Trust me, the above example is shorter than completely subclassing the previous StandardAnalyzer (both tokenStream() and reusableTokenStream()), and, like Solr's schema.xml, it shows what your Analyzer looks like (no hidden stuff in super-factories, ...).