Offsets point back to the original field value for a particular token. To me, it's a semantic contract: point to whatever makes sense in the source. It isn't limited to the offsets generated by the Tokenizer... Analyzers don't have to use Tokenizers and TokenFilters at all.
As an example, WordDelimiterFilter modifies offsets when it splits words, and that makes sense to me.
Another way to think about it is that there is more than one way to solve a problem (construct an analyzer).
What matters is the tokens that come out at the end... not whether I used
a) a tokenizer that split on something, followed by a filter that trimmed the whitespace
b) a tokenizer that managed to split on something, discarding the whitespace itself
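The equivalence of (a) and (b) can be sketched with a toy tokenizer and filter (plain Java, no Lucene dependencies; the class and method names here are illustrative, not Lucene's API). Both pipelines emit the same tokens with the same offsets back into the original string:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OffsetDemo {
    // A token carries its text plus start/end offsets into the ORIGINAL input.
    record Token(String text, int start, int end) {}

    // (a) Tokenizer splits on commas (keeping surrounding whitespace);
    //     a second stage then trims the whitespace and adjusts offsets.
    static List<Token> tokenizeThenTrim(String input) {
        List<Token> raw = new ArrayList<>();
        int pos = 0;
        for (String part : input.split(",", -1)) {
            raw.add(new Token(part, pos, pos + part.length()));
            pos += part.length() + 1; // skip past the comma
        }
        List<Token> trimmed = new ArrayList<>();
        for (Token t : raw) {
            int s = t.start(), e = t.end();
            while (s < e && Character.isWhitespace(input.charAt(s))) s++;
            while (e > s && Character.isWhitespace(input.charAt(e - 1))) e--;
            if (s < e) trimmed.add(new Token(input.substring(s, e), s, e));
        }
        return trimmed;
    }

    // (b) Tokenizer matches the non-whitespace token content directly,
    //     discarding commas and surrounding whitespace as it goes.
    static List<Token> tokenizeDiscardingWhitespace(String input) {
        List<Token> out = new ArrayList<>();
        Matcher m = Pattern.compile("[^,\\s][^,]*[^,\\s]|[^,\\s]").matcher(input);
        while (m.find()) {
            out.add(new Token(m.group(), m.start(), m.end()));
        }
        return out;
    }

    public static void main(String[] args) {
        String input = "foo, bar ,baz";
        // Both print the same tokens with identical offsets, e.g.
        // "bar" at [5, 8) in the original string.
        System.out.println(tokenizeThenTrim(input));
        System.out.println(tokenizeDiscardingWhitespace(input));
    }
}
```

Either way, the contract holds: each token's offsets point at the span of the original input that produced it, regardless of how the analysis chain was assembled.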
For this specific case, I think it comes down to the likely use cases for the filter, and an argument could be made either way. I'm fine with either, as this is a very minor issue.