Description
New links are normalized twice by the fetcher:
First in DOMContentUtils.getOutlinks, where the constructor Outlink(url.toString(), linkText.toString().trim(), conf) normalizes the URL.
The second time is in ParseOutputFormat.write(), which normalizes each outlink URL again.
For some URLs (e.g. those repeated on a page), a given URL may be normalized several times, but every URL is normalized at least twice.
For those of us with expensive normalizations, this is probably burning some CPU.
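One way to avoid paying the expensive normalization repeatedly would be to memoize results per page (or per task). The sketch below is purely illustrative; CachingNormalizer is a hypothetical wrapper, not existing Nutch code, and the lowercase "normalizer" stands in for whatever expensive normalization plugin is configured.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.UnaryOperator;

// Hypothetical memoizing wrapper: a URL repeated on a page, or normalized
// again downstream (e.g. in ParseOutputFormat.write()), pays the expensive
// normalization cost only once.
public class CachingNormalizer {
    private final UnaryOperator<String> normalizer;
    private final Map<String, String> cache = new HashMap<>();

    public CachingNormalizer(UnaryOperator<String> normalizer) {
        this.normalizer = normalizer;
    }

    public String normalize(String url) {
        // computeIfAbsent runs the expensive normalizer only on a cache miss
        return cache.computeIfAbsent(url, normalizer);
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        CachingNormalizer n = new CachingNormalizer(u -> {
            calls.incrementAndGet();
            return u.toLowerCase(); // stand-in for an expensive normalizer
        });
        n.normalize("HTTP://Example.com/A");
        n.normalize("HTTP://Example.com/A"); // repeated URL: cache hit
        assert calls.get() == 1 : "normalizer should run once per distinct URL";
    }
}
```

A per-page cache like this sidesteps the question of hidden assumptions in the two call sites, since both still see normalized output; only the redundant work is skipped.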
I'd gladly fix this, but I'm not yet familiar enough with the code to know if there are some hidden assumptions which rely on this behavior.
[A related note is that URLs are normalized *before* filtering; this causes a lot of extra normalization as well. In general, filters may not be safe to run before normalization, but there is likely a class of them which are (e.g. filtering out .gif/.jpg URLs by suffix). Perhaps the notion of a "pre-normalizer filter" would be a useful one?]
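To make the "pre-normalizer filter" idea concrete: a cheap suffix check is safe to run before normalization, since normalization never changes a URL's file extension. The class and method names below are hypothetical, chosen for illustration; this is not an existing Nutch interface.

```java
import java.util.Set;

// Hypothetical pre-normalizer filter: rejects URLs by suffix so that
// obviously unwanted links (.gif/.jpg etc.) never reach the expensive
// normalization step. Follows the usual filter convention of returning
// null to reject and the URL itself to accept.
public class PreNormalizerSuffixFilter {

    private static final Set<String> SKIPPED_SUFFIXES =
            Set.of(".gif", ".jpg", ".jpeg", ".png", ".css");

    /** Returns null (rejected) if the URL ends in a skipped suffix. */
    public static String filter(String url) {
        String lower = url.toLowerCase();
        for (String suffix : SKIPPED_SUFFIXES) {
            if (lower.endsWith(suffix)) {
                return null; // reject before any normalization pass runs
            }
        }
        return url; // accept; normalization runs afterwards as today
    }

    public static void main(String[] args) {
        assert filter("http://example.com/logo.GIF") == null;
        assert filter("http://example.com/page.html") != null;
    }
}
```

Filters that depend on a canonical URL form (host casing, query-parameter order, session-ID stripping) would still have to run after normalization; only form-insensitive checks like this one belong in the pre-normalizer class.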