Description
When running a large web crawl, a web admin may request that crawling of a certain domain be stopped immediately. The default Nutch workflow applies URL filters only to seeds and outlinks; applying them during fetch list generation is expensive because it scans a large CrawlDb, whereas fetch lists are usually much shorter. Allowing the fetcher to optionally filter URLs would let changed filter rules take effect in the next launched fetcher job even if the segment has already been created (especially when multiple segments are generated in one turn).
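The proposed fetch-time check can be sketched as follows. This is a standalone illustration using a plain regex deny-list in place of Nutch's configured URL filter plugin chain; the class name and rule below are hypothetical, but the contract mirrors Nutch's filters, which return null to reject a URL:

```java
import java.util.List;
import java.util.regex.Pattern;

// Sketch: applying updated filter rules at fetch time, after the
// segment was generated. In Nutch the fetcher would consult the
// configured URL filter chain; the deny-list stands in for a newly
// added regex rule blocking a domain.
public class FetchTimeFilter {
    private final List<Pattern> denyRules;

    public FetchTimeFilter(List<String> rules) {
        this.denyRules = rules.stream().map(Pattern::compile).toList();
    }

    /** Returns null if the URL is rejected, mirroring the Nutch filter contract. */
    public String filter(String url) {
        for (Pattern p : denyRules) {
            if (p.matcher(url).find()) {
                return null; // skip: domain blocked after the segment was created
            }
        }
        return url;
    }

    public static void main(String[] args) {
        // Admin adds a rule to stop crawling example.com immediately.
        FetchTimeFilter f = new FetchTimeFilter(
            List.of("^https?://([a-z0-9-]+\\.)*example\\.com/"));
        System.out.println(f.filter("http://example.com/page")); // null (rejected)
        System.out.println(f.filter("http://apache.org/nutch")); // kept
    }
}
```

With such a hook, the fetcher skips rejected entries from an already-generated fetch list instead of waiting for the next generate cycle to pick up the rule change.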