Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Patch Available
Description
In the past we have encountered situations where crawling certain broken sites produced ridiculously long URLs that stalled tasks: the regex plugins (normalizing/filtering) spent hours processing a single URL, or hung indefinitely.
My suggestion is to limit the length of outlink target URLs as early as possible. The limit is configurable, with a default of 3000 characters. This should be long enough for most uses, yet strict enough to ensure the regex plugins do not choke on URLs that are too long. Please see the attached patch for the Nutchgora implementation.
I'd like to hear what you think about this.
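For illustration only, here is a minimal sketch of the kind of check being proposed: read a maximum length from the configuration (defaulting to 3000) and drop any outlink target that exceeds it before the regex normalizing/filtering plugins ever see it. The property name, class name, and use of Hadoop's Configuration are assumptions, not the actual patch.

```java
import org.apache.hadoop.conf.Configuration;

/**
 * Sketch of a length guard for outlink target URLs. Property name and
 * default value below are illustrative assumptions; see the attached
 * patch for the real implementation.
 */
public class OutlinkLengthGuard {

  // Hypothetical configuration key.
  public static final String MAX_OUTLINK_LENGTH_KEY = "db.max.outlink.length";
  public static final int DEFAULT_MAX_OUTLINK_LENGTH = 3000;

  private final int maxLength;

  public OutlinkLengthGuard(Configuration conf) {
    this.maxLength = conf.getInt(MAX_OUTLINK_LENGTH_KEY, DEFAULT_MAX_OUTLINK_LENGTH);
  }

  /** Returns the URL if it is within the limit, or null to skip it. */
  public String accept(String url) {
    if (url == null || url.length() > maxLength) {
      return null;
    }
    return url;
  }
}
```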
Attachments
Issue Links
- is related to NUTCH-1106 Options to skip url's based on length (Closed)
- is related to NUTCH-1531 URL filtering takes long time for very long URLs (Closed)