In our team we feel that this patch would have been beneficial in practical terms. In the context of the enterprise intelligence solution we are gradually porting over to Nutch, the emphasis is on ease of configuration. We try to avoid exposing features such as the regex URL filter which, although very powerful for more experienced users, can be confusing to novices. This is because we are primarily focused on the enterprise and less on the WWW.
This is why we preconfigure the db.ignore.external.links property to "true", and then only the urls file is used to seed the crawl.
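Concretely, that preconfiguration is just the standard property in nutch-site.xml (the description text below is ours, not copied from nutch-default.xml):

```xml
<!-- nutch-site.xml: only follow links to hosts already present in the seed list -->
<property>
  <name>db.ignore.external.links</name>
  <value>true</value>
  <description>When true, outlinks leading to a different host are ignored,
  so the urls seed file alone defines the scope of the crawl.</description>
</property>
```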
Our ideal is to have a collection of predefined configuration settings for specific scenarios – e.g. Enterprise-XML, Enterprise-Documents, Enterprise-Database, Internet-News etc. We have a script that generates multiple crawlers, each with different sources to be crawled, and although it is possible to change the filters for each one manually based on individual user requirements, it isn't very practical.
I realise this patch is closed, but how about another approach, in which FileResponse.java looks at db.ignore.external.links and decides on that basis whether to go up the tree?
Obviously, this would also prevent you from crawling outlinks to the WWW embedded in documents, but when crawling an enterprise file system you usually don't want to go all over the place anyway. As I see it, file systems differ from the web in that they are inherently hierarchical, whereas the web, as its name implies, is not. When crawling a file system, therefore, "going up" the tree is just as much an external URI (so to speak) as a link to a web site.
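To make the suggestion concrete, here is a minimal sketch of the decision rule I have in mind. This is not actual Nutch code – the class and method names are illustrative only – but it shows how "outside the seed directory's subtree" can be treated exactly like "different host" is treated on the web:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical helper sketching the proposed behaviour for file: URLs.
// Class and method names are invented for illustration, not Nutch API.
public class FileLinkFilter {

    private final boolean ignoreExternalLinks; // value of db.ignore.external.links

    public FileLinkFilter(boolean ignoreExternalLinks) {
        this.ignoreExternalLinks = ignoreExternalLinks;
    }

    /** Returns true if an outlink should be followed when crawling from seedDir. */
    public boolean shouldFollow(Path seedDir, Path outlink) {
        if (!ignoreExternalLinks) {
            return true; // current behaviour: follow everything, including ".."
        }
        // Treat any path outside the seed directory's subtree as "external",
        // just like a link to another host on the web.
        Path seed = seedDir.toAbsolutePath().normalize();
        Path target = outlink.toAbsolutePath().normalize();
        return target.startsWith(seed);
    }
}
```

With db.ignore.external.links set to true, a link from /data/docs to /data/docs/sub/file.txt would be followed, while a link to /data/other (or to the parent directory) would be skipped; with the property false, nothing changes from today's behaviour.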
Ducks for cover