Details
- Type: New Feature
- Status: Closed
- Priority: Minor
- Resolution: Won't Fix
- Affects Version/s: 0.8
- Fix Version/s: None
- Component/s: None
Description
Add support to the fetcher to look for sitemap files, download them, and process them into the webdb.
Perhaps define a robots.txt directive that points to a sitemap in a standard format (RSS, XML, or plain text with one URL per line), and have the fetcher process that.
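A minimal sketch of the two pieces this paragraph proposes, assuming a hypothetical "Sitemap:" directive name in robots.txt (the class and method names here are illustrative, not part of any existing Nutch API):

```java
import java.util.ArrayList;
import java.util.List;

public class SitemapSupport {

    // Scan a robots.txt body for the hypothetical "Sitemap:" directive
    // and return the sitemap URLs it advertises.
    public static List<String> sitemapUrls(String robotsTxt) {
        List<String> urls = new ArrayList<>();
        for (String line : robotsTxt.split("\n")) {
            String trimmed = line.trim();
            if (trimmed.toLowerCase().startsWith("sitemap:")) {
                urls.add(trimmed.substring("sitemap:".length()).trim());
            }
        }
        return urls;
    }

    // Parse the plain-text sitemap format: one URL per line,
    // blank lines ignored. Each result would be injected into the webdb.
    public static List<String> parseTextSitemap(String body) {
        List<String> urls = new ArrayList<>();
        for (String line : body.split("\n")) {
            String trimmed = line.trim();
            if (!trimmed.isEmpty()) {
                urls.add(trimmed);
            }
        }
        return urls;
    }
}
```

The fetcher could call the first method whenever it downloads a robots.txt, then fetch each advertised sitemap and feed the parsed URLs to the injector.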
I would love to see this stamp out proprietary sitemap features, rather than making things as Google-specific as they are today. Candidate open formats:
- RSS format/Atom Format (standard)
- XML meta description
- OAI-PMH meta description (http://www.openarchives.org/OAI/openarchivesprotocol.html)
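For the RSS/Atom option above, a sitemap is just a feed whose item links are the URLs to crawl. A sketch of extracting them with the JDK's DOM parser (the class name is hypothetical; this handles only the RSS `<item><link>` shape, not Atom):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class RssSitemapParser {

    // Collect the <link> text of every <item> in an RSS feed
    // used as a sitemap; these URLs would go into the webdb.
    public static List<String> itemLinks(String rssXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            rssXml.getBytes(StandardCharsets.UTF_8)));
            List<String> urls = new ArrayList<>();
            NodeList items = doc.getElementsByTagName("item");
            for (int i = 0; i < items.getLength(); i++) {
                NodeList children = items.item(i).getChildNodes();
                for (int j = 0; j < children.getLength(); j++) {
                    if ("link".equals(children.item(j).getNodeName())) {
                        urls.add(children.item(j).getTextContent().trim());
                    }
                }
            }
            return urls;
        } catch (Exception e) {
            throw new RuntimeException("failed to parse RSS sitemap", e);
        }
    }
}
```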
Perhaps even a "pre-crawler" that scours for these files and injects their URLs into the webdb, helping to build the link map so you could index just the top-N pages.