Description
Currently, the Injector runs two MapReduce jobs:
1. sort job: reads the URLs from the seed file and emits CrawlDatum objects.
2. merge job: reads CrawlDatum objects from both the crawldb and the output of the sort job, merges them, and emits the final CrawlDatum objects.
Using MultipleInputs, we can read CrawlDatum objects from the crawldb and URLs from the seed file simultaneously, and perform the inject in a single MapReduce job.
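The single-job layout could be wired roughly as below. This is a hedged sketch, not the actual patch: the class names (InjectorSketch, UrlMapper, CrawlDbMapper, InjectReducer), the fetch-interval value, and the merge policy (existing entry wins over a freshly injected one) are all illustrative assumptions; only MultipleInputs, the input formats, and CrawlDatum are real APIs.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.nutch.crawl.CrawlDatum;

public class InjectorSketch {

  // Reads seed lines and emits <url, CrawlDatum(STATUS_INJECTED)>.
  // Normalization/filtering would run here, before metadata parsing.
  public static class UrlMapper
      extends Mapper<LongWritable, Text, Text, CrawlDatum> {
    @Override
    protected void map(LongWritable key, Text line, Context ctx)
        throws IOException, InterruptedException {
      String url = line.toString().trim();
      if (url.isEmpty() || url.startsWith("#")) return;
      // 30-day fetch interval is a placeholder, not the patch's default.
      ctx.write(new Text(url),
          new CrawlDatum(CrawlDatum.STATUS_INJECTED, 30 * 24 * 3600));
    }
  }

  // Passes existing crawldb entries through unchanged.
  public static class CrawlDbMapper
      extends Mapper<Text, CrawlDatum, Text, CrawlDatum> {
    @Override
    protected void map(Text url, CrawlDatum datum, Context ctx)
        throws IOException, InterruptedException {
      ctx.write(url, datum);
    }
  }

  // Merges injected and pre-existing entries for the same URL. Writing
  // the existing datum as soon as it is seen avoids Hadoop's value-reuse
  // pitfall (the iterator recycles the CrawlDatum instance).
  public static class InjectReducer
      extends Reducer<Text, CrawlDatum, Text, CrawlDatum> {
    @Override
    protected void reduce(Text url, Iterable<CrawlDatum> values, Context ctx)
        throws IOException, InterruptedException {
      CrawlDatum injected = null;
      for (CrawlDatum d : values) {
        if (d.getStatus() == CrawlDatum.STATUS_INJECTED) {
          injected = d;
        } else {
          ctx.write(url, d); // existing crawldb entry wins
          return;
        }
      }
      ctx.write(url, injected); // URL was new: keep the injected datum
    }
  }

  public static Job createJob(Configuration conf, Path crawlDb,
      Path seeds, Path out) throws IOException {
    Job job = Job.getInstance(conf, "inject " + seeds);
    job.setJarByClass(InjectorSketch.class);
    // Both inputs feed the same shuffle: text seeds and the crawldb.
    MultipleInputs.addInputPath(job, seeds,
        TextInputFormat.class, UrlMapper.class);
    MultipleInputs.addInputPath(job, crawlDb,
        SequenceFileInputFormat.class, CrawlDbMapper.class);
    job.setReducerClass(InjectReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(CrawlDatum.class);
    FileOutputFormat.setOutputPath(job, out);
    return job;
  }
}
```

Because each input path gets its own mapper class, no tagging of records is needed to tell seed URLs from crawldb entries; the reducer simply distinguishes them by status.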
This JIRA also covers the following additional changes:
1. Moved filtering and normalization ahead of metadata extraction so that unwanted records are ruled out early.
2. Migrated to the new MapReduce API.
3. Improved documentation.
4. Added new JUnit tests with better coverage.
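The ordering in point 1 can be illustrated with a small self-contained sketch. The normalizer and filter below are trivial stand-ins (lowercasing the host, accepting only http/https), not Nutch's URLNormalizers/URLFilters plugins; the point is only that a rejected URL never reaches the metadata-parsing step.

```java
import java.util.Locale;

// Hypothetical sketch, not Nutch's actual Injector code: normalize and
// filter BEFORE metadata extraction, so rejected URLs are dropped
// without paying the parsing cost.
public class InjectOrderSketch {

  // Stand-in normalizer: lowercase the scheme-and-host prefix.
  static String normalize(String url) {
    int slash = url.indexOf('/', url.indexOf("//") + 2);
    if (slash < 0) return url.toLowerCase(Locale.ROOT);
    return url.substring(0, slash).toLowerCase(Locale.ROOT)
        + url.substring(slash);
  }

  // Stand-in filter: accept only http/https URLs.
  static boolean accept(String url) {
    return url.startsWith("http://") || url.startsWith("https://");
  }

  // Returns the normalized URL, or null if the record is filtered out.
  // Tab-separated key=value metadata is parsed only for survivors.
  static String process(String seedLine) {
    String[] fields = seedLine.split("\t");
    String url = normalize(fields[0].trim());
    if (!accept(url)) return null; // ruled out before any metadata work
    // ... metadata extraction would happen here, only for kept URLs ...
    return url;
  }

  public static void main(String[] args) {
    System.out.println(process("HTTP://Example.COM/Page\tnutch.score=2"));
    System.out.println(process("ftp://example.com/file"));
  }
}
```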
Relevant discussion on nutch-dev can be found here:
http://mail-archives.apache.org/mod_mbox/nutch-dev/201401.mbox/%3CCAFKhtFyXO6WL7gyUV+a5Y1pzNtdCoqPz4jz_up_bkp9cJe80kg@mail.gmail.com%3E
Attachments
Issue Links
- is blocked by NUTCH-2049 Upgrade Trunk to Hadoop > 2.4 stable (Closed)
- is duplicated by NUTCH-1772 Injector does not need merging if no pre-existing crawldb (Closed)