The injector currently works as follows:
- MapReduce job 1 - Mapper : converts input lines into CrawlDatum objects, with normalisation and filtering
- MapReduce job 1 - Reducer : identity reducer. Duplicates can still be present at this stage
- MapReduce job 2 - Mapper : CrawlDbFilter on the existing crawldb (if any) + the output of the previous job
- MapReduce job 2 - Reducer : deduplication
If there is no existing crawldb (which will often be the case at injection time), the second MapReduce job is not really needed: we could simply take the output of MR job #1 as the CrawlDb, provided that we do the deduplication as part of its reduce step.
If there is an existing crawldb, then the reduce step of MR job #1 is not really needed and that job could be map-only.
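The deduplication in the reduce step above amounts to grouping records by URL and keeping a single one per key. As a rough sketch only: `Datum`, `fromCrawlDb`, and `dedup` below are hypothetical stand-ins, not Nutch's actual CrawlDatum or reducer code, and the real merge also reconciles status and fetch-time fields rather than just picking one record.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for Nutch's CrawlDatum: a URL plus a flag saying
// whether the record came from the existing crawldb (vs. a new injection).
record Datum(String url, boolean fromCrawlDb) {}

public class InjectDedup {
    // Reduce-side deduplication sketch: one record per URL, preferring an
    // entry already present in the crawldb over a freshly injected one.
    static Map<String, Datum> dedup(List<Datum> records) {
        Map<String, Datum> out = new LinkedHashMap<>();
        for (Datum d : records) {
            out.merge(d.url(), d, (a, b) -> a.fromCrawlDb() ? a : b);
        }
        return out;
    }
}
```

With this keying in job #1's reducer, the no-crawldb case needs only that single job; the crawldb case still needs the second job to merge, which is why job #1 can then drop its reduce step entirely.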