Nutch / NUTCH-398

map-reduce very slow when crawling on single server


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Cannot Reproduce
    • Affects Version/s: 0.8.1
    • Fix Version/s: 1.0.0
    • Component/s: fetcher
    • Labels: None
    • Environment: linux and windows

    Description

      This seems to be a bug, so I am creating a ticket here. I am using nutch 0.9-dev to crawl the web on one Linux server. With the default hadoop configuration (local file system, no distributed crawling), the Generator and Fetcher spend a disproportionate amount of time on map-reduce operations. For example:
      2006-11-01 20:32:44,074 INFO crawl.Generator - Generator: segment:
      crawl/segments/20061101203244
      ... (doing map and reduce for 2 hours)
      2006-11-01 22:28:11,102 INFO fetcher.Fetcher - Fetcher: segment:
      crawl/segments/20061101203244
      ... (fetching for 12 hours)
      2006-11-02 11:15:10,590 INFO mapred.LocalJobRunner - 175383 pages, 16583
      errors, 3.8 pages/s, 687 kb/s,
      2006-11-02 11:17:24,039 INFO mapred.LocalJobRunner - reduce > sort
      ... (but doing reduce > sort and reduce > reduce for 8 hours)
      2006-11-02 19:13:38,882 INFO crawl.CrawlDb - CrawlDb update: segment:
      crawl/segments/20061101203244

      Since it is crawling on a single machine, such slow map-reduce operation is not expected.
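
      For context, the "default hadoop configuration" mentioned above is Hadoop's local mode: mapred.job.tracker is left at local, so every job runs in-process through LocalJobRunner (the class visible in the log above) rather than on a cluster. A minimal sketch of such a conf/hadoop-site.xml for the Hadoop 0.x line bundled with Nutch 0.8/0.9 is shown below; the property names are the standard ones of that era, but the snippet is only an illustration, not the reporter's actual file.

      <?xml version="1.0"?>
      <!-- Local (non-distributed) mode; both values are the stock defaults. -->
      <configuration>
        <property>
          <name>fs.default.name</name>
          <!-- local file system, no HDFS (very old releases used the keyword "local") -->
          <value>file:///</value>
        </property>
        <property>
          <name>mapred.job.tracker</name>
          <!-- "local" makes map-reduce run in-process via LocalJobRunner -->
          <value>local</value>
        </property>
      </configuration>

      LocalJobRunner of that era ran tasks sequentially in a single JVM and sorted through local disk, which is at least consistent with a long reduce > sort phase; whether 8 hours for roughly 175,000 pages is reasonable is the question this issue raises.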


    People

        Assignee: Andrzej Bialecki (ab)
        Reporter: AJ Chen (canovaj)
        Votes: 0
        Watchers: 0

    Dates

        Created:
        Updated:
        Resolved: