MapReduce multi-computer crawl environment: 11 machines (1 master with JobTracker/NameNode, 10 slaves with TaskTrackers/DataNodes)
We've been using Nutch 0.8 (MapReduce) to perform some internet crawling. Things seemed to be going well until...
060129 222409 Lost tracker 'tracker_56288'
060129 222409 Task 'task_m_10gs5f' has been lost.
060129 222409 Task 'task_m_10qhzr' has been lost.
060129 222409 Task 'task_r_zggbwu' has been lost.
060129 222409 Task 'task_r_zh8dao' has been lost.
060129 222455 Server handler 8 on 8010 caught: java.net.SocketException: Socket closed
java.net.SocketException: Socket closed
060129 222455 Adding task 'task_m_cia5po' to set for tracker 'tracker_56288'
060129 223711 Adding task 'task_m_ffv59i' to set for tracker 'tracker_25647'
I'm hoping that someone could explain why task_m_cia5po got added to tracker_56288 after this tracker was lost.
The Crawl.main process died with the following output:
060129 221129 Indexer: adding segment: /user/crawler/crawl-20060129091444/segments/20060129200246
Exception in thread "main" java.io.IOException: timed out waiting for response
at $Proxy1.submitJob(Unknown Source)
However, it definitely seems as if the JobTracker is still waiting for the job to finish (no failed jobs).
Doug Cutting's response:
The bug here is that the RPC call times out while the map task is computing splits. The fix is that the job tracker should not compute splits until after it has returned from the submitJob RPC. Please submit a bug in Jira to help remind us to fix this.
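To illustrate the pattern Doug describes, here is a minimal sketch (not the actual Hadoop/Nutch API; all names are hypothetical): the submitJob handler returns a job id immediately and schedules the potentially slow split computation on a background thread, so the RPC caller is never blocked waiting on it.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Toy model of the fix: split computation is deferred until after the
// submitJob RPC has returned, so a long split enumeration cannot cause
// the client-side "timed out waiting for response" failure.
public class DeferredSplitSketch {
    private static final ExecutorService worker =
            Executors.newSingleThreadExecutor();
    private static final AtomicLong counter = new AtomicLong();

    // Hypothetical RPC handler: returns quickly with a job id.
    static String submitJob(String jobFile) {
        String jobId = "job_" + counter.incrementAndGet();
        // The expensive work (e.g. listing many input files to build
        // splits) now runs off the RPC thread.
        worker.submit(() -> computeSplits(jobId, jobFile));
        return jobId;
    }

    // Placeholder for the slow input-split enumeration.
    static void computeSplits(String jobId, String jobFile) {
        System.out.println("computing splits for " + jobId
                + " from " + jobFile);
    }

    public static void main(String[] args) throws Exception {
        String id = submitJob("crawl-job.xml");
        // The caller gets its answer immediately, before splits exist.
        System.out.println("submitJob returned " + id);
        worker.shutdown();
        worker.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The essential change is only about ordering: respond to the RPC first, then do the heavy computation, rather than computing splits inside the RPC handler.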