The Hadoop framework has been designed, in an effort to enhance performance, with a single JobTracker (master node). Its responsibilities range from managing the job submission process and computing the input splits to scheduling tasks on the slave nodes (TaskTrackers) and monitoring their health.
In some environments, such as the IBM and Google Internet-scale computing initiative, there is a need for high availability, and performance becomes a secondary issue. In these environments, having a system with a Single Point of Failure (such as Hadoop's single JobTracker) is a major drawback.
My proposal is to provide a redundant version of Hadoop by adding support for multiple replicated JobTrackers. This design can be approached in many different ways.
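To make the idea concrete, here is a minimal sketch (not the actual design from my document) of one possible approach: several registered JobTracker replicas, with the live replica holding the lowest ID acting as the master, and a standby taking over when it fails. The class and method names are hypothetical; a real implementation would coordinate the election through a consensus service rather than a local table.

```java
import java.util.Map;
import java.util.OptionalInt;
import java.util.SortedMap;
import java.util.TreeMap;

// Hypothetical sketch of replicated JobTrackers: the live replica with
// the smallest ID is the active master; on failure, a standby takes over.
public class ReplicatedJobTrackers {
    // replica id -> alive?
    private final SortedMap<Integer, Boolean> trackers = new TreeMap<>();

    public void register(int id)   { trackers.put(id, true); }
    public void markFailed(int id) { trackers.put(id, false); }

    // The active JobTracker is the live replica with the smallest id.
    public OptionalInt activeTracker() {
        return trackers.entrySet().stream()
                .filter(Map.Entry::getValue)
                .mapToInt(Map.Entry::getKey)
                .min();
    }

    public static void main(String[] args) {
        ReplicatedJobTrackers cluster = new ReplicatedJobTrackers();
        cluster.register(1);
        cluster.register(2);
        cluster.register(3);
        System.out.println("active=" + cluster.activeTracker().getAsInt());
        cluster.markFailed(1); // active master crashes
        System.out.println("active=" + cluster.activeTracker().getAsInt());
    }
}
```

The open design questions are exactly the ones this toy model hides: how replicas detect failure, how job state is kept consistent across them, and how a new master resumes in-flight jobs.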
In the document at http://sites.google.com/site/hadoopthesis/Home/FaultTolerantHadoop.pdf?attredirects=0 I wrote an overview of the problem and some approaches to solving it.
I am posting this to the community to gather feedback on the best way to proceed with my work.