  Hadoop HDFS / HDFS-3705

Add the possibility to mark a node as 'low priority' for read in the DFSClient


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 1.0.3, 2.0.0-alpha, 3.0.0-alpha1
    • Fix Version/s: None
    • Component/s: hdfs-client
    • Labels: None

    Description

      This has been partly discussed in HBASE-6435.

      The DFSClient includes 'bad nodes' management for reads and writes. Sometimes the client application already knows that some nodes are dead or likely to be dead, and could mark them as 'low priority' for reads (a rough sketch of such a hint follows the list below).
      An example is the HBase Write-Ahead-Log: when HBase reads this file, it knows that the HBase regionserver died, and it's very likely that the whole box died, so the datanode on the same box is dead as well. This is critical because:

      • it's the HBase recovery that reads these log files
      • if we read them, it means we lost a box, so we have 1 dead replica out of the 3.
      • for every file read, we have a 33% chance of going to the dead datanode
      • as the box just died, we're very likely to get a timeout exception, delaying the HBase recovery by 1 minute. For HBase, this means the data is not available during that minute.
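
      To make the proposal concrete, here is a minimal sketch of the kind of hint the application could give. It is only an illustration under assumed names: LowPriorityNodeHint, markLowPriority and orderReplicas are hypothetical and do not come from the attached patches; a real change would live where the DFSClient orders the replicas returned by the namenode.

      {code:java}
      import java.util.ArrayList;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;

      // Sketch only, not the attached patch: the application registers datanodes it
      // believes are dead, and the client moves those replicas to the end of the
      // candidate list so they are tried only if every other replica fails.
      public class LowPriorityNodeHint {

        // Hypothetical store of low-priority nodes, keyed by "host:port" as it
        // appears in the block locations.
        private final Set<String> lowPriorityNodes = new HashSet<String>();

        // Called by the application (e.g. HBase log recovery) before opening the file.
        public void markLowPriority(String hostAndPort) {
          lowPriorityNodes.add(hostAndPort);
        }

        // Reorders the replica locations of one block so that low-priority nodes come
        // last; the nodes stay usable, they are simply no longer the first choice.
        public List<String> orderReplicas(List<String> replicaHostPorts) {
          List<String> preferred = new ArrayList<String>();
          List<String> deprioritized = new ArrayList<String>();
          for (String node : replicaHostPorts) {
            if (lowPriorityNodes.contains(node)) {
              deprioritized.add(node);
            } else {
              preferred.add(node);
            }
          }
          preferred.addAll(deprioritized);
          return preferred;
        }
      }
      {code}

      With such a hint, the recovery code would mark the dead regionserver's host before opening the log file, and the read would go to one of the two healthy replicas first instead of hitting a 1-minute timeout a third of the time.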

      Attachments

        1. HDFS-3705.v1.patch (8 kB, Nicolas Liochon)
        2. hdfs-3705.sample.patch (5 kB, Nicolas Liochon)

            People

              Assignee: Unassigned
              Reporter: Nicolas Liochon (nkeywal)
              Votes: 0
              Watchers: 13
