Hadoop Common / HADOOP-4910

NameNode should exclude corrupt replicas when choosing excessive replicas to delete

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.17.0
    • Fix Version/s: 0.18.3
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      Currently, when the NameNode handles an over-replicated block in FSNamesystem#processOverReplicatedBlock, it excludes replicas already in excessReplicateMap and replicas on decommissioned nodes, but it treats a corrupt replica as a valid one. This can lead to the unnecessary deletion of additional good replicas and thus cause data loss. Corrupt replicas should be excluded as well; a sketch of the intended exclusion logic follows.
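
      The exclusion rule is easiest to see in isolation. The following is a minimal Java sketch, not the actual FSNamesystem code: Replica and deletionCandidates are hypothetical stand-ins that mirror the three checks the fix calls for.

          import java.util.ArrayList;
          import java.util.Collection;
          import java.util.List;

          class OverReplicationSketch {
              // Hypothetical replica descriptor; each flag mirrors one exclusion check.
              static class Replica {
                  final String datanode;
                  final boolean inExcessMap;    // already scheduled for deletion
                  final boolean decommissioned; // node is being taken out of service
                  final boolean corrupt;        // replica is known to be corrupt

                  Replica(String datanode, boolean inExcessMap,
                          boolean decommissioned, boolean corrupt) {
                      this.datanode = datanode;
                      this.inExcessMap = inExcessMap;
                      this.decommissioned = decommissioned;
                      this.corrupt = corrupt;
                  }
              }

              // Collect the replicas that may safely be deleted to resolve
              // over-replication. Before the fix, the corrupt check was missing,
              // so a corrupt replica counted as valid and a good replica could be
              // deleted in its place, risking data loss.
              static List<Replica> deletionCandidates(Collection<Replica> replicas) {
                  List<Replica> candidates = new ArrayList<Replica>();
                  for (Replica r : replicas) {
                      if (r.inExcessMap || r.decommissioned || r.corrupt) {
                          continue; // not a valid target for excess-replica deletion
                      }
                      candidates.add(r);
                  }
                  return candidates;
              }
          }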

      Attachments

      1. overReplicated2.patch (6 kB, Hairong Kuang)
      2. overReplicated2-br18.patch (6 kB, Hairong Kuang)
      3. overReplicated1.patch (6 kB, Hairong Kuang)
      4. overReplicated.patch (1 kB, Hairong Kuang)


          People

          • Assignee: Hairong Kuang
          • Reporter: Hairong Kuang
          • Votes: 0
          • Watchers: 1
