Hadoop Common / HADOOP-4910

NameNode should exclude corrupt replicas when choosing excessive replicas to delete


Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.17.0
    • Fix Version/s: 0.18.3
    • Component/s: None
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Currently, when the NameNode handles an over-replicated block in FSNamesystem#processOverReplicatedBlock, it excludes replicas already in excessReplicateMap and those on decommissioned nodes, but it treats a corrupt replica as a valid one. This can lead to deleting more valid replicas than necessary and thus cause data loss. Corrupt replicas should be excluded as well.
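      The fix described above can be sketched as a filtering step before excess replicas are chosen. This is a minimal, self-contained illustration, not Hadoop's actual FSNamesystem code: the method and parameter names (chooseExcessCandidates, replicaNodes, etc.) are hypothetical, and node sets stand in for the real DatanodeDescriptor structures.

```java
import java.util.*;

// Illustrative sketch of the exclusion logic; names are hypothetical,
// not the actual Hadoop NameNode API.
public class OverReplicationSketch {

    /**
     * Return the replicas that may be considered for deletion when a
     * block is over-replicated. Replicas already scheduled for removal
     * (excess), replicas on decommissioned nodes, and corrupt replicas
     * are all excluded, so only healthy live replicas are trimmed.
     */
    static List<String> chooseExcessCandidates(List<String> replicaNodes,
                                               Set<String> excess,
                                               Set<String> decommissioned,
                                               Set<String> corrupt) {
        List<String> candidates = new ArrayList<>();
        for (String node : replicaNodes) {
            if (excess.contains(node)) continue;         // already being removed
            if (decommissioned.contains(node)) continue; // will disappear anyway
            if (corrupt.contains(node)) continue;        // the fix: never count corrupt as valid
            candidates.add(node);
        }
        return candidates;
    }

    public static void main(String[] args) {
        List<String> replicas = Arrays.asList("dn1", "dn2", "dn3", "dn4");
        Set<String> corrupt = new HashSet<>(Arrays.asList("dn3"));
        List<String> valid = chooseExcessCandidates(
            replicas, Collections.emptySet(), Collections.emptySet(), corrupt);
        // With dn3's replica corrupt, only three valid replicas remain;
        // at replication factor 3 there is no excess and nothing is deleted.
        System.out.println(valid);
    }
}
```

      Without the corrupt-replica check, the corrupt copy would be counted toward the replication factor, and a healthy replica could be deleted instead, leaving the block under-replicated with good data.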

      Attachments

        1. overReplicated.patch
          1 kB
          Hairong Kuang
        2. overReplicated1.patch
          6 kB
          Hairong Kuang
        3. overReplicated2.patch
          6 kB
          Hairong Kuang
        4. overReplicated2-br18.patch
          6 kB
          Hairong Kuang

        Activity

          People

            Assignee: Hairong Kuang (hairong)
            Reporter: Hairong Kuang (hairong)
            Votes: 0
            Watchers: 1
