Hadoop HDFS / HDFS-3119

Over-replicated block is not deleted even after the replication factor is reduced after sync followed by closing that file

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.0.0-alpha
    • Fix Version/s: 2.0.0-alpha, 0.23.7
    • Component/s: namenode
    • Labels:
    • Target Version/s:
    • Hadoop Flags:
      Reviewed

      Description

      cluster setup:
      --------------

1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB

step 1: write a file "/filewrite.txt" of 90 bytes with sync (not closed)
step 2: change the replication factor to 1 using the command "./hdfs dfs -setrep 1 /filewrite.txt"
step 3: close the file
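
The steps above cannot be driven purely from the shell, since step 1 requires a sync (hflush) on a still-open stream. A minimal sketch of the repro against the Java FileSystem API (assuming a running 2.x cluster reachable via the default `fs.defaultFS`; the class name is illustrative) might look like:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Hdfs3119Repro {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/filewrite.txt");

        // step 1: write 90 bytes with replication 2 and sync,
        // leaving the file open
        FSDataOutputStream out = fs.create(file, (short) 2);
        out.write(new byte[90]);
        out.hflush(); // flushes data to the DNs without closing the file

        // step 2: drop the replication factor while the file is still open
        // (equivalent to: ./hdfs dfs -setrep 1 /filewrite.txt)
        fs.setReplication(file, (short) 1);

        // step 3: close the file; the NN should now schedule deletion of
        // the extra replica, which is what this issue reports as missing
        out.close();
        fs.close();
    }
}
```

After step 3, `./hdfs dfs -ls /filewrite.txt` shows a replication factor of 1, while `./hdfs fsck /filewrite.txt -files -blocks -locations` still lists the block on both datanodes, matching the observations below.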

      • On the NN side, the log "Decreasing replication from 2 to 1 for /filewrite.txt" has occurred, but the over-replicated block is not deleted even after the block report is sent from the DN
      • When listing the file in the console with "./hdfs dfs -ls", the replication factor for that file is shown as 1
      • The fsck report for that file shows that the file is replicated to 2 datanodes

        Attachments

        1. HDFS-3119-1.patch
          5 kB
          Ashish Singhi
        2. HDFS-3119-1.patch
          5 kB
          Uma Maheswara Rao G
        3. HDFS-3119.patch
          0.9 kB
          Ashish Singhi

          Activity

            People

            • Assignee:
              Ashish Singhi
              Reporter:
              J.Andreina
            • Votes:
              0
              Watchers:
              12

              Dates

              • Created:
                Updated:
                Resolved: