Hadoop HDFS / HDFS-3119

Overreplicated block is not deleted even after the replication factor is reduced after sync followed by closing that file


Details

    • Type: Bug
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.0.0-alpha
    • Fix Version/s: 2.0.0-alpha, 0.23.7
    • Component/s: namenode
    • Hadoop Flags: Reviewed

    Description

      cluster setup:
      --------------

      1 NN, 2 DN, replication factor 2, block report interval 3 sec, block size 256 MB
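      As a rough illustration, that setup corresponds to the following hdfs-site.xml entries; the keys are the standard HDFS 2.x configuration names, with the values from the report converted to milliseconds and bytes:

      <property>
        <name>dfs.replication</name>
        <value>2</value>
      </property>
      <property>
        <name>dfs.blockreport.intervalMsec</name>
        <value>3000</value>
      </property>
      <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
      </property>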

      Step 1: write a file "filewrite.txt" of 90 bytes and sync it (file not closed)
      Step 2: change the replication factor to 1 using the command: "./hdfs dfs -setrep 1 /filewrite.txt"
      Step 3: close the file
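
      The steps above map onto the Java FileSystem API roughly as in the following minimal sketch. It assumes a client whose default Configuration points at the test cluster; the class name Hdfs3119Repro is only illustrative, while the file name and sizes are taken from the report.

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataOutputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class Hdfs3119Repro {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          FileSystem fs = FileSystem.get(conf);
          Path file = new Path("/filewrite.txt");

          // Step 1: write 90 bytes and sync (hflush); the stream stays open.
          FSDataOutputStream out = fs.create(file, (short) 2);
          out.write(new byte[90]);
          out.hflush();

          // Step 2: reduce the replication factor while the file is still open
          // (same effect as "hdfs dfs -setrep 1 /filewrite.txt").
          fs.setReplication(file, (short) 1);

          // Step 3: close the file; the NN should now treat one replica as
          // excess and schedule it for deletion.
          out.close();

          // Listing metadata reports replication 1, even though (per this
          // issue) fsck still shows the block on both datanodes.
          System.out.println("replication = " + fs.getFileStatus(file).getReplication());
        }
      }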

      • On the NN side the log message "Decreasing replication from 2 to 1 for /filewrite.txt" appears, but the overreplicated block is not deleted even after the block report is sent from the DN
      • Listing the file in the console with "./hdfs dfs -ls" shows the replication factor for the file as 1
      • The fsck report for the file shows that it is still replicated to 2 datanodes

      Attachments

        1. HDFS-3119-1.patch
          5 kB
          Ashish Singhi
        2. HDFS-3119-1.patch
          5 kB
          Uma Maheswara Rao G
        3. HDFS-3119.patch
          0.9 kB
          Ashish Singhi


          People

            Assignee: Ashish Singhi
            Reporter: J.Andreina
            Votes: 0
            Watchers: 11
