Hadoop Common / HADOOP-855

HDFS should repair corrupted files


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.11.0
    • Component/s: None
    • Labels: None

    Description

      While reading, if we discover a mismatch between a block and its checksum, we want to report it back to the namenode so that the corrupted block or crc can be deleted.

      To implement this, we need to do the following:
      DFSInputStream
      1. move DFSInputStream out of DFSClient
      2. add member variable to keep track of current datanode (the chosen node)
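      The second step above, tracking the "chosen node", can be sketched roughly as follows. This is a minimal illustration with a stand-in DatanodeInfo type and illustrative method names, not the code from the patch:

```java
// Minimal sketch of a DFSInputStream that remembers which datanode the
// current block is being read from. DatanodeInfo is a stand-in for
// Hadoop's real class; all names here are illustrative only.
class DatanodeInfo {
    final String host;
    DatanodeInfo(String host) { this.host = host; }
}

class DFSInputStreamSketch {
    private DatanodeInfo chosenNode;  // datanode serving the current block

    // Invoked whenever a (new) replica is selected for reading.
    void setChosenNode(DatanodeInfo node) {
        this.chosenNode = node;
    }

    // Lets reportChecksumFailure attribute a bad checksum to one replica.
    DatanodeInfo getChosenNode() {
        return chosenNode;
    }
}
```

      Keeping this reference is what makes the later steps possible: without it, a checksum failure could only be reported against the block, not against the specific replica that served the bad data.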

      DistributedFileSystem
      1. change the reportChecksumFailure parameter crc from int to FSInputStream (needed to be able to delete it).
      2. determine the specific block and datanode from the DFSInputStream passed to reportChecksumFailure
      3. call the namenode to delete the block/crc via DFSClient
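      Steps 2 and 3 can be sketched as below. All types are simplified stand-ins (the real code works with Hadoop's Block, DatanodeInfo, and DFSClient classes), and the method names are assumptions:

```java
// Hypothetical sketch of the revised reportChecksumFailure: instead of an
// int crc it receives the input stream, from which the specific block and
// datanode can be recovered before asking the namenode to delete them.
class Block {
    final long id;
    Block(long id) { this.id = id; }
}

class SketchInputStream {
    private final Block currentBlock;
    private final String chosenDatanode;
    SketchInputStream(Block b, String dn) {
        currentBlock = b;
        chosenDatanode = dn;
    }
    Block getCurrentBlock() { return currentBlock; }
    String getChosenDatanode() { return chosenDatanode; }
}

class DistributedFileSystemSketch {
    String lastReport;  // records what was sent to the namenode (illustration only)

    // Pull block + datanode out of the stream (step 2), then ask the
    // namenode, via the client, to delete the corrupted replica (step 3).
    void reportChecksumFailure(SketchInputStream in) {
        Block b = in.getCurrentBlock();
        String dn = in.getChosenDatanode();
        lastReport = "delete block " + b.id + " on " + dn;
    }
}
```

      Passing the stream rather than a bare crc value is the key design change: the stream is the only object that knows both the block being read and the replica it came from.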

      ClientProtocol
      1. add a method to ask the namenode to delete certain blocks on a specific datanode.
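      The new protocol method might look like the following. The method name and parameter types are assumptions for illustration; the actual signature is in the attached patch:

```java
// Hypothetical shape of the new ClientProtocol method: the client names a
// block and the single datanode whose replica should be invalidated.
interface ClientProtocolSketch {
    void deleteBlockOnDatanode(long blockId, String datanodeName);
}

// A trivial in-memory implementation used only to show the call pattern;
// the real implementer is the namenode, reached over RPC.
class RecordingNamenode implements ClientProtocolSketch {
    final java.util.List<String> requests = new java.util.ArrayList<>();
    public void deleteBlockOnDatanode(long blockId, String datanodeName) {
        requests.add(blockId + "@" + datanodeName);
    }
}
```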

      Namenode
      1. add the ability to delete certain blocks on a specific datanode
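      One plausible shape for the namenode-side bookkeeping is a per-datanode queue of blocks to invalidate, drained when that datanode next checks in. This is an assumed design sketch, not the patch's actual data structure:

```java
import java.util.*;

// Sketch: blocks reported as corrupt are queued per-datanode; the queue
// for a datanode is drained (e.g. on its next heartbeat) so the datanode
// can physically delete the bad replicas.
class NamenodeInvalidateSketch {
    private final Map<String, Set<Long>> toInvalidate = new HashMap<>();

    // Record that blockId's replica on this datanode should be deleted.
    void addToInvalidates(String datanode, long blockId) {
        toInvalidate.computeIfAbsent(datanode, k -> new HashSet<>()).add(blockId);
    }

    // Drain and return all pending deletions for a datanode.
    Set<Long> pollInvalidates(String datanode) {
        Set<Long> pending = toInvalidate.remove(datanode);
        return pending == null ? Collections.emptySet() : pending;
    }
}
```

      Queuing per datanode matters because only the replica that failed the checksum should be deleted; the same block's healthy replicas on other datanodes must be left alone.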

      Attachments

        1. hadoop-855-9.patch
          18 kB
          Wendy Chien
        2. hadoop-855-7.patch
          17 kB
          Wendy Chien
        3. hadoop-855-5.patch
          17 kB
          Wendy Chien


            People

              Assignee: Wendy Chien (wchien)
              Reporter: Wendy Chien (wchien)
              Votes: 0
              Watchers: 1
