Hadoop HDFS / HDFS-9819

FsVolume should tolerate a few failed check-dir runs caused by accidental deletion


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: 2.7.1
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

    Description

      FsVolume should tolerate a few consecutive check-dir failures, because a directory or file in the datanode data-dirs is sometimes deleted by mistake. DataNode#startCheckDiskErrorThread then invokes the checkDir method periodically, finds that the directory does not exist, and throws an exception. The checked volume is added to the failed-volume list, and the blocks on that volume are replicated again, even though this is not actually necessary. A volume should be allowed to tolerate a few check-dir failures, similar to the dfs.datanode.failed.volumes.tolerated configuration.
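
      As an illustration of the proposed behaviour, here is a minimal sketch. It is not the actual Hadoop implementation: the class name, the counter, and the threshold are hypothetical. The idea is to count consecutive check-dir failures and only treat the volume as failed once the count exceeds a tolerated threshold, in the same spirit as dfs.datanode.failed.volumes.tolerated.

{code:java}
import java.io.File;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Hypothetical sketch, not Hadoop code: a directory check that tolerates a
 * configurable number of consecutive failures before reporting the volume
 * as failed, so that a transient problem (e.g. an accidentally deleted and
 * later restored directory) does not trigger re-replication of all blocks.
 */
public class TolerantDirCheck {
  private final File dir;
  private final int toleratedFailures;
  private final AtomicInteger consecutiveFailures = new AtomicInteger(0);

  public TolerantDirCheck(File dir, int toleratedFailures) {
    this.dir = dir;
    this.toleratedFailures = toleratedFailures;
  }

  /** Returns true only when the check has failed more times in a row than we tolerate. */
  public boolean checkAndReportFailed() {
    boolean healthy = dir.exists() && dir.isDirectory()
        && dir.canRead() && dir.canWrite() && dir.canExecute();
    if (healthy) {
      // The directory looks fine again; reset the counter so earlier
      // transient failures do not accumulate toward a volume failure.
      consecutiveFailures.set(0);
      return false;
    }
    return consecutiveFailures.incrementAndGet() > toleratedFailures;
  }
}
{code}

      With toleratedFailures set to, say, 2, a single run of the periodic disk-check thread hitting a missing directory would not immediately move the volume to the failed-volume list or schedule re-replication of its blocks.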

      Attachments

        1. HDFS-9819.001.patch (6 kB, Yiqun Lin)



            People

              Assignee: Yiqun Lin (linyiqun)
              Reporter: Yiqun Lin (linyiqun)
              Votes: 0
              Watchers: 5

              Dates

                Created:
                Updated:
                Resolved: