Hadoop HDFS
HDFS-4360

Multiple BlockFixers should be supported to improve scalability and reduce the load on a single BlockFixer

    Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.22.0
    • Fix Version/s: None
    • Component/s: contrib/raid
    • Labels:

      Description

      The current implementation can only run a single BlockFixer, because the fsck call (in RaidDFSUtil.getCorruptFiles) checks the whole DFS file system. If multiple BlockFixers were launched, they would all do the same work and try to fix the same files.
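
      For illustration, a minimal sketch of the kind of whole-filesystem corrupt-file query described above, assuming fsck is driven through DFSck with the -list-corruptfileblocks option; the class name WholeFsCorruptFileQuery and the output parsing are illustrative, not the actual RaidDFSUtil code.

      import java.io.ByteArrayOutputStream;
      import java.io.PrintStream;
      import java.util.ArrayList;
      import java.util.List;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hdfs.tools.DFSck;
      import org.apache.hadoop.util.ToolRunner;

      // Illustrative only: a corrupt-file query via fsck over the whole namespace.
      // Because the check always runs against "/", every BlockFixer instance
      // would receive the same list of corrupt files.
      public class WholeFsCorruptFileQuery {
        public static List<String> getCorruptFiles(Configuration conf) throws Exception {
          ByteArrayOutputStream bout = new ByteArrayOutputStream();
          PrintStream out = new PrintStream(bout, true);
          // fsck over the entire DFS namespace; the return code is not checked here
          // because fsck may report a non-zero status whenever corruption exists.
          ToolRunner.run(new DFSck(conf, out),
              new String[] { "/", "-list-corruptfileblocks" });
          List<String> corrupt = new ArrayList<String>();
          for (String line : bout.toString().split("\n")) {
            // Simplified parsing: keep non-empty output lines as corrupt-file entries.
            if (!line.trim().isEmpty()) {
              corrupt.add(line.trim());
            }
          }
          return corrupt;
        }
      }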

      The change/fix will mainly be in BlockFixer.java and RaidDFSUtil.getCorruptFiles(), to enable fsck to check only the different paths defined in a separate Raid.xml for each RaidNode/BlockFixer.
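
      A hedged sketch of that direction, assuming the per-BlockFixer path prefixes have already been read from its Raid.xml; the class name ScopedCorruptFileQuery, the method getCorruptFilesUnder, and the parsing are hypothetical, not the proposed patch.

      import java.io.ByteArrayOutputStream;
      import java.io.PrintStream;
      import java.util.ArrayList;
      import java.util.List;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hdfs.tools.DFSck;
      import org.apache.hadoop.util.ToolRunner;

      // Hypothetical sketch: scope the fsck query to the path prefixes configured
      // for one RaidNode/BlockFixer, so several BlockFixers can run side by side
      // without trying to fix the same files.
      public class ScopedCorruptFileQuery {
        public static List<String> getCorruptFilesUnder(Configuration conf,
            List<String> pathPrefixes) throws Exception {
          List<String> corrupt = new ArrayList<String>();
          for (String prefix : pathPrefixes) {
            ByteArrayOutputStream bout = new ByteArrayOutputStream();
            PrintStream out = new PrintStream(bout, true);
            // fsck limited to this BlockFixer's own path prefix instead of "/".
            ToolRunner.run(new DFSck(conf, out),
                new String[] { prefix, "-list-corruptfileblocks" });
            for (String line : bout.toString().split("\n")) {
              if (!line.trim().isEmpty()) {
                corrupt.add(line.trim());
              }
            }
          }
          return corrupt;
        }
      }

      With this kind of scoping, two RaidNodes configured with disjoint prefixes would each only see, and fix, corruption under their own paths.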

          People

          • Assignee: Unassigned
          • Reporter: Jun Jin
          • Votes: 0
          • Watchers: 5
