Hadoop HDFS
HDFS-4360

multiple BlockFixer should be supported in order to improve scalability and reduce too much work on single BlockFixer

    Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 0.22.0
    • Fix Version/s: None
    • Component/s: contrib/raid
    • Labels:

      Description

      The current implementation can only run a single BlockFixer, because the fsck call (in RaidDFSUtil.getCorruptFiles) checks the whole DFS file system. If multiple BlockFixers are launched, each one will do the same work and try to fix the same files.

      The change/fix will mainly be in BlockFixer.java and RaidDFSUtil.getCorruptFiles(), to enable fsck to check only the distinct paths defined in a separate raid.xml for each RaidNode/BlockFixer.
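      The core idea of the proposed change, scoping each BlockFixer's corrupt-file lookup to the paths it owns, could be sketched as follows. This is a simplified, hypothetical stand-in: the real RaidDFSUtil.getCorruptFiles talks to the NameNode via fsck, and the prefix lists would come from each instance's raid.xml, neither of which is modeled here.

```java
import java.util.ArrayList;
import java.util.List;

public class ScopedCorruptFiles {
    // Hypothetical stand-in for a path-scoped RaidDFSUtil.getCorruptFiles:
    // instead of returning every corrupt file in the DFS, keep only those
    // under the path prefixes this BlockFixer is configured to own, so two
    // BlockFixers with disjoint prefixes never fix the same file twice.
    public static List<String> getCorruptFiles(List<String> allCorrupt,
                                               List<String> ownedPrefixes) {
        List<String> scoped = new ArrayList<>();
        for (String file : allCorrupt) {
            for (String prefix : ownedPrefixes) {
                if (file.startsWith(prefix)) {
                    scoped.add(file);
                    break;
                }
            }
        }
        return scoped;
    }

    public static void main(String[] args) {
        List<String> allCorrupt = List.of("/user/a/f1", "/user/b/f2", "/logs/f3");
        // This BlockFixer only owns /user/a and /logs; /user/b/f2 is left
        // for whichever BlockFixer owns /user/b.
        System.out.println(getCorruptFiles(allCorrupt, List.of("/user/a", "/logs")));
        // prints [/user/a/f1, /logs/f3]
    }
}
```

      With disjoint prefix sets per instance, each corrupt file is claimed by exactly one BlockFixer, which is what allows several of them to run in parallel without duplicating work.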

        Activity

        Jun Jin created issue -
        Jun Jin made changes -
        Description edited
        Jun Jin made changes -
        Affects Version/s 0.23.0 [ 12315571 ]
        Affects Version/s 0.23.1 [ 12318885 ]
        Jun Jin made changes -
        Description edited
        Jun Jin made changes -
        Description edited
        Jun Jin made changes -
        Summary: "multiple BlockFixer should be supported in order to improve scalability and relief task on single BlockFixer" → "multiple BlockFixer should be supported in order to improve scalability and reduce work on single BlockFixer"
        Jun Jin made changes -
        Summary: "multiple BlockFixer should be supported in order to improve scalability and reduce work on single BlockFixer" → "multiple BlockFixer should be supported in order to improve scalability and reduce too much work on single BlockFixer"
        Jun Jin made changes -
        Status: Open [ 1 ] → Patch Available [ 10002 ]
        Jun Jin made changes -
        Status: Patch Available [ 10002 ] → Open [ 1 ]
        Jun Jin added a comment -

        first version of patch
        Jun Jin made changes -
        Attachment HDFS-4360.patch [ 12563403 ]
        Jun Jin made changes -
        Status: Open [ 1 ] → Patch Available [ 10002 ]
        Jun Jin made changes -
        Description edited
        Hadoop QA added a comment -

        -1 overall. Here are the results of testing the latest attachment
        http://issues.apache.org/jira/secure/attachment/12563403/HDFS-4360.patch
        against trunk revision.

        -1 patch. The patch command could not apply the patch.

        Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3744//console

        This message is automatically generated.

        Jun Jin made changes -
        Status: Patch Available [ 10002 ] → Open [ 1 ]
        Transition | Time In Source Status | Execution Times | Last Executer | Last Execution Date
        Open → Patch Available | 1d 54m | 2 | Jun Jin | 05/Jan/13 07:05
        Patch Available → Open | 37m 28s | 2 | Jun Jin | 05/Jan/13 07:40

          People

          • Assignee:
            Unassigned
            Reporter:
            Jun Jin
          • Votes:
            0
            Watchers:
            5

            Dates

            • Created:
              Updated:
