Hadoop HDFS / HDFS-4239

Means of telling the datanode to stop using a sick disk


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate

    Description

      If a disk has been deemed 'sick' – i.e. not dead but wounded, failing occasionally, or just exhibiting high latency – your choices are:

      1. Decommission the entire datanode. If the datanode is carrying 6 or 12 disks of data, re-replicating all of its data can be quite disruptive, especially on a smallish cluster (5 to 20 nodes) that is doing low-latency serving, e.g. hosting an HBase cluster.

      2. Stop the datanode, unmount the bad disk, and restart the datanode (you can't unmount the disk while it is in use). This option is better in that only the bad disk's data is re-replicated, not all of the datanode's data.

      Is it possible to do better, say, by sending the datanode a signal telling it to stop using a disk an operator has designated 'bad'? This would be like option #2 above, minus the need to stop and restart the datanode. Ideally the disk would become unmountable after a while. (A rough sketch of the idea follows at the end of this description.)

      A nice-to-have would be being able to tell the datanode to resume using a disk after it has been replaced.
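
      To make the request concrete, here is a minimal, self-contained Java sketch of the bookkeeping such a feature implies: the datanode keeps a live view of its data volumes, and an operator command flips a volume out of (or back into) service without a restart. All names here (VolumeRegistry, stopUsingVolume, resumeVolume) are hypothetical illustrations and are not part of the HDFS code base.

      import java.nio.file.Path;
      import java.nio.file.Paths;
      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      /**
       * Hypothetical sketch only: models a datanode's view of its data volumes,
       * where an operator can take a sick volume out of service (and later bring
       * a replaced disk back) without restarting the process. These names do not
       * come from the HDFS code base.
       */
      public class VolumeRegistry {

        /** true = volume in service, false = marked sick by the operator. */
        private final Map<Path, Boolean> volumes = new ConcurrentHashMap<>();

        public VolumeRegistry(Iterable<String> dataDirs) {
          for (String dir : dataDirs) {
            volumes.put(Paths.get(dir), Boolean.TRUE);
          }
        }

        /**
         * Operator marks a disk 'sick': no new blocks should be placed on it, its
         * existing blocks can be re-replicated elsewhere, and once drained the
         * mount point can be unmounted.
         */
        public void stopUsingVolume(String dir) {
          volumes.computeIfPresent(Paths.get(dir), (path, inService) -> Boolean.FALSE);
        }

        /** Operator brings a replaced disk back into rotation. */
        public void resumeVolume(String dir) {
          volumes.put(Paths.get(dir), Boolean.TRUE);
        }

        /** Block placement would consult this before writing to a volume. */
        public boolean isWritable(String dir) {
          return volumes.getOrDefault(Paths.get(dir), Boolean.FALSE);
        }

        public static void main(String[] args) {
          VolumeRegistry registry =
              new VolumeRegistry(java.util.List.of("/data/1", "/data/2", "/data/3"));
          registry.stopUsingVolume("/data/2");                  // operator flags the sick disk
          System.out.println(registry.isWritable("/data/2"));   // false: no new blocks land here
          registry.resumeVolume("/data/2");                     // disk has been replaced
          System.out.println(registry.isWritable("/data/2"));   // true: back in rotation
        }
      }

      A real implementation would of course also have to drain in-flight writers and re-replicate the blocks already on the sick volume before the mount point could actually be released.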

      Attachments

        1. hdfs-4239.patch (19 kB, Jimmy Xiang)
        2. hdfs-4239_v5.patch (48 kB, Jimmy Xiang)
        3. hdfs-4239_v4.patch (48 kB, Jimmy Xiang)
        4. hdfs-4239_v3.patch (42 kB, Jimmy Xiang)
        5. hdfs-4239_v2.patch (41 kB, Jimmy Xiang)



            People

              Assignee: Yongjun Zhang (yzhangal)
              Reporter: Michael Stack (stack)
              Votes: 0
              Watchers: 24
