Hadoop Common / HADOOP-163

If a DFS datanode cannot write onto its file system, it should tell the name node not to assign new blocks to it.


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.2.0
    • Fix Version/s: 0.3.0
    • Component/s: None
    • Labels: None

    Description

      I observed that sometimes, if a data node's file system is not mounted properly, it may not be writable. In that case, any data writes will fail. The name node should stop assigning new blocks to that data node, and the web page should show that the node is in an abnormal state.
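      The check the issue asks for can be sketched as a simple write probe: before accepting new blocks, the data node tries to create a file in its data directory and, on failure, reports itself as abnormal so the name node stops assigning blocks to it. The sketch below is a minimal illustration of that idea, not Hadoop's actual implementation; the class and method names are hypothetical.

```java
import java.io.File;
import java.io.IOException;

/**
 * Hypothetical sketch of a data-node disk health probe: attempt to
 * create (and delete) a small file in the data directory. If the
 * attempt fails, the directory (or its underlying mount) is not
 * writable and the node should be reported as abnormal.
 */
public class DiskProbe {

    /** Returns true if a probe file can be created and removed in dataDir. */
    public static boolean isWritable(File dataDir) {
        try {
            File probe = File.createTempFile("probe", ".tmp", dataDir);
            // Clean up the probe file; failure to delete also signals trouble.
            return probe.delete();
        } catch (IOException e) {
            // Creation failed: the directory is missing or not writable.
            return false;
        }
    }

    public static void main(String[] args) {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        // A healthy node would report NORMAL; a failed mount, ABNORMAL.
        System.out.println(isWritable(dir) ? "NORMAL" : "ABNORMAL");
    }
}
```

      In a real deployment this probe would run periodically and its result would ride along with the data node's heartbeat, so the name node learns of the failure without a separate RPC.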

      Attachments

        1. disk.patch
          19 kB
          Hairong Kuang

            People

              Assignee: Hairong Kuang
              Reporter: Runping Qi
              Votes: 1
              Watchers: 0
