Hadoop HDFS / HDFS-6574

Make sure addToDeadNodes() is only called when a connection issue occurs

    Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0, 2.5.0
    • Fix Version/s: None
    • Component/s: hdfs-client
    • Labels: None

      Description

      My colleague Jianwei Cui found, in an HBase testing scenario, that once a bad disk occurred, local reads were skipped and lots of remote reads were issued for a lengthy time, say tens of minutes; we then had to trigger a compaction to help recover the locality and read latency.
      It turned out to be related to addToDeadNodes(): imagine one disk in the local node goes bad. The current implementation adds the entire local node to the dead-node list, so all the other good disks in the local node can no longer serve any read requests.
      So the better choices here, it seems to me, are:
      1) determine whether the underlying IOException really is a connection-related exception, and only then call addToDeadNodes(); or
      2) determine whether the IOException is related to a bad block/disk, and if so skip addToDeadNodes(); otherwise call addToDeadNodes().
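The distinction in option 1 could be sketched roughly as follows. This is a minimal, self-contained illustration, not the actual DFSInputStream code; the helper names isConnectivityException and onReadFailure are hypothetical:

```java
import java.io.IOException;
import java.net.ConnectException;
import java.net.NoRouteToHostException;
import java.net.SocketTimeoutException;

public class DeadNodePolicy {
    // Hypothetical classifier: treat only transport-level failures as
    // evidence that the whole DataNode is unreachable.
    static boolean isConnectivityException(IOException e) {
        return e instanceof ConnectException
                || e instanceof NoRouteToHostException
                || e instanceof SocketTimeoutException;
    }

    // Sketch of a read-failure handler: blacklist the node only for
    // connection problems; a disk/block error just skips this replica
    // so the node's other good disks keep serving reads.
    static String onReadFailure(String node, IOException e) {
        if (isConnectivityException(e)) {
            return "addToDeadNodes(" + node + ")";
        }
        return "skip replica on " + node + ", node stays alive";
    }
}
```

A checksum or disk error would then fall through to the "skip replica" branch instead of blacklisting the node.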

      Another thing to consider: if we have got a disk exception from one node, should we refresh the locatedBlocks info from the NameNode to clear all stale caching for that bad disk of the node? It could be somewhat heavy if it is a huge file...
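A lighter alternative to refetching the whole locatedBlocks list for a huge file could be to remember bad (node, storage) pairs on the client and filter cached replica locations lazily. A hypothetical sketch; BadStorageTracker and its methods are invented for illustration and are not part of the HDFS client:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class BadStorageTracker {
    // node -> storage IDs known to be bad on that node (client-side cache)
    private final Map<String, Set<String>> badStorages = new HashMap<>();

    // Record a disk-level failure without declaring the whole node dead.
    void markBad(String node, String storageId) {
        badStorages.computeIfAbsent(node, n -> new HashSet<>()).add(storageId);
    }

    // Consulted when picking a replica from the cached locatedBlocks,
    // avoiding an immediate round trip to the NameNode.
    boolean isUsable(String node, String storageId) {
        Set<String> bad = badStorages.get(node);
        return bad == null || !bad.contains(storageId);
    }
}
```

Under this scheme only replicas on the marked disk are avoided; other disks on the same node, and the same disk ID on other nodes, remain readable.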

      We plan to make a patch soon for our internal Hadoop branch, because a sick disk degrades HBase read performance severely; we would also like to contribute it to the community if you think this is not too crazy... stack

        Activity

        Liang Xie created issue -

          People

          • Assignee: Liang Xie
          • Reporter: Liang Xie
          • Votes: 0
          • Watchers: 3
