Hadoop HDFS / HDFS-9734

Refactoring of checksum failure report related codes

    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 3.0.0-alpha1
    • Component/s: None
    • Labels: None
    • Target Version/s:
    • Hadoop Flags: Reviewed

      Description

      This came out of a discussion with Jing Zhao in HDFS-9646. There is some duplicated code between the client and datanode sides:

          private void addCorruptedBlock(ExtendedBlock blk, DatanodeInfo node,
              Map<ExtendedBlock, Set<DatanodeInfo>> corruptionMap) {
            Set<DatanodeInfo> dnSet = corruptionMap.get(blk);
            if (dnSet == null) {
              dnSet = new HashSet<>();
              corruptionMap.put(blk, dnSet);
            }
            if (!dnSet.contains(node)) {
              dnSet.add(node);
            }
          }
      

      This would resolve the duplication and also simplify the code a bit.
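As a sketch of the kind of simplification the refactoring enables (the helper name and signature here are illustrative, not the actual patch): `Map.computeIfAbsent` collapses the get/null-check/put sequence into one call, and since `Set.add` already ignores duplicates, the explicit `contains()` check can be dropped. A shared utility along these lines could then be used from both the client and datanode sides.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical shared helper; not the code from the committed patch.
public class CorruptionMapUtil {

  // Records that a corrupted replica of a block was observed on a node.
  // computeIfAbsent creates the Set on first use; Set.add is a no-op
  // for duplicates, so no contains() check is needed.
  public static <B, D> void addCorruptedBlock(B blk, D node,
      Map<B, Set<D>> corruptionMap) {
    corruptionMap.computeIfAbsent(blk, k -> new HashSet<>()).add(node);
  }
}
```

Using generic type parameters here is only for the sake of a self-contained example; a real shared helper would use `ExtendedBlock` and `DatanodeInfo` directly.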

        Attachments

        1. HADOOP-12744-v1.patch (29 kB, Kai Zheng)
        2. HADOOP-12744-v2.patch (33 kB, Kai Zheng)
        3. HDFS-9734-v3.patch (33 kB, Kai Zheng)
        4. HDFS-9734-v4.patch (33 kB, Kai Zheng)
        5. HDFS-9734-v5.patch (29 kB, Kai Zheng)
        6. HDFS-9734-v6.patch (29 kB, Kai Zheng)
        7. HDFS-9734-v7.patch (30 kB, Kai Zheng)
        8. HDFS-9734-v8.patch (30 kB, Zhe Zhang)


            People

            • Assignee: drankye Kai Zheng
            • Reporter: drankye Kai Zheng
            • Votes: 0
            • Watchers: 3
