Hadoop Common
HADOOP-3388

TestDatanodeBlockScanner failed while trying to corrupt replicas

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.18.0
    • Component/s: test
    • Labels:
      None

      Description

      Blocks now have a generation stamp associated with them. This unit test uses Block.toString() to find out the name of the block; instead it should use Block.getBlockName().
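
      A minimal sketch of the intended change, assuming Block.toString() now folds the generation stamp into its output and so no longer yields just the block name (the variable names are taken from the patch in the comments below):

          // Assumed behavior: Block.toString() now reflects the generation stamp,
          // so it no longer returns just the block name the test compares against.
          // Before: String block = DFSTestUtil.getFirstBlock(fs, file1).toString();
          String block = DFSTestUtil.getFirstBlock(fs, file1).getBlockName();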

      Attachments

      1. patch.txt
         0.8 kB
         dhruba borthakur

        Issue Links

          Activity

          dhruba borthakur added a comment -

          I just committed this.
          Lohit Vijayarenu added a comment -

          +1
          I hit the same failure yesterday and had this change locally.
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12382020/patch.txt
          against trunk revision 656270.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified tests.

          -1 patch. The patch command could not apply the patch.

          Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2465/console

          This message is automatically generated.
          Hudson added a comment -

          Integrated in Hadoop-trunk #491 (See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/491/ )
          dhruba borthakur added a comment -

          I would like to request Lohit to review this patch.
          dhruba borthakur added a comment -

          svn diff
          Index: src/test/org/apache/hadoop/dfs/TestDatanodeBlockScanner.java
          ===================================================================
          --- src/test/org/apache/hadoop/dfs/TestDatanodeBlockScanner.java (revision 656131)
          +++ src/test/org/apache/hadoop/dfs/TestDatanodeBlockScanner.java (working copy)
          @@ -174,7 +174,7 @@
               fs = cluster.getFileSystem();
               Path file1 = new Path("/tmp/testBlockVerification/file1");
               DFSTestUtil.createFile(fs, file1, 1024, (short)3, 0);
          -    String block = DFSTestUtil.getFirstBlock(fs, file1).toString();
          +    String block = DFSTestUtil.getFirstBlock(fs, file1).getBlockName();
               dfsClient = new DFSClient(new InetSocketAddress("localhost",
                                         cluster.getNameNodePort()), conf);

          The above patch fixes only part of the problem. The other part is probably related to the CorruptReplicaMap that has recently been committed.

            People

            • Assignee:
              dhruba borthakur
            • Reporter:
              dhruba borthakur
            • Votes:
              0
            • Watchers:
              0
