Hadoop HDFS

HDFS-456: Problems with dfs.name.edits.dirs as URI

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 0.21.0
    • Fix Version/s: 0.21.0
    • Component/s: namenode
    • Labels: None
    • Hadoop Flags: Reviewed

      Description

      There are several problems with recent commit of HDFS-396.

      1. It does not work with default configuration "file:///". Throws IllegalArgumentException.
      2. ALL hdfs tests fail on Windows because "C:\mypath" is treated as an illegal URI. Backward compatibility is not provided.
      3. IllegalArgumentException should not be thrown within hdfs code because it is a RuntimeException. We should throw IOException instead. This was recently discussed in another jira.
      4. Why do we commit patches without running unit tests and test-patch? This is the minimum requirement for a patch to qualify as committable, right?
      1. EditsDirsWin.patch
        23 kB
        Konstantin Shvachko
      2. EditsDirsWin.patch
        22 kB
        Konstantin Shvachko
      3. EditsDirsWin.patch
        19 kB
        Konstantin Shvachko
      4. HDFS-456-clean.patch
        11 kB
        Jonathan Gray
      5. HDFS-456.patch
        11 kB
        Luca Telloli
      6. HDFS-456.patch
        10 kB
        Luca Telloli
      7. HDFS-456.patch
        10 kB
        Luca Telloli
      8. HDFS-456.patch
        8 kB
        Luca Telloli
      9. failing-tests.zip
        52 kB
        Luca Telloli
      10. HDFS-456.patch
        7 kB
        Luca Telloli
      11. HDFS-456.patch
        5 kB
        Luca Telloli
      12. HDFS-456.patch
        3 kB
        Luca Telloli
      13. HDFS-456.patch
        3 kB
        Luca Telloli

          Activity

          Konstantin Shvachko created issue -
          Konstantin Shvachko made changes -
          Field Original Value New Value
          Link This issue blocks HDFS-396 [ HDFS-396 ]
          Luca Telloli made changes -
          Assignee Luca Telloli [ lucat ]
          Mahadev konar added a comment -

          konstantin,
          do the tests fail on linux as well? (you just mentioned tests failing on windows)

          Flavio Junqueira added a comment -

          Konstantin, It would be great if you could take a look at the comments in HDFS-396. You would perhaps see that it has been tested and has even been through hudson. The checks that failed in hudson have been justified. Perhaps the problem is that tests don't cover enough?

          Mahadev konar added a comment -

          i think the problem is that the tests fail on windows (hudson runs on ubuntu/solaris).

          luca,
          can you run the tests on windows and see what's failing?

          Luca Telloli added a comment -

          Mahadev, yes, tests are probably failing on Windows, since the patch has been tested thoroughly only under Linux and Mac OS X.
          I have requested a virtual machine with windows installed, since at the moment I don't have any windows machine available.

          Konstantin, I'd like to understand better what you mean with your #1. Do you want to assign root "/" as a Storage Directory?
          Also, I would remove #4 from your comments; I think it's unfair, incorrect, and not useful to the issue. I assume that a developer is not required to have every possible environment ready to test the patch locally; instead, the testing system (hudson?) should take care of it.

          Raghu Angadi added a comment -

          Luca, (1) in Konstantin's comment is mostly not related to HDFS-396. I think tests were done for the patch, but it is a known issue that we don't test for windows; we generally would like to fix windows bugs when reported.

          I think you can test this on Linux by passing in a Windows path for the namenode directory ("C:\test"). It may not be necessary for you to find a windows machine.

          Mostly "C:" is making it an illegal URI. We need to have a fix for windows paths:
          a. Does the URI class or its utilities help in handling windows paths? Otherwise, we need to treat windows paths specially.. something like what firefox does ("c:/" in the url bar is an error, but "c:\" gets mapped to "file:///c:/").

          Some time back we used to ask ourselves the "What would windows do?" question more often. For this important change, we didn't. This is just another example of how there is no "simple change"... there is a tough balance between how much we want to ask a contributor to tweak, test, and refine a patch and making it smooth and simple to contribute. just my 2 cents.

          Tsz Wo Nicholas Sze added a comment -

          > ... we need treat windows paths specially.. something like what firefox does ("c:/" in url bar is an error, but "c:\" gets mapped to "file:///c:/").

          +1
          I think converting the Windows path c:\foo to the URI file:///c:/foo is standard behavior, not specific to firefox.

          BTW, since we assume Cygwin in Windows, could we just use /cygdrive/c/foo?

          Konstantin Shvachko added a comment -

          Raghu is right, my comments #1 and #3 are not related to the edits patch. This should be taken care of in a different jira.
          I ran tests on Linux and did not see failures, except for TestBackupNode, which is a known issue.
          Please do not forget to post unit test and test-patch results before committing if Hudson is off.
          I did not look deep into the problem, but I thought we are calling getCanonicalPath() to read edits files, which adds the drive letter. So maybe the solution should be to call toURI() on the path before anything else. This should convert the path (even if it is a win path) to a correct uri, but if the path is already a URI it will remain the same. Then we can start analyzing the scheme and the authority.
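One way to read this suggestion (a sketch only, not HDFS code; the class and method names here are hypothetical) is to try parsing the configured value as a URI first, and fall back to File.toURI() for bare paths:

```java
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

public class NameDirResolver {
    // Hypothetical helper: turn a configured directory value into a URI,
    // whether it is already a URI ("file:///tmp/name") or a bare path.
    static URI resolve(String name) {
        try {
            URI u = new URI(name);
            if (u.getScheme() != null) {
                return u; // already a URI; scheme and authority can be analyzed
            }
        } catch (URISyntaxException e) {
            // e.g. a Windows path like C:\name is not a legal URI; fall through
        }
        // Bare path: let File build a proper file: URI, escaping as needed
        return new File(name).getAbsoluteFile().toURI();
    }

    public static void main(String[] args) {
        System.out.println(resolve("file:///tmp/name"));      // unchanged
        System.out.println(resolve("/tmp/name").getScheme()); // "file"
    }
}
```

One caveat: a forward-slash Windows path such as c:/name parses here as a URI with scheme "c", which is exactly the drive-letter wrinkle discussed elsewhere in this issue.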

          Konstantin Shvachko added a comment -

          In FSNamesystem.getNamespaceDirs() I replaced

          -    u = new URI("file://" + new File(name).getAbsolutePath());
          +    u = new File(name).getAbsoluteFile().toURI();
          

          And the name-node started well.

          Some additional comments:

          1. getCanonicalFile() is preferable to getAbsoluteFile() since the former replaces . and .. and resolves links.
          2. You should use LOG.error(message, exception) instead of LOG.error(message + exception.getMessage()). I saw this in 4 places.
          3. When the directories are specified as file paths rather than URIs, the warning should explicitly say that this api is deprecated. Instead of
            LOG.warn("Scheme is undefined for " + name);
            LOG.warn("Please check your file system configuration in " +
                       "hdfs-site.xml");
            

            I'd rather say

            LOG.warn("Use of file paths for " + propertyName + " is deprecated. Use URIs instead.");
            

            The key words are deprecated, so that we could replace the warning with an error in a next release, and URI, so that users know what to do.
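For context on why the replaced concatenation failed (a standalone demo under a hypothetical class name, not HDFS code): "file://" + path is only a legal URI when the path contains no illegal characters, while File.toURI() escapes them:

```java
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

public class WinUriDemo {
    // Does "file://" + name parse as a URI? The old code assumed it always would.
    static boolean parsesAsFileUri(String name) {
        try {
            new URI("file://" + name);
            return true;
        } catch (URISyntaxException e) {
            return false; // e.g. backslash is not a legal URI character
        }
    }

    public static void main(String[] args) {
        System.out.println(parsesAsFileUri("/tmp/edits")); // true
        System.out.println(parsesAsFileUri("C:\\test"));   // false: illegal URI
        // File.toURI() escapes and normalizes instead of throwing:
        System.out.println(new File("C:\\test").toURI().getScheme()); // file
    }
}
```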

          Luca Telloli added a comment -

          Attaching a patch for this issue.

          • All URIs are now built using
            new File(name).getCanonicalFile().toURI()
            as Konstantin suggested.
          • LOG.error messages have been updated
          • from the messages above I understand that IllegalArgumentException is not an issue for this patch, correct?

          I see the queue for HDFS on hudson is not working, so I won't submit the patch. I'm running the test locally and I'll post the output afterwards.

          Luca Telloli made changes -
          Attachment HDFS-456.patch [ 12412560 ]
          Flavio Junqueira added a comment -

          +1, patch looks good to me. I wonder if we can also run on a windows system just to make sure that we are not missing anything.

          Raghu Angadi added a comment -

          Thanks for the fix.

          Minor change :

          +          u = new File(name).getCanonicalFile().toURI();
          +          LOG.warn("Use of file paths is deprecated for property " 
          +              + propertyName);
          +          LOG.warn("Please update your configuration");
          

          This prints two separate lines. Could you merge them? We usually have one line for the entire message so that each line is self-contained.. makes parsing and grepping easy (stack traces are an exception of course). Also it might be better to suggest that users use URIs in the message.

          Luca Telloli added a comment -

          Latest attachment fixes the LOG.warn() issue

          Luca Telloli made changes -
          Attachment HDFS-456.patch [ 12412587 ]
          Luca Telloli added a comment -

          Unit tests seem to have issues under Windows, and the issues don't all seem related to this patch. To prove that, I reverted HDFS-396 on a fresh trunk and the same tests fail there.

          Given that, I'm attaching a patch that has some Windows-specific code which solves some unit test issues, but not all of them. In particular, the windows-specific code looks like:

              if (name.matches("^[a-zA-Z]:.*")) {
                u = new File(name).getCanonicalFile().toURI();
              } else {
                // process the value as a URI
                u = new URI(name);
              }

          where the condition checks for a windows filename

          The other unit test problems will be treated in HDFS-462
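The drive-letter condition from the snippet above can be exercised on any platform; a minimal sketch (the class name is hypothetical):

```java
public class DriveLetterCheck {
    // Same drive-letter test as in the patch snippet: a single letter
    // followed by a colon marks a Windows path.
    static boolean isWindowsPath(String name) {
        return name.matches("^[a-zA-Z]:.*");
    }

    public static void main(String[] args) {
        System.out.println(isWindowsPath("C:\\test"));    // true
        System.out.println(isWindowsPath("file:///tmp")); // false: scheme longer than one letter
        System.out.println(isWindowsPath("/tmp/edits"));  // false
    }
}
```

One caveat: a one-letter URI scheme would also match this pattern, so the check is a heuristic rather than a full classifier.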

          Luca Telloli made changes -
          Attachment HDFS-456.patch [ 12412635 ]
          Konstantin Shvachko made changes -
          Link This issue is blocked by HDFS-462 [ HDFS-462 ]
          Konstantin Shvachko added a comment -

          Could you please test your patch now that HDFS-462 is committed.

          Luca Telloli added a comment -

          I ran the tests:

               [exec] +1 overall.  
               [exec] 
               [exec]     +1 @author.  The patch does not contain any @author tags.
               [exec] 
               [exec]     +1 tests included.  The patch appears to include 3 new or modified tests.
               [exec] 
               [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.
               [exec] 
               [exec]     +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
               [exec] 
               [exec]     +1 findbugs.  The patch does not introduce any new Findbugs warnings.
               [exec] 
               [exec]     +1 release audit.  The applied patch does not increase the total number of release audit warnings.
               [exec] 
          

          The core tests had one failing test:

              [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 122.279 sec
              [junit] Test org.apache.hadoop.cli.TestHDFSCLI FAILED
          

          with the following error:

          2009-07-14 10:40:01,804 INFO  cli.TestCLI (TestCLI.java:displayResults(180)) -
          2009-07-14 10:40:01,804 INFO  cli.TestCLI (TestCLI.java:displayResults(184)) - Summary results:
          2009-07-14 10:40:01,804 INFO  cli.TestCLI (TestCLI.java:displayResults(185)) - ----------------------------------
          2009-07-14 10:40:01,805 INFO  cli.TestCLI (TestCLI.java:displayResults(205)) -                Testing mode: test
          2009-07-14 10:40:01,805 INFO  cli.TestCLI (TestCLI.java:displayResults(206)) -
          2009-07-14 10:40:01,805 INFO  cli.TestCLI (TestCLI.java:displayResults(207)) -              Overall result: --- FAIL ---
          2009-07-14 10:40:01,805 INFO  cli.TestCLI (TestCLI.java:displayResults(215)) -                # Tests pass: 621 (99%)
          2009-07-14 10:40:01,805 INFO  cli.TestCLI (TestCLI.java:displayResults(217)) -                # Tests fail: 2 (0%)
          2009-07-14 10:40:01,806 INFO  cli.TestCLI (TestCLI.java:displayResults(221)) -          # Validations done: 1819 (each test may do multiple validations)
          2009-07-14 10:40:01,806 INFO  cli.TestCLI (TestCLI.java:displayResults(224)) -
          2009-07-14 10:40:01,806 INFO  cli.TestCLI (TestCLI.java:displayResults(225)) - Failing tests:
          2009-07-14 10:40:01,806 INFO  cli.TestCLI (TestCLI.java:displayResults(226)) - --------------
          2009-07-14 10:40:01,806 INFO  cli.TestCLI (TestCLI.java:displayResults(232)) - 557: help: help for rm
          2009-07-14 10:40:01,806 INFO  cli.TestCLI (TestCLI.java:displayResults(232)) - 558: help: help for rmr
          2009-07-14 10:40:01,806 INFO  cli.TestCLI (TestCLI.java:displayResults(242)) -
          
          Luca Telloli added a comment -

          I'm sorry, I just realized I ran the tests under Linux. I'll have it done on Windows and update the comments accordingly.

          Luca Telloli added a comment -

          I ran the tests on windows, using trunk + this patch and trunk + reverted 396, to see whether failures are due to this patch or not. I fixed a couple of errors in the patch that I'm now posting.
          To find some additional issues I ran each test both in a directory containing spaces in its path (like c:\Documents and Settings) and in a directory without spaces, which apparently made the difference.

          The impression is that windows builds have been neglected for some time, given that some tests fail on both pre- and post-patch code.

          The outcome is that the remaining failing tests (which fail only on windows) are not related to this patch. Details follow for public discussion:

          FIXED TESTS:

          by modifying code:

          Testsuite: org.apache.hadoop.hdfs.TestDFSShell
           Tests run: 14, Failures: 1, Errors: 3, Time elapsed: 64.25 sec
          
          Testsuite: org.apache.hadoop.hdfs.server.namenode.TestStorageRestore
           Tests run: 2, Failures: 1, Errors: 1, Time elapsed: 27.172 sec
          

          without modifying code:

           
          Testsuite: org.apache.hadoop.hdfs.TestFileAppend2
           Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 99.781 sec
           
          Testsuite: org.apache.hadoop.hdfs.TestReplication
           Tests run: 5, Failures: 0, Errors: 1, Time elapsed: 48.656 sec
           

          FAILS ON BOTH:

          if the path contains spaces (otherwise works):

          Testsuite: org.apache.hadoop.hdfs.TestDatanodeBlockScanner
           Tests run: 4, Failures: 0, Errors: 1, Time elapsed: 118.078 sec
           
          Testsuite: org.apache.hadoop.fs.TestUrlStreamHandler
           Tests run: 2, Failures: 0, Errors: 2, Time elapsed: 23.969 sec
           

          other causes:

          Testsuite: org.apache.hadoop.cli.TestHDFSCLI
           Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 108.047 sec
           
          Testsuite: org.apache.hadoop.hdfs.TestHDFSTrash
           Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 20.171 sec
           
          Testcase: testTrash took 1.609 sec
          Testcase: testNonDefaultFS took 0.188 sec
                  FAILED
          null
          junit.framework.AssertionFailedError: null
                  at org.apache.hadoop.fs.TestTrash.checkTrash(TestTrash.java:62)
                  at org.apache.hadoop.fs.TestTrash.trashNonDefaultFS(TestTrash.java:335)
                  at org.apache.hadoop.hdfs.TestHDFSTrash.testNonDefaultFS(TestHDFSTrash.java:61)
                  at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
                  at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
                  at junit.extensions.TestSetup.run(TestSetup.java:27)
          
          Testsuite: org.apache.hadoop.hdfs.TestHDFSServerPorts
           Tests run: 4, Failures: 4, Errors: 0, Time elapsed: 7.437 sec
           
          Testcase: testNameNodePorts took 2.906 sec
                  FAILED
          null
          junit.framework.AssertionFailedError: null
                  at org.apache.hadoop.hdfs.TestHDFSServerPorts.testNameNodePorts(TestHDFSServerPorts.java:256)
          
          Testcase: testDataNodePorts took 1.515 sec
                  FAILED
          null
          junit.framework.AssertionFailedError: null
                  at org.apache.hadoop.hdfs.TestHDFSServerPorts.testDataNodePorts(TestHDFSServerPorts.java:291)
          
          Testcase: testSecondaryNodePorts took 0.938 sec
                  FAILED
          null
          junit.framework.AssertionFailedError: null
                  at org.apache.hadoop.hdfs.TestHDFSServerPorts.testSecondaryNodePorts(TestHDFSServerPorts.java:319)
          
          Testcase: testBackupNodePorts took 1.312 sec
                  FAILED
          Backup started on same port as Namenode
          junit.framework.AssertionFailedError: Backup started on same port as Namenode
                  at org.apache.hadoop.hdfs.TestHDFSServerPorts.testBackupNodePorts(TestHDFSServerPorts.java:348)
          
          Luca Telloli made changes -
          Attachment HDFS-456.patch [ 12413450 ]
          Konstantin Shvachko added a comment -

          Could you please list the tests that fail on Windows after your latest patch is applied.
          Could you please attach log files for the failed tests.

          Luca Telloli added a comment -

          Currently failing tests are:

              [junit] Test org.apache.hadoop.hdfs.TestHDFSServerPorts FAILED
              [junit] Test org.apache.hadoop.hdfs.TestHDFSTrash FAILED
              [junit] Test org.apache.hadoop.hdfs.TestModTime FAILED
              [junit] Test org.apache.hadoop.hdfs.TestReplication FAILED
          

          Attaching a .zip file containing the logs

          Luca Telloli made changes -
          Attachment failing-tests.zip [ 12413571 ]
          Konstantin Shvachko added a comment -

          TestReplication does not fail for me (using trunk). Could you please check this one.
          TestHDFSServerPorts and TestHDFSTrash do fail on Windows only. We should file an issue if there isn't one already.
          TestModTime started failing when I updated trunk; it was not failing on a 2-day-old trunk, when you were running your tests. I don't know what it could be related to.

          Luca Telloli added a comment -

          Today I got different results from yours; I'm not sure whether this is related to my environment configuration.

          I tried these 3 configurations:
          1 - trunk
          2 - trunk with HDFS-456 applied
          3 - trunk with HDFS-396 reverted. HDFS-396 no longer reverts cleanly, but the remaining conflict is in CreateEditLog.java, so it is not relevant to this issue

          My outcome is:
          1 - TestReplication fails with messages such as the following, which I guess is expected:

          Testcase: testPendingReplicationRetry took 0.469 sec
                  Caused an ERROR
          All specified directories are not accessible or do not exist.
          java.io.IOException: All specified directories are not accessible or do not exist.
          

          2 - TestReplication failed on testReplicateLenMismatchedBlock with error:

          Testcase: testReplicateLenMismatchedBlock took 2.375 sec
                  Caused an ERROR
          The requested operation cannot be performed on a file with a user-mapped section open
          java.io.IOException: The requested operation cannot be performed on a file with a user-mapped section open
                  at java.io.RandomAccessFile.setLength(Native Method)
                  at org.apache.hadoop.hdfs.TestDatanodeBlockScanner.changeReplicaLength(TestDatanodeBlockScanner.java:419)
                  at org.apache.hadoop.hdfs.TestReplication.changeBlockLen(TestReplication.java:424)
                  at org.apache.hadoop.hdfs.TestReplication.testReplicateLenMismatchedBlock(TestReplication.java:403)
          

          3 - TestReplication passed correctly

          luca@yahoo-a170c7579 /tmp/patches/trunk-applied
          $ tail -n 6 build/test/TEST-org.apache.hadoop.hdfs.TestReplication.txt
          
          Testcase: testBadBlockReportOnTransfer took 24.61 sec
          Testcase: testReplicationSimulatedStorag took 3.562 sec
          Testcase: testReplication took 9.359 sec
          Testcase: testPendingReplicationRetry took 23.797 sec
          Testcase: testReplicateLenMismatchedBlock took 10.953 sec
          

          So in the end everything seems fine. I'm still puzzled as to why the test failed yesterday but not today.

          Luca Telloli added a comment -

          To answer my own question, I ran the test on trunk + HDFS-456 in two batches of 30 consecutive runs.
          In the first batch it passed 12 times and failed 18; in the second it failed 23 times and passed 7.
          For the second batch I saved the test logs, and they all fail with the same error below, which is also the error that makes the test fail with HDFS-396 reverted, so I'd conclude that this patch does not affect that test. The error seems related to something being left open, and I assume it is OS specific.

          Testcase: testReplicateLenMismatchedBlock took 1.984 sec
                  Caused an ERROR
          The requested operation cannot be performed on a file with a user-mapped section open
          java.io.IOException: The requested operation cannot be performed on a file with a user-mapped section open
                  at java.io.RandomAccessFile.setLength(Native Method)
                  at org.apache.hadoop.hdfs.TestDatanodeBlockScanner.changeReplicaLength(TestDatanodeBlockScanner.java:419)
                  at org.apache.hadoop.hdfs.TestReplication.changeBlockLen(TestReplication.java:424)
                  at org.apache.hadoop.hdfs.TestReplication.testReplicateLenMismatchedBlock(TestReplication.java:403)
          
          Luca Telloli added a comment -

          After a personal discussion with Konstantin, I simplified the construction of the URI, eliminating the custom Windows check in favor of the more generic sequence below:

          for (String name : dirNames) {
            URI u = null;
            try {
              u = new URI(name);
            } catch (Exception e) {
              LOG.warn("Path " + name + " should be specified as a URI " +
                  "in configuration files. Reverting to the default file scheme");
            }
            // if uri is null or scheme is undefined, then assume it's file://
            if (u == null || u.getScheme() == null) {
              System.out.println("Using default scheme");
              try {
                u = new File(name).getCanonicalFile().toURI();
              } catch (IOException e) {
                LOG.error("Error while processing element as URI: " + name);
              }
            }
            if (u != null)
              dirs.add(u);
          }
             

          The idea is that on Windows the first attempt to construct a URI from a Windows file name will throw an exception, so we fall back to the default file-scheme construction:

          new File(name).getCanonicalFile().toURI();
          
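The fallback described above can be sketched as a standalone helper. This is an illustrative sketch of the logic under discussion, not the committed code: `toUriWithFallback` is a hypothetical name, and the HDFS logging calls are replaced with plain comments.

```java
import java.io.File;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

public class UriFallbackSketch {
    // Hypothetical helper mirroring the fallback above: try the string as a
    // URI first; if that fails (e.g. "C:\data" on Windows) or yields no
    // scheme, fall back to treating it as a local file path.
    static URI toUriWithFallback(String name) throws IOException {
        URI u = null;
        try {
            u = new URI(name);
        } catch (URISyntaxException e) {
            // not a valid URI; warn and fall through to the file fallback
        }
        if (u == null || u.getScheme() == null) {
            u = new File(name).getCanonicalFile().toURI();
        }
        return u;
    }

    public static void main(String[] args) throws IOException {
        // A well-formed URI keeps its scheme.
        System.out.println(toUriWithFallback("file:///tmp/edits").getScheme()); // file
        // A bare path has no scheme and falls back to file:.
        System.out.println(toUriWithFallback("/tmp/edits").getScheme());        // file
    }
}
```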
          Luca Telloli made changes -
          Attachment HDFS-456.patch [ 12416583 ]
          Luca Telloli made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12416583/HDFS-456.patch
          against trunk revision 804127.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 12 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/67/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/67/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/67/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/67/console

          This message is automatically generated.

          Tsz Wo Nicholas Sze added a comment -

          A few nits:

          • change numberOfVolumes() to private
          • keep handleDiskError(String errMsgr) private
          • +    // to shutdonw DN completely and don't want NN to remove it.
            

            "shutdonw" should be "shutdown".

          Tsz Wo Nicholas Sze added a comment -

          Oops, my previous comments were not for this issue. Sorry.

          Suresh Srinivas added a comment -

          Comments:

          1. FSImage.getDirectories() in this method, the path for storage directories is interpreted only as File. Could this also have URI?
          2. FSImage.java - when creating a new URI, please catch the specific exception URISyntaxException instead of a blanket Exception
          3. FSImage.java - has System.out.println - please remove it
          4. FSImage.java - when trying to process name as file, on exception, should the error say "Error while processing element as URI or File: " + name
          5. Interpreting a path as URI, and on failure as file is repeated in multiple places. Move this into a util, instead of duplicating the code. Please add unit tests to test this method on various platforms to ensure different configuration (that is files, windows files, URIs) are handled. I would like to review the unit tests to see if we are catching all the conditions. These tests need to pass both on Windows and Linux.
          6. In some of the tests, replace(' ', '+'); has been removed. Not sure if this is intentional?
          7. Please use two spaces instead of tab
          Luca Telloli added a comment -

          Suresh, thanks for the detailed comments, my replies are below.

          In general, after some tests, I came up with an updated URI creation logic:

          • if the URI is correctly specified, the URI constructor will take care of it
          • if the user specified a directory/file and not a URI, then the File constructor is invoked and the URI constructed from there
          • if the scheme is file, then the authority component should not be a Windows drive letter such as C:

          The last one comes after reading this page: http://blogs.msdn.com/ie/archive/2006/12/06/file-uris-in-windows.aspx

          Let me know if you think that the logic is solid and the unit test is good enough, thanks!
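The third rule can be illustrated with a small check. This is a sketch, not the patch itself: `looksLikeDriveLetter` is a hypothetical helper name introduced only for illustration.

```java
import java.net.URI;
import java.net.URISyntaxException;

public class DriveLetterCheck {
    // Hypothetical helper: for a file: URI, an authority component that is
    // really a drive letter (e.g. "C:") indicates a malformed Windows file
    // URI such as file://C:/data, where the drive letter landed in the
    // authority position instead of the path.
    static boolean looksLikeDriveLetter(String authority) {
        return authority != null
            && authority.length() == 2
            && Character.isLetter(authority.charAt(0))
            && authority.charAt(1) == ':';
    }

    public static void main(String[] args) throws URISyntaxException {
        System.out.println(looksLikeDriveLetter("C:"));        // true
        System.out.println(looksLikeDriveLetter("localhost")); // false
        // For inspection: how java.net.URI slices a Windows-style file URI.
        System.out.println(new URI("file://C:/data").getAuthority());
    }
}
```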

          FSImage.getDirectories() in this method, the path for storage directories is interpreted only as File. Could this also have URI?

          I'm not 100% sure about this. In FSImage.getDirectories() the assumption is that the list of directories has already been set correctly by the initialization methods, no? What I mean is that the method does not read any configuration property.

          FSImage.java - when creating new URI please catch specific exception URISyntaxExcepion instead of blanket Exception

          Done

          FSImage.java - has System.out.println - please remove it

          Done: sorry about it

          FSImage.java - when trying to process name as file, on exception, should the error say "Error while processing element as URI or File: " + name

          Updated

          Interpreting a path as URI, and on failure as file is repeated in multiple places. Move this into a util, instead of duplicating the code.

          Done.

          Please add unit tests to test this method on various platforms to ensure different configuration (that is files, windows files, URIs) are handled. I would like to review the unit tests to see if we are catching all the conditions. These tests need to pass both on Windows and Linux

          I added a unit test as requested.

          In some of the tests, replace(' ', '+'); has been removed. Not sure if this is intentional?

          It was intentional two patches ago, but I removed it anyway. I think some tests on Windows behave oddly in directories with spaces (that is, if you run them in c:\Documents and Settings the outcome differs from running them in /cygdrive/c/tmp/), but that should be the subject of a different patch, if anything.

          Luca Telloli made changes -
          Attachment HDFS-456.patch [ 12417737 ]
          Luca Telloli made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Luca Telloli made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12417737/HDFS-456.patch
          against trunk revision 808193.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          -1 release audit. The applied patch generated 148 release audit warnings (more than the trunk's current 147 warnings).

          -1 core tests. The patch failed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/93/testReport/
          Release audit warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/93/artifact/trunk/patchprocess/releaseAuditDiffWarnings.txt
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/93/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/93/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/93/console

          This message is automatically generated.

          Luca Telloli made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Luca Telloli added a comment -

          Fixed javadoc.
          Not sure about the failing test though.

          Luca Telloli made changes -
          Attachment HDFS-456.patch [ 12417979 ]
          Luca Telloli made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12417979/HDFS-456.patch
          against trunk revision 808670.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          -1 release audit. The applied patch generated 148 release audit warnings (more than the trunk's current 147 warnings).

          -1 core tests. The patch failed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/99/testReport/
          Release audit warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/99/artifact/trunk/patchprocess/releaseAuditDiffWarnings.txt
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/99/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/99/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-vesta.apache.org/99/console

          This message is automatically generated.

          Luca Telloli added a comment -

          No more audit warnings I swear!

          Luca Telloli made changes -
          Attachment HDFS-456.patch [ 12417997 ]
          Luca Telloli made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Luca Telloli made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Luca Telloli made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Luca Telloli made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12417997/HDFS-456.patch
          against trunk revision 810337.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 6 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/6/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/6/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/6/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/6/console

          This message is automatically generated.

          Tsz Wo Nicholas Sze added a comment -

          > No more audit warnings I swear!
          Well done! You got the first successful build on the new Hudson machine.

          Really hope that this can be committed soon. Otherwise, we cannot run hdfs on Windows.

          Konstantin Shvachko added a comment -

          Making it a blocker for 0.21

          Konstantin Shvachko made changes -
          Priority Major [ 3 ] Blocker [ 1 ]
          Luca Telloli added a comment -

          Is there anything preventing this one from being committed? QA seemed fine in the last run.

          Jonathan Gray added a comment -

          Over in HBase, we have not been able to run our unit tests on Windows in 0.21 because of URI-related errors in MiniDFSCluster.

          With this patch, I am now able to run our unit tests on Windows. I used to see messages like this:

          2009-10-23 13:18:44,224 ERROR [main] namenode.FSNamesystem(354): Error while processing URI: C:\eclipse\HBase 0.21 Trunk SVN\build\hbase\test\dfs\name1. The error message was: Illegal character in opaque part at index 2: C:\eclipse\HBase 0.21 Trunk SVN\build\hbase\test\dfs\name1
          

          With the patch, I now see:

          2009-10-23 13:28:55,823 WARN  [main] common.Util(50): Path C:\eclipse\HBase 0.21 Trunk SVN\build\hbase\test\dfs\name1 should be specified as a URI in configuration files. Reverting to the default file scheme
          

          And tests now run as expected. +1 from me and others over in HBase.

          Attached is a patch that applies cleanly to the current 0.21 branch, no other changes. Thanks Luca!
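
The failure Jonathan quotes can be reproduced outside Hadoop: java.net.URI parses "C:" as a scheme, so the backslash that follows is an illegal character in the opaque part. A minimal sketch of the fallback behavior the WARN message describes (the helper name stringAsUri is illustrative here, not the actual Util code):

```java
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

public class PathAsUri {
    /**
     * Convert a configuration value to a URI, reverting to the default
     * file scheme when the value is a bare filesystem path. Hypothetical
     * helper mirroring the behavior described in the WARN message above.
     */
    static URI stringAsUri(String value) {
        try {
            URI u = new URI(value);
            if (u.getScheme() != null) {
                // Already a proper URI, e.g. file:///tmp/dfs/name
                return u;
            }
        } catch (URISyntaxException e) {
            // e.g. "C:\dfs\name": "C" parses as a scheme, then the
            // backslash is an illegal character in the opaque part
        }
        // Bare path (Unix absolute path, or a Windows drive path that
        // failed to parse): fall back to a file: URI
        return new File(value).getAbsoluteFile().toURI();
    }

    public static void main(String[] args) {
        System.out.println(stringAsUri("file:///tmp/dfs/name"));
        System.out.println(stringAsUri("C:\\dfs\\name"));
    }
}
```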

          Jonathan Gray made changes -
          Attachment HDFS-456-clean.patch [ 12423058 ]
          Konstantin Shvachko made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Assignee Luca Telloli [ lucat ] Konstantin Shvachko [ shv ]
          Konstantin Shvachko added a comment -

          This patch is based on the previous patches submitted by Luca, and updated by Jonathan.

          • Removed unused tests for incorrect URIs containing Windows drive letters.
          • Improved some message reporting and JavaDoc comments.
          • Replaced the storage-directory declarations in hdfs-default.xml with URIs for "dfs.namenode.name.dir" and "dfs.namenode.checkpoint.dir".
          • Fixed MiniDFSCluster so that it sets URIs, not files, as storage directories.
          • Fixed other tests that set or use storage directories outside the mini-cluster.
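
The hdfs-default.xml change for the storage directories would look along these lines (the values here are illustrative, not the shipped defaults):

```xml
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///var/hadoop/dfs/name</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>file:///var/hadoop/dfs/namesecondary</value>
</property>
```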
          Konstantin Shvachko made changes -
          Attachment EditsDirsWin.patch [ 12427420 ]
          Konstantin Shvachko made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Konstantin Shvachko added a comment -

          I ran tests on Windows and on Linux.
          The Linux run was good. The Windows tests fail because of HDFS-815 and HDFS-816; I think these are separate issues unrelated to this patch.
          To reproduce them, though, one will need to apply this patch.

          Konstantin Shvachko made changes -
          Link This issue blocks HDFS-815 [ HDFS-815 ]
          Konstantin Shvachko made changes -
          Link This issue is blocked by HDFS-816 [ HDFS-816 ]
          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12427420/EditsDirsWin.patch
          against trunk revision 888538.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 15 new or modified tests.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/138/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/138/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/138/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/138/console

          This message is automatically generated.

          Suresh Srinivas added a comment -

          Comments:

          1. Some lines in the tests exceed 80 characters.
          2. TestHDFSServerPorts.java - when creating the URI string, why is File.getCanonicalFile().toURI() not used? This also applies to other tests.
          3. TestGetUriFromString.testAbsolutePathAsURI - is there a reason an OS check is needed? The test works on Unix for both Windows and Unix paths.
          Konstantin Shvachko added a comment -

          This addresses Suresh's comments.
          I did some more refactoring of the patch: I removed unused constants, renamed several constants that did not follow our code style (not all capitals), and factored out two commonly used Util methods. I guess other people may forget to use the canonical file name, as I did, so I used the Util.fileAsURI() method.
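
A fileAsURI-style helper presumably just wraps the canonicalization step so callers cannot forget it; a rough sketch under that assumption (the real Util.fileAsURI() may differ):

```java
import java.io.File;
import java.io.IOException;
import java.net.URI;

public class FileAsUriSketch {
    /**
     * Sketch of a fileAsURI-style helper: canonicalize first (resolving
     * "." and ".." segments and symlinks), then convert, so every local
     * path becomes a well-formed file: URI.
     */
    static URI fileAsURI(File f) throws IOException {
        return f.getCanonicalFile().toURI();
    }

    public static void main(String[] args) throws IOException {
        // A relative path is resolved against the working directory
        // before conversion, yielding an absolute file: URI
        System.out.println(fileAsURI(new File("build/test/dfs/name1")));
    }
}
```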

          Konstantin Shvachko made changes -
          Attachment EditsDirsWin.patch [ 12427661 ]
          Konstantin Shvachko made changes -
          Status Patch Available [ 10002 ] Open [ 1 ]
          Konstantin Shvachko made changes -
          Status Open [ 1 ] Patch Available [ 10002 ]
          Suresh Srinivas added a comment -

          Comments:

          1. Util.stringCollectionAsURIs - javadoc for @return is empty
          2. Optional: MiniDFSCluster - fix two existing javadoc issues - restartDataNode() remove comma after dnport and keepPort and remove throws clause for getDataDirectory()
          3. Optional: TestStorageRestore.getFileMD5 fix the javadoc comment @param by adding parameter name
          Konstantin Shvachko added a comment -

          This fixes JavaDoc.

          Konstantin Shvachko made changes -
          Attachment EditsDirsWin.patch [ 12427666 ]
          Suresh Srinivas added a comment -

          +1 for the patch

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12427661/EditsDirsWin.patch
          against trunk revision 889464.

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 15 new or modified tests.

          -1 javadoc. The javadoc tool appears to have generated 1 warning messages.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 findbugs. The patch does not introduce any new Findbugs warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed core unit tests.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/142/testReport/
          Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/142/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/142/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/142/console

          This message is automatically generated.

          Konstantin Shvachko added a comment -

          I just committed this.

          Konstantin Shvachko made changes -
          Status Patch Available [ 10002 ] Resolved [ 5 ]
          Hadoop Flags [Reviewed]
          Resolution Fixed [ 1 ]
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #170 (See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/170/)

          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk-Commit #145 (See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk-Commit/145/)

          Hudson added a comment -

          Integrated in Hdfs-Patch-h5.grid.sp2.yahoo.net #148 (See http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h5.grid.sp2.yahoo.net/148/)

          Konstantin Shvachko made changes -
          Link This issue relates to HDFS-873 [ HDFS-873 ]
          Hudson added a comment -

          Integrated in Hdfs-Patch-h2.grid.sp2.yahoo.net #94 (See http://hudson.zones.apache.org/hudson/job/Hdfs-Patch-h2.grid.sp2.yahoo.net/94/)

          Tom White made changes -
          Status Resolved [ 5 ] Closed [ 6 ]

            People

            • Assignee: Konstantin Shvachko
            • Reporter: Konstantin Shvachko
            • Votes: 0
            • Watchers: 7