Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 0.15.0
    • Fix Version/s: 0.16.0
    • Component/s: conf
    • Labels:
      None

      Description

      Looked at the issues related to port rolling. My impression is that port rolling is required only for the unit tests to run.
      Even the name-node port should roll there, which it currently does not, in order to be able to start two clusters for testing, say, distcp.

      For real clusters, on the contrary, port rolling is not desired and is sometimes even prohibited.
      So we should have a way to ban port rolling. My proposal is to:

      1. use the ephemeral port 0 if port rolling is desired;
      2. if a specific port is specified, then port rolling should not happen at all, meaning that a
        server either starts on that particular port or does not start (a small plain-Java illustration follows the list).
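
      A minimal illustration of the two cases in plain Java (a sketch of the intended semantics only, not Hadoop code; the port number 50070 is arbitrary):

        import java.io.IOException;
        import java.net.BindException;
        import java.net.ServerSocket;

        public class BindDemo {
          public static void main(String[] args) throws IOException {
            // Case 2: an explicit port must either bind or fail -- no rolling.
            try (ServerSocket fixed = new ServerSocket(50070)) {
              System.out.println("started on fixed port " + fixed.getLocalPort());
            } catch (BindException e) {
              System.err.println("fixed port busy, refusing to start: " + e.getMessage());
            }
            // Case 1: port 0 lets the OS pick a free ephemeral port.
            try (ServerSocket ephemeral = new ServerSocket(0)) {
              System.out.println("started on ephemeral port " + ephemeral.getLocalPort());
            }
          }
        }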

      The desired port is specified via configuration parameters.

      • Name-node: fs.default.name = host:port
      • Data-node: dfs.datanode.port
      • Job-tracker: mapred.job.tracker = host:port
      • Task-tracker: mapred.task.tracker.report.bindAddress = host
        Task-tracker currently has no option to specify the port; it always uses the ephemeral port 0,
        so I propose to add one.
      • Secondary node does not need a port to listen on.

      For info servers we have two sets of config variables *.info.bindAddress and *.info.port
      except for the task tracker, which calls them *.http.bindAddress and *.http.port instead of "info".
      With respect to the info servers I propose to eliminate the port parameters entirely and use
      *.info.bindAddress = host:port
      Info servers should follow the same rule: start or fail on the specified port if it is not 0,
      and start on any free port if the port is 0 (ephemeral).

      For the task-tracker I would rename tasktracker.http.bindAddress to mapred.task.tracker.info.bindAddress.
      For the data-node, dfs.datanode.info.bindAddress should be included in the default config.
      Is there a reason why it is not there?

      This is the summary of proposed changes:

      Server             Current name = value                           Proposed name = value
      NameNode           fs.default.name = host:port                    same
                         dfs.info.bindAddress = host                    dfs.http.bindAddress = host:port
      DataNode           dfs.datanode.bindAddress = host                dfs.datanode.bindAddress = host:port
                         dfs.datanode.port = port                       eliminate
                         dfs.datanode.info.bindAddress = host           dfs.datanode.http.bindAddress = host:port
                         dfs.datanode.info.port = port                  eliminate
      JobTracker         mapred.job.tracker = host:port                 same
                         mapred.job.tracker.info.bindAddress = host     mapred.job.tracker.http.bindAddress = host:port
                         mapred.job.tracker.info.port = port            eliminate
      TaskTracker        mapred.task.tracker.report.bindAddress = host  mapred.task.tracker.report.bindAddress = host:port
                         tasktracker.http.bindAddress = host            mapred.task.tracker.http.bindAddress = host:port
                         tasktracker.http.port = port                   eliminate
      SecondaryNameNode  dfs.secondary.info.bindAddress = host          dfs.secondary.http.bindAddress = host:port
                         dfs.secondary.info.port = port                 eliminate
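
      To make the host:port convention concrete, here is a hedged sketch of how a daemon could read one of the proposed values. The property name and the default value below are illustrative, taken from the table; the parsing helper is the createSocketAddr utility discussed later in this issue:

        import java.net.InetSocketAddress;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.net.NetUtils;

        public class DataNodeAddressExample {
          public static void main(String[] args) {
            Configuration conf = new Configuration();
            // "host:0" requests an ephemeral port; any other port means start-or-fail.
            // The default value here is only illustrative.
            String addr = conf.get("dfs.datanode.bindAddress", "0.0.0.0:50010");
            InetSocketAddress bindAddr = NetUtils.createSocketAddr(addr);
            System.out.println("would bind to " + bindAddr);
          }
        }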

      Do we also want to set a uniform naming convention for the configuration variables?
      Using hdfs instead of dfs, or info instead of http, and systematically using either datanode
      or data.node throughout would look better in my opinion.

      These are all API changes, so I would really like some feedback on this, especially from
      people who deal with configuration issues in practice.

      1. FixedPorts3.patch
        63 kB
        Konstantin Shvachko
      2. FixedPorts4.patch
        63 kB
        Konstantin Shvachko
      3. port.stack
        14 kB
        dhruba borthakur

        Issue Links

          Activity

          Arun C Murthy added a comment -

          +1

          In my experience, port-rolling via hand-crafted code is fundamentally brittle and prone to failure. The better way to say 'I don't care about starting this specific service on a well-known port' is to just pass 0 as the port and let the OS pick an ephemeral port, which is precisely what we did via HADOOP-1085.

          "My impression is that port rolling is required only for the unit tests to run."

          Also, it is safer to let the OS pick ephemeral ports in places where we do not care about having a well-known port, e.g. the tasktracker's RPC port for the child JVM.

          "Do we also want to set some uniform naming convention for the configuration variables?"

          +1

          Tsz Wo Nicholas Sze added a comment -

          Not sure whether the following is related to this issue:

          The static method DataNode.createSocketAddr(String target) is used everywhere. It is better to move it to org.apache.hadoop.net.NetUtils.

          Similarly, StatusHttpServer is in the org.apache.hadoop.mapred package, which is a wrong place.
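
          For illustration only, a rough sketch of what such a shared host:port parser might look like; this is not the actual DataNode or NetUtils code, and the class name is made up:

            import java.net.InetSocketAddress;

            public class AddressUtil {
              /** Parse "host:port", or just "host" with a fallback port, into a socket address. */
              public static InetSocketAddress createSocketAddr(String target, int defaultPort) {
                int colon = target.indexOf(':');
                if (colon < 0) {
                  return new InetSocketAddress(target, defaultPort);
                }
                String host = target.substring(0, colon);
                int port = Integer.parseInt(target.substring(colon + 1));
                return new InetSocketAddress(host, port);
              }
            }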

          Allen Wittenauer added a comment -

          The primary issue is that Hadoop isn't predictable. With randomized ports (including purely ephemeral ports) it is impossible to:

          1) Make sure Hadoop doesn't sit on a port that another application may be using unless that application is already running.

          2) Create firewall rules that prevent connections to Hadoop services.

          3) Create QoS (Quality of Service) settings such that HDFS has a higher/lower priority vs. some other service.

          I really want predictability. I want to be able to say that my JobTracker is always using ports a, b, c, my namenode is always using ports d, e, f, my datanode is always using ports g, h, i, etc. If a port is in use, then it should be perfectly acceptable to have that process fail. Predictability == management when we're talking about large-scale administration.

          As Arun pointed out, if someone really doesn't care what ports these services run on, then using 0 should be a reliable equivalent to using port rolling.

          As to the names, I'd prefer 'hdfs' over 'dfs' if only because a lot of the people I talk to always follow up with "Why are you using Microsoft's DFS?". sigh I prefer 'http' over 'info' if only because most people are more likely to recognize that a web interface is sitting on that port and that it might require extra care. [The (ab)use of that port by Hadoop is another issue... ;) ]

          One concern I have is what happens if I have multiple interfaces (NICs). How does it work if I want to bind to only one interface, to all of them, or to different ports on different interfaces?

          Konstantin Shvachko added a comment - edited

          This patch:

          1. Changes the behavior of the following Hadoop servers with respect to port rolling:
            NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker.
            The new behavior is:
            • when a specific port is provided, the server must either start on that port
              or fail by throwing java.net.BindException;
            • if the port is 0 (ephemeral), the server should choose a free port and start on it.
          2. Introduces 2 new unit tests, TestHDFSServerPorts and TestMRServerPorts, which verify the new behavior
            (a rough sketch of this kind of check follows the list).
          3. All port parameters in the Hadoop configuration are incorporated into the respective
            addresses; see the table of changes above.
          4. Renames *.info.bindAddress to *.http.bindAddress as requested.
          5. Modifies StatusHttpServer to throw BindException, instead of a generic IOException, when the port is busy.
          6. Introduces FSNamesystem.initialize() so that the FSNamesystem can be destroyed if an exception is thrown inside the constructor.
          7. Moves DataNode.createSocketAddr() into NetUtils, as requested.
          8. Fixes a NullPointerException in JobTracker and NameNode that is thrown during shutdown when running the new tests,
            because some members are null.
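
          A rough sketch of the kind of check these tests make (illustration only, with a made-up test class name; this is not the actual TestHDFSServerPorts code):

            import java.net.BindException;
            import java.net.ServerSocket;
            import junit.framework.TestCase;

            public class FixedPortBehaviorTest extends TestCase {
              public void testFixedPortDoesNotRoll() throws Exception {
                try (ServerSocket occupied = new ServerSocket(0)) {      // grab any free port
                  int port = occupied.getLocalPort();
                  try (ServerSocket second = new ServerSocket(port)) {   // must not roll to another port
                    fail("expected BindException on busy port " + port);
                  } catch (BindException expected) {
                    // desired behavior: start-or-fail, no port rolling
                  }
                }
              }
            }
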
          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12370036/FixedPorts.patch
          against trunk revision r597959.

          @author +1. The patch does not contain any @author tags.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new compiler warnings.

          findbugs -1. The patch appears to introduce new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1153/testReport/
          Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1153/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1153/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1153/console

          This message is automatically generated.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12370036/FixedPorts.patch
          against trunk revision r598152.

          @author +1. The patch does not contain any @author tags.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new compiler warnings.

          findbugs -1. The patch appears to introduce 3 new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1158/testReport/
          Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1158/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1158/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1158/console

          This message is automatically generated.

          Konstantin Shvachko added a comment -

          The FindBugs problem is HADOOP-2272.
          Resubmitting the patch.

          Konstantin Shvachko added a comment -

          All three FindBugs warnings reported during the last run are old bugs, not introduced by the patch.
          I fixed the FindBugs warnings in NamenodeFsck, all 3 of them.
          But the two in FSNamesystem related to "Write to static field FSNamesystem.fsNamesystemObject" cannot be fixed.
          This is done intentionally and the warning should be ignored.
          The patch is updated to current trunk.

          Konstantin Shvachko added a comment -

          Adding 2 unit tests: TestHDFSServerPorts and TestMRServerPorts.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12370240/FixedPorts2.patch
          against trunk revision r599534.

          @author +1. The patch does not contain any @author tags.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new compiler warnings.

          findbugs -1. The patch appears to introduce 3 new Findbugs warnings.

          core tests -1. The patch failed core unit tests.

          contrib tests -1. The patch failed contrib unit tests.

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1208/testReport/
          Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1208/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1208/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1208/console

          This message is automatically generated.

          dhruba borthakur added a comment -

          +1 code looks good.

          dhruba borthakur added a comment -

          While running unit tests on trunk with this patch, I got a timeout:

          [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
          [junit] Test org.apache.hadoop.dfs.TestHDFSServerPorts FAILED (timeout)

          I will attach the stack trace to this JIRA.

          dhruba borthakur added a comment -

          Stack trace of TestHDFSServerPorts when it was hung.

          Konstantin Shvachko added a comment -

          Dhruba, thanks for the feedback. I finally realized why the new tests were sometimes failing.
          The problem is with the clients.

          Example 1: The name-node instantiates Trash, which creates a DFSClient (even if trash is disabled).
          When the name-node stops, this DFSClient remains up and the secondary name-node will not start,
          because it cannot create a client: the secondary name-node just hangs trying to connect to the
          main name-node (RPC.waitForProxy()).

          Example 2: A similar thing happens with the JobTracker, which also creates a DFSClient in order
          to remove a file but never closes it. So the next start of the JobTracker hangs the same
          way as in the previous example.

          In both cases, if you wait long enough the client eventually dies; that is why the failure is
          not stable.

          I am closing the clients inside my tests now. Closing clients within Trash or the JobTracker breaks
          other unit tests, because the clients are static objects, and closing a client once would destroy
          that object for everybody else who opened the client inside the same JVM.
          Fixing that is beyond the scope of this patch; I'll open another issue for the problem.

          All tests pass now.
          As I mentioned before, the FindBugs warnings about assigning to static fields will remain unfixed.
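
          As an illustration of the cleanup described above (hypothetical test code, not part of this patch), a test can release its client-side handle in tearDown so a lingering DFSClient cannot block the next server start:

            import junit.framework.TestCase;
            import org.apache.hadoop.conf.Configuration;
            import org.apache.hadoop.fs.FileSystem;

            public class ClientCleanupExample extends TestCase {
              private FileSystem fs;

              protected void setUp() throws Exception {
                fs = FileSystem.get(new Configuration());
              }

              protected void tearDown() throws Exception {
                if (fs != null) {
                  fs.close();  // release the underlying client held by this test
                }
              }
            }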

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12370903/FixedPorts3.patch
          against trunk revision r600771.

          @author +1. The patch does not contain any @author tags.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new compiler warnings.

          findbugs -1. The patch appears to introduce 2 new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1254/testReport/
          Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1254/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1254/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1254/console

          This message is automatically generated.

          dhruba borthakur added a comment -

          Hi Konstantin, I am finding that this patch does not merge cleanly with trunk. Can you please upload a new patch? Thanks.

          Konstantin Shvachko added a comment -

          This is a newer version.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12370977/FixedPorts4.patch
          against trunk revision r601111.

          @author +1. The patch does not contain any @author tags.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new compiler warnings.

          findbugs -1. The patch appears to introduce 2 new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1266/testReport/
          Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1266/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1266/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1266/console

          This message is automatically generated.

          dhruba borthakur added a comment -

          I just committed this. Thanks Konstantin!

          Hudson added a comment -

          Integrated in Hadoop-Nightly #324 (See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/324/ )

          Hudson added a comment -

          Integrated in Hadoop-Nightly #325 (See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/325/ )

          Hudson added a comment -

          Integrated in Hadoop-trunk #385 (See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/385/ )

            People

            • Assignee:
              Konstantin Shvachko
              Reporter:
              Konstantin Shvachko
            • Votes:
              0
              Watchers:
              1

              Dates

              • Created:
                Updated:
                Resolved:

                Development