Hadoop Common / HADOOP-1841

IPC server should write responses asynchronously

    Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.16.0
    • Component/s: ipc
    • Labels:
      None

      Description

      Hadoop's IPC Server currently writes responses from request handler threads using blocking writes. Performance and scalability might be improved if responses were written asynchronously.

      Attachments

      1. asyncRPC-9.patch
        23 kB
        dhruba borthakur
      2. asyncRPC-8.patch
        23 kB
        dhruba borthakur
      3. asyncRPC-7.patch
        23 kB
        dhruba borthakur
      4. asyncRPC-6.patch
        23 kB
        dhruba borthakur
      5. asyncRPC-5.patch
        23 kB
        dhruba borthakur
      6. asyncRPC-4.patch
        23 kB
        dhruba borthakur
      7. asyncRPC-2.patch
        19 kB
        dhruba borthakur
      8. asyncRPC.patch
        16 kB
        dhruba borthakur
      9. asyncRPC.patch
        17 kB
        Enis Soztutar

          Activity

          Enis Soztutar added a comment -

          I started to work on this issue, but before writing any code, I would like to discuss a few things.

          Current IPC/RPC handling follows a 1-dispatcher, N-workers scenario. The Listener thread asynchronously listens on the socket for incoming connections, accepts them (creating Connection instances), or reads the data and puts it in the queue for the handler threads to process later.
          The handler threads, on the other hand, listen to the call queue, remove the first element, and process the Call. Each first makes the actual method call (through Server#call), gets the return value, and writes the results back to the output stream (blocking).
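
          A minimal sketch of that 1-dispatcher, N-workers shape using a blocking queue; the Call class and method names here are illustrative stand-ins, not Hadoop's actual classes:

          import java.util.concurrent.BlockingQueue;
          import java.util.concurrent.LinkedBlockingQueue;

          class DispatcherWorkerSketch {
            /** Stand-in for the server's per-request state. */
            static class Call {
              final String request;
              Call(String request) { this.request = request; }
            }

            private final BlockingQueue<Call> callQueue = new LinkedBlockingQueue<Call>();

            /** Listener thread: read a request and queue it for the handlers. */
            void onRequestRead(String request) {
              callQueue.add(new Call(request));
            }

            /** Each of the N handler threads runs this loop. */
            void handlerLoop() throws InterruptedException {
              while (true) {
                Call call = callQueue.take();      // remove the first queued call
                String result = invoke(call);      // the actual method invocation (Server#call)
                writeBlocking(result);             // today: blocking write back to the client
              }
            }

            private String invoke(Call call) { return "response to " + call.request; }
            private void writeBlocking(String response) { /* write to the client's socket */ }
          }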

          As Doug suggested, performance might be improved if the results were written asynchronously. But I am not sure it will affect performance much, if at all. The costly operation in the handler threads is expected to be the method calls. What do others think about this? We should start thinking about developing benchmark tests. It would be really cool if we could improve the performance of IPC calls by, say, 10%. I am thinking about using JMeter for this, but maybe we need to develop our own benchmarking framework to test all the components of Hadoop. Any thoughts?

          Raghu Angadi added a comment -

          I agree w.r.t. performance, but the main purpose of this is not to increase performance. In fact, in the normal case more data is written in the reply than is read (except for blockReport() RPCs); in that sense, doing async writes in a single thread could actually be the bottleneck (but rarely is; we have never seen reading in a single thread being a bottleneck). Of course we could write with multiple threads, or all the IPC handlers could be involved in the write.

          The main issue this proposes to fix is this: the actual RPC call itself is internal to Server, but writing also depends on how fast the client is receiving. We don't want external factors like slow or down clients to influence how fast the server handles RPCs (from other clients). We should certainly test with artificially slow clients and should be able to show the difference.

          A few more things:

          • we should keep track of the amount of pending reply data buffered for a client and not read from it if there is more than a few KB (a sketch follows this list).
          • we should also have a limit on the overall memory held in pending write buffers.
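
          A rough sketch of that kind of per-connection backpressure; the class, field names, and thresholds below are hypothetical, not from any attached patch:

          import java.util.concurrent.atomic.AtomicLong;

          class ResponseBackpressure {
            static final int MAX_PENDING_PER_CONNECTION = 8 * 1024;   // "more than a few KB"
            static final long MAX_PENDING_TOTAL = 64L * 1024 * 1024;  // overall cap on reply buffers

            private final AtomicLong totalPendingBytes = new AtomicLong();

            void onResponseQueued(long bytes)  { totalPendingBytes.addAndGet(bytes); }
            void onResponseWritten(long bytes) { totalPendingBytes.addAndGet(-bytes); }

            /** The Listener would consult this before reading another request from a connection. */
            boolean shouldReadFrom(long pendingBytesForConnection) {
              return pendingBytesForConnection <= MAX_PENDING_PER_CONNECTION
                  && totalPendingBytes.get() < MAX_PENDING_TOTAL;
            }
          }
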
          dhruba borthakur added a comment -

          The close() call cannot lose data because a flush and sync should have been called earlier. However, I added an exception to close() to detect this case if it ever happens. I also removed extra imports from NameNodeMetrics.java.

          Enis Soztutar added a comment -

          Dhruba, I believe this patch belongs to HADOOP-1942. :)

          dhruba borthakur added a comment -

          Hi Enis, I took the liberty of writing some code to do asynchronous RPC responses from the server. This patch has the following:

          1. There is one additional Responder thread, whose job is to write the RPC responses back to the RPC clients. It does non-blocking IO on the socket. Because it does non-blocking IO, only one Responder thread is good enough.

          2. The RPC handler threads queue the responses of the RPCs into buffers associated with the response channel. These responses are flushed to the client by the Responder thread described above (a rough sketch follows at the end of this comment).

          3. A test case that creates 50 threads and issues concurrent RPCs. Each of these RPCs transfers 400K of data from client to server and back from server to client.

          This is very preliminary code, but I would like to enhance the test to measure whether this change makes any difference to performance.
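
          A minimal sketch of this shape, assuming java.nio, that channels are already in non-blocking mode, and that each connection has at most one in-flight response; the Responder and Call classes below are illustrative, not the attached patch:

          import java.io.IOException;
          import java.nio.ByteBuffer;
          import java.nio.channels.SelectionKey;
          import java.nio.channels.Selector;
          import java.nio.channels.SocketChannel;
          import java.util.Queue;
          import java.util.concurrent.ConcurrentLinkedQueue;

          class Responder extends Thread {
            /** Stand-in for the server's per-call state; the channel must be non-blocking. */
            static class Call {
              final SocketChannel channel;
              final ByteBuffer response;
              Call(SocketChannel channel, ByteBuffer response) {
                this.channel = channel;
                this.response = response;
              }
            }

            private final Selector writeSelector;
            private final Queue<Call> pending = new ConcurrentLinkedQueue<Call>();
            private volatile boolean running = true;

            Responder() throws IOException { writeSelector = Selector.open(); }

            /** Called by handler threads once Server#call has produced a response buffer. */
            void queueResponse(Call call) {
              pending.add(call);
              writeSelector.wakeup();                     // let the responder register the channel
            }

            @Override public void run() {
              while (running) {
                try {
                  Call call;
                  while ((call = pending.poll()) != null) {
                    call.channel.register(writeSelector, SelectionKey.OP_WRITE, call);
                  }
                  writeSelector.select(1000);
                  for (SelectionKey key : writeSelector.selectedKeys()) {
                    Call ready = (Call) key.attachment();
                    ready.channel.write(ready.response);  // non-blocking; may write only part
                    if (!ready.response.hasRemaining()) {
                      key.cancel();                       // response fully flushed to this client
                    }
                  }
                  writeSelector.selectedKeys().clear();
                } catch (IOException e) {
                  // a real implementation would close the offending connection here
                }
              }
            }
          }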

          Enis Soztutar added a comment -

          Hi Dhruba,

          I had written a similar architecture with pseudo-code something like the block below, but could not make it pass the tests.

          
           OutputHandler#run() {
             while (running) {
               selector.select();
               // write data for ready channels
               // register pending calls with the selector
             }
           }

           Handler#run() {
             // handle the call
             // prepare the response ByteBuffer
             // add the call to the response queue
             selector.wakeup();
           }

           Now, looking at your patch, the reason mine did not work is that I used SelectionKeys and ResponseQueues per Call rather than per Connection.

           Anyway, I'm +1 for this after we justify its necessity.

          Enis Soztutar added a comment -

          I am attaching a slightly modified version, in which Call#response is changed from byte[] to ByteBuffer.
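
           For reference, the convenience of ByteBuffer here is that its position tracks how much of the response has already been written, which a non-blocking writer needs across partial writes; the helper below is purely illustrative:

           import java.nio.ByteBuffer;

           class ResponseBufferExample {
             static ByteBuffer asResponse(byte[] serialized) {
               // channel.write(buffer) advances position(); hasRemaining() then tells the
               // Responder whether it must retry the rest of the write later.
               return ByteBuffer.wrap(serialized);
             }
           }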

          Raghu Angadi added a comment -

          > This is very preliminary code but I would like to enhance the test to measure whether this change is making any difference to performance.

           Why do you think there will be any improvement in performance? This is really not a performance improvement in the normal case.

          dhruba borthakur added a comment -

           Please let me explain why I think this is a necessary prerequisite for increasing namenode transaction performance.

           The namenode has 40 threads serving client requests. A majority of these threads are waiting for a transaction to complete, i.e. a disk sync. The disk-sync times are pretty high on loaded systems. If we can avoid doing the edit-log sync from the IPC handler thread and instead do it from a separate Syncer thread, we can free up the IPC handler threads to process new client requests. To do this, we need the ability to respond to an RPC from a thread that is separate from the IPC handler thread.
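
           A rough sketch of that follow-on idea, assuming the Responder/Call types from the earlier sketch and a hypothetical EditLog with a sync() method; none of this is in the attached patch:

           import java.util.ArrayList;
           import java.util.List;
           import java.util.concurrent.BlockingQueue;
           import java.util.concurrent.LinkedBlockingQueue;

           class Syncer extends Thread {
             interface EditLog { void sync(); }          // hypothetical stand-in

             private final BlockingQueue<Responder.Call> awaitingSync =
                 new LinkedBlockingQueue<Responder.Call>();
             private final EditLog editLog;
             private final Responder responder;

             Syncer(EditLog editLog, Responder responder) {
               this.editLog = editLog;
               this.responder = responder;
             }

             /** Handler thread: log the edit, then hand the call off instead of blocking on the sync. */
             void deferUntilSynced(Responder.Call call) { awaitingSync.add(call); }

             @Override public void run() {
               List<Responder.Call> batch = new ArrayList<Responder.Call>();
               while (!Thread.currentThread().isInterrupted()) {
                 try {
                   batch.add(awaitingSync.take());       // wait for at least one pending call
                   awaitingSync.drainTo(batch);          // pick up any others already waiting
                   editLog.sync();                       // one disk sync covers the whole batch
                   for (Responder.Call call : batch) {
                     responder.queueResponse(call);      // only now is it safe to answer the clients
                   }
                   batch.clear();
                 } catch (InterruptedException e) {
                   Thread.currentThread().interrupt();
                 }
               }
             }
           }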

          Enis: Thanks for looking over the patch I submitted and making enhancements to it.

          dhruba borthakur added a comment -

           Merged patch with latest trunk. This patch has been lying idle for a while. I would really like somebody to review this uploaded patch and comment on the JIRA.

          Doug Cutting added a comment -

          Does it improve performance or reliability?

          dhruba borthakur added a comment -

           It improves performance in the face of clients that are very slow to consume output. I have yet to run performance benchmarks (e.g. sort) to prove that it does not decrease performance in the normal case.

          Doug Cutting added a comment -

          I think we need to both show that it doesn't slow things in the normal case, and that it improves performance in this particular case (clients slow to consume output). Do you have a realistic benchmark for that case?

          dhruba borthakur added a comment -

          I have a test case that triggers this new portion of code. I can extend it to gather test-timings.

          Mukund Madhugiri added a comment -

          I ran nnbench benchmark on 500 nodes and here is the data:

          trunk:

          • createWrite: 4982
          • openRead: 44523
          • rename: 4371
          • delete: 3757

          trunk + patch:

          • createWrite: 5005
          • openRead: 46584
          • rename: 4400
          • delete: 3946
          Owen O'Malley added a comment -

          I think on RPC patches, it is a very good idea to run a sort on 500 nodes to make sure that the system remains stable.

          dhruba borthakur added a comment -

           As expected, this patch does not decrease the performance of the namenode. But I agree with Owen that we should also verify that it does not affect sort performance.

          Koji Noguchi added a comment -

           Just wanted to note that we had a couple of occasions when one user
           1) Started many DFS clients on a single node
           2) Load on that client node became so high that its RPC read speed slowed down significantly
           3) Namenode became unresponsive

          Rebooting that client node brought back the namenode.

          I'm hoping this patch would solve such cases.

          dhruba borthakur added a comment -

          This patch is in response to Doug's comments that we need to show improvement in performance in the face of slow clients.

          This patch has a unit test that creates a server with one handler thread. One thread makes an RPC and stops processing the response. Another thread then issues another RPC and it completes successfully even though the first thread has not yet consumed the RPC response. This test passes successfully with this patch whereas it fails with trunk.

           Please let me know if it addresses your concerns. If so, then the only remaining thing to make this patch committable is to demonstrate that it does not degrade performance for sort runs.

          Doug Cutting added a comment -

           > Please let me know if it addresses your concerns.

          The unit test sounds good. Do we have any particular field uses where we think this will improve performance?

          Mukund Madhugiri added a comment -

          I ran sort benchmark on 500 nodes and here is the data:

          trunk:

          • sort: 1.776 hr
          • random writer: 0.459 hr
          • sort validation: 0.341 hr

          trunk + patch:

          • sort: 1.776 hr
          • random writer: 0.423 hr
          • sort validation: 0.450 hr
          dhruba borthakur added a comment -

           Mukund: Thanks for running the performance numbers.

           Doug: This patch (by itself) will not improve performance. It is mostly a bug fix. Koji's earlier comment describes how slow clients caused a denial of service of the namenode on a real cluster. Mukund's performance runs show that performance is not impacted. However, I plan on attempting a follow-on patch to NameNode that will move the sync-to-edits-log out of the RPC Handler thread, thus possibly improving scalability. Do you think that this patch is good to go?

          Owen: would you like to review the code again?

          dhruba borthakur added a comment -

          Merged with latest trunk.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12371003/asyncRPC-5.patch
          against trunk revision r601232.

          @author +1. The patch does not contain any @author tags.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new compiler warnings.

          findbugs -1. The patch appears to introduce 2 new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests -1. The patch failed contrib unit tests.

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1272/testReport/
          Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1272/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1272/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1272/console

          This message is automatically generated.

          dhruba borthakur added a comment -

          Findbugs warnings.

          dhruba borthakur added a comment -

          Fixed findbugs warnings.

          dhruba borthakur added a comment -

          Fixed findbugs warnings. I would request Owen to review this patch.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12371062/asyncRPC-6.patch
          against trunk revision r601383.

          @author +1. The patch does not contain any @author tags.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new compiler warnings.

          findbugs +1. The patch does not introduce any new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1273/testReport/
          Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1273/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1273/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1273/console

          This message is automatically generated.

          dhruba borthakur added a comment -

           Can either Doug or Owen please review this patch? Thanks.

          Doug Cutting added a comment -
          • re-organizing imports injects noise into the patch
          • any reason we can't make the listener thread also the responder, using a single selector?
           • catching OutOfMemoryError seems suspect, but I guess we already do it...
          • it appears to me there are ways processResponse could exit with an exception without error being set to true. Might it be safer to assume error=true, close the connection in a 'finally' clause, and only set error=false for normal returns?
          • 'status' would better be named 'more'
          dhruba borthakur added a comment -

           Cancelling patch to incorporate Doug's comments.

          dhruba borthakur added a comment -

          Incorporated Doug's review comments.

          1. Cleaned up imports to reduce patch noise
           2. I kept the Listener and the Responder as separate threads. These two functions are inherently parallel; the Responder needs to take locks on the response queue, and putting this functionality in the Listener might slow the rate at which incoming RPCs are consumed.
           3. Fixed processResponse to run inside a try-finally clause (see the sketch below).
          4. Renamed variable "status" as "more".
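
           A sketch of the shape that try-finally takes, following Doug's suggestion to assume failure and only clear the error flag on a normal return; the class and the standalone channel argument are illustrative, not the patch itself:

           import java.io.IOException;
           import java.nio.ByteBuffer;
           import java.nio.channels.SocketChannel;

           final class ProcessResponsePattern {
             /** Write one queued response buffer; any abnormal exit closes the channel. */
             static boolean processResponse(SocketChannel channel, ByteBuffer response) throws IOException {
               boolean error = true;              // pessimistic default
               boolean more = false;
               try {
                 channel.write(response);         // non-blocking; may be a partial write
                 more = response.hasRemaining();  // true: re-register for OP_WRITE later
                 error = false;                   // reached only on the normal path
                 return more;
               } finally {
                 if (error) {
                   channel.close();               // exception path: drop the connection
                 }
               }
             }
           }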

          Raghu Angadi added a comment -

          Dhruba,

           Could you move a couple of the debug statements inside an if (LOG.isDebugEnabled()) check? It looks like they are executed for every RPC.
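
           For reference, the usual commons-logging guard looks like this; the class and message are illustrative:

           import org.apache.commons.logging.Log;
           import org.apache.commons.logging.LogFactory;

           class DebugGuardExample {
             private static final Log LOG = LogFactory.getLog(DebugGuardExample.class);

             void onResponseQueued(int callId, int responseBytes) {
               // The guard avoids building the message string on every RPC when debug is off.
               if (LOG.isDebugEnabled()) {
                 LOG.debug("queued response for call " + callId + " (" + responseBytes + " bytes)");
               }
             }
           }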

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12371382/asyncRPC-7.patch
          against trunk revision r603000.

          @author +1. The patch does not contain any @author tags.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new compiler warnings.

          findbugs +1. The patch does not introduce any new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1309/testReport/
          Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1309/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1309/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1309/console

          This message is automatically generated.

          dhruba borthakur added a comment -

          Canceling patch to incorporate Raghu's comments.

          dhruba borthakur added a comment -

           All debug log messages are now inside if (LOG.isDebugEnabled()).

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12371392/asyncRPC-8.patch
          against trunk revision r603084.

          @author +1. The patch does not contain any @author tags.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new compiler warnings.

          findbugs +1. The patch does not introduce any new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1311/testReport/
          Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1311/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1311/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1311/console

          This message is automatically generated.

          dhruba borthakur added a comment -

          I have incorporated most of Doug's comments. Doug: if you would care to take another look at this one, please let me know. Thanks.

          Doug Cutting added a comment -

          Looks good. Should we log the exception caught after the call to doAsyncWrite()?

          dhruba borthakur added a comment -

           Inserted a LOG message as suggested by Doug.

          Doug Cutting added a comment -

          +1 This looks good to me.

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12371560/asyncRPC-9.patch
          against trunk revision r603652.

          @author +1. The patch does not contain any @author tags.

          javadoc +1. The javadoc tool did not generate any warning messages.

          javac +1. The applied patch does not generate any new compiler warnings.

          findbugs +1. The patch does not introduce any new Findbugs warnings.

          core tests +1. The patch passed core unit tests.

          contrib tests +1. The patch passed contrib unit tests.

          Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1331/testReport/
          Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1331/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
          Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1331/artifact/trunk/build/test/checkstyle-errors.html
          Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1331/console

          This message is automatically generated.

          dhruba borthakur added a comment -

          I just committed this.

           Hudson added a comment -

           Integrated in Hadoop-Nightly #331 (See http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/331/ )

            People

             • Assignee:
               dhruba borthakur
             • Reporter:
               Doug Cutting
             • Votes:
               0
             • Watchers:
               0

              Dates

               • Created:
               • Updated:
               • Resolved:
