Details

    • Type: New Feature
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.0.2-alpha
    • Fix Version/s: 2.1.0-beta
    • Component/s: ipc
    • Labels:
      None
    • Target Version/s:
    • Hadoop Flags:
      Incompatible change, Reviewed
    • Release Note:
      Part of the RPC version 9 change. A service class byte is added after the version byte.

      Description

      One of the next frontiers of Hadoop performance is QoS (Quality of Service). We need QoS support to fight the inevitable "buffer bloat" (including various queues, which are probably necessary for throughput) in our software stack. This is important for mixed workloads with different latency and throughput requirements (e.g. OLTP vs OLAP, batch and even compaction I/O) against the same DFS.

      Any potential bottleneck will need to be managed by QoS mechanisms, starting with RPC.

      How about adding a one byte DS (differentiated services) field (a la the 6-bit DS field in IP header) in the RPC header to facilitate the QoS mechanisms (in separate JIRAs)? The byte at a fixed offset (how about 0?) of the header is helpful for implementing high performance QoS mechanisms in switches (software or hardware) and servers with minimum decoding effort.
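      As a hedged illustration of the fixed-offset idea (not code from any patch; class name, offset, and method are all hypothetical), a soft switch could classify a frame by reading the DS byte at the suggested offset 0 without decoding anything else:

      ```java
      import java.nio.ByteBuffer;

      // Hypothetical sketch: classify an RPC frame by a one-byte DS field at a
      // fixed offset, with no further header decoding. The offset is illustrative;
      // the description only suggests offset 0.
      public class DsClassifier {
          private static final int DS_OFFSET = 0; // assumed position of the DS byte

          public static int serviceClassOf(ByteBuffer frame) {
              // Absolute get: reads the byte without advancing the buffer position.
              return frame.get(frame.position() + DS_OFFSET) & 0xFF;
          }
      }
      ```

      Because the classifier never touches the rest of the frame, a switch or proxy can make its queueing decision after reading a single byte.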

      1. HADOOP-9194-v2.patch
        13 kB
        Junping Du
      2. HADOOP-9194.patch
        8 kB
        Junping Du

        Issue Links

          Activity

          Colin Patrick McCabe added a comment -

          Hi Luke,

          Interesting idea.

          You mention DiffServ (internet protocol differentiated services) in your description. Is there a reason we need our own fields, if the same information is present in DiffServ? It seems like if the switch hardware supports DiffServ, it is unnecessary to have our own field in the RPC header. On the other hand, if the hardware doesn't support DiffServ, adding our own byte will do no good. Am I missing something here?

          (Note that I'm not arguing against having QoS-related fields in general in protobuf messages, just questioning whether it makes sense to put them in the RPC header.)

          Luke Lu added a comment -

          Is there a reason we need our own fields, if the same information is present in DiffServ?

          The field is needed in all layers. IP DS is for IP switches (layer-2/3), but our RPC can run over non-IP transports like IB/RDMA, loopback, shared memory and unix domain sockets (cf. your own work on HDFS-347). One specific example would be an HBase region server talking to a DN on the same node via a unix domain socket. You want to be able to differentiate OLTP traffic from compaction traffic and set io priority on the fds accordingly (assuming the underlying io scheduler supports it, e.g. cfq).

          The RPC field is also useful for layer-7 switches (application proxies, load balancers) to implement QoS.

          Philip Zeyliger added a comment -

          In a previous life, I used systems which had multiple ports open for the same protocol, and relied on both hardware and OS queueing to make one port a higher priority than the other. Sure was easy to reason about.

          Eli Collins added a comment -

          This is why HDFS has a separate port for "service" IPC, which allows you to do port-based QoS (see HDFS-599).

          Luke Lu added a comment -

          You can use a port to differentiate services for IP connections, which essentially communicates the service class out of band by convention. This is a reasonable hack for internal use (a la HDFS-599) due to lack of support in the RPC itself. Things quickly get out of hand if we have more service classes and/or different transport mechanisms without ports (say, again, unix domain sockets (use another file naming convention?)), let alone support for proxies, load balancing and firewalls.

          If we want to use Hadoop RPC as a general-purpose DFS (or computing) client protocol, it needs to support QoS natively. Having well-defined QoS semantics in the RPC also lends itself to common libraries of QoS algorithms that can be easily adopted at every necessary layer of our software stack.

          Eli Collins added a comment -

          Agree, though QoS requires more than just RPC support.

          Colin Patrick McCabe added a comment -

          Interesting stuff, guys. Many good points have been brought up.

          The code path for UNIX domain sockets is like this now:
          1. server calls accept(), gets a socket, hands it off to worker thread
          2. worker thread reads the RPC header to find the type of message and length
          3. worker thread reads the message
          4. worker thread processes the message

          Having QoS information in the header would allow us to prioritize the message after step #2.
          Having QoS information in the protobuf would allow us to prioritize the message after step #3.

          Since messages are normally just a few bytes, I'm not sure that this would be a big win.

          In general, I think using a separate UNIX domain socket would probably make more sense. It would also allow us to use operating system features like the accept backlog to our advantage. When using a single socket, we have to implement all that ourselves, and we don't really have the tools in userspace to do a good job.

          Luke Lu added a comment -

          For specific use cases, especially for configurations you can control, a separate unix domain socket could be a reasonable hack. That said, if we have a nonblocking RPC reader implementation, we can do a better job than the OS accept backlog. In general, we don't want to have any queues that we cannot control/influence.

          This actually brings up a serious security issue with the current RPC implementation: it's trivial for any (low-bandwidth) client to DoS any Hadoop RPC service (even one with unlimited bandwidth), either deliberately or by accident. In order to fix this critical issue we need to have nonblocking readers. As Binglin pointed out on HADOOP-9151, the current protobuf RPC protocol is not amenable to nonblocking implementations.

          I propose that we fix this here once and for all as well:

          request ::= <request-envelope> <request-protobuf-payload>
          request-envelope ::= 'HREQ' <service-class-int8> <request-protobuf-payload-length-vint32>
          
          response ::= <response-envelope> <response-protobuf-payload>
          response-envelope ::= 'HRES' <service-class-int8> <response-protobuf-payload-length-vint32>
          

          The new envelopes make nonblocking network IO trivial for rpc server/proxy/switches. The 'magic' 4-bytes makes debugging tcpdump and/or adding support to wireshark easier as well.
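          A minimal decoder for the proposed envelope could look like the sketch below (class and method names are hypothetical; the vint32 is assumed to be a protobuf-style base-128 varint, which the grammar does not pin down):

          ```java
          import java.nio.ByteBuffer;

          // Hypothetical decoder for the proposed request envelope:
          // 'HREQ' <service-class-int8> <request-protobuf-payload-length-vint32>
          public class Envelope {
              public final int serviceClass;
              public final int payloadLength;

              Envelope(int serviceClass, int payloadLength) {
                  this.serviceClass = serviceClass;
                  this.payloadLength = payloadLength;
              }

              public static Envelope read(ByteBuffer buf) {
                  // The fixed 4-byte magic makes frames easy to spot in tcpdump.
                  if (buf.get() != 'H' || buf.get() != 'R'
                      || buf.get() != 'E' || buf.get() != 'Q') {
                      throw new IllegalArgumentException("bad magic");
                  }
                  int serviceClass = buf.get() & 0xFF; // one-byte service class
                  int payloadLength = readVInt32(buf);
                  return new Envelope(serviceClass, payloadLength);
              }

              // Base-128 varint: 7 payload bits per byte, high bit = continuation.
              static int readVInt32(ByteBuffer buf) {
                  int result = 0;
                  for (int shift = 0; shift < 32; shift += 7) {
                      int b = buf.get() & 0xFF;
                      result |= (b & 0x7F) << shift;
                      if ((b & 0x80) == 0) {
                          return result;
                      }
                  }
                  throw new IllegalArgumentException("vint32 too long");
              }
          }
          ```

          A nonblocking reader can stop after the envelope, enqueue the connection by service class, and only then read the payload bytes it now knows the length of.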

          Luke Lu added a comment -

          We can probably make the response payload length optional (0 means streaming).

          Sanjay Radia added a comment -

          As Binglin pointed out on HADOOP-9151, the current protobuf RPC protocol is not amenable to nonblocking implementations.

          Luke has corrected his statement in HADOOP-9151 - the protocol does not prevent non-blocking implementations.

          Luke Lu added a comment -

          We still need generic length prefixes for responses to implement non-blocking clients (instead of having to use a thread for each connection).

          Luke Lu added a comment -

          After HADOOP-9380, RPC will look like this:

          request ::= <connection-header> (<request-payload-length-int32> <request-payload>)+
          connection-header ::= 'hrpc' <version-int8> <auth-int8> <serialization-type-int8>
          response ::= (<response-payload-length-int32> <response-payload>)+
          

          How about we add the service-class byte after the version byte, so the connection-header length is 8?
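          A sketch of writing the resulting 8-byte connection header (field order follows the grammar above with the service-class byte inserted after the version byte; the class and method names are illustrative, not the committed code):

          ```java
          import java.nio.ByteBuffer;

          // Illustrative writer for the proposed 8-byte connection header:
          // 'hrpc' <version-int8> <service-class-int8> <auth-int8> <serialization-type-int8>
          public class ConnectionHeader {
              public static ByteBuffer write(byte version, byte serviceClass,
                                             byte auth, byte serializationType) {
                  ByteBuffer buf = ByteBuffer.allocate(8);
                  buf.put((byte) 'h').put((byte) 'r').put((byte) 'p').put((byte) 'c');
                  buf.put(version).put(serviceClass).put(auth).put(serializationType);
                  buf.flip(); // ready to be written to the socket
                  return buf;
              }
          }
          ```

          Keeping the service-class byte at a fixed position (offset 5 here) preserves the cheap-to-decode property argued for in the description.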

          Suresh Srinivas added a comment -

          +1 for adding service-class byte.
          BTW can you please review HADOOP-9151 and HADOOP-9163 (some are already committed. I want to make sure it is going in the right direction).

          Luke Lu added a comment -

          Thanks for the support, Suresh. I've been watching the tasks for HADOOP-8990. They all look good to me.

          Colin Patrick McCabe added a comment -

          Hi Suresh,

          Can you explain how this would be used? It's still unclear to me (and it seems like a bunch of other people in this thread feel the same way).

          Related: is there a reason not to just put this as an optional field in the protobuf messages themselves? After all, being able to have optional fields is a big part of why we switched to protobufs in the first place.

          The rest of the changes proposed in HADOOP-8990 look good, btw

          Luke Lu added a comment -

          Can you explain how this would be used?

          At the RPC layer it could be used to dispatch requests into different queues with different service priority. The service class can be carried end-to-end for QoS in any layer that can be a bottleneck. I think that something similar to CoDel at the RPC layer would suffice in common cases without tuning, as it incorporates RTT.
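          A hypothetical sketch of the queue dispatch described above (strict-priority polling with class 0 highest is an assumed convention here; a real server might instead use weighted fair queueing or a CoDel-style policy):

          ```java
          import java.util.ArrayList;
          import java.util.List;
          import java.util.concurrent.BlockingQueue;
          import java.util.concurrent.LinkedBlockingQueue;

          // Hypothetical per-service-class call queues with strict-priority polling.
          public class ClassDispatcher<T> {
              private final List<BlockingQueue<T>> queues;

              public ClassDispatcher(int numClasses) {
                  queues = new ArrayList<>(numClasses);
                  for (int i = 0; i < numClasses; i++) {
                      queues.add(new LinkedBlockingQueue<>());
                  }
              }

              // Route a decoded call to its queue using the header's service class.
              public void dispatch(int serviceClass, T call) {
                  queues.get(serviceClass).offer(call); // unbounded: always succeeds
              }

              // Strict priority: drain lower class indexes first (assumed convention).
              public T pollNext() {
                  for (BlockingQueue<T> q : queues) {
                      T call = q.poll();
                      if (call != null) {
                          return call;
                      }
                  }
                  return null;
              }
          }
          ```

          For example, OLTP calls dispatched to class 0 would always be served before compaction calls sitting in class 2.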

          is there a reason not to just put this as an optional field in the protobuf messages themselves?

          As I mentioned in the description, putting the field in a fixed header simplifies/speeds up (soft) switch implementations, regardless of serialization type. Besides, we should do this while we're making the compatibility-breaking changes mentioned in HADOOP-8990, before it's too late.

          Junping Du added a comment -

          Agree with Luke that putting the service level in a fixed RPC header can shorten the latency for the service level info being handled. Attached a quick patch on it.

          Hadoop QA added a comment -

          -1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12574359/HADOOP-9194.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          -1 tests included. The patch doesn't appear to include any new or modified tests.
          Please justify why no new tests are needed for this patch.
          Also please list what manual steps were performed to verify this patch.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2342//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2342//console

          This message is automatically generated.

          Luke Lu added a comment -

          Thanks for the patch, Junping! A few things:

          1. Service level and service class are orthogonal concepts. Different QoS algos (our current one is basically best effort) can achieve different service levels even for the same service class. The field is really CoS (class of service), analogous to the DiffServ field in IP; see http://www.linktionary.com/q/qos.html and http://www.linktionary.com/d/diffserv.html for details.
          2. Make the service class a connection member variable at the server side as well.
          3. Add getter/setter for the service class to facilitate testing and future usage.
          4. Add a basic test to make sure the service class is being serialized/transmitted correctly.
          Junping Du added a comment -

          Addressed the above review comments in the v2 patch. Luke, can you review it again? Thx!

          Hadoop QA added a comment -

          +1 overall. Here are the results of testing the latest attachment
          http://issues.apache.org/jira/secure/attachment/12575148/HADOOP-9194-v2.patch
          against trunk revision .

          +1 @author. The patch does not contain any @author tags.

          +1 tests included. The patch appears to include 1 new or modified test files.

          +1 tests included appear to have a timeout.

          +1 javac. The applied patch does not increase the total number of javac compiler warnings.

          +1 javadoc. The javadoc tool did not generate any warning messages.

          +1 eclipse:eclipse. The patch built with eclipse:eclipse.

          +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

          +1 release audit. The applied patch does not increase the total number of release audit warnings.

          +1 core tests. The patch passed unit tests in hadoop-common-project/hadoop-common.

          +1 contrib tests. The patch passed contrib unit tests.

          Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2357//testReport/
          Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2357//console

          This message is automatically generated.

          Luke Lu added a comment -

          The v2 patch lgtm. +1. RPC version bump (to 9) will be part of HADOOP-8990.

          Luke Lu added a comment -

          Committed to trunk. Thanks Junping!

          Hudson added a comment -

          Integrated in Hadoop-trunk-Commit #3531 (See https://builds.apache.org/job/Hadoop-trunk-Commit/3531/)
          HADOOP-9194. RPC Support for QoS. (Junping Du via llu) (Revision 1461370)

          Result = SUCCESS
          llu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461370
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
          Hudson added a comment -

          Integrated in Hadoop-Yarn-trunk #168 (See https://builds.apache.org/job/Hadoop-Yarn-trunk/168/)
          HADOOP-9194. RPC Support for QoS. (Junping Du via llu) (Revision 1461370)

          Result = SUCCESS
          llu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461370
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
          Hudson added a comment -

          Integrated in Hadoop-Hdfs-trunk #1357 (See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1357/)
          HADOOP-9194. RPC Support for QoS. (Junping Du via llu) (Revision 1461370)

          Result = FAILURE
          llu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461370
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
          Hudson added a comment -

          Integrated in Hadoop-Mapreduce-trunk #1385 (See https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1385/)
          HADOOP-9194. RPC Support for QoS. (Junping Du via llu) (Revision 1461370)

          Result = SUCCESS
          llu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461370
          Files :

          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
          • /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
          Luke Lu added a comment -

          Updated the target version as it's part of RPC version 9 with HADOOP-8990.

          Suresh Srinivas added a comment -

          Luke, please commit this to branch-2, as the goal of current release in flight is to avoid future incompatible changes.

          Luke Lu added a comment -

          In branch-2 now. Was planning on merge into branch-2 after HADOOP-8990.

          Suresh Srinivas added a comment -

          Luke, please update the trunk CHANGES.txt once a change gets ported to 2.x, to move the jira to the appropriate section. I have made that change for this jira description in trunk.

          Suresh Srinivas added a comment -

          I also moved this change from the NEW FEATURE section to INCOMPATIBLE CHANGES in branch-2 CHANGES.txt.


            People

            • Assignee:
              Junping Du
              Reporter:
              Luke Lu
            • Votes:
              0 Vote for this issue
              Watchers:
              30 Start watching this issue

              Dates

              • Created:
                Updated:
                Resolved:

                Development