Hadoop YARN / YARN-2140

Add support for network IO isolation/scheduling for containers

    Details

    • Type: New Feature
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: None

        Activity

        haosdent added a comment -

        How to implement this? Cgroup?

        Wei Yan added a comment -

        haosdent, cgroups can help, but they also have their own limitations. Will update once I have made progress.

        Bikas Saha added a comment -

        Wei Yan, for this and YARN-2139 my suggestion would be to first post a design sketch and discuss some alternatives. You could prototype an approach to get supporting data for that design doc. This will help the community interact with and understand your proposal, and enable quicker progress.

        Wei Yan added a comment -

        Thanks, Bikas Saha. Yes, I'm working on that part.

        Sandy Ryza added a comment -

        +1 to a design sketch with alternatives

        haosdent added a comment -

        Cool!

        Beckham007 added a comment -

        I think we could use the net_cls subsystem of cgroups to handle this.
        First, org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler would need to be refactored to support various resources, not only CPU.
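
        For reference, a minimal sketch of what net_cls tagging looks like at the OS level (the cgroup path, container name, and classid value below are illustrative, not YARN's actual layout): create a net_cls cgroup per container, write a classid into it, and add the container's processes so their outbound packets carry that class handle.

            # Illustrative only: create a net_cls cgroup for one container
            mkdir -p /sys/fs/cgroup/net_cls/hadoop-yarn/container_1
            # the value encodes a tc class handle; 0x00100001 maps to handle 0010:0001
            echo 0x00100001 > /sys/fs/cgroup/net_cls/hadoop-yarn/container_1/net_cls.classid
            # add the container's process so its traffic is tagged with that classid
            echo "$CONTAINER_PID" > /sys/fs/cgroup/net_cls/hadoop-yarn/container_1/cgroup.procs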

        haosdent added a comment -

        net_cls only classifies packets, so cgroups alone are not enough for network IO isolation/scheduling. I have tried tc with net_cls, but they don't do well at network IO isolation/scheduling; they couldn't even have any effect on packets in flow.

        Beckham007 added a comment -

        tc class add dev ${net_dev} parent ${parent_classid} classid ${classid} htb rate ${guaranteed_bandwidth}kbps ceil ${max_bandwidth}kbps

        It could be used to control the min and max bandwidth of each container.
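
        As haosdent noted earlier, net_cls only tags packets, so the tag has to be matched to an enforcing htb class before the rate and ceil above take effect. A rough end-to-end sketch connecting the two (the device name, handles, and rates are placeholders, not what the YARN sub-tasks ship):

            # Placeholders throughout (eth0, handles, rates); egress shaping only.
            # Root htb qdisc; untagged traffic falls into the default class 10:1.
            tc qdisc add dev eth0 root handle 10: htb default 1
            tc class add dev eth0 parent 10: classid 10:1 htb rate 900mbit ceil 1000mbit
            # Per-container class: guaranteed 50mbit, allowed to burst to 200mbit.
            tc class add dev eth0 parent 10: classid 10:2 htb rate 50mbit ceil 200mbit
            # The cgroup classifier steers packets by net_cls.classid, so a container
            # whose cgroup holds 0x00100002 lands in class 10:2.
            tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup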

        Wei Yan added a comment -

        haosdent, Beckham007, net_cls can be used to limit the network bandwidth used by each task per device. One problem here is that it is not easy for users to specify an accurate network bandwidth requirement for their application. I'm still working on the design.

        haosdent added a comment -

        Thanks, Wei Yan. Looking forward to your work.

        Robert Joseph Evans added a comment -

        We are working on similar things for Storm. I am very interested in your design, because for any streaming system to truly have a chance on YARN, soft guarantees on network I/O are critical. There are several big problems with network I/O even if the user can effectively estimate what they will need. The first is that the resource is not limited to a single node in the cluster. The network has a topology, and a bottleneck can show up at any point in that topology. So you may think you are fine because each node in a rack is not scheduled to use the full bandwidth that the network card(s) can support, but you can easily have saturated the top-of-rack switch without knowing it. To solve this problem you effectively have to know the topology of the application itself, so that you can schedule the node-to-node network connections within that application. If users don't know how much network they are going to use at a high level, they will never have any idea at a low level. Then you also have the big problem of batch workloads being very bursty in their network usage. The only way to solve that is going to require network hardware support for prioritizing packets.

        But I'll wait for your design before writing too much more.

        Wei Yan added a comment -

        Thanks for the comments, Robert Joseph Evans.

        Sidharta Seethana added a comment -

        Hi Wei Yan,

        Varun Vasudev and I have been thinking about how we would approach supporting network bandwidth as a resource in YARN. We have a design doc that we'll post here shortly.

        Do you mind if we take over this JIRA?

        Thank you,
        -Sidharta

        Sidharta Seethana added a comment -

        Network Bandwidth as a resource in YARN.

        Sidharta Seethana added a comment -

        I have attached the design doc.

        Thanks!
        -Sidharta

        Robert Joseph Evans added a comment -

        Network traffic for batch jobs is inherently bursty, but also typically quite robust in the face of slow networking and network hiccups. The SLAs are not tight enough and the retries cover up enough problems that we typically don't need to worry about it. However not all applications have the same pattern nor the same robustness to network issues, and we want to be able to cleanly support as many of these patterns as possible.

        I see several areas to think about, and some of them counter each other and need to be balanced:
        • Some applications are very sensitive to network latency, whereas others are not.
        • Some applications are very sensitive to having a consistent or guaranteed network throughput, whereas others are not.
        • We want to avoid saturating the network (primarily to be able to support the above two), but we also want high resource utilization (and networking typically degrades gracefully when over-allocated).
        • We want data spread across multiple failure domains, but we also want to colocate containers near each other to reduce network traffic.
        • There are two different forms of traffic to consider, external and internal. External traffic cannot be reduced by colocating containers; internal traffic typically can.

        Let's take a static web server as one extreme. All of the traffic is external; the servers don't talk to each other. For robustness I would want them spread out across multiple racks, so that if one rack goes down for some reason I have a backup and my web site stays up, although possibly degraded until I can launch more containers. The web servers are sensitive to network latency and typically want a minimum amount of available bandwidth so they can achieve the desired network latency in a cost-effective manner.

        If the scheduler does what you initially described, and tries to always place containers on the same rack, we will be at risk of having a single rack bring down my entire web site.

        For most applications it is a balancing act between spreading data across failure domains and increasing locality to reduce internal communication overhead. Storm tends to be the other extreme because it stores no data internally, so it can ignore failure domains, but in most cases it has a significant amount of internal network traffic. I would much rather see scheduler APIs added that allow the App Master to give hints on how it wants particular containers scheduled.

        This also brings up a number of general scheduler questions around locality, because we are really just extending the definition of locality from "close to this HDFS block/machine" to "close to these other containers", and preferably not all other containers in the application, because almost all applications have different types of containers.
        To be able to have that type of API it feels like we would want to support gang scheduling, but perhaps not.
        Then there is the question of how long the scheduler should try to get the best container. Right now it is cluster-wide, but for Storm I would really like to be able to say it is OK to spend 10 seconds to get these containers, but for this high-priority one, because something just crashed, I want it ASAP.

        The current design seems to only handle avoiding saturating the local node's outbound network connection, and attempts to reduce internal traffic by locating containers for the same application close to one another. It seemed to imply that it would do something with the top-of-rack switch to avoid saturating that, but how that would work was not explicitly called out. And if we cannot separate internal from external traffic, I fear the top-of-rack switch will either become saturated in some cases, or cause other resources in the rack to go unused because we think the top-of-rack switch is saturated. To me the outbound traffic shaping seems like a step in the right direction, but I honestly don't know if it is going to be enough for me to feel safe running production Storm topologies on a YARN cluster. Think of a Storm topology reading a real firehose of data from Kafka at 2 Gigabit/sec, something that would saturate the inbound link of a typical node two times over. This is completely ignored by the current design, simply because there is no easy way to do traffic shaping on it. If I look at how cloud providers like Amazon and Google handle networking, they are doing things very similar to OpenFlow. I am not saying that we need to recreate this, but any design we come up with should really think about supporting something similar long term, and do something good enough short term, so that the user-facing APIs don't need to change even if the enforcement of them, or our ability to get better utilization, does change over time with improved hardware support.

        Sidharta Seethana added a comment -

        You are right - there are several areas to think about here, and we definitely need to put in more thought w.r.t. scheduling. In order to do effective scheduling for network resources, we would need to understand a) the overall network topology in place for the cluster in question - the characteristics of the ‘route’ between any two nodes in the cluster, the number of hops required, and the available/max bandwidth at each point in the route; and b) application characteristics w.r.t. network utilization - internal/external traffic, latency vs. bandwidth sensitivities, etc. With regards to inbound traffic, we currently do not have a good way to effectively manage it - when inbound packets are being ‘examined’ on a given node, they have already consumed bandwidth along the way, and the only options we have are to drop them immediately (we cannot queue on the inbound side) or let them through - the design document mentions these limitations. One possible approach here could be to let the application provide ‘hints’ for inbound network utilization (not all applications might be able to do this) and use this information purely for scheduling purposes. This, of course, adds more complexity to scheduling.

        Needless to say, there are hard problems to solve here, and the (network) scheduling requirements (and potential approaches for implementation) will need further investigation. As a first step, though, I think it makes sense to focus on classification of outbound traffic (net_cls) and maybe basic isolation/enforcement plus collection of metrics. Once we have this in place, we can look at real utilization patterns and decide what the next steps should be.
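
        One way to collect the per-container outbound metrics mentioned above, assuming an htb/net_cls setup along the lines sketched earlier (the device name is a placeholder, and this is not necessarily how the sub-tasks implement it), is to sample tc's per-class counters and diff them over an interval:

            # Illustrative: per-class byte/packet counters on the outbound device
            tc -s class show dev eth0
            # Each class reports lines such as:
            #   Sent 123456789 bytes 98765 pkt (dropped 0, overlimits 4 requeues 0)
            # Polling these counters periodically and taking the difference gives an
            # approximate outbound bandwidth per container class.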

        Bikas Saha added a comment -

        This paper may have useful insights into the network sharing issues.
        http://research.microsoft.com/en-us/um/people/srikanth/data/nsdi11_seawall.pdf

        Sidharta Seethana added a comment -

        Bikas Saha Thanks. I ran into this paper (and a couple of others) when looking at YARN-3

        Do Hoai Nam added a comment -

        For the case of ingress traffic you can check our solution in YARN-2618 (Support bandwidth enforcement for containers while reading from HDFS) https://issues.apache.org/jira/browse/YARN-2681 and the related paper (http://www.hit.bme.hu/~do/papers/EnforcementDesign.pdf)
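
        For background on why ingress is the hard case: on Linux the usual options are to police inbound traffic (packets over the rate are dropped, since there is no inbound queue) or to redirect ingress to an ifb device where normal egress-style shaping can be applied. A generic sketch of both, independent of the design in the JIRA and paper linked above (device names and rates are placeholders):

            # Both options need an ingress qdisc on the physical device
            tc qdisc add dev eth0 handle ffff: ingress
            # Option 1: simple policing - packets over the rate are dropped, not queued
            tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
                police rate 500mbit burst 1m drop
            # Option 2 (alternative to option 1): redirect ingress to an ifb device,
            # then shape the ifb device with htb exactly as on the egress side
            modprobe ifb numifbs=1
            ip link set ifb0 up
            tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
                action mirred egress redirect dev ifb0
            tc qdisc add dev ifb0 root handle 20: htb default 1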

        Dheeren Beborrtha added a comment -

        How do you support port-level isolation for Docker containers?
        For example, let's say I would like to run multiple Docker containers on the same DataNode. If each of the containers needs to be long-running and needs to advertise its ports, what is the mechanism for doing so?

        Sidharta Seethana added a comment -

        Hi Dheeren Beborrtha, the attached design doc only addresses network bandwidth resource isolation, not isolation of the network stack itself. I recommend taking a look at YARN-3611 for new Docker-related functionality, and please file a JIRA with the requirements you have.

        Junping Du added a comment -

        Hi Sidharta Seethana, I noticed all sub-JIRAs are resolved. Do we have any work left to do? If not, we should mark this umbrella as resolved.

        Sidharta Seethana added a comment -

        Hi Junping Du, the sub-tasks address tagging/shaping for outbound network traffic only. There has been no work done from an inbound-traffic perspective.


          People

          • Assignee: Sidharta Seethana
          • Reporter: Wei Yan
          • Votes: 0
          • Watchers: 64
