SPARK-11095: Simplify Netty RPC implementation by using a separate thread pool for each endpoint


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Won't Do
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Spark Core

    Description

      The dispatcher class and the inbox class of the current Netty-based RPC implementation are fairly complicated. The implementation uses a single, shared thread pool to execute all the endpoints, similar to how Akka does actor message dispatching. The benefit of this design is that it can support a very large number of endpoints, since they are all multiplexed onto a single thread pool for execution. The downside is the complexity resulting from synchronization and coordination.

      An alternative implementation is to have a separate message queue and thread pool for each endpoint. The dispatcher simply routes the messages to the appropriate message queue, and the threads poll the queue for messages to process.
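
      A minimal sketch of this design is below, assuming simplified stand-ins for Spark's RPC types (the EndpointInbox, InboxMessage, and SimpleDispatcher names are illustrative, not the actual classes):

      {code:scala}
      import java.util.concurrent.{ConcurrentHashMap, Executors, LinkedBlockingQueue, TimeUnit}

      sealed trait InboxMessage
      case object OnStart extends InboxMessage
      case object OnStop extends InboxMessage
      case class RpcMessage(content: Any) extends InboxMessage

      // One inbox per endpoint: its own message queue and its own thread pool.
      class EndpointInbox(name: String, numThreads: Int, handle: InboxMessage => Unit) {
        private val queue = new LinkedBlockingQueue[InboxMessage]()
        private val pool = Executors.newFixedThreadPool(numThreads)

        // The dispatcher only needs to enqueue; no cross-endpoint synchronization.
        def post(message: InboxMessage): Unit = queue.put(message)

        def start(): Unit = (1 to numThreads).foreach { _ =>
          pool.execute(new Runnable {
            override def run(): Unit =
              try {
                while (!Thread.currentThread().isInterrupted) {
                  val message = queue.poll(100, TimeUnit.MILLISECONDS)
                  if (message != null) handle(message)
                }
              } catch {
                case _: InterruptedException => // pool was shut down; exit the loop
              }
          })
        }

        def stop(): Unit = pool.shutdownNow()
      }

      // The dispatcher simply routes each message to the right inbox.
      class SimpleDispatcher {
        private val inboxes = new ConcurrentHashMap[String, EndpointInbox]()

        def register(name: String, inbox: EndpointInbox): Unit = {
          inboxes.put(name, inbox)
          inbox.start()
        }

        def postMessage(endpointName: String, message: InboxMessage): Unit = {
          val inbox = inboxes.get(endpointName)
          if (inbox != null) inbox.post(message)
        }
      }
      {code}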

      If the endpoint is single-threaded, then the thread pool should contain only a single thread. If the endpoint supports concurrent execution, then the thread pool should contain more threads.
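
      For example, with the hypothetical EndpointInbox above (endpoint names and handlers are illustrative only):

      {code:scala}
      // A single-threaded endpoint gets a pool with one thread; an endpoint that
      // supports concurrent execution gets more.
      val singleThreadedInbox =
        new EndpointInbox("heartbeat-receiver", numThreads = 1, handle = msg => println(msg))
      val concurrentInbox =
        new EndpointInbox("block-manager-endpoint", numThreads = 4, handle = msg => println(msg))
      {code}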

      Two additional things we need to be careful with are:

      1. An endpoint should only process normal messages after OnStart is called. This can be done by having the thread that starts the endpoint process OnStart.

      2. An endpoint should process OnStop only after all normal messages have been processed. I think this can be done by spinning in a busy loop until the message queue is empty (see the sketch after this list).
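
      A rough sketch of both points, under the same assumptions as the sketch above (the Endpoint trait and helper names are hypothetical):

      {code:scala}
      import java.util.concurrent.LinkedBlockingQueue

      // Hypothetical endpoint trait mirroring the OnStart / receive / OnStop lifecycle.
      trait Endpoint {
        def onStart(): Unit
        def receive(message: Any): Unit
        def onStop(): Unit
      }

      object EndpointLifecycle {
        // 1. The thread that starts the endpoint runs onStart() before any worker
        //    thread begins polling, so no normal message can be processed first.
        def start(endpoint: Endpoint, workers: Seq[Thread]): Unit = {
          endpoint.onStart()
          workers.foreach(_.start())
        }

        // 2. On shutdown, spin until the message queue is empty (the busy loop
        //    suggested above), then stop the workers and run onStop().
        def stop(endpoint: Endpoint, queue: LinkedBlockingQueue[Any], workers: Seq[Thread]): Unit = {
          while (!queue.isEmpty) {
            Thread.sleep(10)
          }
          workers.foreach(_.interrupt())
          workers.foreach(_.join())
          endpoint.onStop()
        }
      }
      {code}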


            People

              Assignee: Shixiong Zhu (zsxwing)
              Reporter: Reynold Xin (rxin)
              Votes: 0
              Watchers: 5
