SPARK-21408: Default RPC dispatcher thread pool size too large for small executors


    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.3.0
    • Fix Version/s: 2.3.0
    • Component/s: Spark Core
    • Labels: None

      Description

      This is the code that sizes the RPC dispatcher thread pool:

        private val threadpool: ThreadPoolExecutor = {
          val numThreads = nettyEnv.conf.getInt("spark.rpc.netty.dispatcher.numThreads",
            math.max(2, Runtime.getRuntime.availableProcessors()))
          val pool = ThreadUtils.newDaemonFixedThreadPool(numThreads, "dispatcher-event-loop")
          for (i <- 0 until numThreads) {
            pool.execute(new MessageLoop)
          }
          pool
        }
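
      Until the default changes, the pool size can be pinned through that existing key. A minimal example, setting it on a SparkConf (the value "2" is illustrative; the same key can also be passed with --conf on spark-submit):

        import org.apache.spark.SparkConf

        // Pin the dispatcher pool size explicitly instead of letting it
        // default to the host's core count.
        val conf = new SparkConf()
          .set("spark.rpc.netty.dispatcher.numThreads", "2")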

      That sizing is based on the number of cores available on the host, not on the number of cores the executor was allocated. So an executor started with a single "core" on a host with 64 CPUs gets 64 dispatcher threads, which is overkill.

      Sizing the pool from the allocated cores, plus a small lower bound, would be a better approach; see the sketch below.
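
      A minimal sketch of that proposal, assuming the dispatcher is handed the executor's allocated core count; the numUsableCores name and the "0 means unknown" convention are illustrative here, not the final patch:

        private val threadpool: ThreadPoolExecutor = {
          // Prefer the cores actually allocated to this executor; fall back to
          // the host's core count only when the allocation is unknown (0).
          val availableCores =
            if (numUsableCores > 0) numUsableCores else Runtime.getRuntime.availableProcessors()
          // An explicit config value still wins, and the pool never drops below 2 threads.
          val numThreads = nettyEnv.conf.getInt("spark.rpc.netty.dispatcher.numThreads",
            math.max(2, availableCores))
          val pool = ThreadUtils.newDaemonFixedThreadPool(numThreads, "dispatcher-event-loop")
          for (i <- 0 until numThreads) {
            pool.execute(new MessageLoop)
          }
          pool
        }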

            People

            • Assignee: vanzin Marcelo Vanzin
            • Reporter: vanzin Marcelo Vanzin
            • Votes: 0
            • Watchers: 2
