SPARK-4352

Incorporate locality preferences in dynamic allocation requests


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 1.2.0
    • Fix Version/s: 1.5.0
    • Component/s: Spark Core, YARN
    • Labels: None

    Description

      Currently, achieving data locality in Spark is difficult unless an application takes resources on every node in the cluster. The preferredNodeLocalityData mechanism provides a somewhat hacky workaround, and it has been broken since 1.0.

      With dynamic executor allocation, Spark requests executors in response to demand from the application. When this occurs, it would be useful to look at the pending tasks and communicate their location preferences to the cluster resource manager.
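      To make the idea concrete, here is a minimal, self-contained Scala sketch of the kind of summary such a request could carry: it aggregates the preferred hosts of pending tasks into a per-host task count that a locality-aware allocation request could forward to the cluster resource manager. The PendingTask case class and the summarize helper are illustrative assumptions, not Spark's actual internal API.

{code:scala}
// Hypothetical sketch: aggregate the locality preferences of pending tasks
// into a per-host task count that a dynamic allocation request could pass
// to the cluster resource manager (e.g. YARN). PendingTask and summarize
// are illustrative, not Spark's internal API.
object LocalityPreferenceSketch {

  // A pending task with the hosts where its input data lives.
  case class PendingTask(taskId: Long, preferredHosts: Seq[String])

  // Summarize pending tasks as (number of tasks with any locality
  // preference, per-host count of tasks that prefer that host).
  def summarize(pending: Seq[PendingTask]): (Int, Map[String, Int]) = {
    val localityAwareTasks = pending.count(_.preferredHosts.nonEmpty)
    val hostToTaskCount = pending
      .flatMap(_.preferredHosts)
      .groupBy(identity)
      .map { case (host, occurrences) => host -> occurrences.size }
    (localityAwareTasks, hostToTaskCount)
  }

  def main(args: Array[String]): Unit = {
    val pending = Seq(
      PendingTask(0L, Seq("host1", "host2")),
      PendingTask(1L, Seq("host2")),
      PendingTask(2L, Seq.empty) // no preference, can run anywhere
    )
    val (localityAwareTasks, hostToTaskCount) = summarize(pending)
    // A locality-aware allocation request would forward this summary so the
    // resource manager can try to place new executors on host1/host2.
    println(s"locality-aware tasks: $localityAwareTasks")
    println(s"per-host preferences: $hostToTaskCount")
  }
}
{code}

      In a YARN deployment, a per-host summary like this maps naturally onto container requests that carry node-level placement preferences.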


            People

              Assignee: Saisai Shao (jerryshao)
              Reporter: Sandy Ryza (sandyr)
              Votes: 4
              Watchers: 22
