SPARK-27637

If an exception occurred while fetching blocks via the Netty block transfer service, check whether the corresponding executor is alive before retrying


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.3.3, 2.4.3
    • Fix Version/s: 3.0.0
    • Component/s: Shuffle, Spark Core
    • Labels: None

    Description

      There are two kinds of shuffle client: blockTransferService and externalShuffleClient.

      For externalShuffleClient, there is a corresponding external shuffle service, which serves the shuffle block data regardless of the state of the executors.

      blockTransferService is used to fetch broadcast blocks, and to fetch shuffle data when the external shuffle service is not enabled.

      When fetching data via blockTransferService, the shuffle client connects to the corresponding executor's blockManager, so if that executor is dead, the fetch can never succeed.
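
      As a rough illustration of the selection described above (a sketch only; the real wiring lives in BlockManager, and the client types are Spark internals shown here as plain strings):

{code:scala}
import org.apache.spark.SparkConf

// Sketch only: which kind of shuffle client is used, based on configuration.
// The real logic lives in BlockManager; the clients are represented as strings here.
def shuffleClientFor(conf: SparkConf): String =
  if (conf.getBoolean("spark.shuffle.service.enabled", false)) {
    "externalShuffleClient"  // shuffle blocks are served by the external shuffle service
  } else {
    "blockTransferService"   // connects directly to the remote executor's blockManager
  }
{code}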

      When spark.shuffle.service.enabled is true and spark.dynamicAllocation.enabled is true, an executor is removed once it has been idle for more than idleTimeout.
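
      For reference, a minimal configuration under which this situation arises might look like the following (values are illustrative, not recommendations):

{code:scala}
import org.apache.spark.SparkConf

// Illustrative values only.
val conf = new SparkConf()
  .set("spark.shuffle.service.enabled", "true")
  .set("spark.dynamicAllocation.enabled", "true")
  // Executors idle for longer than this are removed by dynamic allocation.
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s")
{code}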

      If blockTransferService successfully establishes a connection to the corresponding executor, but that executor is removed just as it begins to fetch a broadcast block, the fetcher retries (see RetryingBlockFetcher), which is ineffective.

      If spark.shuffle.io.retryWait and spark.shuffle.io.maxRetries are large, e.g. 30s and 10 retries, this wastes 5 minutes.
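
      For example, with the (illustrative) settings below, RetryingBlockFetcher keeps retrying against an executor that no longer exists, waiting 10 × 30s = 300s, i.e. 5 minutes, before finally failing:

{code:scala}
import org.apache.spark.SparkConf

// 10 retries with a 30s wait between attempts: 10 * 30s = 300s = 5 minutes
// spent retrying a fetch that can never succeed once the executor is gone.
val conf = new SparkConf()
  .set("spark.shuffle.io.maxRetries", "10")
  .set("spark.shuffle.io.retryWait", "30s")
{code}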

      So I think we should check whether the corresponding executor is alive before retrying.
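
      A minimal sketch of the idea (not the actual patch): before scheduling another retry, ask the driver whether the executor that owns the blocks is still alive, and fail fast if it is not. The isExecutorAlive parameter below is a hypothetical liveness check standing in for whatever driver-side lookup is used:

{code:scala}
// Sketch only: decide whether another retry is worthwhile after a fetch failure.
// `isExecutorAlive` is a hypothetical callback (e.g. backed by the block manager
// master on the driver); it is not an existing API referenced by this issue.
def shouldRetry(
    execId: String,
    retriesSoFar: Int,
    maxRetries: Int,
    isExecutorAlive: String => Boolean): Boolean = {
  // Retrying is pointless if the executor hosting the blocks is already gone.
  retriesSoFar < maxRetries && isExecutorAlive(execId)
}
{code}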

          People

            Assignee: Fei Wang (feiwang)
            Reporter: feiwang (hzfeiwang)
            Votes: 0
            Watchers: 5
