Spark / SPARK-31179

Fast fail the connection when the last shuffle connection failed within the last retry's IO wait


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.1.0
    • Fix Version/s: 3.1.0
    • Component/s: Shuffle, Spark Core
    • Labels: None

Description

When reading shuffle data, several fetch requests may be sent to the same shuffle server.
These requests draw on a shared client pool, so they may end up using the same client.
When the shuffle server is busy, the connections may time out.
For example, suppose there are two request connections to the same server, rc1 and rc2, io.numConnectionsPerPeer is 1, and the connection timeout is 2 minutes.

1: rc1 holds the client lock and times out after 2 minutes.
2: rc2 holds the client lock and times out after 2 minutes.
3: rc1 starts its second retry, holds the lock, and times out after 2 minutes.
4: rc2 starts its second retry, holds the lock, and times out after 2 minutes.
5: rc1 starts its third retry, holds the lock, and times out after 2 minutes.
6: rc2 starts its third retry, holds the lock, and times out after 2 minutes.

These six 2-minute timeouts run back to back, so roughly 12 minutes are wasted before the fetches finally fail.
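
The gist of the improvement is to remember, per shuffle server, when the last connection attempt failed, and to fail a new attempt immediately when it falls inside the retry IO wait window instead of holding the client-pool lock for another full connection timeout. Below is a minimal Scala sketch of that idea; the names here (FastFailClientPool, lastFailureMs, connect) are illustrative assumptions, not Spark's actual TransportClientFactory API.

    import java.io.IOException
    import scala.collection.concurrent.TrieMap

    // Sketch only: illustrates the fast-fail idea, not Spark's real code.
    class FastFailClientPool(connectionTimeoutMs: Long, ioRetryWaitMs: Long) {

      // When the last connection attempt to each shuffle server failed (epoch ms).
      private val lastFailureMs = TrieMap.empty[String, Long]

      // Obtain a connection to `address`. If the previous attempt to the same
      // server failed within the retry wait window, fail immediately instead of
      // blocking on the client-pool lock for another full connection timeout.
      def createClient(address: String, fastFail: Boolean): Unit = {
        val sinceFailure = System.currentTimeMillis() - lastFailureMs.getOrElse(address, 0L)
        if (fastFail && sinceFailure < ioRetryWaitMs) {
          throw new IOException(
            s"Fast failing connection to $address: last attempt failed $sinceFailure ms ago")
        }
        try {
          connect(address) // may block for up to connectionTimeoutMs
        } catch {
          case e: IOException =>
            lastFailureMs.put(address, System.currentTimeMillis())
            throw e
        }
      }

      // Placeholder for the real Netty connection setup.
      private def connect(address: String): Unit =
        throw new IOException(s"pretend $address timed out after $connectionTimeoutMs ms")
    }

With this behavior, rc2's retries in the example above return almost immediately once rc1 has already recorded a failure against the same server, so the six serialized timeouts collapse to roughly one connection timeout plus the retry waits.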

People

    • Assignee: hzfeiwang feiwang
    • Reporter: hzfeiwang feiwang
    • Votes: 0
    • Watchers: 2
