Spark / SPARK-17485

Failed remote cached block reads can lead to whole job failure


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 1.6.2, 2.0.0
    • Fix Version/s: 1.6.3, 2.0.1, 2.1.0
    • Component/s: Block Manager, Spark Core
    • Labels: None

    Description

      In Spark's RDD.getOrCompute we first try to read a local copy of a cached block, then a remote copy, and only fall back to recomputing the block if no cached copy (local or remote) can be read. This logic works correctly when no remote copies of the block exist, but if remote copies exist and reads of all of them fail (due to network issues or internal Spark bugs), then the BlockManager throws a BlockFetchException that fails the entire job.
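
      The failure mode can be illustrated with a small, self-contained Scala sketch. The names here (CachedBlockStore, fetchFrom, and this simplified BlockFetchException) are illustrative stand-ins for the real BlockManager internals, not the actual Spark source: the caller only falls back to recomputation when the cache lookup returns None, but an exhausted remote fetch throws instead of returning None.

```scala
import scala.util.control.NonFatal

// Simplified placeholder for the exception type described above.
class BlockFetchException(msg: String) extends RuntimeException(msg)

class CachedBlockStore(
    localBlocks: Map[String, Array[Byte]],
    remoteLocations: Map[String, Seq[String]]) {

  // Stand-in for a network fetch that can fail at any time.
  private def fetchFrom(host: String, blockId: String): Array[Byte] =
    throw new RuntimeException(s"connection to $host lost while fetching $blockId")

  def get(blockId: String): Option[Array[Byte]] =
    localBlocks.get(blockId).orElse(getRemote(blockId))

  private def getRemote(blockId: String): Option[Array[Byte]] = {
    val locations = remoteLocations.getOrElse(blockId, Seq.empty)
    var failures = 0
    for (host <- locations) {
      try {
        return Some(fetchFrom(host, blockId))
      } catch {
        case NonFatal(_) =>
          failures += 1
          // The problem: once every known location has failed, the store
          // throws instead of returning None, so the caller never gets a
          // chance to recompute the block.
          if (failures == locations.size) {
            throw new BlockFetchException(
              s"Failed to fetch block $blockId after $failures fetch failures")
          }
      }
    }
    None
  }
}

object ReadPathSketch {
  // Caller-side logic in the spirit of RDD.getOrCompute: recomputation only
  // happens when get() returns None, so the exception above escapes and
  // fails the task (and, after retries, the whole job).
  def getOrCompute(blockId: String,
                   compute: () => Array[Byte],
                   store: CachedBlockStore): Array[Byte] =
    store.get(blockId).getOrElse(compute())
}
```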

      In the case of torrent broadcast we really do want to fail the entire job if no remote copy can be fetched, but this logic is inappropriate for cached blocks, which can (and should) simply be recomputed.

      Therefore, I think this exception should be thrown higher up the call stack, by the BlockManager's client code, rather than by the BlockManager itself.
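
      A minimal sketch of that direction, reusing the illustrative names from the snippet above (RecoveringBlockStore and readBroadcastBlock are likewise hypothetical, not the actual patch): exhausting all remote locations is reported as an ordinary miss, and each client decides whether a missing block is recoverable.

```scala
import scala.util.control.NonFatal

object ClientSideHandlingSketch {

  class RecoveringBlockStore(
      localBlocks: Map[String, Array[Byte]],
      remoteLocations: Map[String, Seq[String]]) {

    // Stand-in for a network fetch that can fail at any time.
    private def fetchFrom(host: String, blockId: String): Array[Byte] =
      throw new RuntimeException(s"connection to $host lost while fetching $blockId")

    def get(blockId: String): Option[Array[Byte]] =
      localBlocks.get(blockId).orElse {
        // Fetch failures are swallowed here; exhausting every location
        // yields None instead of a job-killing exception.
        remoteLocations.getOrElse(blockId, Seq.empty).view.flatMap { host =>
          try Some(fetchFrom(host, blockId))
          catch { case NonFatal(_) => None }
        }.headOption
      }
  }

  // Cached RDD blocks can always be recomputed, so a miss (including one
  // caused by failed remote reads) falls back to recomputation.
  def getOrCompute(blockId: String,
                   compute: () => Array[Byte],
                   store: RecoveringBlockStore): Array[Byte] =
    store.get(blockId).getOrElse(compute())

  // Torrent-broadcast blocks cannot be recomputed, so this client is the
  // right place to escalate a missing block into a hard failure.
  def readBroadcastBlock(blockId: String, store: RecoveringBlockStore): Array[Byte] =
    store.get(blockId).getOrElse(
      throw new IllegalStateException(s"Failed to get broadcast block $blockId"))
}
```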

      Attachments

        Issue Links

        Activity


          People

            Assignee: joshrosen Josh Rosen
            Reporter: joshrosen Josh Rosen
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved:
