Spark / SPARK-45678

Cover BufferReleasingInputStream.available under tryOrFetchFailedException


Details

    Description

      We have encountered shuffle data corruption issue:

      ```
      Caused by: java.io.IOException: FAILED_TO_UNCOMPRESS(5)
      at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:112)
      at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
      at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:504)
      at org.xerial.snappy.Snappy.uncompress(Snappy.java:543)
      at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:450)
      at org.xerial.snappy.SnappyInputStream.available(SnappyInputStream.java:497)
      at org.apache.spark.storage.BufferReleasingInputStream.available(ShuffleBlockFetcherIterator.scala:1356)
      ```

      Spark shuffle can detect corruption for a few stream operations such as `read` and `skip`: an `IOException` raised there is rethrown as a `FetchFailedException`, which causes the failed shuffle task to be retried. But the failing call in the stack trace above is `available`, which is not covered by this mechanism. As a result, no retry happened and the Spark application simply failed.

      Since `available` can also trigger data decompression (as `SnappyInputStream.available` does via `hasNextChunk`), it should be guarded the same way as `read` and `skip`.
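      The proposed pattern can be sketched as follows. This is a minimal, hypothetical illustration, not Spark's actual code: `RetryableFetchException`, `GuardedInputStream`, and `tryOrRethrow` are stand-in names for `FetchFailedException`, `BufferReleasingInputStream`, and `tryOrFetchFailedException`. The point is that `available` is routed through the same error-translating handler as `read`.

      ```java
      import java.io.FilterInputStream;
      import java.io.IOException;
      import java.io.InputStream;

      public class Main {
          // Stand-in for FetchFailedException: signals "retry this shuffle task".
          static class RetryableFetchException extends RuntimeException {
              RetryableFetchException(Throwable cause) { super(cause); }
          }

          @FunctionalInterface
          interface IoOp { int run() throws IOException; }

          // Stand-in for BufferReleasingInputStream: every operation that may
          // decompress data is funneled through one error-translating handler.
          static class GuardedInputStream extends FilterInputStream {
              GuardedInputStream(InputStream in) { super(in); }

              // Stand-in for tryOrFetchFailedException.
              private int tryOrRethrow(IoOp op) {
                  try {
                      return op.run();
                  } catch (IOException e) {
                      // Rethrow as a retryable failure instead of killing the app.
                      throw new RetryableFetchException(e);
                  }
              }

              @Override public int read() { return tryOrRethrow(() -> in.read()); }

              // The fix described in this issue: available() can also
              // decompress data, so it gets the same guard as read().
              @Override public int available() { return tryOrRethrow(() -> in.available()); }
          }

          public static void main(String[] args) {
              // Underlying stream whose available() fails, mimicking the
              // FAILED_TO_UNCOMPRESS error from a corrupted snappy block.
              InputStream corrupt = new InputStream() {
                  @Override public int read() { return -1; }
                  @Override public int available() throws IOException {
                      throw new IOException("FAILED_TO_UNCOMPRESS(5)");
                  }
              };
              try (GuardedInputStream s = new GuardedInputStream(corrupt)) {
                  s.available();
                  System.out.println("no error");
              } catch (RetryableFetchException e) {
                  System.out.println("retryable: " + e.getCause().getMessage());
              } catch (IOException e) {
                  System.out.println("io error: " + e.getMessage());
              }
          }
      }
      ```

      With the guard in place, the corruption surfaces as a retryable exception rather than a raw `IOException` that fails the whole job.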

            People

              Assignee/Reporter: L. C. Hsieh (viirya)
