Spark / SPARK-21762

FileFormatWriter/BasicWriteTaskStatsTracker metrics collection fails if a new file isn't yet visible



    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version: 2.3.0
    • Fix Version: 2.3.0
    • Component: SQL
    • Labels: None
    • Environment: object stores without complete creation consistency (this includes AWS S3's caching of negative GET results)


      The metrics collection of SPARK-20703 can trigger premature failure if the newly written object isn't actually visible yet: that is, if, after writer.close(), a getFileStatus(path) call throws a FileNotFoundException.

      Strictly speaking, not having a file immediately visible goes against the fundamental expectations of the Hadoop FS APIs, namely fully consistent data & metadata across all operations, with immediate global visibility of all changes. However, not all object stores make that guarantee, whether for newly created data or for updated blobs. And so spurious FNFEs can get raised, ones which should have gone away by the time the actual task is committed. Or, if they haven't, the job is in deep trouble anyway.

      What to do?

      1. Leave as is: fail fast, and so catch blob stores/blob store clients which don't behave as required. One open question: will that failure trigger retries, and what happens then?
      2. Swallow the FNFE and hope the file becomes visible later.
      3. Swallow all IOEs and hope that whatever problem the FS has is transient.

      Options 2 & 3 won't collect metrics in the event of an FNFE, or at least not the bytes-written counter.
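      As a sketch of option 2: the post-close probe can catch the FileNotFoundException and report the length as "unknown" rather than failing the task, leaving that file's byte count out of the aggregated metrics. This uses java.nio paths as a stand-in for the Hadoop FileSystem.getFileStatus() call; the class and method names here are illustrative, not the actual BasicWriteTaskStatsTracker code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.OptionalLong;

public class FileSizeProbe {

    /**
     * Probe the length of a freshly written file. On an eventually
     * consistent object store the file may not be visible yet, so a
     * missing-file IOException is swallowed (option 2 above) and
     * "unknown" is returned instead of failing the whole task.
     */
    static OptionalLong probeLength(Path path) {
        try {
            return OptionalLong.of(Files.size(path));
        } catch (IOException e) {        // NoSuchFileException is a subclass
            return OptionalLong.empty(); // metric simply isn't collected
        }
    }

    public static void main(String[] args) throws IOException {
        // A file which is visible: its length is reported normally.
        Path visible = Files.createTempFile("stats", ".dat");
        Files.write(visible, new byte[]{1, 2, 3});
        System.out.println(probeLength(visible));

        // A path which doesn't (yet) resolve: no failure, no metric.
        System.out.println(probeLength(Path.of("/no/such/file")));
    }
}
```

      The cost of this leniency is exactly the caveat above: a genuinely lost file is indistinguishable from a slow-to-appear one, and its bytes are silently absent from the final counters.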

      Assignee: Steve Loughran (stevel@apache.org)
      Reporter: Steve Loughran (stevel@apache.org)