SPARK-26052: Spark should output a _SUCCESS file for every partition correctly written

Description

When a set of partitioned Parquet files is written to HDFS with dataframe.write.parquet(), a single _SUCCESS file is written to hdfs://path/to/table after successful completion, even though the actual Parquet files end up in the partition directories, hdfs://path/to/table/partition_key1=val1/partition_key2=val2/....
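A minimal sketch of the behavior described above, assuming PySpark; the rows and application name are illustrative, and only the path comes from this report:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("partitioned-write").getOrCreate()

    # Illustrative rows keyed by the two partition columns named above.
    df = spark.createDataFrame(
        [("val1", "val2", 100), ("val1", "val3", 200)],
        ["partition_key1", "partition_key2", "value"],
    )

    df.write.partitionBy("partition_key1", "partition_key2").parquet(
        "hdfs://path/to/table"
    )

    # Resulting layout: one marker at the table root, none per partition.
    #   hdfs://path/to/table/_SUCCESS
    #   hdfs://path/to/table/partition_key1=val1/partition_key2=val2/part-*.parquet
    #   hdfs://path/to/table/partition_key1=val1/partition_key2=val3/part-*.parquet
    # A later run against the same table (e.g., with .mode("append"))
    # rewrites the root _SUCCESS in place.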

If partitions are written out one at a time (e.g., by an hourly ETL job), each subsequent run overwrites the _SUCCESS file, and any record of which partitions were successfully written is lost.

I would like to be able to keep track of which partitions were successfully written in HDFS. I think this could be done by writing the _SUCCESS files to the same partition directories where the Parquet files reside, i.e., hdfs://path/to/table/partition_key1=val1/partition_key2=val2/....
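A minimal sketch of that idea as a user-level workaround rather than built-in Spark behavior; the helper name mark_partitions_success is hypothetical, and it reaches the Hadoop FileSystem API through PySpark's py4j gateway:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("per-partition-success").getOrCreate()

    def mark_partitions_success(spark, table_path, partition_subdirs):
        # Hypothetical helper: touch a zero-byte _SUCCESS marker in each
        # partition directory the run just wrote. Spark itself does not
        # do this today.
        jvm = spark.sparkContext._jvm
        hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
        Path = jvm.org.apache.hadoop.fs.Path
        fs = Path(table_path).getFileSystem(hadoop_conf)
        for subdir in partition_subdirs:
            marker = Path(table_path + "/" + subdir + "/_SUCCESS")
            fs.create(marker, True).close()  # overwrite=True

    # After a successful write of one partition (e.g., one ETL hour):
    mark_partitions_success(
        spark,
        "hdfs://path/to/table",
        ["partition_key1=val1/partition_key2=val2"],
    )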

Since https://issues.apache.org/jira/browse/SPARK-13207 (partition discovery skips underscore-prefixed files such as _SUCCESS) has been resolved, I don't think this should break partition discovery.
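As a quick sanity check of that claim (continuing the sketch above), reading the table back should still discover the partitions, since partition discovery ignores files whose names start with an underscore:

    # The per-partition _SUCCESS markers are skipped during listing, so
    # both partition columns are still inferred from the directory names.
    df = spark.read.parquet("hdfs://path/to/table")
    df.printSchema()  # includes partition_key1 and partition_key2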

          People

            Unassigned Unassigned
            matmat Matt Matolcsi
            Votes:
            2 Vote for this issue
            Watchers:
            6 Start watching this issue
