SPARK-38166: Duplicates after task failure in dropDuplicates and repartition


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.0.2
    • Fix Version/s: None
    • Component/s: SQL
    • Environment: Cluster runs on K8s. AQE is enabled.

    Description

      We're seeing duplicates in the output after running the following:

      def compute_shipments(shipments):
          shipments = shipments.dropDuplicates(["ship_trck_num"])
          shipments = shipments.repartition(4)
          return shipments
      

      and observing lost executors (OOMs) and task retries in the repartition stage.

      We're seeing this reliably in one of our pipelines, but I haven't managed to reproduce it outside that pipeline. I'll attach driver logs - maybe you have ideas.
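
      A possible mechanism (an assumption on my part, not confirmed by the report; cf. the known non-determinism of round-robin `repartition(n)` over an unordered upstream) is sketched below in plain Python, without Spark: round-robin partitioning assigns rows by position, so if the upstream stage (here, the output of `dropDuplicates`) is recomputed in a different row order after an executor loss, a retried repartition task can receive a different subset of rows than the original attempt. Mixing surviving task outputs with retried ones then duplicates some rows and drops others. The `round_robin_partition` helper is hypothetical, for illustration only.

      def round_robin_partition(rows, num_partitions):
          """Assign rows to partitions by position, mimicking round-robin repartition."""
          parts = [[] for _ in range(num_partitions)]
          for i, row in enumerate(rows):
              parts[i % num_partitions].append(row)
          return parts

      # First attempt: upstream produces rows in this order.
      first = round_robin_partition(["a", "b", "c", "d"], 2)   # [['a','c'], ['b','d']]

      # Retry after executor loss: upstream recomputes in a different order.
      retry = round_robin_partition(["b", "a", "c", "d"], 2)   # [['b','c'], ['a','d']]

      # Partition 0's original output survived; partition 1 is recomputed.
      combined = first[0] + retry[1]
      print(sorted(combined))   # ['a', 'a', 'c', 'd'] -- 'a' duplicated, 'b' lost

      If that is what's happening here, repartitioning by a deterministic key (e.g. `shipments.repartition(4, "ship_trck_num")`) instead of round-robin might avoid it, though I haven't verified that against this pipeline.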

      Attachments

        1. driver.log
          129 kB
          Willi Raschkowski

          People

            Assignee: Unassigned
            Reporter: rshkv (Willi Raschkowski)
            Votes: 0
            Watchers: 2

            Dates

              Created:
              Updated: