  Spark / SPARK-27560

HashPartitioner uses Object.hashCode which is not seeded


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Not A Problem
    • Affects Version/s: 2.4.0
    • Fix Version/s: None
    • Component/s: Java API
    • Labels: None

    Description

      Forgive the quality of the bug report here; I am a PySpark user and not very familiar with Spark internals, but I seem to have hit a strange corner case with HashPartitioner.

      This may already be known, but repartitioning with HashPartitioner seems to assign everything to the same partition when data that was previously partitioned by the same column is only partially read (say, one partition). I suppose this is an obvious consequence of Object.hashCode being deterministic, but it took a while to track down.

      Steps to reproduce (a rough PySpark sketch follows below):

      1. Create a dataframe with a bunch of UUIDs, say 10000
      2. repartition(100, 'uuid_column')
      3. Save to parquet
      4. Read from parquet
      5. Take collect()[:100], then filter using pyspark.sql.functions.isin (yes, I know this is bad and sampleBy should probably be used here)
      6. repartition(10, 'uuid_column')
      7. The resulting dataframe will have all of its data in a single partition

      Jupyter notebook for the above: https://gist.github.com/robo-hamburger/4752a40cb643318464e58ab66cf7d23e
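
      A minimal PySpark sketch of the steps above (the column name and output path are illustrative; the gist has the full notebook):

          import uuid

          from pyspark.sql import SparkSession
          from pyspark.sql import functions as F

          spark = SparkSession.builder.getOrCreate()

          # 1-3. Build a dataframe of 10000 UUIDs, hash-partition it 100 ways, write to parquet.
          df = spark.createDataFrame([(str(uuid.uuid4()),) for _ in range(10000)], ["uuid_column"])
          df.repartition(100, "uuid_column").write.mode("overwrite").parquet("/tmp/uuids")

          # 4-5. Read it back and keep only the rows whose uuid appears among the first 100 collected rows.
          df2 = spark.read.parquet("/tmp/uuids")
          sample = [row["uuid_column"] for row in df2.collect()[:100]]
          subset = df2.filter(F.col("uuid_column").isin(sample))

          # 6-7. Repartition the subset 10 ways by the same column and inspect the partition sizes.
          sizes = subset.repartition(10, "uuid_column").rdd.glom().map(len).collect()
          print(sizes)  # reportedly, all rows end up in a single partition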

      I think an easy fix would be to seed the HashPartitioner, as many hash table libraries do to avoid denial-of-service attacks. It may also be that this is obvious behavior to more experienced Spark users.
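
      For intuition on the behavior above (this is not Spark's hash function, just the modular arithmetic shared by any deterministic, unseeded hash): when the new partition count divides the old one, hash(k) % 100 == p forces hash(k) % 10 == p % 10, so keys drawn from a single old partition all collapse into a single new partition.

          import uuid

          keys = [str(uuid.uuid4()) for _ in range(10000)]

          # Simulate a 100-way hash partitioning and keep the keys from one partition.
          one_partition = [k for k in keys if hash(k) % 100 == 0]

          # Re-hash those keys into 10 partitions with the same (per-run) deterministic hash.
          print({hash(k) % 10 for k in one_partition})  # {0}: everything maps to one partition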


People

    • Assignee: Unassigned
    • Reporter: Andrew McHarg (amcharg)
