
SPARK-38677: PySpark hangs in local mode when running an RDD map operation


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 3.2.1, 3.3.0
    • Fix Version/s: 3.3.0, 3.2.2
    • Component/s: PySpark
    • Labels: None

    Description

      In Spark 3.2.1 (Spark 3.2.0 does not show this issue), PySpark hangs when running an RDD map operation and converting the result to a DataFrame. Code to reproduce is below.

      Env:
      Spark 3.2.1 in local mode; launch with ./bin/pyspark --driver-memory XXXXG --driver-cores XXXX

      Download the dataset from https://rapidsai-data.s3.us-east-2.amazonaws.com/spark/mortgage.zip
      As few as 200,000 rows reproduce the issue (head -n 200000 mortgage_eval_merged.csv > mortgage_eval_merged-small.csv), but a smaller input, such as 50,000 rows, works fine.
      Run the code below:

      path = "/XXXX/mortgage_eval_merged-small.csv"
      # Split each CSV line into fields with an RDD map ...
      src_data = sc.textFile(path).map(lambda x: x.split(","))
      column_list = ['c1','c2','c3','c4','c5','c6','c7','c8','c9','c10','c11','c12','c13','c14','c15','c16','c17','c18','c19','c20','c21','c22','c23','c24','c25','c26','c27','c28']
      # ... then convert the RDD to a DataFrame; the hang occurs here
      df = spark.createDataFrame(src_data, column_list)
      df.show(1)
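
      For reference, the same reproduction as a self-contained script (rather than inside the pyspark shell, where sc and spark are predefined) looks roughly like the sketch below. This is a minimal sketch under assumptions: the app name, master setting, driver-memory value, and the CSV path are placeholders to be adjusted for your environment.

      from pyspark.sql import SparkSession

      # Minimal standalone sketch of the repro in local mode.
      spark = (SparkSession.builder
               .master("local[*]")
               .appName("SPARK-38677-repro")            # hypothetical app name
               .config("spark.driver.memory", "XXXXg")  # placeholder, as in the shell command above
               .getOrCreate())
      sc = spark.sparkContext

      path = "/XXXX/mortgage_eval_merged-small.csv"     # placeholder path from the report
      # RDD map that splits each CSV line into fields
      src_data = sc.textFile(path).map(lambda x: x.split(","))
      column_list = ["c%d" % i for i in range(1, 29)]   # c1 .. c28, same columns as above
      # Converting the mapped RDD to a DataFrame is where the hang shows up
      df = spark.createDataFrame(src_data, column_list)
      df.show(1)

      spark.stop()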



          People

            Assignee: Ankur Dave (ankurd)
            Reporter: Thomas Graves (tgraves)
