Spark > SPARK-41281 Feature parity: SparkSession API in Spark Connect > SPARK-41746

SparkSession.createDataFrame does not support nested datatypes


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.4.0
    • Fix Version/s: 3.4.0
    • Component/s: Connect
    • Labels: None

Description

      File "/.../spark/python/pyspark/sql/connect/group.py", line 183, in pyspark.sql.connect.group.GroupedData.pivot
      Failed example:
          df2 = spark.createDataFrame([
              Row(training="expert", sales=Row(course="dotNET", year=2012, earnings=10000)),
              Row(training="junior", sales=Row(course="Java", year=2012, earnings=20000)),
              Row(training="expert", sales=Row(course="dotNET", year=2012, earnings=5000)),
              Row(training="junior", sales=Row(course="dotNET", year=2013, earnings=48000)),
              Row(training="expert", sales=Row(course="Java", year=2013, earnings=30000)),
          ])
      Exception raised:
          Traceback (most recent call last):
            File "/.../miniconda3/envs/python3.9/lib/python3.9/doctest.py", line 1336, in __run
              exec(compile(example.source, filename, "single",
            File "<doctest pyspark.sql.connect.group.GroupedData.pivot[3]>", line 1, in <module>
              df2 = spark.createDataFrame([
            File "/.../workspace/forked/spark/python/pyspark/sql/connect/session.py", line 196, in createDataFrame
              table = pa.Table.from_pandas(pdf)
            File "pyarrow/table.pxi", line 3475, in pyarrow.lib.Table.from_pandas
            File "/.../miniconda3/envs/python3.9/lib/python3.9/site-packages/pyarrow/pandas_compat.py", line 611, in dataframe_to_arrays
              arrays = [convert_column(c, f)
            File "/.../miniconda3/envs/python3.9/lib/python3.9/site-packages/pyarrow/pandas_compat.py", line 611, in <listcomp>
              arrays = [convert_column(c, f)
            File "/.../miniconda3/envs/python3.9/lib/python3.9/site-packages/pyarrow/pandas_compat.py", line 598, in convert_column
              raise e
            File "/.../miniconda3/envs/python3.9/lib/python3.9/site-packages/pyarrow/pandas_compat.py", line 592, in convert_column
              result = pa.array(col, type=type_, from_pandas=True, safe=safe)
            File "pyarrow/array.pxi", line 316, in pyarrow.lib.array
            File "pyarrow/array.pxi", line 83, in pyarrow.lib._ndarray_to_array
            File "pyarrow/error.pxi", line 123, in pyarrow.lib.check_status
          pyarrow.lib.ArrowTypeError: ("Expected bytes, got a 'int' object", 'Conversion failed for column 1 with type object')
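
For reference, the failing doctest reduces to the self-contained snippet below. This is a minimal sketch, assuming a plain local (non-Connect) pyspark.sql.SparkSession, where the same nested-Row input is accepted and the sales field is inferred as a struct column; the Spark Connect createDataFrame path failed on identical input with the ArrowTypeError above because it routes the rows through a pandas/Arrow conversion.

    # Minimal reproduction of the doctest data. Assumes a plain local
    # SparkSession (not Spark Connect); here the nested Row is inferred
    # as a struct column, while the Connect path raised the ArrowTypeError
    # shown above before the fix.
    from pyspark.sql import Row, SparkSession

    spark = SparkSession.builder.master("local[1]").getOrCreate()

    df2 = spark.createDataFrame([
        Row(training="expert", sales=Row(course="dotNET", year=2012, earnings=10000)),
        Row(training="junior", sales=Row(course="Java", year=2012, earnings=20000)),
        Row(training="expert", sales=Row(course="dotNET", year=2012, earnings=5000)),
        Row(training="junior", sales=Row(course="dotNET", year=2013, earnings=48000)),
        Row(training="expert", sales=Row(course="Java", year=2013, earnings=30000)),
    ])
    df2.printSchema()
    # root
    #  |-- training: string (nullable = true)
    #  |-- sales: struct (nullable = true)
    #  |    |-- course: string (nullable = true)
    #  |    |-- year: long (nullable = true)
    #  |    |-- earnings: long (nullable = true)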
      

Attachments

Activity


People

    Assignee: Ruifeng Zheng (podongfeng)
    Reporter: Hyukjin Kwon (gurwls223)
    Votes: 0
    Watchers: 3

Dates

    Created:
    Updated:
    Resolved:
