SPARK-48091: Using `explode` together with `transform` in the same select statement causes aliases in the transformed column to be ignored


Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.4.0, 3.5.0, 3.5.1
    • Fix Version/s: None
    • Component/s: Spark Core
    • Environment: Scala 2.12.15; Python 3.10, 3.12; OSX 14.4 and Databricks DBR 13.3, 14.3; PySpark 3.4.0, 3.5.0, 3.5.1

    Description

      When using the `explode` and `transform` functions in the same select statement, aliases used inside the transformed column are ignored.

      This behavior occurs with the PySpark and Scala APIs, but not with the SQL API.

      from pyspark.sql import functions as F
      
      # Create the df
      df = spark.createDataFrame([
          {"id": 1, "array1": ['a', 'b'], 'array2': [2,3,4]}
      ])

      Good case, where all aliases are applied as expected:

      df.select(
          F.transform(
              'array2',
              lambda x: F.struct(x.alias("some_alias"), F.col("id").alias("second_alias"))
          ).alias("new_array2")
      ).printSchema() 
      
      root
       |-- new_array2: array (nullable = true)
       |    |-- element: struct (containsNull = false)
       |    |    |-- some_alias: long (nullable = true)
       |    |    |-- second_alias: long (nullable = true)

      Bad case: when using `explode`, the aliases inside the transformed column are ignored; `x_17` is used instead of `some_alias`, and `id` is kept instead of `second_alias`:

      df.select(
          F.explode("array1").alias("exploded"),
          F.transform(
              'array2',
              lambda x: F.struct(x.alias("some_alias"), F.col("id").alias("second_alias"))
          ).alias("new_array2")
      ).printSchema()
      
      root
       |-- exploded: string (nullable = true)
       |-- new_array2: array (nullable = true)
       |    |-- element: struct (containsNull = false)
       |    |    |-- x_17: long (nullable = true)
       |    |    |-- id: long (nullable = true)

      The same behavior occurs with the Scala API:

      import org.apache.spark.sql.functions._
      import spark.implicits._ // for the $"colName" syntax (already in scope in spark-shell)

      val df2 = df.select(array(lit(1), lit(2), lit(3)).as("my_array"), array(lit(1), lit(2), lit(3)).as("my_array2"))

      df2.select(
        explode($"my_array").as("exploded"),
        transform($"my_array2", x => struct(x.as("data"))).as("my_struct")
      ).printSchema
      
      root
       |-- exploded: integer (nullable = false)
       |-- my_struct: array (nullable = false)
       |    |-- element: struct (containsNull = false)
       |    |    |-- x_2: integer (nullable = false)

      When using the SQL API instead, it works as expected:

      spark.sql(
          """
          select explode(array1) as exploded, transform(array2, x-> struct(x as some_alias, id as second_alias)) as array2 from {df}
          """, df=df
      ).printSchema()
      
      root
       |-- exploded: string (nullable = true)
       |-- array2: array (nullable = true)
       |    |-- element: struct (containsNull = false)
       |    |    |-- some_alias: long (nullable = true)
       |    |    |-- second_alias: long (nullable = true)

      Workaround: for now, `F.named_struct` can be used in place of `F.struct`, since it takes the field names explicitly.
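
      A minimal sketch of that workaround (assuming PySpark 3.5 or later, where `named_struct` is exposed in the Python API):

      from pyspark.sql import functions as F

      # named_struct takes the field names as literal columns rather than as
      # aliases, so the names are unaffected by the explode in the same select
      df.select(
          F.explode("array1").alias("exploded"),
          F.transform(
              "array2",
              lambda x: F.named_struct(
                  F.lit("some_alias"), x,
                  F.lit("second_alias"), F.col("id"),
              ),
          ).alias("new_array2"),
      ).printSchema()

      # Expected: the element struct carries some_alias and second_alias,
      # matching the schema produced by the SQL API above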

People

    • Assignee: Unassigned
    • Reporter: Ron Serruya