SPARK-27759

Do not auto cast array<double> to np.array in vectorized udf


Details

    • Type: Improvement
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 3.1.0
    • Fix Version/s: None
    • Component/s: PySpark, SQL
    • Labels: None

    Description

      import numpy as np
      import pandas as pd

      pd_df = pd.DataFrame({'x': np.random.rand(11, 3, 5).tolist()})
      df = spark.createDataFrame(pd_df).cache()
      

      Each element in `x` is a list of lists, as expected.

      df.toPandas()['x']
      
      # 0 [[0.08669612955959993, 0.32624430522634495, 0.... 
      # 1 [[0.29838166086156914,  0.008550172904516762, 0... 
      # 2 [[0.641304534802928, 0.2392047548381877, 0.555...
      


      from pyspark.sql import functions as F
      from pyspark.sql.types import DoubleType

      def my_udf(x):
          # Hack to see what's inside a udf
          raise Exception(x.values.shape, x.values[0].shape, x.values[0][0].shape, np.stack(x.values).shape)
          return pd.Series(x.values)

      my_udf = F.pandas_udf(my_udf, returnType=DoubleType())
      df.coalesce(1).withColumn('y', my_udf('x')).show()

      # Exception: ((11,), (3,), (5,), (11, 3))


      A batch (here, 11 rows) of `x` is converted to a pd.Series; however, each element of the pd.Series is now a numpy 1-d array of numpy 1-d arrays. Nested 1-d numpy arrays are inconvenient to work with inside a udf in practice.
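      To reproduce the same shape mismatch without a Spark session, here is a minimal numpy/pandas sketch; the `as_object_array` helper is hypothetical, and the structure it builds is an assumption based on the exception output above:

      import numpy as np
      import pandas as pd

      def as_object_array(items):
          # Hypothetical helper: build a 1-d object array whose elements are
          # kept as-is (np.array would collapse equal-shape rows into a 2-d array).
          out = np.empty(len(items), dtype=object)
          for i, item in enumerate(items):
              out[i] = item
          return out

      # Assumed to mirror what the udf receives: a Series whose elements are
      # 1-d object arrays holding 1-d float arrays.
      x = pd.Series([as_object_array([np.asarray(row) for row in block])
                     for block in np.random.rand(11, 3, 5).tolist()])

      print(x.values.shape)            # (11,)
      print(x.values[0].shape)         # (3,)
      print(x.values[0][0].shape)      # (5,)
      print(np.stack(x.values).shape)  # (11, 3), one level short of (11, 3, 5)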


      For example, I need an ndarray of shape (11, 3, 5) in the udf so that I can make use of numpy's vectorized operations. If I were given the list of lists intact, I could simply do `np.stack(x.values)`. However, that doesn't work here, because what I receive is a nested numpy 1-d array.
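      Until the conversion behavior changes, a possible workaround (a sketch, assuming the nested structure shown by the exception above) is to stack one level at a time inside the udf:

      # x.values is a (11,) object array of (3,) object arrays
      # of (5,) float arrays, so stack twice to recover (11, 3, 5).
      batch = np.stack([np.stack(row) for row in x.values])
      # batch.shape == (11, 3, 5), dtype float64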


      Attachments

        Activity

          People

            Assignee: Unassigned
            Reporter: colin fang (colinfang)
            Votes: 0
            Watchers: 3
