Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Won't Fix
Description
Currently DataFrame.mapPartitions is analogous to DataFrame.rdd.mapPartitions in both Spark and PySpark: the function f applied to each partition must operate on an iterator of rows. This is very inefficient in Python. It would be more logical and efficient if f operated on a pandas DataFrame and also returned a DataFrame, avoiding slow row-by-row iteration in Python.
Currently:

    import pandas as pd

    def apply_function(rows):
        df = pd.DataFrame(list(rows))
        df = df % 100  # Do something on df
        return df.values.tolist()

    table = sqlContext.read.parquet("")
    table = table.mapPartitions(apply_function)
The new apply function would accept a pandas DataFrame and return a DataFrame:
    def apply_function(df):
        df = df % 100  # Do something on df
        return df
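In the meantime, the proposed interface can be approximated on top of the existing iterator-based mapPartitions with a small adapter that materializes each partition into a pandas DataFrame, applies the user's function, and yields rows back. This is a minimal sketch; pandas_apply is a hypothetical helper, not part of the Spark API, and it assumes each partition fits in memory as a DataFrame.

    import pandas as pd

    def pandas_apply(f):
        """Adapt a DataFrame -> DataFrame function f to the iterator-of-rows
        contract expected by RDD.mapPartitions. (Hypothetical helper.)"""
        def wrapped(rows):
            # Materialize the whole partition once instead of iterating row by row
            df = pd.DataFrame(list(rows))
            result = f(df)
            # Yield plain row lists so Spark can rebuild the partition
            for row in result.itertuples(index=False):
                yield list(row)
        return wrapped

    # Usage with the example above:
    # table.rdd.mapPartitions(pandas_apply(apply_function))

Note that Spark 3.0 later added DataFrame.mapInPandas, which provides essentially this interface natively (an iterator of pandas DataFrames in, an iterator of pandas DataFrames out).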