I have the following code:
from pyspark.sql import SQLContext

d1 = [{'name': 'bob', 'country': 'usa', 'age': 1},
      {'name': 'alice', 'country': 'jpn', 'age': 2},
      {'name': 'carol', 'country': 'ire', 'age': 3}]
d2 = [{'name': 'bob', 'country': 'usa', 'colour': 'red'},
      {'name': 'carol', 'country': 'ire', 'colour': 'green'}]

r1 = sc.parallelize(d1)
r2 = sc.parallelize(d2)

sqlContext = SQLContext(sc)
df1 = sqlContext.createDataFrame(d1)
df2 = sqlContext.createDataFrame(d2)
df1.join(df2,
         (df1.name == df2.name) & (df1.country == df2.country),
         'left_outer').collect()
When I run it I get the following (notice that in the first row all the join keys are taken from the right side, and so are blanked out):
[Row(age=2, country=None, name=None, colour=None, country=None, name=None),
 Row(age=1, country=u'usa', name=u'bob', colour=u'red', country=u'usa', name=u'bob'),
 Row(age=3, country=u'ire', name=u'carol', colour=u'green', country=u'ire', name=u'alice')]
I would expect to get (though ideally without duplicate columns):
[Row(age=2, country=u'jpn', name=u'alice', colour=None, country=None, name=None),
 Row(age=1, country=u'usa', name=u'bob', colour=u'red', country=u'usa', name=u'bob'),
 Row(age=3, country=u'ire', name=u'carol', colour=u'green', country=u'ire', name=u'carol')]
The workaround for now is this rather clunky piece of code:
df2 = sqlContext.createDataFrame(d2) \
    .withColumnRenamed('name', 'name2') \
    .withColumnRenamed('country', 'country2')
df1.join(df2,
         (df1.name == df2.name2) & (df1.country == df2.country2),
         'left_outer').collect()
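A tidier variant, assuming your Spark version accepts a list of column names as the join condition, would be an equi-join on the shared columns, which should also avoid the duplicate columns (untested sketch):

# Join on the shared column names; the result keeps a single
# 'name' and a single 'country' column.
df1.join(df2, ['name', 'country'], 'left_outer').collect()
# Non-matching rows (e.g. alice's) should come back with colour=None.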
Also, .show() displays the correct result:
sqlContext = SQLContext(sc)
df1 = sqlContext.createDataFrame(d1)
df2 = sqlContext.createDataFrame(d2)
df1.join(df2,
         (df1.name == df2.name) & (df1.country == df2.country),
         'left_outer').show()

+---+-------+-----+------+-------+-----+
|age|country| name|colour|country| name|
+---+-------+-----+------+-------+-----+
|  3|    ire|carol| green|    ire|carol|
|  2|    jpn|alice|  null|   null| null|
|  1|    usa|  bob|   red|    usa|  bob|
+---+-------+-----+------+-------+-----+
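One more workaround I've been sketching, on the assumption that the problem is ambiguous column resolution between the two sides (the names a and b are just illustrative aliases), is to alias each DataFrame and qualify the join keys; I haven't verified whether this hits the same bug:

from pyspark.sql.functions import col

a = df1.alias('a')
b = df2.alias('b')
# Qualifying each key with its alias means neither side can shadow the other.
a.join(b,
       (col('a.name') == col('b.name')) & (col('a.country') == col('b.country')),
       'left_outer').collect()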