Description
PySpark appears to ignore the timezone information attached to Python datetime objects when filtering on them (and when working with them in general).
Please see the example below. The filter generated in the query plan is 5 hours off (my machine is in EST, UTC-5).
In [1]: from datetime import datetime
In [2]: from pytz import UTC
In [3]: from pyspark.sql.types import StructType, StructField, TimestampType
In [4]: df = sqlContext.createDataFrame([], StructType([StructField("dt", TimestampType())]))
In [5]: df.filter(df.dt > datetime(2000, 1, 1, tzinfo=UTC)).explain()
Filter (dt#9 > 946702800000000)
Scan PhysicalRDD[dt#9]
Note that 946702800000000 == Sat 1 Jan 2000 05:00:00 UTC, i.e. midnight EST. The expected value is 946684800000000 (Sat 1 Jan 2000 00:00:00 UTC); the tzinfo is apparently discarded and the datetime treated as local time.
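For reference, a minimal sketch (plain Python, independent of Spark, assuming pytz supplies the UTC object used above) of the conversion I would expect: a timezone-aware datetime maps to a single epoch value regardless of the machine's local timezone.

from datetime import datetime
from pytz import UTC

# A timezone-aware datetime has an unambiguous epoch offset, no matter
# what local timezone the converting machine is in.
dt = datetime(2000, 1, 1, tzinfo=UTC)
expected_us = int(dt.timestamp() * 1_000_000)
print(expected_us)  # 946684800000000 (Sat 1 Jan 2000 00:00:00 UTC)
# PySpark instead produced 946702800000000, which is the naive
# local-time interpretation on an EST (UTC-5) machine.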