Description
Here is some example code:
import java.util.TimeZone

// Write the same TIMESTAMP_NTZ and TIMESTAMP values to ORC, Parquet and Avro.
TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"))
sql("set spark.sql.session.timeZone=America/Los_Angeles")
val df = sql("select timestamp_ntz '2021-06-01 00:00:00' ts_ntz, timestamp '2021-06-01 00:00:00' ts")
df.write.mode("overwrite").orc("ts_ntz_orc")
df.write.mode("overwrite").parquet("ts_ntz_parquet")
df.write.mode("overwrite").format("avro").save("ts_ntz_avro")

// Read the three files back in one query.
val query = """
  select 'orc', *
  from `orc`.`ts_ntz_orc`
  union all
  select 'parquet', *
  from `parquet`.`ts_ntz_parquet`
  union all
  select 'avro', *
  from `avro`.`ts_ntz_avro`
"""

// Show the result under three different JVM/session time zones.
val tzs = Seq("America/Los_Angeles", "UTC", "Europe/Amsterdam")
for (tz <- tzs) {
  TimeZone.setDefault(TimeZone.getTimeZone(tz))
  sql(s"set spark.sql.session.timeZone=$tz")
  println(s"Time zone is ${TimeZone.getDefault.getID}")
  sql(query).show(false)
}
The output shown below looks strange: with ORC, changing the time zone shifts ts_ntz but leaves ts fixed, while with Parquet and Avro it shifts ts but leaves ts_ntz fixed.
Time zone is America/Los_Angeles
+-------+-------------------+-------------------+
|orc    |ts_ntz             |ts                 |
+-------+-------------------+-------------------+
|orc    |2021-06-01 00:00:00|2021-06-01 00:00:00|
|parquet|2021-06-01 00:00:00|2021-06-01 00:00:00|
|avro   |2021-06-01 00:00:00|2021-06-01 00:00:00|
+-------+-------------------+-------------------+

Time zone is UTC
+-------+-------------------+-------------------+
|orc    |ts_ntz             |ts                 |
+-------+-------------------+-------------------+
|orc    |2021-05-31 17:00:00|2021-06-01 00:00:00|
|parquet|2021-06-01 00:00:00|2021-06-01 07:00:00|
|avro   |2021-06-01 00:00:00|2021-06-01 07:00:00|
+-------+-------------------+-------------------+

Time zone is Europe/Amsterdam
+-------+-------------------+-------------------+
|orc    |ts_ntz             |ts                 |
+-------+-------------------+-------------------+
|orc    |2021-05-31 15:00:00|2021-06-01 00:00:00|
|parquet|2021-06-01 00:00:00|2021-06-01 09:00:00|
|avro   |2021-06-01 00:00:00|2021-06-01 09:00:00|
+-------+-------------------+-------------------+
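For reference, here is a minimal sketch of the invariant that seems to be violated, assuming the same spark-shell session and the paths written above (the expected value and assertion loop are illustrative, not part of the original repro). A TIMESTAMP_NTZ value carries no time zone, so casting it to string should yield the same wall-clock reading under every session time zone; per the output above, ORC fails this check while Parquet and Avro pass it.

import java.util.TimeZone

// Sketch: assert that ts_ntz reads back with the same wall-clock value
// under every session time zone.
val expected = "2021-06-01 00:00:00"
for (tz <- Seq("America/Los_Angeles", "UTC", "Europe/Amsterdam")) {
  TimeZone.setDefault(TimeZone.getTimeZone(tz))
  sql(s"set spark.sql.session.timeZone=$tz")
  val ntz = spark.read.orc("ts_ntz_orc")
    .selectExpr("cast(ts_ntz as string)")
    .head.getString(0)
  // Per the output above, this fails for ORC under UTC and Europe/Amsterdam.
  assert(ntz == expected, s"ts_ntz read back as $ntz under $tz")
}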