SPARK-30706

TimeZone in writing pure date type in CSV output


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Incomplete
    • Affects Version/s: 2.4.3
    • Fix Version/s: None
    • Component/s: Spark Shell

    Description

If I read a string date from a CSV file, cast it to the date type, and write it back to a CSV file on a machine whose timezone is west of Greenwich, the written date is one day earlier than the original. Repeating this read-cast-write cycle in a loop therefore shifts the date further into the past on every iteration.

      If the spark-shell runs east of Greenwich, all is OK.

      Writing to Parquet is also OK.
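
      The one-day shift is consistent with a pure date being materialized as midnight in one timezone and then re-rendered in another. A minimal JVM-level sketch of that mechanism (plain Java, outside Spark; the zone America/New_York and the store-as-midnight-UTC framing are illustrative assumptions, not taken from Spark's internals):

      ```java
      import java.time.LocalDate;
      import java.time.ZoneId;
      import java.time.ZonedDateTime;

      public class DateShiftDemo {
          public static void main(String[] args) {
              // A pure date, as parsed from the CSV string.
              LocalDate stored = LocalDate.parse("2015-09-22");

              // Assumed: the date is kept as midnight UTC internally.
              ZonedDateTime midnightUtc = stored.atStartOfDay(ZoneId.of("UTC"));

              // Re-interpreting that same instant in a zone west of
              // Greenwich (UTC-4 in September) lands on the evening
              // of the PREVIOUS day, so the rendered date loses a day.
              ZoneId west = ZoneId.of("America/New_York");
              LocalDate rendered = midnightUtc.toInstant()
                                              .atZone(west)
                                              .toLocalDate();

              System.out.println(rendered); // prints 2015-09-21
          }
      }
      ```

      Each write-then-read cycle repeats this conversion, which would explain the cumulative drift into the past.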

      Example of code:

      //
      // Run in spark-shell; to_date and col come from org.apache.spark.sql.functions
      import org.apache.spark.sql.functions.{col, to_date}

      val test_5_load = "hdfs://192.168.44.161:8020/db/wbiernacki/test_5_load.csv"
      val test_5_save = "hdfs://192.168.44.161:8020/db/wbiernacki/test_5_save.csv"

      val test_5 = spark.read.format("csv")
        .option("header", "true")
        .load(test_5_load)
        .withColumn("begin", to_date(col("begin"), "yyyy-MM-dd"))
        .withColumn("end", to_date(col("end"), "yyyy-MM-dd"))

      test_5.show()

      test_5
        .write.mode("overwrite")
        .format("csv")
        .option("header", "true")
        .save(test_5_save)

       Please run this a few times. The test_5_load.csv data, as displayed by test_5.show(), looks like:

      +--------+----------+----------+----+
      | patient|     begin|       end| new|
      +--------+----------+----------+----+
      |waldemar|2015-09-22|2015-09-23|old1|
      +--------+----------+----------+----+

       

      Attachments

        1. DateZoneBug.zip (3 kB), attached by Waldemar

          People

            Assignee: Unassigned
            Reporter: Waldemar Biernacki
            Votes: 0
            Watchers: 4
