SPARK-23724: Custom record separator for jsons in charsets different from UTF-8
Parent: SPARK-23410 Unable to read jsons in charset different from UTF-8


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.0
    • Fix Version/s: 2.4.0
    • Component/s: SQL
    • Labels: None

    Description

      The option should define a sequence of bytes between two consecutive JSON records. Currently, the record separator is detected automatically by the Hadoop library:

      https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java#L185-L254

      That method recognizes only \r, \n and \r\n in the UTF-8 encoding, so it does not work when the encoding of the input stream differs from UTF-8. The option should allow users to set the separator/delimiter of JSON records explicitly.
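
      For illustration, here is a minimal sketch of how such an option could be used from the DataFrameReader, assuming it lands as lineSep alongside the encoding option of the JSON datasource (as in Spark 2.4); the input path is hypothetical:

      {code:scala}
import org.apache.spark.sql.SparkSession

object ReadUtf16Json {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("custom-json-record-separator")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical input: JSON records encoded in UTF-16LE, one record per line.
    // Hadoop's LineReader cannot auto-detect the separator in UTF-16, so both
    // the charset and the record separator are set explicitly; the lineSep
    // string is interpreted in the charset given by the "encoding" option.
    val df = spark.read
      .option("encoding", "UTF-16LE")
      .option("lineSep", "\n")
      .json("/path/to/records.json") // hypothetical path

    df.show()
    spark.stop()
  }
}
      {code}

      With lineSep set explicitly, Hadoop's auto-detection is bypassed, so encodings in which the newline is a multi-byte sequence can still be split into records correctly.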


    People

    • Assignee: Max Gekk (maxgekk)
    • Reporter: Max Gekk (maxgekk)
    • Votes: 0
    • Watchers: 4
