  Spark / SPARK-23410 Unable to read jsons in charset different from UTF-8 / SPARK-23724

Custom record separator for jsons in charsets different from UTF-8


    Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.4.0
    • Fix Version/s: 2.4.0
    • Component/s: SQL
    • Labels: None

      Description

      The option should define the sequence of bytes between two consecutive JSON records. Currently the separator is detected automatically by the Hadoop library:
       
      https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LineReader.java#L185-L254
       
      The method recognizes only \r, \n and \r\n in UTF-8 encoding, so it does not work when the encoding of the input stream differs from UTF-8. The option should allow users to set the separator/delimiter of JSON records explicitly.
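      To illustrate the underlying problem, here is a minimal Python sketch (not Spark code; the `split_records` helper is hypothetical). In UTF-16LE the character '\n' encodes as the two-byte sequence 0x0A 0x00, so a reader scanning for the single UTF-8 byte 0x0A splits records mid-character; an encoding-aware separator, as proposed here, avoids that:

```python
# Why byte-level '\n' detection fails outside UTF-8, and what an
# encoding-aware record split looks like. Hypothetical helper, not Spark code.
def split_records(data: bytes, sep: str, encoding: str) -> list:
    """Split a byte stream into decoded records using a separator
    encoded in the stream's own charset."""
    sep_bytes = sep.encode(encoding)
    # Note: a production reader must also respect character boundaries;
    # a naive byte-level split is enough for this illustration.
    return [chunk.decode(encoding) for chunk in data.split(sep_bytes) if chunk]

records = ['{"a": 1}', '{"a": 2}']
data = "\n".join(records).encode("UTF-16LE")

# '\n' is two bytes in UTF-16LE, not the single UTF-8 byte 0x0A.
assert "\n".encode("UTF-16LE") == b"\n\x00"
assert split_records(data, "\n", "UTF-16LE") == records
```

      In Spark itself, the fix shipped in 2.4.0 (per the Fix Version above) exposes the separator as the JSON datasource's `lineSep` option, interpreted in the charset given by the `encoding` option.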


            People

            • Assignee:
              maxgekk Maxim Gekk
            • Reporter:
              maxgekk Maxim Gekk
            • Votes:
              0
            • Watchers:
              4

              Dates

              • Created:
                Updated:
                Resolved: