SPARK-24540: Support for multiple character delimiter in Spark CSV read


    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.3.1
    • Fix Version/s: 3.0.0
    • Component/s: SQL
    • Labels: None

      Description

      Currently, the delimiter option used by Spark (since 2.0) to read and split CSV files/data supports only a single-character delimiter. If we try to provide a multi-character delimiter, we observe the following error message.

      e.g.:

      Dataset<Row> df = spark.read()
              .option("inferSchema", "true")
              .option("header", "false")
              .option("delimiter", ", ")
              .csv("C:\\test.txt");

      Exception in thread "main" java.lang.IllegalArgumentException: Delimiter cannot be more than one character: , 

      at org.apache.spark.sql.execution.datasources.csv.CSVUtils$.toChar(CSVUtils.scala:111)
      at org.apache.spark.sql.execution.datasources.csv.CSVOptions.<init>(CSVOptions.scala:83)
      at org.apache.spark.sql.execution.datasources.csv.CSVOptions.<init>(CSVOptions.scala:39)
      at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.inferSchema(CSVFileFormat.scala:55)
      at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:202)
      at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$8.apply(DataSource.scala:202)
      at scala.Option.orElse(Option.scala:289)
      at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:201)
      at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:392)
      at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
      at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
      at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:596)
      at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:473)

       

      In practice, the data to be processed often contains multi-character delimiters, and presently we must manually clean up the source/input files. This does not scale in large applications that consume numerous files.

      There is a work-around: read the data as plain text and split it manually (see the sketch below), but in my opinion this defeats the purpose, advantage, and efficiency of a direct read from a CSV file.
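      A minimal sketch of that work-around, assuming the two-character delimiter ", ", no header row, and three columns with illustrative names c0/c1/c2 (none of which come from the original report):

      import org.apache.spark.sql.Dataset;
      import org.apache.spark.sql.Row;
      import static org.apache.spark.sql.functions.col;
      import static org.apache.spark.sql.functions.split;

      // Read each line as a single string column named "value".
      Dataset<Row> lines = spark.read().text("C:\\test.txt");

      // split() takes a regular expression; ", " contains no regex
      // metacharacters, so it matches the two-character delimiter literally.
      Dataset<Row> df = lines
              .withColumn("parts", split(col("value"), ", "))
              .selectExpr("parts[0] AS c0", "parts[1] AS c1", "parts[2] AS c2");

      Note that this path loses CSV-specific features such as schema inference and quote handling, which is precisely why native multi-character delimiter support is preferable.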

       
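      Since the resolution is Fixed with fix version 3.0.0, the original snippet should work as written on Spark 3.0.0 and later, with the multi-character string passed through to the underlying parser:

      Dataset<Row> df = spark.read()
              .option("inferSchema", "true")
              .option("header", "false")
              .option("delimiter", ", ")   // accepted on 3.0.0+; throws IllegalArgumentException on 2.x
              .csv("C:\\test.txt");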


            People

            • Assignee: Jeff Evans (jeff.w.evans)
            • Reporter: Ashwin K (AshwinK)

