Description
Currently, Spark checks the header of CSV files against the field names in the provided or inferred schema. The check is bypassed if the header does not exist and the CSV content is read from files. However, when the input CSV comes in as a dataset of strings, Spark always compares the first row to the user-specified or inferred schema. For example, parsing the following dataset:
val input = Seq("1,2").toDS()
spark.read.option("enforceSchema", false).csv(input)
throws the exception:
java.lang.IllegalArgumentException: CSV header does not conform to the schema.
 Header: 1, 2
 Schema: _c0, _c1
Expected: _c0 but found: 1
The comparison of the first row to the user-specified or inferred schema should be skipped when the first row is not a header.
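For reference, below is a minimal self-contained sketch of the reproduction (the object name, app name, and local master are illustrative assumptions, not from this report). With the default enforceSchema=true, the same read succeeds, because the header check is skipped and the schema is forcibly applied:

import org.apache.spark.sql.SparkSession

object CsvHeaderCheckRepro {
  def main(args: Array[String]): Unit = {
    // Illustrative local session; names are placeholders.
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("csv-header-check-repro")
      .getOrCreate()
    import spark.implicits._

    val input = Seq("1,2").toDS()

    // Default enforceSchema=true: the inferred schema is applied without
    // comparing the first row to it, so this read succeeds.
    spark.read.csv(input).show()

    // enforceSchema=false: the first row "1,2" is compared against the
    // inferred schema (_c0, _c1) even though it is data, not a header,
    // so csv(input) throws IllegalArgumentException.
    spark.read.option("enforceSchema", false).csv(input)

    spark.stop()
  }
}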
Issue Links
- is related to: SPARK-27873 "Csv reader, adding a corrupt record column causes error if enforceSchema=false" (Resolved)
- links to