Details
- Type: Bug
- Status: Open
- Priority: Critical
- Resolution: Unresolved
- Affects Version/s: 3.4.1
- Fix Version/s: None
Description
When reading a tab-separated file that contains lines consisting only of tabs (i.e. empty strings as the values of every column in that row), these rows are silently skipped (as if they were empty lines) and the resulting dataframe has fewer rows than expected.
This behavior is inconsistent with the behavior for e.g. semicolon-separated files, where the resulting dataframe contains a row with only empty string values.
A minimal reproducible example: a file containing this
a\tb\tc\r\n\t\t\r\n1\t2\t3
will create a dataframe with one row (a=1, b=2, c=3),
whereas this
a;b;c\r\n;;\r\n1;2;3
will be read as two rows (the first row contains only empty strings).
I used the following pyspark commands to read the dataframes:
spark.read.option("header","true").option("sep","\t").csv("<tabseparated file>").collect()
spark.read.option("header","true").option("sep",";").csv("<semicolon file>").collect()
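For completeness, a self-contained reproduction sketch; the local paths and SparkSession setup are illustrative, the row counts are the point of the report:

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").appName("tab-sep-skip-repro").getOrCreate()

# Write the two example files from above (identical content except for the separator).
with open("/tmp/tab.csv", "w", newline="") as f:
    f.write("a\tb\tc\r\n\t\t\r\n1\t2\t3")
with open("/tmp/semi.csv", "w", newline="") as f:
    f.write("a;b;c\r\n;;\r\n1;2;3")

tab_rows = spark.read.option("header", "true").option("sep", "\t").csv("/tmp/tab.csv").collect()
semi_rows = spark.read.option("header", "true").option("sep", ";").csv("/tmp/semi.csv").collect()

print(len(tab_rows))   # 1 -- the tab-only line is silently skipped
print(len(semi_rows))  # 2 -- the ";;" line is kept as a row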
I ran into this particularly on Databricks (I assume they use the same reader), but this Stack Overflow post indicates that this is an old issue that may have been carried over when the Databricks CSV reader was adopted in SPARK-12420.
I recommend at least adding a corresponding test case to the CSV reader.
Why this behavior is a problem:
- It violates some core assumptions:
  - a properly configured roundtrip via CSV write/read should result in the same set of rows (see the sketch after this list)
  - changing the CSV separator (when everything is properly escaped) should have no effect
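A sketch of the roundtrip point (not from the original report): with default options, nulls are written as empty fields, so an all-null row becomes a tab-only line that is then dropped on read. The path and column names are illustrative:

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").getOrCreate()

# One all-null row and one normal row; with default options nulls are written as empty fields.
df = spark.createDataFrame([(None, None, None), ("1", "2", "3")], "a string, b string, c string")
df.write.mode("overwrite").option("header", "true").option("sep", "\t").csv("/tmp/roundtrip")

back = spark.read.option("header", "true").option("sep", "\t").csv("/tmp/roundtrip")
print(df.count(), back.count())  # 2 vs 1 here -- the all-null row does not survive the roundtrip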
Potential resolutions:
- When the configured delimiter consists only of whitespace:
  - deactivate the "skip empty lines" feature
  - or skip only lines that are completely empty (only a (carriage return) newline)
- Change the "skip empty lines" feature to only skip lines that are completely empty (only contain a newline)
  - this may break some user code that relies on the current behavior
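Not from the report, but a possible user-side workaround until the reader changes: load the file through the text source, which does not drop whitespace-only lines, and split on the delimiter manually. The path, column names, and the crude header handling are illustrative:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[1]").getOrCreate()

# Every physical line becomes a row, including the tab-only line the CSV reader would skip.
lines = spark.read.text("/tmp/tab.csv")
cols = F.split(F.col("value"), "\t")
df = (lines
      .select(cols[0].alias("a"), cols[1].alias("b"), cols[2].alias("c"))
      .where(F.col("a") != "a"))   # crude header removal, good enough for this sketch
df.show()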