Details
- Type: Bug
- Priority: Major
- Status: Resolved
- Resolution: Fixed
- Version: 4.0.0
Description
The example below illustrates the issue: the row count of a multiline CSV DataFrame changes from 4 to 5 after calling cache().
>>> df = spark.read.option("multiline", "true").option("header", "true").option("escape", '"').csv("es-939111-data.csv")
>>> df.count()
4
>>> df.cache()
DataFrame[jobID: string, Name: string, City: string, Active: string]
>>> df.count()
5
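The 4-versus-5 mismatch is the kind of discrepancy you get when one code path counts quote-aware logical records while another counts physical lines, since with multiline data a quoted field can contain an embedded newline. The sketch below, using hypothetical data (NOT the actual es-939111-data.csv, whose contents are not included in this report) and Python's standard csv module rather than Spark, only illustrates how the two counting strategies diverge:

```python
import csv
import io

# Hypothetical CSV: one quoted field contains a doubled quote ("")
# and an embedded newline, so the file has more physical lines than
# logical records. This mirrors the escape='"' / multiline=true setup.
data = (
    'jobID,Name,City,Active\n'
    '1,Alice,Austin,true\n'
    '2,"Bob ""B""","New\nYork",true\n'
    '3,Carol,Boston,false\n'
    '4,Dave,Denver,true\n'
)

# Naive newline counting sees the embedded newline as a row boundary:
# 6 physical lines, i.e. 5 "rows" after subtracting the header.
physical_rows = data.count('\n') - 1

# A quote-aware parser treats the embedded newline as field content
# and recovers 4 logical records after the header.
records = list(csv.reader(io.StringIO(data)))
logical_rows = len(records) - 1

print(physical_rows, logical_rows)  # → 5 4
```

If the cached and uncached paths in Spark disagree in this way for multiline input, that would produce exactly the 4-then-5 behavior shown in the transcript; this is an illustration of the symptom, not a confirmed root-cause analysis.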