Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Cannot Reproduce
- Affects Version/s: 2.2.0
- Fix Version/s: None
- Component/s: None
Description
I have intermittently faced an issue where certain Spark jobs writing Parquet files appear to succeed, but the written .parquet directory in HDFS is empty (with no _SUCCESS or _metadata parts, even). Surprisingly, no errors are thrown from the Spark DataFrame writer.
However, when attempting to read the written data back, Spark throws the error:
Unable to infer schema for Parquet. It must be specified manually