Details
- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Duplicate
- Affects Version/s: 2.3.0
- Fix Version/s: None
- Component/s: None
Description
Since SPARK-8501, Spark does not create an ORC file for an empty data set. However, SPARK-21669 tries to get the length of the written file and fails with a FileNotFoundException. This is a regression in 2.3.0 only. We should add a test case to prevent future regressions.
scala> Seq("str").toDS.limit(0).write.format("orc").save("/tmp/a")
17/10/11 19:28:59 ERROR Utils: Aborting task
java.io.FileNotFoundException: File file:/tmp/a/_temporary/0/_temporary/attempt_20171011192859_0000_m_000000_0/part-00000-aa56c3cf-ec35-48f1-bb73-23ad1480e917-c000.snappy.orc does not exist
  at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
  at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
  at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
  at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
  at org.apache.spark.sql.execution.datasources.BasicWriteTaskStatsTracker.getFileSize(BasicWriteStatsTracker.scala:60)
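The failure mode above is a stats tracker calling getFileStatus on a file that was never created. A minimal sketch of the defensive lookup (this is an illustration, not Spark's actual BasicWriteTaskStatsTracker code; safeFileSize is a hypothetical helper using plain java.nio rather than the Hadoop FileSystem API):

```scala
import java.nio.file.{Files, Paths}

// Hypothetical helper: report the size of a written output file,
// tolerating its absence. Since SPARK-8501, an empty partition
// writes no ORC file at all, so the file may legitimately be
// missing when metrics are collected.
def safeFileSize(path: String): Option[Long] = {
  val p = Paths.get(path)
  // Treat a missing file as "no bytes written" instead of
  // letting a FileNotFoundException abort the task.
  if (Files.exists(p)) Some(Files.size(p)) else None
}

// A missing part file contributes nothing to the write metrics.
println(safeFileSize("/tmp/definitely-missing.snappy.orc"))
```

This mirrors the approach of SPARK-21762, where metrics collection is made robust to an output file that is not (yet) visible.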
Attachments
Issue Links
- blocks
  - SPARK-20901 Feature parity for ORC with Parquet (Open)
- duplicates
  - SPARK-21762 FileFormatWriter/BasicWriteTaskStatsTracker metrics collection fails if a new file isn't yet visible (Resolved)
- links to