Details
- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 2.3.1
- Labels: None
Description
We have a Spark job that starts by reading ORC files under an S3 directory, and we noticed the job consumes a lot of memory when both the number of ORC files and the size of each file are large. The memory bloat went away with the following workaround.
1) Create a Dataset<Row> from a single ORC file.
Dataset<Row> rowsForFirstFile = spark.read().format("orc").load(oneFile);
2) When creating the Dataset<Row> from all files under the directory, pass in the schema from the previous Dataset.
Dataset<Row> rows = spark.read().schema(rowsForFirstFile.schema()).format("orc").load(path);
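For reference, here is a self-contained sketch of the workaround (assuming a SparkSession named spark; the class name and path parameters are hypothetical):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.StructType;

public class OrcSchemaWorkaround {
  // Reads all ORC files under `path` while inferring the schema from a single file.
  public static Dataset<Row> readOrcDirectory(SparkSession spark, String oneFile, String path) {
    // Step 1: read one ORC file only to obtain its schema.
    Dataset<Row> rowsForFirstFile = spark.read().format("orc").load(oneFile);
    StructType schema = rowsForFirstFile.schema();

    // Step 2: pass the schema explicitly so Spark does not run schema inference
    // over every file in the directory.
    return spark.read().schema(schema).format("orc").load(path);
  }
}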
I believe the issue is that, in order to infer the schema, a FileReader is created for every ORC file under the directory even though only the first one is actually used. Creating each FileReader loads the ORC file's metadata, so memory consumption becomes very high when there are many files under the directory.
The issue exists in both 2.0 and HEAD.
In 2.0, OrcFileOperator.readSchema is used.
In HEAD, OrcUtils.readSchema is used.
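To illustrate the cost, the following is a rough approximation of what per-file schema inference implies (not Spark's actual code, and the class/method names are hypothetical): each ORC Reader that is opened parses and holds that file's footer metadata.

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;
import org.apache.orc.TypeDescription;

public class NaiveSchemaInference {
  // Opens a Reader for every file even though only the first schema is used,
  // mirroring the behavior described above.
  public static TypeDescription readSchema(List<Path> orcFiles, Configuration conf)
      throws IOException {
    TypeDescription schema = null;
    for (Path file : orcFiles) {
      // Creating the Reader loads the file's footer/metadata into memory.
      Reader reader = OrcFile.createReader(file, OrcFile.readerOptions(conf));
      if (schema == null) {
        schema = reader.getSchema(); // only the first file's schema is needed
      }
    }
    return schema;
  }
}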