Type: Bug
Status: Resolved
Priority: Major
Resolution: Fixed
Affects Version/s: None
Fix Version/s: 1.6.0
Component/s: None
Labels: None
Repro: Run a Scalding job that writes Parquet files to a folder. No _metadata or _common_metadata file is created.
Impact: Potential performance problem when Parquet metadata is read on the client side, which is the case for Spark SQL.
Cause: The metadata-writing logic exists in the mapreduce API of Parquet but not in the mapred API.
Is a clone of: PARQUET-206 "MapredParquetOutputCommitter does not work in hadoop2" (Resolved)