Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 1.12.2
- Fix Version/s: None
- Component/s: None
Description
The code for returning Compressor objects to the caller goes to some lengths to achieve thread safety, including keeping Codec objects in an Apache Commons pool with thread-safe borrow semantics. This is all undone by the BytesCompressor and BytesDecompressor maps in org.apache.parquet.hadoop.CodecFactory, which end up caching a single compressor and a single decompressor instance per codec due to the code in CodecFactory.getCompressor and CodecFactory.getDecompressor. When the caller runs multiple threads, those threads end up sharing compressor and decompressor instances.
For compressors based on Xerial Snappy this bug has no effect, because that library is itself thread safe. But when Hadoop's BuiltInGzipCompressor is selected for the CompressionCodecName.GZIP case, serious problems ensue: that class is not thread safe, and sharing one instance of it between threads produces both silent data corruption and JVM crashes.
To fix this situation, parquet-mr should stop caching single compressor and decompressor instances.
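The hazard can be sketched in isolation. The following is a minimal illustration, not parquet-mr's actual code: Compressor, GzipCompressor, and the method names are hypothetical stand-ins. It shows how a map-based cache hands the same instance to every thread, and how a per-thread instance (here via ThreadLocal, one possible remedy) avoids the sharing.

```java
import java.util.HashMap;
import java.util.Map;

public class CodecCacheSketch {
    // Hypothetical stand-ins for the real compressor types.
    interface Compressor {}
    static class GzipCompressor implements Compressor {}

    // Problematic pattern: one instance per codec name, shared by all threads.
    static final Map<String, Compressor> sharedCache = new HashMap<>();

    static synchronized Compressor getSharedCompressor(String codec) {
        return sharedCache.computeIfAbsent(codec, c -> new GzipCompressor());
    }

    // One possible fix: give each thread its own instance.
    static final ThreadLocal<Compressor> perThread =
            ThreadLocal.withInitial(GzipCompressor::new);

    public static void main(String[] args) throws InterruptedException {
        Compressor[] shared = new Compressor[2];
        Thread t1 = new Thread(() -> shared[0] = getSharedCompressor("gzip"));
        Thread t2 = new Thread(() -> shared[1] = getSharedCompressor("gzip"));
        t1.start(); t1.join();
        t2.start(); t2.join();
        // Both threads received the same instance -- unsafe when the
        // underlying compressor is not thread safe.
        System.out.println("shared cache, same instance: " + (shared[0] == shared[1]));

        Compressor[] local = new Compressor[2];
        Thread t3 = new Thread(() -> local[0] = perThread.get());
        Thread t4 = new Thread(() -> local[1] = perThread.get());
        t3.start(); t3.join();
        t4.start(); t4.join();
        // Each thread got its own instance.
        System.out.println("thread-local, same instance: " + (local[0] == local[1]));
    }
}
```

Running it prints `shared cache, same instance: true` followed by `thread-local, same instance: false`, which is exactly the difference between the current caching behavior and a per-thread alternative.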
Attachments

Issue Links
- fixes DRILL-8139 Parquet CodecFactory thread safety bug (Resolved)