- Type: New Feature
- Status: Resolved
- Priority: Minor
- Resolution: Won't Fix
- Affects Version/s: None
- Fix Version/s: None
- Component/s: io
- Labels: None
Quoting from BigTable paper: "Many clients use a two-pass custom compression scheme. The first pass uses Bentley and McIlroy's scheme, which compresses long common strings across a large window. The second pass uses a fast compression algorithm that looks for repetitions in a small 16 KB window of the data. Both compression passes are very fast—they encode at 100-200 MB/s, and decode at 400-1000 MB/s on modern machines."
The goal of this patch is to integrate a similar two-pass compression scheme into HBase.
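To make the first pass concrete, here is a minimal sketch (not the actual BMDiff or HBase code) of the idea behind Bentley and McIlroy's scheme: remember fingerprints of earlier blocks across a large window and replace long repeated strings with copy tokens, leaving the residual literals for a fast small-window second pass. The class name, block size, and textual token format are assumptions made for this illustration; a real encoder uses rolling fingerprints rather than exact-match hashing and emits a binary format.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Simplified illustration of the long-common-string first pass.
// Hypothetical sketch only; block size and token format are assumptions.
public class LongCommonStringSketch {

    private static final int BLOCK = 32;  // assumed block size for the sketch

    public static String encode(byte[] in) {
        Map<String, Integer> seen = new HashMap<>();  // block contents -> earlier offset
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < in.length) {
            if (i + BLOCK <= in.length) {
                String key = new String(in, i, BLOCK, StandardCharsets.ISO_8859_1);
                Integer prev = seen.get(key);
                if (prev != null) {
                    // Found an earlier occurrence; extend the match as far as possible.
                    int len = BLOCK;
                    while (i + len < in.length
                            && prev + len < i
                            && in[prev + len] == in[i + len]) {
                        len++;
                    }
                    out.append("<copy ").append(prev).append(',').append(len).append('>');
                    i += len;
                    continue;
                }
                // Only remember blocks starting on aligned boundaries, which keeps
                // the table small over a large window, as in Bentley-McIlroy.
                if (i % BLOCK == 0) {
                    seen.put(key, i);
                }
            }
            out.append((char) (in[i] & 0xff));  // literal byte, left for the second pass
            i++;
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String s = "HBase stores data in HFiles. ".repeat(4)
                 + "A fast second pass would then compress the literals.";
        System.out.println(encode(s.getBytes(StandardCharsets.ISO_8859_1)));
    }
}
```

In a two-pass setup like the one described in the paper, the output of this stage would then be fed through a fast local compressor (the 16 KB-window second pass), so repetitions both far apart and close together are removed cheaply.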
- depends upon:
  - HADOOP-5793 High speed compression algorithm like BMDiff (Open)
  - HBASE-2681 Graceful fallback to NONE when the native compression algo is not found (Closed)