Type: Bug
Status: Resolved
Priority: Major
Resolution: Fixed
Affects Version/s: None
Fix Version/s: 3.0.0-alpha-1, 2.3.0, 2.0.6, 2.2.1, 2.1.6
Component/s: Compaction
Labels: None
Hadoop Flags: Reviewed
Copied from chenxu's comment under HBASE-21879: https://issues.apache.org/jira/browse/HBASE-21879?focusedCommentId=16862244&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16862244
In Compactor#compact, we have the following:
protected List<Path> compact(final CompactionRequest request... ...
  try {
    ...
  } finally {
    Closeables.close(scanner, true);
    if (!finished && writer != null) {
      abortWriter(writer);
    }
  }
  assert finished : "We should have exited the method on all error paths";
  assert writer != null : "Writer should be non-null if no error";
  return commitWriter(writer, fd, request);
}
Should we call writer#beforeShipped() before Closeables.close(scanner, true), in order to copy the cells' data out of the ByteBuff before it is released? Otherwise commitWriter may go wrong in the following call stack:
Compactor#commitWriter -> HFileWriterImpl#close -> HFileWriterImpl#writeFileInfo -> HFileWriterImpl#finishFileInfo
protected void finishFileInfo() throws IOException {
  if (lastCell != null) {
    // Make a copy. The copy is stuffed into our fileinfo map. Needs a clean
    // byte buffer. Won't take a tuple.
    byte [] lastKey = PrivateCellUtil.getCellKeySerializedAsKeyValueKey(this.lastCell);
    fileInfo.append(FileInfo.LASTKEY, lastKey, false);
  }
  ...
}
This goes wrong because lastCell may still refer to a reused ByteBuff by the time the file info is written.
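To make the hazard concrete, here is a small self-contained demo (plain JDK, not HBase code) of keeping only a reference into a reusable buffer: once the buffer is recycled, the reference silently reads the new contents, while a defensive copy taken beforehand stays correct.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Demo of the reused-buffer hazard described above (not HBase code).
public class ReusedBufferDemo {
  public static void main(String[] args) {
    // Stands in for a pooled ByteBuff that the scanner reuses across blocks.
    ByteBuffer pooled = ByteBuffer.allocate(16);
    pooled.put("row-A".getBytes(StandardCharsets.UTF_8));

    // A lastCell-style reference: it shares the pooled buffer's storage.
    ByteBuffer lastCellRef = pooled.duplicate();

    // A defensive copy, i.e. what beforeShipped() is meant to achieve.
    byte[] lastCellCopy = new byte[5];
    pooled.flip();
    pooled.get(lastCellCopy);

    // The scanner is closed; the buffer goes back to the pool and is reused.
    pooled.clear();
    pooled.put("row-B".getBytes(StandardCharsets.UTF_8));

    lastCellRef.position(0);
    lastCellRef.limit(5);
    byte[] stale = new byte[5];
    lastCellRef.get(stale);

    // Prints "row-B": the reference went stale. The copy still reads "row-A".
    System.out.println(new String(stale, StandardCharsets.UTF_8));
    System.out.println(new String(lastCellCopy, StandardCharsets.UTF_8));
  }
}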
I checked the code; it's a bug and will need to be fixed on all 2.x branches and master. A sketch of the change is below.
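For reference, a minimal sketch of the change proposed above, applied to the finally block of Compactor#compact. This is a sketch only: it assumes the writer exposes ShipperListener#beforeShipped() (as far as I can tell, HFile.Writer already implements ShipperListener in 2.x), and the real patch would need to enforce that on the writer's type.

} finally {
  // Proposed: let the writer deep-copy any cells (e.g. lastCell) that still
  // point into the scanner's pooled ByteBuffs, before closing the scanner
  // releases those buffers back to the pool.
  if (writer != null) {
    writer.beforeShipped();
  }
  Closeables.close(scanner, true);
  if (!finished && writer != null) {
    abortWriter(writer);
  }
}

With this in place, finishFileInfo() serializes a stable copy of the last key instead of whatever the recycled buffer happens to contain at commit time.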
relates to: HBASE-21879 Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose (Resolved)