Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Fixed
- Component/s: None
- Labels: None
- Hadoop Flags: Reviewed
Description
Testing 0.96.1rc1.
With one process incrementing a single column of a single row in a table, flushing the table or killing the hosting regionserver (kill or kill -9) loses data. The flush and kill cases are likely the same problem (a kill triggers a flush); kill -9 may or may not have the same root cause.
5 nodes
hadoop 2.1.0 (a pre-cdh5b1 hdfs).
hbase 0.96.1 rc1
Test: 250000 increments on a single row and a single column with various numbers of client threads (IncrementBlaster). Verify we have a count of 250000 after the run (IncrementVerifier).
Run 1: no fault injection, 5 runs. count = 250000 on every run. Correctness verified. 1638 inc/s throughput.
Run 2: flush of the table holding the incremented row. count = 246875 != 250000. Correctness failed. 1517 inc/s throughput.
Run 3: kill of the regionserver hosting the incremented row. count = 243750 != 250000. Correctness failed. 1451 inc/s throughput.
Run 4: one kill -9 of the regionserver hosting the incremented row. count = 246878 != 250000. Correctness failed. 1395 inc/s throughput (including recovery).
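The harness logic described above can be sketched as follows. This is a Python stand-in, not the actual IncrementBlaster/IncrementVerifier tools: the hypothetical FakeTable class replaces the real HBase client (where increment would be a Table.increment RPC), so this model shows only the blaster/verifier shape and cannot reproduce the flush/kill data loss.

```python
import threading

TOTAL_INCREMENTS = 250_000
NUM_THREADS = 10

class FakeTable:
    """Hypothetical stand-in for an HBase table holding one counter cell."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self, amount=1):
        # In the real test this would be an HBase increment RPC on
        # a single row/column; here it is just a locked in-memory add.
        with self._lock:
            self._value += amount

    def get(self):
        with self._lock:
            return self._value

def blaster(table, count):
    # One client thread: issue `count` increments against the same cell.
    for _ in range(count):
        table.increment(1)

def run_test():
    table = FakeTable()
    per_thread = TOTAL_INCREMENTS // NUM_THREADS
    threads = [threading.Thread(target=blaster, args=(table, per_thread))
               for _ in range(NUM_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Verifier step: read the final count back and compare with the
    # number of increments issued; any shortfall means lost increments.
    return table.get()

if __name__ == "__main__":
    final = run_test()
    assert final == TOTAL_INCREMENTS, f"lost {TOTAL_INCREMENTS - final} increments"
    print(final)
```

Against a real cluster, the verify step is where Runs 2-4 above fail: the final read comes back short of the number of increments issued.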
Attachments
Issue Links
- relates to:
  - HBASE-6195 Increment data will be lost when the memstore is flushed (Closed)
  - HBASE-9976 Don't create duplicated TableName objects (Closed)