Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Flags: Docs Required, Release Notes Required
Description
How to reproduce:
1. Start a 1-node cluster
2. Create several simple tables (usually 5 is enough to reproduce):
CREATE TABLE failoverTest00 (
    k1 INTEGER NOT NULL,
    k2 INTEGER NOT NULL,
    v1 VARCHAR(100),
    v2 VARCHAR(255),
    v3 TIMESTAMP NOT NULL,
    PRIMARY KEY (k1, k2)
);
CREATE TABLE failoverTest01 (
    k1 INTEGER NOT NULL,
    k2 INTEGER NOT NULL,
    v1 VARCHAR(100),
    v2 VARCHAR(255),
    v3 TIMESTAMP NOT NULL,
    PRIMARY KEY (k1, k2)
);
...
3. Fill every table with 1000 rows.
4. Ensure that every table contains 1000 rows:
SELECT COUNT(*) FROM failoverTest00; ...
5. Restart the node (kill the Java process and start the node again).
6. Check all tables again.
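The steps above can be scripted. A minimal sketch that generates the SQL for the scenario; the table schema comes from the report, while `NUM_TABLES = 5` and `NUM_ROWS = 1000` are the values the report says usually suffice (the insert column values are made up for illustration):

```python
# Generate the SQL statements for the reproduction scenario.
NUM_TABLES = 5
NUM_ROWS = 1000

def ddl(i: int) -> str:
    """Step 2: CREATE TABLE statement for failoverTest<i> (schema from the report)."""
    return (
        f"CREATE TABLE failoverTest{i:02d}("
        "k1 INTEGER NOT NULL, k2 INTEGER NOT NULL, "
        "v1 VARCHAR(100), v2 VARCHAR(255), v3 TIMESTAMP NOT NULL, "
        "PRIMARY KEY (k1, k2));"
    )

def inserts(i: int) -> list[str]:
    """Step 3: fill failoverTest<i> with NUM_ROWS rows (values are illustrative)."""
    return [
        f"INSERT INTO failoverTest{i:02d} (k1, k2, v1, v2, v3) "
        f"VALUES ({k}, {k}, 'v1-{k}', 'v2-{k}', CURRENT_TIMESTAMP);"
        for k in range(NUM_ROWS)
    ]

def counts() -> list[str]:
    """Steps 4 and 6: verify every table still holds NUM_ROWS rows."""
    return [f"SELECT COUNT(*) FROM failoverTest{i:02d};" for i in range(NUM_TABLES)]

if __name__ == "__main__":
    for i in range(NUM_TABLES):
        print(ddl(i))
    # Run the inserts, check the counts, restart the node, check the counts again.
```

Running the generated statements against the cluster quickly (steps 3-5 back to back) is what makes the data loss observable.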
Expected behavior: after the restart, all tables still contain the same data as before.
Actual behavior: some tables are missing 1 or 2 rows if steps 3-5 are performed quickly enough. Some tables contain 1000 rows, others contain 999 or 998.
This bug was first observed around Sep 15, 2023, and was most probably introduced around that date. It may be another manifestation of IGNITE-20425 (not certain). No errors are observed in the logs.
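Coming up 1 or 2 rows short after a kill is the classic signature of the last few acknowledged writes not reaching durable storage before the process dies. A toy model of that failure mode, purely illustrative and not Ignite code (the function and parameter names are invented):

```python
# Toy model: writes are acknowledged to the client as soon as they hit an
# in-memory buffer, and reach "durable" storage asynchronously. Killing the
# process before the tail of the buffer is flushed loses those rows.

def run_until_kill(total_rows: int, max_backlog: int) -> tuple[int, int]:
    acknowledged = []  # what the client believes is stored (steps 3-4)
    durable = []       # what survives a process kill (step 5)
    for k in range(total_rows):
        acknowledged.append(k)  # client sees a successful insert
        if len(acknowledged) - len(durable) > max_backlog:
            durable.append(acknowledged[len(durable)])  # lazy flush of one row
    # Kill: everything not yet flushed is gone.
    return len(acknowledged), len(durable)

# With a backlog of 2 unflushed rows at kill time, the client saw 1000
# successful inserts but only 998 rows survive the restart.
acked, survived = run_until_kill(1000, max_backlog=2)
```

This is only a model of the symptom; per the update below, the actual root cause in Ignite is the update ordering addressed by IGNITE-20116.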
UPD: The problem is caused by https://issues.apache.org/jira/browse/IGNITE-20116; this issue will be resolved once IGNITE-20116 is done.
Issue Links
- is caused by: IGNITE-20116 Linearize storage updates with safeTime adjustment rules (Resolved)
- is cloned by: IGNITE-20834 SQL query may hang forever after node restart (Closed)
- relates to: IGNITE-20716 Partial data loss after node restart (Open)