Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 2.9.1
- Fix Version/s: None
- Component/s: None
- Labels: Docs Required, Release Notes Required
Description
In our tests we have detected a deadlock when the following piece of code is executed by more than one thread in our application:

    ClientTransactions transactions = client.ClientTransactions();
    ClientTransaction tx = transactions.TxStart(PESSIMISTIC, READ_COMMITTED);

    // This call should atomically get the current value for "key" and put
    // "value" instead, locking the "key" cache entry at the same time.
    auto oldValue = cache.GetAndPut(key, value);

    // Only the thread able to lock "key" should reach this code. Others
    // have to wait for tx.Commit() to complete.
    cache.Put(key, newValue);

    // After this call, other threads waiting in GetAndPut for "key" to be
    // released should be able to continue.
    tx.Commit();
The thread that reaches the cache.Put(key, newValue) call gets blocked there, specifically on the lockGuard object created at the beginning of the DataChannel::InternalSyncMessage function (data_channel.cpp:108). After debugging, we realized that this lockGuard is owned by a different thread, which is currently waiting on the socket while executing the GetAndPut function. Based on this, my guess is that data routing in the C++ Thin Client is not multithread-safe.
Reported in http://apache-ignite-users.70518.x6.nabble.com/Multithread-transactions-in-a-C-Thin-Client-td35145.html