Details
Type: Improvement
Status: Open
Priority: Major
Resolution: Unresolved
Description
I have an issue with replication as it pertains to LOBs (BLOBs and CLOBs).
According to the documentation...
If the master loses connection with the slave, "transactions are
allowed to continue processing while the master tries to reconnect with
the slave. Log records generated while the connection is down are
buffered in main memory. If the log buffer reaches its size limit before
the connection can be reestablished, the master replication
functionality is stopped."
And the documentation for derby.replication.logBufferSize says the
maximum size of the buffer is 1048576 bytes (1 MB).
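For concreteness, this is roughly how one would request that documented
maximum today. It is only a sketch: the database name, slave host, and port
are hypothetical, and the property must be set before the database is booted.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class StartMasterWithMaxBuffer {
    public static void main(String[] args) throws SQLException {
        // Request the documented maximum of 1048576 bytes (1 MB); the
        // property must be in place before the database boots.
        System.setProperty("derby.replication.logBufferSize", "1048576");

        // Hypothetical database name, slave host, and port.
        Connection conn = DriverManager.getConnection(
            "jdbc:derby:myDB;startMaster=true;"
            + "slaveHost=slave.example.com;slavePort=4851");
        conn.close();
    }
}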
This seems to imply that if I have a database in which I store LOBs
which are, for example, 256K in size, and the connection between
master and slave is severed, I can perform at most 4 inserts before
the master gives up. I would like to file a request that this limit be raised
considerably or eliminated altogether.
I have two servers (master and slave) running 64-bit JVMs, 64GB of memory each,
SSD drives, connected by 10GbE fiber. I would like to dedicate as much memory
as necessary to ride out a disconnect/resume scenario (and so avoid an onerous
failover). At an insertion rate of 16 rows per second (~4 MB/s of log data),
the current setup would tolerate a connection interruption of only about a
quarter of a second. A 1 GB buffer would afford a connection interruption of
~250 seconds (long enough, for example, to reboot the fiber switch).
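To make the arithmetic explicit, here is a back-of-the-envelope calculation in
plain Java using the figures above; nothing in it is Derby API, it just
reproduces the numbers in this report.

public class ReplicationBufferMath {
    public static void main(String[] args) {
        long rowBytes = 256L * 1024;                 // ~256K per LOB row
        long insertsPerSecond = 16;                  // observed insertion rate
        long bytesPerSecond = rowBytes * insertsPerSecond;  // ~4 MB/s of log data

        long currentBuffer = 1L << 20;               // documented 1 MB maximum
        long proposedBuffer = 1L << 30;              // hypothetical 1 GB buffer

        // Inserts that fit before each buffer fills, and the outage each tolerates.
        System.out.printf("1 MB buffer: %d inserts, ~%.2f s of disconnect%n",
                currentBuffer / rowBytes, (double) currentBuffer / bytesPerSecond);
        System.out.printf("1 GB buffer: %d inserts, ~%.0f s of disconnect%n",
                proposedBuffer / rowBytes, (double) proposedBuffer / bytesPerSecond);
    }
}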
Lastly, why does Derby even bother to buffer logs in memory? Can't it just keep
an offset/marker into the transaction log files, or, better yet, insert a special
replication log entry, and replay transactions from there rather than buffering
them in memory?
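To be concrete about what I mean, here is a rough sketch of that offset-based
alternative. Every type and method name in it is hypothetical; this illustrates
the idea, not Derby's actual replication internals.

// Sketch: remember where the slave is in the durable on-disk log and
// stream from that offset after reconnecting, instead of copying log
// records into a bounded in-memory buffer.
public class OffsetBasedShipper {
    private long slaveAckedOffset;   // last log position the slave confirmed

    // Called whenever the slave acknowledges receipt of log records.
    void onSlaveAck(long offset) {
        slaveAckedOffset = offset;
    }

    // Called once the connection to the slave is reestablished.
    void onReconnect(TransactionLog log, SlaveConnection slave) {
        // Replay directly from the log files on disk; the tolerable outage
        // is then bounded by disk space rather than RAM.
        for (LogRecord record : log.readFrom(slaveAckedOffset)) {
            slave.ship(record);
        }
    }

    // Minimal hypothetical interfaces so the sketch is self-contained.
    interface TransactionLog { Iterable<LogRecord> readFrom(long offset); }
    interface SlaveConnection { void ship(LogRecord record); }
    interface LogRecord {}
}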