Details
Type: Bug
Status: Open
Priority: Major
Resolution: Unresolved
Affects Version/s: 10.5.3.0
Fix Version/s: None
Environment:
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
Java HotSpot(TM) Client VM (build 14.3-b01, mixed mode, sharing)
Linux rocio.xxx 2.6.24-27-generic #1 SMP Fri Mar 12 01:10:31 UTC 2010 i686 GNU/Linux
Urgency: Normal
Bug behavior facts: Crash, Performance
Description
I have a query that generates about 2,000 rows in a summary table. It runs fast enough (~30 seconds) on MySQL. The same query on Derby runs for about 10 minutes and then fails with an OutOfMemoryError.
I have created a test case that reproduces the problem. It isn't minimal, since it relies on a rather large dataset, and it isn't trivial, but I don't mind doing a bit of work trimming it down if someone can point me in the right direction.
You can check out the test case from our Subversion server here:
http://rita.wfplogistics.org/svn/trackrita/rita/doc/derby-oom-slow-query
which includes a pre-built Derby database in "testdb.derby". If this database is deleted, "test.sh" will recreate it, but that takes about 10-15 minutes.
Just modify the script "test.sh" to point to your Derby libraries, and run it (or just execute the commands in "movement_complete.sql") to demonstrate the problem. You can view the source of that file online here:
http://rita.wfplogistics.org/trac/browser/rita/conf/movement_complete.sql
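For reference, running that script through Derby's ij tool looks roughly like this (a sketch only: the jar locations are illustrative and will differ on your machine):

    java -cp /path/to/derby.jar:/path/to/derbytools.jar org.apache.derby.tools.ij movement_complete.sql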
The first "insert into movement_complete" (starting around line 4) takes about 15 seconds to complete and inserts 5890 rows. The second, starting around line 54, does not complete in reasonable time on Derby. On MySQL, it runs in 28 seconds and inserts 2038 rows. On Derby, after 10 minutes I get:
JAVA ERROR: java.lang.OutOfMemoryError: Java heap space
ij> ERROR X0Y67: Cannot issue rollback in a nested connection when there is a pending operation in the parent connection.
(process exits)
It does not output the query plan in this case.
Following the suggestion of Bryan Pendleton, I tried increasing the JVM heap limit from the default to 1024m, and with that the query finishes quite quickly. I guess that means the optimiser simply needs a lot of memory to optimise this query, and with the default settings it spends forever in GC before finally hitting OOM and giving up.
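For anyone reproducing the workaround, the only change to the invocation above is the heap flag (again, the jar paths are illustrative):

    java -Xmx1024m -cp /path/to/derby.jar:/path/to/derbytools.jar org.apache.derby.tools.ij movement_complete.sql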
Attachments
Issue Links
relates to:
- DERBY-1397: Tuning Guide: Puzzling optimizer documentation (Closed)
- DERBY-6096: OutOfMemoryError with Clob or Blob hash join: DataTypeDescriptor.estimatedMemoryUsage() has no case for BLOB or CLOB so would underestimate memory usage for those types at zero (Closed)